E-book, English, Volume 147, 563 pages
Series: IFIP International Federation for Information Processing
IFIP 18th World Computer Congress, TC11 19th International Information Security Conference, 22–27 August 2004, Toulouse, France
ISBN: 978-1-4020-8143-9
Publisher: Springer US
Format: PDF
Copy protection: Adobe DRM
This volume contains the papers selected for presentation at the 19th IFIP International Conference on Information Security (SEC2004), which was held in August 2004 as a co-located conference of the 18th IFIP World Computer Congress in Toulouse, France. The conference was sponsored by the International Federation for Information Processing (IFIP). This volume is essential reading for scholars, researchers, and practitioners interested in keeping pace with the ever-growing field of information security.
Target audience
Research
Further Information & Material
- An Abstract Reduction Model for Computer Security Risk
- Remediation Graphs for Security Patch Management
- Security Modelling for Risk Analysis
- Contrasting Malicious Applets by Modifying the Java Virtual Machine
- Analyzing Network Management Effects with Spin and cTLA
- Formal Reasoning of Various Categories of Widely Exploited Security Vulnerabilities Using Pointer Taintedness Semantics
- Meeting the Global Challenges of Security Incident Response
- Security in Globally Distributed Industrial Information Systems
- A Case for Information Ownership in ERP Systems
- Interactive Access Control for Web Services
- Identity-Based Key Infrastructures (IKI)
- Modint: A Compact Modular Arithmetic Java Class Library for Cellular Phones, and its Application to Secure Electronic Voting
- Dependable Security by Twisted Secret Sharing
- A Language Driven Intrusion Detection System for Event and Alert Correlation
- Install-Time Vaccination of Windows Executables to Defend Against Stack Smashing Attacks
- Eigenconnections to Intrusion Detection
- Visualising Intrusions: Watching the Webserver
- A Long-Term Trial of Keystroke Profiling Using Digraph, Trigraph and Keyword Latencies
- Trusted Computing, Trusted Third Parties, and Verified Communications
- Maille Authentication
- Supporting end-to-end Security Across Proxies with Multiple-Channel SSL
- A Content-Protection Scheme for Multi-Layered Reselling Structures
- An Asymmetric Cryptography Secure Channel Protocol for Smart Cards
- IPsec Clustering
- Improving Secure Device Insertion in Home Ad Hoc Networks
- Spam Filter Analysis
- Collective Signature for Efficient Authentication of XML Documents
- Updating Encrypted XML Documents on Untrusted Machines
- Efficient Simultaneous Contract Signing
- DHCP Authentication Using Certificates
- Recursive Sandboxes: Extending Systrace to Empower Applications
- Fast Digital Certificate Revocation
- Masks: Managing Anonymity while Sharing Knowledge to Servers
- Security and Differentiated Hotspot Services Through Policy-based Management Architecture
- Key Management for Secure Multicast in Hybrid Satellite Networks
2. PREVIOUS WORK (p. 311-312)
Kerberos (Steiner et al., 1988) is a centralized authentication system designed to allow single sign-on from trusted workstations. Kerberos-based systems rely on a single authentication server or a small set of them. Kerberos uses a ticket scheme that lets clients authenticate against the Kerberos servers only once. Thereafter, for the lifetime of the ticket, no further authentication is required, and services and other individuals can trust the ticket holder without having to know the holder's key.
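The ticket idea can be illustrated with a minimal sketch. This is not the real Kerberos protocol: the key name, the JSON ticket body, and the HMAC construction are illustrative assumptions standing in for Kerberos's encrypted tickets, but the flow is the same, in that the authentication server issues a time-limited ticket that the service can verify without knowing the client's own key.

```python
import hashlib
import hmac
import json
import time

# Hypothetical key shared by the authentication server and the service
# (an assumption of this sketch, established out of band).
SERVER_SERVICE_KEY = b"key shared by auth server and service"

def issue_ticket(client_name, lifetime_seconds=3600):
    """Authentication server: issue a ticket after the client's single login."""
    body = json.dumps({"client": client_name,
                       "expires": time.time() + lifetime_seconds})
    mac = hmac.new(SERVER_SERVICE_KEY, body.encode(), hashlib.sha256).hexdigest()
    return body, mac

def service_accepts(body, mac):
    """Service: verify the ticket without ever seeing the client's key."""
    expected = hmac.new(SERVER_SERVICE_KEY, body.encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, expected) and \
        time.time() < json.loads(body)["expires"]

body, mac = issue_ticket("alice")
print(service_accepts(body, mac))                              # True: valid ticket
print(service_accepts(body.replace("alice", "mallory"), mac))  # False: tampered
```

For the lifetime encoded in the ticket, the client presents the same ticket to the service and never re-authenticates against the central server.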
Kerberos does have several weaknesses. First, it is highly centralized, requiring one master server where all updates occur. Replicating the security information to other servers can offload authentication work, but cannot reduce the total amount of work the master server must do to update security information and to broadcast changes. Further, because Kerberos relies on a single master server for all changes, that server becomes a single point of failure from a hardware, software, security, and political standpoint.
The KryptoKnight family of protocols (Bird et al., 1995) is designed for embedded devices and is optimized for speed and efficiency. It relies on a single, possibly replicated, authority to provide trusted keys and to act as an intermediary during authentication for all clients. Its main focus is on providing several protocols that let the exchange of keys, challenges, and responses proceed as efficiently as possible by exploiting information each party may already have. The KryptoKnight protocol family does not address scalability or how credentials are revoked, and a Byzantine failure in an authority is catastrophic for all parties using that authority.
Public key infrastructure (PKI) (Adams and Lloyd, 1997) has become very popular for Internet commerce. It is also widely used in grid computing as the basis for the Globus Security Infrastructure (GSI) (Foster et al., 1998). PKI relies on a hierarchy of certificate authorities (CAs) for scalability. At the top is the root CA, which signs certificates for servers in the second level, and so on, until the lowest-level CAs are used to establish the identity of outside entities such as web servers. Revocations are handled through certificate expiration dates and revocation lists. Replication of CAs ensures that most authentications will not be affected by a single failure. However, the higher up the hierarchy an authentication is required to go, the more likely a single failure is to prevent successful authentication. Caching prevents most interactions from requiring the root CA and other high-level CA servers. Nevertheless, a Byzantine failure at the root level will lead to a complete loss of security. Failures at lower levels will result in a security breach for only part of the system.
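The hierarchy walk behind PKI authentication can be sketched as follows. Real PKI validation verifies cryptographic signatures at each step; in this sketch the issuer link, the expiration date, and a revocation list stand in for that machinery, and every name and field is an illustrative assumption.

```python
import time
from dataclasses import dataclass

@dataclass
class Cert:
    """Drastically simplified certificate: no keys or signatures."""
    subject: str
    issuer: str
    expires: float  # expiration time as a Unix timestamp

def validate_chain(chain, trusted_root, revoked):
    """chain is ordered leaf-first; each cert must be issued by the next one up."""
    issuers = chain[1:] + [trusted_root]
    for cert, issuer in zip(chain, issuers):
        if cert.subject in revoked:        # revocation-list check
            return False
        if time.time() >= cert.expires:    # expiration check
            return False
        if cert.issuer != issuer.subject:  # issuer link up the hierarchy
            return False
    return True

one_year = time.time() + 365 * 86400
root = Cert("RootCA", "RootCA", one_year)
mid = Cert("IntermediateCA", "RootCA", one_year)
leaf = Cert("www.example.org", "IntermediateCA", one_year)

print(validate_chain([leaf, mid], root, revoked=set()))               # True
print(validate_chain([leaf, mid], root, revoked={"IntermediateCA"}))  # False
```

The second call shows why a failure partway up the hierarchy breaks authentication for everything beneath it: revoking the intermediate CA invalidates every chain that passes through it.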
Politically, the root CA is a single point of failure.

PGP (Zimmermann, 1995) is a system designed to let many individuals authenticate each other without a central authority. It provides a method for creating and distributing keys among small cliques of users and for deciding whether to trust a key acquired from a third party. How much trust can be placed in a public key is directly related to how many intermediaries it passed through.
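That relationship between trust and path length can be sketched as a search over the graph of who has signed whose key, discounting trust by a fixed factor per intermediary. The graph, the names, and the decay factor are illustrative assumptions, not PGP's actual trust computation.

```python
def path_trust(signatures, start, target, decay=0.5):
    """Trust in target's key, halved for each hop on the shortest signing path."""
    frontier, seen = [(start, 1.0)], {start}
    while frontier:
        nxt = []
        for owner, trust in frontier:
            if owner == target:
                return trust
            for signed_key in signatures.get(owner, []):
                if signed_key not in seen:
                    seen.add(signed_key)
                    nxt.append((signed_key, trust * decay))
        frontier = nxt
    return 0.0  # no signing path: no basis for trust

# Hypothetical signing graph: "me" signed alice's key, alice signed bob's, ...
signatures = {"me": ["alice"], "alice": ["bob"], "bob": ["carol"]}

print(path_trust(signatures, "me", "alice"))  # 0.5   (directly signed)
print(path_trust(signatures, "me", "carol"))  # 0.125 (two intermediaries)
```

A key reached through no intermediaries keeps half the baseline trust in this toy model, while carol's key, vouched for only through alice and bob, earns far less, mirroring the observation that confidence falls with each intermediary.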