Securing Networked Applications with SESAME
Security is becoming increasingly important for intranet and Internet applications. We hear almost daily of new security incidents on the Internet: sites penetrated, sensitive data captured by attackers. A number of solutions, used together, can secure your network.
The first is a firewall, whose aim is to control access between networks. Typically, a firewall sits at a site's Internet connection and controls what traffic may pass between the site's internal network and the Internet. In some intranets, firewalls are also used to control access within the network.
However, firewalls are not the whole solution. They do not protect applications behind the firewall from internal attack, nor do they secure applications that need to communicate through the firewall. We need a way to secure the applications themselves: the client and server portions of an application must verify each other's identity (authentication), and data transferred across the network must be protected from unauthorised viewing (confidentiality) and unauthorised modification (integrity).
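The integrity guarantee described above can be illustrated with a keyed message authentication code. This is a toy sketch using Python's standard library, not SESAME's actual API; the key, messages and function names are invented for illustration:

```python
import hashlib
import hmac

def protect(key: bytes, message: bytes) -> tuple[bytes, bytes]:
    """Attach an integrity tag so tampering in transit is detectable."""
    tag = hmac.new(key, message, hashlib.sha256).digest()
    return message, tag

def verify(key: bytes, message: bytes, tag: bytes) -> bool:
    """Recompute the tag; a mismatch means the data was modified."""
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

key = b"shared-session-key"          # in practice, negotiated during authentication
msg, tag = protect(key, b"transfer $100 to alice")
print(verify(key, msg, tag))                           # True
print(verify(key, b"transfer $999 to mallory", tag))   # False
```

Confidentiality works similarly, except the data is encrypted rather than merely tagged; in both cases the protection rests on a key that only the authenticated parties hold.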
We have recently ported SESAME to Linux, and it provides a wonderful tool for anyone who needs to secure their applications on Linux. SESAME is also available on a range of other Unix platforms, so your Linux systems will be able to work with these, too.
The SESAME Security Architecture is an extension of Kerberos, arguably the most famous and most successful architecture to date for securing networks (http://web.mit.edu/kerberos/www/index.html). Kerberos was developed at MIT, starting around 1985 as part of project Athena. Kerberos was designed to be a network authentication service. It also provides confidentiality and integrity services to data during transit.
Kerberos works using a trusted third-party model. That is, somewhere on the network a Kerberos Authentication Server exists which can verify the identity of the client and server and provide proof to the other party of that authentication. Authentication is actually achieved by the client (acting on behalf of a user) and the server providing proof to the Kerberos Authentication Server that they know a DES cryptographic key (the user and server share their DES cryptographic key with the Kerberos Authentication Server).
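The trusted third-party exchange can be sketched as a toy simulation. The "cipher" below is a SHA-256-derived XOR keystream standing in for DES, and the message layout is invented; real Kerberos tickets carry timestamps, lifetimes and more:

```python
import hashlib
import os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy cipher: XOR with a hash-derived keystream (NOT DES, NOT secure).
    stream = hashlib.sha256(key + b"stream").digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

# The Authentication Server shares a long-term key with each principal.
as_keys = {"alice": os.urandom(16), "fileserver": os.urandom(16)}

# 1. Alice asks the AS for access to fileserver; the AS mints a session key.
session_key = os.urandom(16)
ticket = keystream_xor(as_keys["fileserver"], b"alice|" + session_key)  # readable only by the server
for_alice = keystream_xor(as_keys["alice"], session_key)                # readable only by Alice

# 2. Alice recovers the session key and forwards the ticket to the server.
alice_session = keystream_xor(as_keys["alice"], for_alice)

# 3. The server opens the ticket with its own key and learns who the client is.
opened = keystream_xor(as_keys["fileserver"], ticket)
client_name, server_session = opened.split(b"|", 1)

print(client_name == b"alice" and server_session == alice_session)  # True
```

Note that Alice and the fileserver never exchange long-term keys directly: each proves its identity by demonstrating it can decrypt material sealed under the key it shares with the Authentication Server.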
Kerberos is very successful, if industry acceptance is any guide. It is now widely used for securing networks, although in many cases you may not realize you are using it. For example, Kerberos is built into a number of operating systems, securing the rtools and NFS on Linux and Solaris, and it is being used as the basis of the cryptographic library on Windows NT. It is also part of a number of firewall implementations.
Kerberos was designed for a closed intranet environment, where the users and applications are managed by one administration. Although Kerberos is a good security architecture for such an isolated network, it has problems scaling to a large inter-networked environment (e.g., the Internet). Kerberos has two main problems that limit its scalability: the use of DES cryptographic keys for authentication and the lack of a user privileges (authorization) service.
The lack of scalability through the use of DES cryptographic keys for authentication is known as a key management problem. For authentication to be possible between client or server and the Kerberos Authentication Server in such an arrangement, the system administrator needs to generate a DES cryptographic key and share it securely with the two parties involved. In a small network with a single group of system administrators it is a feasible solution, but in a large inter-network, securely sharing keys is difficult.
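The scaling pressure is easy to quantify. Without a trusted third party, every pair of communicating principals needs its own shared secret, so the number of keys grows quadratically; with an Authentication Server, each principal needs only one. A quick back-of-the-envelope calculation:

```python
def pairwise_keys(n: int) -> int:
    # Every pair of principals needs its own shared secret: n choose 2.
    return n * (n - 1) // 2

def third_party_keys(n: int) -> int:
    # Each principal shares one key with the Authentication Server.
    return n

for n in (10, 1000, 100_000):
    print(f"{n:>7} principals: {pairwise_keys(n):>13} pairwise keys "
          f"vs {third_party_keys(n)} with an AS")
```

At 100,000 principals the pairwise scheme needs roughly five billion keys, each of which must be generated and distributed securely. The Authentication Server reduces that to one key per principal, but even so, those keys must be shared securely with the server, which is the administrative burden described above.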
Kerberos also lacks an authorization service. When a client application working on behalf of a user connects to the server, the server needs to know the privileges of the user. Kerberos depends on the fact that the user's privileges are already stored on the server computer. For example, the user has an account on the server, or the user's privileges are known to the server application. This arrangement is not scalable, because in a large inter-network it is certainly not possible for every server computer to be configured with every possible user's privileges. We require another method for the server to gather the user's privileges.
SESAME is the Secure European System For Applications in a Multi-Vendor Environment (http://www.esat.kuleuven.ac.be/cosic/sesame.html). Some people call it the European equivalent of Kerberos, but in reality it is a much improved security architecture. SESAME provides the complete authentication service of Kerberos, including confidentiality and integrity during transit, and adds to it an authorization service and public key cryptography. Both additions allow SESAME to be much more scalable to a large networked environment than Kerberos. It also adds an excellent auditing system and provides a privilege delegation scheme.
SESAME was also designed to be completely platform independent. It uses a role-based, access-control model that allows SESAME to easily interoperate with many operating systems.
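The essence of a role-based model is that users are granted roles, and roles, not individual users, carry permissions; an operating system only needs to map roles onto its local access controls. A minimal sketch (the role names and permission strings here are invented, not SESAME's actual attribute vocabulary):

```python
# Toy role-based access control: users map to roles, roles to permissions.
roles = {
    "admin":    {"read", "write", "manage"},
    "operator": {"read", "write"},
    "auditor":  {"read"},
}
user_roles = {"alice": {"operator"}, "bob": {"auditor"}}

def permitted(user: str, action: str) -> bool:
    # A user may perform an action if any of its roles grants it.
    return any(action in roles[r] for r in user_roles.get(user, set()))

print(permitted("alice", "write"))   # True
print(permitted("bob", "write"))     # False
```

Because access decisions reference roles rather than platform-specific account details, the same privilege information can be interpreted consistently across different operating systems.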
SESAME development began around 1990 at ICL, Bull and Siemens Nixdorf. The current release is SESAME Version 4, which is the version we have made available for Linux.
SESAME has an Authentication Server similar to Kerberos, but adds to it a Privilege Attribute Server. The idea here is that a user can not only be authenticated, but also provided with privileges, which can be presented to the server when required. This allows a user to access a server that has no knowledge of a user but is able to verify the privileges provided. SESAME also allows authentication based on public key technologies (as well as the standard Kerberos DES cryptographic key method).
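The key idea is that the Privilege Attribute Server seals the user's attributes so that any server trusting the PAS can verify them, even with no prior knowledge of the user. The toy below stands in an HMAC seal for the cryptographic protection of a real SESAME Privilege Attribute Certificate, and the attribute layout is invented:

```python
import hashlib
import hmac
import json

# Key the Privilege Attribute Server uses to seal certificates.
# (Real SESAME supports public key technologies, so servers need not
# share a secret with the PAS; this is a simplification.)
pas_key = b"pas-sealing-key"

def issue_pac(user: str, roles: list[str]) -> tuple[bytes, bytes]:
    """PAS side: seal the user's privilege attributes."""
    attrs = json.dumps({"user": user, "roles": roles}).encode()
    seal = hmac.new(pas_key, attrs, hashlib.sha256).digest()
    return attrs, seal

def server_accepts(attrs: bytes, seal: bytes):
    """Server side: no account for the user exists; only the seal is checked."""
    expected = hmac.new(pas_key, attrs, hashlib.sha256).digest()
    if hmac.compare_digest(expected, seal):
        return json.loads(attrs)
    return None

attrs, seal = issue_pac("alice", ["operator"])
print(server_accepts(attrs, seal))          # {'user': 'alice', 'roles': ['operator']}
print(server_accepts(attrs + b" ", seal))   # None -- tampered attributes rejected
```

The server grants access based on the roles inside the certificate, so it never needs a local account or privilege entry for each possible user, which is exactly the scalability gap in plain Kerberos.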
SESAME is also gaining industry acceptance. The ICL Access Manager (http://www.icl.co.uk/access/) and ISM Access Master (http://www.ism.bull.net/) are commercial products based on SESAME. These products are being used to secure large, Internet-wide networks.