DSI: Secure Carrier-Class Linux
The telecommunications industry's interest in clustering stems from the fact that clusters address carrier-class characteristics, such as guaranteed service availability, reliability and scalable performance, using cost-effective hardware and software. These carrier-class requirements now include advanced levels of security. Few efforts, however, have been made to build a coherent distributed framework that provides advanced security levels in clustered systems.
At Ericsson Research, our work targets soft real-time distributed applications running on large-scale Linux carrier-class clusters. These clusters must operate nonstop and must allow operators to upgrade hardware and software during operation, without disturbing the applications that run on them. In such clusters, communications between the nodes inside the cluster and to external computers are restricted.
In this article, we present the rationale behind developing a new secure architecture, the DSI (Distributed Security Infrastructure). DSI supports different security mechanisms to address the needs of telecom applications running on carrier-class Linux clusters. DSI provides these telecom applications with distributed mechanisms for access control, authentication, auditing and integrity of communications.
Many security solutions exist for server systems, but none is dedicated to clusters.
The most common security approach is to package several existing solutions. However, integrating and managing these different packages is complex and often leaves the various security mechanisms unable to interoperate. Integrating many packages also raises further difficulties, such as system maintenance and upgrades and keeping up with numerous security patches.
Carrier-class clusters place very tight restrictions on performance and response time, making the design of security solutions difficult. In fact, many security solutions cannot be used because of their high resource consumption.
Currently implemented security mechanisms are based on user privileges and do not support authentication and authorization checks for interactions between two processes belonging to the same user on different processors. For telecom applications, however, only a few users run the same applications for long periods without interruption.
Applying this user-based model grants the same security privileges to all processes created on different nodes, which leads to no security checks for many actions across the distributed system.
As part of a carrier-class Linux cluster, DSI must comply with the carrier-class requirements of reliability, scalability and high availability. Furthermore, DSI supports the following requirements:

1) Coherent framework: security must be coherent across different layers of heterogeneous hardware, applications, middleware, operating systems and networking technologies. All mechanisms must fit together to prevent any exploitable security gap in the system.

2) Process-level approach: DSI is based on a fine-grained basic entity, the process.

3) Minimal performance impact: the introduction of security features must not impose high performance penalties. Performance can be expected to degrade slightly during the initial establishment of a security context; however, the impact on subsequent accesses must be negligible.

4) Preemptive security: changes in the security context are reflected immediately in the running security services. Whenever the security context of a subject changes, the system re-evaluates the subject's current use of resources against the new security context.

5) Dynamic security policy: it must be possible to change the distributed security policy at runtime. Carrier-class server nodes must provide continuous, long-term availability, so the service cannot be interrupted to enforce a new policy.

6) Transparent key management: cryptographic keys are generated to secure connections, resulting in numerous keys that must be stored and managed securely.
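The preemptive-security requirement can be illustrated with a small sketch. The names here (`SecurityManager`, `grant`, `set_context`) are hypothetical and not part of the actual DSI interface; the point is only the behavior: when a subject's security context changes, its current resource grants are immediately re-evaluated, and anything the new context does not permit is revoked.

```python
# Hypothetical sketch of preemptive security re-evaluation.
# The class and method names are illustrative, not DSI's real API.

class SecurityManager:
    def __init__(self, policy):
        # policy maps a security context to the set of resources it may use
        self.policy = policy
        self.contexts = {}   # subject -> current security context
        self.grants = {}     # subject -> resources currently in use

    def grant(self, subject, resource):
        ctx = self.contexts[subject]
        if resource not in self.policy.get(ctx, set()):
            raise PermissionError(f"{subject} ({ctx}) may not use {resource}")
        self.grants.setdefault(subject, set()).add(resource)

    def set_context(self, subject, new_ctx):
        # Preemptive: a context change immediately re-evaluates every
        # resource the subject currently holds, revoking disallowed ones.
        self.contexts[subject] = new_ctx
        allowed = self.policy.get(new_ctx, set())
        held = self.grants.get(subject, set())
        revoked = held - allowed
        self.grants[subject] = held & allowed
        return revoked

mgr = SecurityManager({"trusted": {"db", "net"}, "restricted": {"net"}})
mgr.contexts["proc1"] = "trusted"
mgr.grant("proc1", "db")
mgr.grant("proc1", "net")
revoked = mgr.set_context("proc1", "restricted")
print(sorted(revoked))  # db access is withdrawn at once; net is kept
```

The key design point is that revocation happens at the moment the context changes, not lazily on the subject's next access attempt.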
DSI has two types of components: management and service. DSI management components define a thin layer that includes a security server, security managers and a security communication channel (Figure 1). The service components define a flexible layer that can be modified or updated by adding, replacing or removing services according to the needs.
The security server is the central point of management in DSI: the entry point for secure operations and management and for intrusion-detection systems. It also defines the dynamic security environment of the whole cluster by broadcasting changes in the distributed security policy to all security managers.
Security managers enforce security at each node of the cluster. They are responsible for locally enforcing changes in the security environment. Security managers exchange security information only with the security server.
The secure communication channel provides encrypted and authenticated communications between security agents. All communications between the security server and the world outside of the cluster take place through the secure communication channel. Two nodes (to avoid a single point of failure) host the security server and different security service providers, such as the certification authority.
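As a rough illustration of the authentication side of such a channel, the sketch below signs each policy broadcast with an HMAC over a key shared between the security server and its managers. Everything here is an assumption for illustration: DSI's real channel also encrypts traffic and manages keys through its key-management service, both omitted from this sketch.

```python
import hmac
import hashlib

# Hypothetical sketch: the security server signs each policy broadcast,
# and every security manager verifies the signature before applying it.
# A real secure channel would also encrypt the payload (omitted here).

SHARED_KEY = b"cluster-provisioning-secret"  # distributed out of band

def sign(message: bytes, key: bytes = SHARED_KEY) -> bytes:
    return hmac.new(key, message, hashlib.sha256).digest()

def broadcast(policy_update: bytes):
    """Security-server side: attach a MAC to the update."""
    return policy_update, sign(policy_update)

def receive(policy_update: bytes, mac: bytes) -> bool:
    """Security-manager side: accept only authenticated updates."""
    return hmac.compare_digest(mac, sign(policy_update))

update, mac = broadcast(b"deny: external telnet")
assert receive(update, mac)                 # genuine broadcast accepted
assert not receive(b"allow: telnet", mac)   # tampered update rejected
```

Using `hmac.compare_digest` for the comparison avoids timing side channels that a plain `==` check would expose.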
The security mechanisms are based on widely known, proven and tested algorithms. Users must not be able to bypass these mechanisms; therefore, the best place to enforce security is at the kernel level. All security decisions, where necessary, are made at the kernel level, as is the main security manager component, which has stubs into the kernel. These stubs are implemented as loadable kernel modules.
The DSI architecture at each node is based on a set of loosely coupled services. Each service, upon its creation, sends a presence announcement to the local security manager, which registers these services and provides their access mechanisms to the internal modules. Two types of services run at user level and provide services to the security managers: security services (access control, authentication, integrity, auditing) and security service providers (for example, secure key management).
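The presence-announcement step can be sketched as follows. The registry interface (`announce`, `lookup`) is invented for illustration and is not DSI's actual mechanism: a service announces itself to the local security manager on creation, and internal modules later obtain its access mechanism by name.

```python
# Hypothetical sketch of service registration with a local security
# manager. The announce/lookup interface is illustrative only.

class LocalSecurityManager:
    def __init__(self):
        self._services = {}

    def announce(self, name, handler):
        # A service sends a presence announcement upon creation;
        # the manager registers it for internal modules to use.
        self._services[name] = handler

    def lookup(self, name):
        # Internal modules obtain the access mechanism by name,
        # or None if no such service has announced itself.
        return self._services.get(name)

mgr = LocalSecurityManager()
mgr.announce("access_control", lambda subject, obj: subject == "root")
check = mgr.lookup("access_control")
print(check("root", "/etc/shadow"))  # registered service is reachable
```

Because services register themselves at creation time, the set of services on a node can change (services added, replaced or removed) without restarting the manager, matching the flexible service layer described above.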