Kernel Korner - Filesystem Labeling in SELinux
Although it is possible to assign security context labels to NFS-mounted filesystems, these labels operate only locally, for access control decisions within the kernel; no labels are transmitted across the network with files. Work is advancing in this area, with SELinux-specific modifications being made to the NFSv2/v3 protocols and code. Further down the track, NFSv4 integration is expected to carry labels over the wire by way of named attributes, which are part of the more extensible NFSv4 specification. This would allow both the NFS client and server to implement SELinux security for networked files. Support for other networked filesystems also would be useful, as would interoperability with TrustedBSD's SELinux port.
Backup and Restoration
One of the many tasks that change for system administrators using SELinux is backup and restoration. When creating an archive, how will the security context labels be preserved within the archive? The answer is to use the highly flexible star(1) utility, which has extended attribute support.
To manipulate archives with security context labels, use the -xattr option. When creating archives, you also need to specify the exustar format. For example:
$ star -xattr -H=exustar -c -f cups-log.star /var/log/cups
creates an archive of the /var/log/cups directory, retaining security context labels on the files.
To extract, simply use the -xattr option:
$ star -xattr -x -f cups-log.star
$ ls -Z var/log/cups/
-rw-r--r--+ root sys  system_u:object_r:cupsd_log_t  error_log
-rw-r--r--+ root sys  system_u:object_r:cupsd_log_t  error_log.1
As you can see, the security context labels have been preserved.
Resources for this article: /article/7689.
James Morris (email@example.com) is a kernel hacker from Sydney, Australia, currently working for Red Hat in Boston. He is a kernel maintainer of SELinux, Networking and the Crypto API; an LSM developer and an Emeritus Netfilter Core Team member.