Building a Linux-Based Appliance
To create the backup and restore utilities, it was critical to determine which files needed to be backed up. For this purpose we used FCheck, a popular and useful Perl script by Michael A. Gumienny. FCheck makes it possible to take a snapshot of files before changes are made and then view the differences after the changes are complete. FCheck is available at www.geocities.com/fcheck2000/fcheck.html. (It is also extremely useful for performing intrusion detection.)
Setup and configuration are performed by modifying the fcheck.cfg file, in which you can specify both paths and individual files to be monitored for changes. You can exclude individual files or directories and specify whether a monitored directory should be scanned recursively.
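To give a feel for the format, a minimal fcheck.cfg might look like the following. The directive names here follow FCheck's sample configuration, and the paths are purely illustrative; verify both against the sample file shipped with the script:

```
# Directories to monitor for changes
Directory       = /etc/
Directory       = /usr/local/bin/

# Files that change constantly and would only generate noise
Exclusion       = /etc/mtab
```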
Before making changes to the configuration files, we ran FCheck as follows to build the baseline database:
./fcheck -ac
This created a baseline file, which stores all of the original states of the files, including file size and time of last modification. After modifying the configuration files and loading a new policy from the Windows-based Policy Editor, we ran FCheck as follows:
./fcheck -ad | grep WARNING
This displayed the files changed during the policy modification process.
Our original undo capability simply created copies of the files that would be changed and then copied them back if an undo was required. However, customer feedback showed that an iterative undo capability was highly desirable, due to the number of changes an administrator might make to a firewall configuration before finalizing it. The backup portion of the undo functionality is called from all other scripts that make modifications to the firewall rules, for example, the scripts that perform rule addition or deletion.
The backup portion of the undo script works as follows:
1. It maintains a list of all the files to be backed up.
2. Backup copies are stored as numbered files; for example, the original file, Standard.W, is saved as the "undo" file Standard.W.00.
3. Each time a backup is required, the script determines the highest-numbered existing backup file.
4. If that number has reached the allowed number of undo levels, the script deletes the highest-numbered undo files.
5. It copies each remaining backup file to the next higher-numbered file.
6. Finally, it makes the actual copy of the original file.
The restore portion of the undo script performs the mirror-image operations, decrementing the file numbers rather than incrementing them.
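The rotation described above can be sketched in shell as follows. The file name, the undo limit and the scratch directory are illustrative; the real script also reads its list of files from configuration:

```shell
#!/bin/sh
# Sketch of the numbered-backup rotation: find the highest existing
# backup, prune at the limit, shift everything up one slot, then copy
# the current file into slot 00.
MAX=10                          # number of undo levels to keep (assumed)
FILE=Standard.W
WORKDIR=$(mktemp -d)            # scratch directory so the demo is safe to run
cd "$WORKDIR" || exit 1
echo "current rules" > "$FILE"
echo "older rules"   > "$FILE.00"   # pretend one backup already exists

# Step 1: find the highest-numbered existing backup.
high=-1
for f in "$FILE".[0-9][0-9]; do
    [ -e "$f" ] || continue
    n=${f##*.}                  # two-digit suffix, e.g. "00"
    n=${n#0}                    # strip a leading zero for arithmetic
    [ "$n" -gt "$high" ] && high=$n
done

# Step 2: if we are at the limit, delete the oldest (highest) backup.
if [ "$high" -ge $((MAX - 1)) ]; then
    rm -f "$(printf '%s.%02d' "$FILE" "$high")"
    high=$((high - 1))
fi

# Step 3: shift every remaining backup up one slot, highest first.
i=$high
while [ "$i" -ge 0 ]; do
    mv "$(printf '%s.%02d' "$FILE" "$i")" \
       "$(printf '%s.%02d' "$FILE" $((i + 1)))"
    i=$((i - 1))
done

# Step 4: copy the current file into slot 00.
cp "$FILE" "$FILE.00"
```

The restore side simply walks in the opposite direction: copy Standard.W.00 over Standard.W, then shift the remaining backups down one slot.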
Creating an undo capability is not exactly like the Undo you might use in a word processor, since changes to the firewall configuration may occur over a lengthy period of time. Thus, the administrator needs to be able to undo to a particular date and to determine the date of the next available undo.
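Because undo points accumulate over days or weeks, it helps to show the administrator when each backup was made. One way to do this is to print each undo file's modification time; this sketch uses GNU date's -r option and stand-in files so it is safe to run as is:

```shell
# List each undo file with its modification time so the administrator
# can choose which point in time to roll back to (names illustrative).
WORKDIR=$(mktemp -d)                # scratch directory for the demo
cd "$WORKDIR" || exit 1
touch Standard.W.00 Standard.W.01   # stand-ins for real undo files
for f in Standard.W.[0-9][0-9]; do
    printf '%s  %s\n' "$f" "$(date -r "$f" '+%Y-%m-%d %H:%M')"
done
```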
Although the complete code sample is not shown, here are two useful takeaways for those writing their own scripts. First, Perl makes it easy to run an external program or script and capture its output by using the backtick (`) operator. For example:
$var = `./printdir`;
Second, for improved error handling, the script checks to make sure the directory that holds the undo files exists. Perl makes this easy through the -d file test operator for directories and the -f operator for plain files, for example:
if ( -d $dirname ) { ... }   # true if $dirname exists and is a directory
Iterative undo and restore is a feature that really should be included in every product. In our case, it makes for a demo that existing administrators find quite compelling. Every administrator knows, in the back of their mind, that they should back up configuration files after each individual change. But we have all gone through the process of making multiple changes without a backup, only to realize that one of those changes has caused a problem, with no way of knowing which one. Being able to demonstrate that the administrator can then go back, step by step, through each individual change is extremely useful.
When building an appliance or bringing any networked machine on-line, you should turn on only the absolute minimum set of services required. Because disk space today is rarely a cost issue, we recommend installing most, if not all, of the optional services you think you might need down the road; however, any service you do not need after setup should be disabled to reduce the chance of opening security holes on your machine and network. To disable services, you can rename the symbolic links associated with the services in /etc/rc.d/rcX.d, where X is the system run level, or rename the individual service scripts in /etc/rc.d/init.d.
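The symbolic-link renaming looks like the following. This sketch works in a scratch directory with a stand-in file so it is safe to run; on a real system you would operate in /etc/rc.d/rc3.d (or whichever run level you boot to) on the actual S* links:

```shell
# Disable a boot-time service by renaming its run-level symlink so the
# rc scripts no longer treat it as a start (S*) entry.
RCDIR=$(mktemp -d)              # stands in for /etc/rc.d/rc3.d
cd "$RCDIR" || exit 1
touch S11portmap                # stand-in for the real service symlink
mv S11portmap _S11portmap       # names not beginning with S or K are ignored
ls
```

Renaming rather than deleting the link makes it trivial to re-enable the service later by reversing the move.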
To ensure a secure system when the appliance is initially turned on, all remote services are blocked by default. An administrator must perform the initial configuration of the box from within the network or on the appliance directly. If desired, the administrator can change the rules to allow secure remote administration sessions. In either case, it is ultimately up to the administrator to determine which security settings are optimal for their desired implementation. But out of the box, the appliance defaults to the most secure configuration possible.