Building a Linux-Based Appliance

For both ease of development and cost-effectiveness, Linux offers a perfect platform for building a robust, standalone appliance.
Architecture: API/Application Model

From a usability perspective, we wanted administrators to have easy access to the functionality we were delivering. This meant the product needed to support multiple interfaces:

  • A menu-based wizard that would allow easy access to common setup and configuration functions. It would take users through common tasks in a directed, step-by-step fashion rather than requiring them to wade through documentation in order to run command-line utilities. The wizard would need to be accessible from a remote terminal interface, since the appliance itself would have little or no built-in display, and it would need to be administrable from the local machine or from a remote console, whether or not a web browser was available. Lastly, the wizard was a good compromise between command-line or scripted functionality and a somewhat more cumbersome web-based interface (at least for those used to command lines).

  • For ease of use and for a more aesthetically pleasing experience, a web-based interface that would provide access to similar functionality.

  • Command-line interfaces to the core functions for adding, deleting and displaying rules.

With these requirements in mind, a key aspect of the design of our architecture was to separate the user interface support from the base configuration functionality. From a development perspective, this modular design made it easy to separate our development efforts, allowing us to work independently on the configuration code and the user interface code. Moreover, it made it easy to wrap multiple user interfaces around the same set of base functions, for example, a command-line interface, a terminal interface and a web-based interface. Thus a core set of base programming interfaces was exposed, and the user interface developer was able to write to those interfaces.
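The wrapper idea can be sketched in a few lines of Perl. Everything here is invented for illustration; list_rules() stands in for the real base programming interfaces, and the rule data is made up:

```perl
#!/usr/bin/perl
# Illustrative sketch: multiple user interfaces wrapping one base layer.
# list_rules() and its data are hypothetical, not from the actual product.
use strict;
use warnings;

# Base layer: one implementation, shared by every interface,
# with no knowledge of how its output will be presented.
sub list_rules {
    return ('1: any -> dmz accept', '2: any -> any drop');
}

# A terminal wizard and a web UI each wrap the same base call,
# differing only in presentation.
sub wizard_show_rules {
    print "== Current rules ==\n";
    print "$_\n" for list_rules();
}

sub web_show_rules {
    print '<ul>', (map { "<li>$_</li>" } list_rules()), "</ul>\n";
}

wizard_show_rules();
web_show_rules();
```

Because only the base layer knows where the rules come from, swapping the stub for real file parsing later changes none of the interface code.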

The second part of making the code modular was to separate core functions into separate, self-contained script files. This way, someone else could eventually use our command-line programs for their own, separate purposes, without having to write all the base functionality themselves.

An additional benefit of this modularization was ease of testing. That is, because each function is self-contained, the individual components can be unit tested, making it much easier to quickly find and fix bugs in the code. All of this is basic good programming sense, but it is especially easy to ignore or forget when you're in the mode of creating system administration scripts for clients, rather than programs for broader distribution as products.

Developing the Base Functionality

Nearly all of the base configuration functionality is provided through a set of complex file-parsing scripts. We initially explored using the Check Point APIs in C, but our first program built on them showed that while the APIs could return information about the firewall policies, they were somewhat limited in their ability to make changes to the firewall rulesets. Although the file parsing we eventually had to write ourselves was intricate, it turned out to be the only way to get everything we needed, and it was also the fastest way to retrieve information and make configuration changes.

We developed two general types of file parsing functions: those used to display configuration information to the administrator and those used to modify policies (e.g., add or delete rules). As described above, each of these functions was developed as a standalone script called by a set of command-line parameters.
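A standalone script driven by command-line parameters might parse its arguments along these lines. This is a minimal sketch, the option names and parse_args() helper are invented for the example, not taken from the product:

```perl
#!/usr/bin/perl
# Hypothetical sketch of argument handling for a self-contained
# rule-manipulation script, so any interface (wizard, web, shell)
# can call it. Option names are illustrative.
use strict;
use warnings;
use Getopt::Long qw(GetOptionsFromArray);

sub parse_args {
    my @args = @_;
    my %opt = (action => 'accept');   # default action
    GetOptionsFromArray(\@args, \%opt,
                        'src=s', 'dst=s', 'service=s', 'action=s')
        or die "usage: --src HOST --dst HOST --service SVC [--action accept|drop]\n";
    for my $req (qw(src dst service)) {
        defined $opt{$req} or die "--$req is required\n";
    }
    return %opt;
}

my %rule = parse_args('--src', 'any', '--dst', 'evrtwa1-test',
                      '--service', 'FTP');
print "adding rule: $rule{src} -> $rule{dst} ($rule{service}, $rule{action})\n";
```

Keeping the parameter parsing at the edge of each script means the interfaces above it never need to know anything about the file formats underneath.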

The configuration files were not at all documented. So the method we used to determine how they worked was to make changes with the Check Point Policy Editor (a Windows application) and then determine what had changed in the files on the firewall machine. In some cases, dependencies between files must be taken into account when making the file changes directly. We also created a template file that was the basis for creating and adding new rules to the rules file.

While Check Point FW-1 uses a number of files to maintain configuration, policy and object information, the two key files we had to deal with were Standard.W, which contains the firewall rules, and objects_5_0.C, which contains the network objects, protocols and encryption information. (Note that these filenames are specific to Check Point FW-1 Next Generation or "NG", the most recent version of the Check Point software available.) Our scripts first modify the necessary information in these files and then run fw load Standard.W localhost, which causes Check Point FW-1 to generate compiled output from the source rule and object files.
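The modify-then-reload cycle can be sketched as follows. Building the command as a list (rather than one shell string) avoids quoting problems; the fw_load_cmd() helper is an invented name for the example:

```perl
#!/usr/bin/perl
# Sketch of the reload step: after editing Standard.W and
# objects_5_0.C, the policy must be recompiled and installed.
# fw_load_cmd() is a hypothetical helper, not from the product.
use strict;
use warnings;

sub fw_load_cmd {
    my ($policy) = @_;
    # List form keeps each argument intact, with no shell interpolation.
    return ('fw', 'load', $policy, 'localhost');
}

my @cmd = fw_load_cmd('Standard.W');
print "would run: @cmd\n";
# On the appliance itself, after the file edits:
#   system(@cmd) == 0 or die "fw load failed: $?";
```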

Both Standard.W and objects_5_0.C use a syntax that consists of tabs, colons and parentheses, with one item on each line. For example, here is part of a rule contained in Standard.W:

        :rule (
                :AdminInfo (
                        :chkpf_uid ("{7034DEB7-3558-F694-B3CA-EAB180757E7E}")
                        :ClassName (security_rule)
                )
                :action (
                        : (accept
                                :AdminInfo (
                                        :chkpf_uid ("{29985208-1F06-3377-B732-F9414E49DBE1}")
                                        :ClassName (accept_action)
                                        :table (setup)
                                )
                                :action ()
                                :macro (RECORD_CONN)
                                :type (accept)
                        )
                )
                ...
                :track (
                        : None
                )
                :comments (samplerule)

A single rule is shown in the above example. AdminInfo is a common field associated with every rule; action indicates the type of action to be taken (e.g., accept, drop); track indicates whether to log the action taken; and comments holds a comment associated with the rule. The firewall processes the rules in the order in which they appear in the file.

The following example shows a snippet of the objects_5_0.C file:

        :network_objects (network_objects
                ...
                : (evrtwa1-test
                        ...
                        :ClassName (gateway_ckp)
                        :object_permissions (
                                ...
                                :owner ()
                                :read (
                                        : (any)
                                )
                                :use (
                                        : (any)
                                )
                                :write (
                                        : ()
                                )
                        )
                        :table (network_objects)
                )
        ...
        :protocols (protocols
                ...
                : (FTP
                        ...
                        :handler (ftp_code)
                        :match_by_seqack (true)
                        :res_type (ReferenceObject
                                ...
                        )
                        :type (tcp_protocol)
                )

The above example shows a network object, evrtwa1-test, which is an administrator-defined object (as opposed to a built-in object). Below the network object is an example of a protocol description contained in the objects file. In this case it is the FTP protocol, which is of class TCP (being able to recognize this, for example, enables the display of protocols sorted by class).

The consistent formatting of the files made them fairly easy to parse. The real difficulties came in understanding the contents of the files and the interdependencies between them and in getting the parsing exactly right (e.g., determining when the full contents of an object must be included, if and when quotes must be used, the differences between handling standard versus user-defined objects and so on). Check Point uses UIDs to identify each object uniquely and even to identify items within a particular rule.

To display policy information, we created three scripts: Print Rules, Print Objects and Print Services. The biggest design win was, again, a result of modularization: separating the input of policy information from its display. We did this by creating an in-memory map of the rules table using Perl's built-in hashing support. For those who have only ever programmed in C, this and regular-expression handling alone make Perl a language you will fall in love with right away. This separation allowed for easy formatting of the rules once they had been read in.

The first example below shows how a rule is read in. Each rule is added to the rulelist array, and each rule has a number of keys (such as disabled, comments, src, dst, services) that are stored in a hash. Each key then can have associated with it one or more values, which are stored, in order, in an array. So the rulelist array contains hashes, and each key in each hash points to an array containing one or more values. This seems like a complicated arrangement, but in Perl it is actually quite simple:

        if (/^\t\t\t: \(?"?([^\(]+)"?\s/) {
                ...
                $temp = $1;
                ...
                @fields = @{ $hash->{$type} };
                push(@fields, $temp);
                $hash->{$type} = [ @fields ];
                ...
        }
        ...
        push @rulelist, $hash;
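The whole arrangement can be seen end to end in a small, self-contained sketch. The rule data here is invented; only the shape of the structure, an array of hashes whose values are arrays, matches what the scripts build:

```perl
#!/usr/bin/perl
# Self-contained sketch of the in-memory layout described above:
# @rulelist is an array of hashes, and each hash key (src, dst, ...)
# points to an array of one or more values. Data is made up.
use strict;
use warnings;

my @rulelist;

my $hash = {};                           # one rule
push @{ $hash->{src} },      'any';
push @{ $hash->{dst} },      'evrtwa1-test';
push @{ $hash->{services} }, 'FTP', 'http';   # a key can hold several values
push @{ $hash->{action} },   'accept';
push @rulelist, $hash;

# Walk the structure the same way the print scripts do.
for my $href (@rulelist) {
    for my $key (qw(src dst services action)) {
        print "$key: @{ $href->{$key} }\n";
    }
}
```

Note that `push @{ $hash->{src} }, ...` autovivifies the inner array on first use, which is why the real parsing loop never has to initialize each key explicitly.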

Printing the rules once they are stored in this fashion is fairly straightforward (most of the pretty-print formatting code has been removed here):

        ...
        @printorder = ('disabled','src','dst','services','action',
                       'track','track','install','install','time','comments');
        ...
        for $href ( @rulelist ) {
                ...
                for $item ( @printorder ) {
                        ...
                        $val = shift(@{ $href->{$item} });
                        ...
                        print substr($val, 0, $maxlen);
                }
        }

The Delete Rule function simply takes as input the number of the rule to delete, reads in the rules file and outputs it to a secondary file. Once the specified rule number is reached, output is suspended until the end of that specific rule, at which time it continues.
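The filtering pass can be sketched as below. This is a simplification under stated assumptions: each rule begins with a ":rule (" line and ends when its parenthesis nesting returns to zero, and delete_rule() is an invented name, not the product's actual code:

```perl
#!/usr/bin/perl
# Hedged sketch of the Delete Rule pass: copy the rules file,
# suspending output for the body of rule number $target.
use strict;
use warnings;

sub delete_rule {
    my ($target, @lines) = @_;
    my @out;
    my ($rulenum, $depth, $skipping) = (0, 0, 0);
    for my $line (@lines) {
        if (!$skipping && $line =~ /^\s*:rule \(/) {
            $rulenum++;
            if ($rulenum == $target) {
                $skipping = 1;   # suspend output for this rule
                $depth    = 0;
            }
        }
        if ($skipping) {
            $depth += () = $line =~ /\(/g;   # parens opened on this line
            $depth -= () = $line =~ /\)/g;   # parens closed on this line
            $skipping = 0 if $depth == 0;    # rule body is finished
            next;                            # drop the line
        }
        push @out, $line;
    }
    return @out;
}
```

The real script streams from the rules file to a secondary file instead of working on arrays, but the suspend-and-resume logic is the same.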

The Add Rule function is significantly more complex. It must read in a template rule file and then create the necessary elements for the specified rule, including creating new UIDs where necessary. It also must look up existing UIDs and object information in the object file and incorporate that into the new Standard.W file. Lastly, it must intelligently determine where to place the rule in the new file (e.g., before/after an existing rule or at the top or bottom of the ruleset). The following code demonstrates outputting the source and destination for a rule to the new Standard.W file:

        if (/:dst/ || /:src/) {
                print $tabs, $_;
                PrintHeader ($rule_table[$type], $elts[0]);
                foreach $elem (@elts) {
                        if ($elem =~ /^!/) {
                                $elem = substr($elem, 1);
                        }
                        ...
                        if (GetType($elem) eq "dynamic_net_obj") {
                                print "$tabs", "$elem\n";
                        } else {
                                InsertObject($elem);
                        }
                        ...
                }
        ...

Thus the scripts are able to take the line-by-line information contained in the policy and object files, parse it and display it in easy-to-read table format. Alternatively, the scripts can take user-input information and convert that into the format required by the Check Point configuration files, including bringing in any necessary external information.
