LiS: Linux STREAMS

Not all networking software is based on BSD sockets. System V UNIX systems and most commercial networking code use STREAMS. The LiS project was developed to make STREAMS available for Linux, with the aim of making Linux the best UNIX platform for developing, debugging and using STREAMS software.

LiS Features and Implementation

A complete description of STREAMS would be too long for this article; some books you can read to learn more about STREAMS are listed in Resources. In a few words, LiS features include:

  • support for typical STREAMS modules and drivers

  • ability to use binary-only drivers

  • convenient debugging facilities

Typical STREAMS Facilities

Many similarities exist between the implementation of LiS and SVR4 STREAMS. This is because the initial project members followed the “Magic Garden” (see Resources 2) as a design guideline. The current maintainers were also heavily influenced by SVR4 STREAMS, because they had been writing STREAMS drivers for SVR4 since 1990. Thus, the stream head structure, queue structure, message structure, etc., follow the SVR4 model.

Differences between the two do exist. SVR4 does not allow a STREAMS multiplexor stack to use the same driver at more than one level. For example, if we had a STREAMS multiplexor driver called “DLPI” and another called “NPI”, SVR4 STREAMS would disallow the stack NPI(SNA) <-> DLPI(QLLC) <-> NPI(X.25) <-> DLPI(LAPB). LiS allows these combinations, since we could see no harm in such configurations.
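
To make the multiplexor idea concrete, here is a minimal user-space sketch of how one level of such a stack is assembled with the standard STREAMS I_LINK ioctl; repeating the same two steps at each level builds the multi-level configurations described above. The device names /dev/npi and /dev/dlpi are purely illustrative, not actual LiS driver names.

    /* Hypothetical user-space sketch: assembling one level of a
     * multiplexor stack with the standard STREAMS I_LINK ioctl.
     * The device names are illustrative, not actual LiS drivers. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stropts.h>
    #include <unistd.h>

    int main(void)
    {
        int upper = open("/dev/npi", O_RDWR);   /* upper multiplexor (made-up name) */
        int lower = open("/dev/dlpi", O_RDWR);  /* stream to link beneath it (made-up name) */

        if (upper < 0 || lower < 0) {
            perror("open");
            return 1;
        }

        /* Link the lower stream under the upper multiplexor.  Repeating
         * open() + I_LINK at each level builds the multi-level stack. */
        int muxid = ioctl(upper, I_LINK, lower);
        if (muxid < 0) {
            perror("I_LINK");
            return 1;
        }

        /* The saved mux id is what I_UNLINK needs to take the stack apart. */
        ioctl(upper, I_UNLINK, muxid);
        close(lower);
        close(upper);
        return 0;
    }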

The configuration file used for LiS is modeled after the SVR4 sdevice and mdevice files. However, the LiS syntax is different and combines into a single file the functions that SVR4 specified in two files. The LiS build process (Makefiles) allows individual drivers to have their own config files; these are all combined into one master config file, which is then used to configure LiS at build time.

In SVR4, the STREAMS executive is a linkable kernel package; it is not hard-wired into the kernel. In LiS, the STREAMS executive is a runtime-loadable kernel module, one step more dynamic than SVR4 STREAMS.

A quick overview of the LiS implementation reveals a STREAM as a full-duplex chain of modules (see Figure 4). Each module consists of a queue pair: one queue for data being read and another for data being written. Each module also has several data structures providing the operations (i.e., functions) it needs, as well as statistics and other data.

Figure 4. Queues in a STREAM
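
As a rough illustration of those data structures, the sketch below shows how an SVR4-style STREAMS module (and hence a LiS module) declares its queue pair. The module name “xmod”, its id number and the water marks are invented for the example, the function bodies are omitted, and the header name assumes LiS's SVR4-compatible <sys/stream.h>.

    /* Sketch of the per-module data structures in SVR4-style STREAMS
     * code, which LiS follows.  The module name "xmod", its id number
     * and the water marks are invented; function bodies are omitted. */
    #include <sys/stream.h>     /* LiS provides SVR4-compatible headers */

    static int xmod_open(queue_t *q, dev_t *devp, int oflag, int sflag, cred_t *crp);
    static int xmod_close(queue_t *q, int flag, cred_t *crp);
    static int xmod_rput(queue_t *q, mblk_t *mp);   /* read-side put procedure */
    static int xmod_rsrv(queue_t *q);               /* read-side service procedure */
    static int xmod_wput(queue_t *q, mblk_t *mp);   /* write-side put procedure */
    static int xmod_wsrv(queue_t *q);               /* write-side service procedure */

    /* Identification and flow-control parameters shared by both queues. */
    static struct module_info xmod_minfo = {
        0x5555,     /* module id number (made up) */
        "xmod",     /* module name */
        0,          /* minimum packet size */
        INFPSZ,     /* maximum packet size */
        4096,       /* high-water mark */
        1024        /* low-water mark */
    };

    /* One qinit per direction: the queue pair described above. */
    static struct qinit xmod_rinit = {
        xmod_rput, xmod_rsrv, xmod_open, xmod_close, NULL, &xmod_minfo, NULL
    };
    static struct qinit xmod_winit = {
        xmod_wput, xmod_wsrv, NULL, NULL, NULL, &xmod_minfo, NULL
    };

    /* The streamtab ties the pair together; STREAMS locates it by name. */
    struct streamtab xmod_info = { &xmod_rinit, &xmod_winit, NULL, NULL };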

Module operations are provided by the programmer and include procedures for processing upstream and downstream messages. Messages can be queued for deferred processing; LiS guarantees that a queue's service procedure will be called when its queued messages can be processed.
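
The following is a sketch, not code from a real LiS driver, of how a pass-through module's write-side put and service procedures typically cooperate: the put procedure queues a message with putq() when downstream is congested, and the framework later calls the service procedure, which drains the queue with getq().

    /* Sketch (not from a real LiS driver) of a pass-through module's
     * write-side put and service procedures. */
    static int xmod_wput(queue_t *q, mblk_t *mp)
    {
        if (mp->b_datap->db_type >= QPCTL) {
            putnext(q, mp);             /* priority messages bypass flow control */
        } else if (q->q_first != NULL || !canputnext(q)) {
            putq(q, mp);                /* defer: the service procedure will run later */
        } else {
            putnext(q, mp);             /* fast path: pass the message along */
        }
        return 0;
    }

    static int xmod_wsrv(queue_t *q)
    {
        mblk_t *mp;

        /* Drain our queue while the next module can accept data. */
        while ((mp = getq(q)) != NULL) {
            if (!canputnext(q)) {
                putbq(q, mp);           /* put it back; we will be rescheduled */
                break;
            }
            putnext(q, mp);
        }
        return 0;
    }

Because putbq() leaves the message queued and the framework reschedules the service procedure once downstream drains, flow control propagates back up the stream with no extra code in the driver.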

Most of the LiS implementation deals with these queues and also with the message data structures used to send data through the STREAM. Messages carry a type code and are made of one or more message blocks. Only pointers to messages are passed from one module to the next, so there is no data copy overhead.
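
For illustration only, here is how a module might build such a message from two blocks using the standard allocb(), linkb() and putnext() utilities; the sizes and the xmod_send() name are invented for the sketch.

    /* Illustration only: building a two-block message with the standard
     * allocb(), linkb() and putnext() utilities.  The sizes and the
     * xmod_send() name are invented for the sketch. */
    static void xmod_send(queue_t *q)
    {
        mblk_t *hdr, *body;

        hdr  = allocb(16, BPRI_MED);    /* room for a small protocol header */
        body = allocb(64, BPRI_MED);    /* room for the payload */
        if (hdr == NULL || body == NULL) {
            if (hdr)  freemsg(hdr);
            if (body) freemsg(body);
            return;                     /* real code might use bufcall() and retry */
        }

        hdr->b_datap->db_type = M_PROTO;    /* the message's type code */
        /* ...fill in the header between hdr->b_rptr and hdr->b_wptr... */
        hdr->b_wptr += 16;

        body->b_datap->db_type = M_DATA;
        /* ...copy the payload in and advance body->b_wptr accordingly... */
        body->b_wptr += 64;

        linkb(hdr, body);               /* chain the two blocks into one message */
        putnext(q, hdr);                /* only the pointer is handed downstream */
    }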

The head of the STREAM is another interesting piece of software. In Figure 5, you can see how it is reached from the Linux VFS (Virtual File System) layer, which interfaces the kernel with the file systems. Note that even though Linux does not have a clean, isolated VFS layer, Linux i-nodes are v-nodes in spirit, and its file system layer can be considered a VFS. For a detailed description of the implementation, read Chapter 7 of the “Magic Garden” (Resources 2).

Figure 5. The STREAM Head
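
A small user-space sketch shows the stream head's role as the bridge between the VFS and the stream: ordinary open, write and read calls are turned into STREAMS messages. The device /dev/xdev and the module name “xmod” are hypothetical, and the code assumes the LiS-provided <stropts.h>.

    /* Sketch of how a user process reaches a stream through ordinary
     * VFS calls; the stream head translates them into STREAMS messages.
     * "/dev/xdev" and the module name "xmod" are hypothetical, and the
     * code assumes the LiS-provided <stropts.h>. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stropts.h>
    #include <unistd.h>

    int main(void)
    {
        /* open() goes through the VFS to the stream head, which builds
         * the stream down to the driver. */
        int fd = open("/dev/xdev", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        /* I_PUSH inserts a module between the stream head and the driver. */
        if (ioctl(fd, I_PUSH, "xmod") < 0) { perror("I_PUSH"); return 1; }

        /* write() becomes an M_DATA message sent downstream; read()
         * consumes M_DATA messages arriving at the stream head. */
        if (write(fd, "ping", 4) < 0) { perror("write"); return 1; }

        char buf[64];
        ssize_t n = read(fd, buf, sizeof buf);
        if (n >= 0)
            printf("read %zd bytes back from the stream\n", n);

        close(fd);      /* closing the file dismantles the stream */
        return 0;
    }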

Binary-Only Drivers

LiS also makes provision for linking with binary-only drivers. This allows companies such as Gcom, which have proprietary drivers, to port their driver code to LiS and distribute binaries. This is an important feature if we expect companies to port their existing SVR4 STREAMS drivers to LiS: the more such drivers become available, the more the functionality of the Linux kernel is enhanced.

Debug Facilities

LiS debugging features are especially convenient and mark another point of departure from SVR4.

These facilities include some general-purpose debug utilities, such as message printers, but they also include significant aids that can really help with debugging, such as the ability to selectively trace particular operations (getmsg calls, for example).

The memory allocator keeps file and line numbers close to allocated memory areas. Combine that with the ability to print out all the in-use memory areas, and you have a tool for finding memory leaks in your drivers.
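
The general technique is easy to picture. The sketch below is not LiS's allocator, just an illustration of the idea: stash __FILE__ and __LINE__ in a header beside each allocation and walk the list of live blocks when you suspect a leak.

    /* Not LiS's allocator, just a sketch of the general technique:
     * record __FILE__ and __LINE__ beside every allocation and walk
     * the list of live blocks when hunting for a leak. */
    #include <stdio.h>
    #include <stdlib.h>

    struct alloc_hdr {
        struct alloc_hdr *next;
        const char       *file;
        int               line;
        size_t            size;
    };

    static struct alloc_hdr *in_use;    /* list of live allocations */

    static void *dbg_alloc(size_t size, const char *file, int line)
    {
        struct alloc_hdr *h = malloc(sizeof *h + size);
        if (h == NULL)
            return NULL;
        h->file = file;
        h->line = line;
        h->size = size;
        h->next = in_use;
        in_use  = h;
        return h + 1;                   /* the caller's memory follows the header */
    }

    static void dbg_dump_in_use(void)
    {
        struct alloc_hdr *h;
        for (h = in_use; h != NULL; h = h->next)
            printf("%lu bytes still allocated from %s:%d\n",
                   (unsigned long)h->size, h->file, h->line);
    }

    /* Callers use the macro so file and line are captured automatically. */
    #define DBG_ALLOC(size) dbg_alloc((size), __FILE__, __LINE__)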

Usage statistics are designed to help, not to overload the user with unnecessary information. The streams command prints out a good deal of useful information about LiS operation. There is even a debug bitmap that causes LiS to trigger different debug facilities. One of them is the ability to time various operations using the high-resolution timer; thus, the user can get fine-grained timings for drivers that use LiS tools, with no extra code in the driver.

Last but not least, LiS allows module debugging in user space by emulating the whole STREAMS framework. A module can be developed easily in user space and then loaded into the kernel once it works. This is achieved by a “port” of LiS that runs (in a dummied-up manner) in user space on Linux.

STREAMS modules can be tested by surrounding them with test modules and then driving known sequences of messages through the module under test. The LiS loop driver is well suited for placement below the module being tested, as it behaves like a simple echo server. The stream head may well be all that is needed above.
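
As a rough sketch of such a test harness (not taken from the LiS sources), the program below pushes a hypothetical module “xmod” over the loop driver and checks that a known message comes back unchanged. The clone-device path /dev/loop_clone is an assumption; check your LiS installation for the real name.

    /* Rough sketch of a test harness (not from the LiS sources):
     * push a hypothetical module "xmod" over the loop driver and check
     * that a known message comes back unchanged.  The clone-device path
     * "/dev/loop_clone" is an assumption; check your LiS installation. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <stropts.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/loop_clone", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        /* The module under test sits between the stream head and the
         * echoing loop driver. */
        if (ioctl(fd, I_PUSH, "xmod") < 0) { perror("I_PUSH"); return 1; }

        /* Drive a known message down through the module... */
        char out[] = "known test message";
        struct strbuf wbuf = { 0, sizeof out, out };
        if (putmsg(fd, NULL, &wbuf, 0) < 0) { perror("putmsg"); return 1; }

        /* ...and verify the echoed copy that comes back up. */
        char in[128];
        struct strbuf rbuf = { sizeof in, 0, in };
        int flags = 0;
        if (getmsg(fd, NULL, &rbuf, &flags) < 0) { perror("getmsg"); return 1; }

        int ok = rbuf.len == (int)sizeof out && memcmp(in, out, sizeof out) == 0;
        printf("%s\n", ok ? "PASS" : "FAIL");
        close(fd);
        return 0;
    }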

______________________

Comments


Does any kernel network module work using STREAMS?

Does any network module in the Linux 2.4 kernel use STREAMS? Could anyone show me an example of a kernel network module that uses STREAMS for communication?
Thanks

See strxnet.
