Networking with the Printer Port

One of the strengths of Linux is its ability to serve both as an engine for powerful number-crunchers and as effective support for minimal computer systems. The PLIP implementation is an outstanding resource in the latter realm, and this article shows its internals at the software level.
Plugging the Driver in Linux

Being able to transmit and receive data is not the whole of a network driver's job. The driver also needs to interface with the rest of the kernel in order to fit into the overall system. The PLIP device driver devotes roughly one quarter of its source code to these interface issues, and I feel they are worth introducing here.

Basically, a network interface needs to be able to send and receive packets. Network drivers are organized into a set of “methods”, as character drivers are (see Dynamic Kernels: Discovery, LJ April 1996). Sending a packet is easy: one of the methods is dedicated to packet transmission, and the driver just implements that method to push data out to the network.
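
As a concrete illustration, here is a minimal sketch of what such a transmit method can look like against the 2.0-era interface described in this article. Every snarf_ name is a hypothetical stand-in (PLIP's own transmit method does the real work over the parallel lines), and the hardware access itself is only hinted at in a comment:

/*
 * A minimal sketch of a transmit method, assuming the 2.0-era
 * struct device interface; all snarf_ names are hypothetical.
 */
#include <linux/netdevice.h>
#include <linux/skbuff.h>

static int snarf_xmit(struct sk_buff *skb, struct device *dev)
{
    if (dev->tbusy)
        return 1;                   /* busy: ask the network code to retry */
    dev->tbusy = 1;

    /* feed skb->data (skb->len bytes) to the hardware here */

    dev_kfree_skb(skb, FREE_WRITE); /* the buffer is ours to release */
    dev->tbusy = 0;
    return 0;                       /* zero means the packet was accepted */
}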

Receiving a packet is somewhat more difficult, as the packet arrives through an interrupt and the driver must actively manage the received data. Packet reception for any network interface is managed by exploiting the so-called “bottom halves”.

In Linux, interrupt handling code is split into two halves. The top half is the hardware interrupt, which is triggered by a hardware event and is executed immediately. The bottom half is a software routine that gets scheduled by the kernel to run without interfering with normal system operation. Bottom halves are run whenever a process returns from a system call and when “slow” interrupt handlers return. When a slow handler runs, all of the processor registers are saved and hardware interrupts are not disabled; therefore, it's safe to run the pending bottom halves when such handlers return. It's interesting to note that new kernels in the 2.1 hierarchy no longer differentiate between fast and slow interrupt handlers.

A bottom-half handler must be “marked” before it will run; marking consists of setting a bit in a kernel bit mask, so that the kernel knows the bottom half is pending and will run it at the next opportunity. The immediate task queue, used by the PLIP driver, is implemented as a bottom half. When a task is queued, the caller must call mark_bh(IMMEDIATE_BH), and the queue will be run as soon as a process is done with a system call or a slow handler returns.
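
The following fragment sketches this idiom under the same 2.0-era assumptions; the snarf_ names are made up for the example:

#include <linux/tqueue.h>
#include <linux/interrupt.h>

static void snarf_bh_handler(void *data)
{
    /* deferred work runs here, outside of the hardware interrupt */
}

static struct tq_struct snarf_task = {
    NULL,               /* next: managed by the queue itself */
    0,                  /* sync flag */
    snarf_bh_handler,   /* routine to run */
    NULL                /* argument passed to the routine */
};

static void snarf_defer_work(void)
{
    queue_task(&snarf_task, &tq_immediate); /* put the task on the queue */
    mark_bh(IMMEDIATE_BH);                  /* mark the bottom half as pending */
}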

Getting back to network interfaces, when a driver receives a network datagram, it must make the following call:

netif_rx(struct sk_buff *skb)

where skb is the buffer hosting the received packet; PLIP calls netif_rx from plip_receive_packet. The netif_rx function queues the packet for later processing and calls:

mark_bh(NET_BH)

Then, when bottom halves are run, the packet is processed.
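
A typical receive path therefore looks more or less like the sketch below; snarf_rx is hypothetical, while PLIP does the equivalent work in plip_receive_packet before handing the buffer over:

#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/etherdevice.h>
#include <linux/string.h>

static void snarf_rx(struct device *dev, unsigned char *data, int len)
{
    struct sk_buff *skb = dev_alloc_skb(len);

    if (skb == NULL)
        return;                               /* no memory: drop the packet */

    skb->dev = dev;
    memcpy(skb_put(skb, len), data, len);     /* copy the data into the buffer */
    skb->protocol = eth_type_trans(skb, dev); /* PLIP mimics Ethernet framing */

    netif_rx(skb);        /* queue it and mark NET_BH for later processing */
}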

In practice, something more is needed to fit a network interface into the Linux kernel; the module must register its own interfaces and initialize them. Moreover, an interface must export a few house-keeping functions that the kernel will call. All of this is performed by a few short functions, listed below:

  • plip_init: This function is in charge of initializing the network device; it is called when init_module registers its devices. The function checks to see if the hardware is installed in the system and assigns fields in the struct device that describes the interface.

  • plip_open: Whenever an interface is brought up, its open function is called by the kernel. The function must prepare the interface to become operational (similar to the open method for character devices).

  • plip_close: This function is the reverse of plip_open.

  • plip_get_stats: This function is called whenever statistical information is needed; for example, the output of ifconfig shows values returned by this function.

  • plip_config: If a program changes the hardware configuration of the device, this function is called. PLIP allows you to specify the interrupt line at run time, because probing can't be performed safely when a module is loaded. Most parallel ports are configured to use the default interrupt line.

  • plip_ioctl: Any interface that needs to implement device-specific ioctl commands must have an ioctl method. PLIP uses it to allow changing its timeout values through the plipconfig program, although I have never needed to play with these numbers.

  • plip_rebuild_header: This function is used to build an Ethernet header in front of the IP data. Ethernet interfaces that use ARP don't need to implement this function, as the default one for the Ethernet interface does all of the work.

  • init_module: As you probably already know, this is the entry point to the modularized driver. When a network interface is loaded into a running system, its init_module should call register_netdev, passing a pointer to struct device. Such a structure should be partly initialized and must include a pointer to an init function, which completes the initialization of the structure; for PLIP, that function is plip_init. A minimal registration sketch follows this list.
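
Putting the pieces together, a modularized interface can be registered more or less as in the sketch below. It follows the 2.0-era conventions described in this article; every snarf_ name is a hypothetical stand-in for the corresponding plip_ function, and a real driver would fill in the remaining methods as well:

/*
 * A sketch of modularized registration, assuming the 2.0-era
 * struct device layout; all snarf_ names are hypothetical.
 */
#include <linux/module.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>

static int snarf_open(struct device *dev)
{
    dev->start = 1;             /* like plip_open: become operational */
    MOD_INC_USE_COUNT;
    return 0;
}

static int snarf_close(struct device *dev)
{
    dev->start = 0;             /* the reverse of the open method */
    MOD_DEC_USE_COUNT;
    return 0;
}

/* Called by register_netdev() to complete the partly-built structure */
static int snarf_init(struct device *dev)
{
    ether_setup(dev);           /* sensible Ethernet-like defaults */
    dev->open = snarf_open;     /* plug in the "methods" listed above */
    dev->stop = snarf_close;
    /* dev->hard_start_xmit, dev->get_stats, dev->do_ioctl, ... likewise */
    return 0;
}

static struct device snarf_dev = {
    "snarf0",                   /* interface name */
    0, 0, 0, 0,                 /* shared-memory fields, unused here */
    0x378, 7,                   /* base address and IRQ of a first parallel port */
    0, 0, 0, NULL,              /* internal flags and list pointer */
    snarf_init                  /* the function that completes the setup */
};

int init_module(void)
{
    return register_netdev(&snarf_dev);
}

void cleanup_module(void)
{
    unregister_netdev(&snarf_dev);
}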

These functions, along with hard_start_xmit, the one method responsible for actual packet transmission, are all that is needed to run a network interface within Linux. Although I admit there's more to know in order to write a real driver, I hope the actual sources prove interesting enough to fill in the holes.
