Linux as a Telephony Platform
The Panasonic DBS is a widely used digital hybrid key system (not to be confused with the Panasonic KXT). The choice to build a model server around the DBS was predicated both on immediate familiarity with the internals of this switch and on the availability of hardware on which to test.
The DBS has two interfaces that can be used. The first, a built-in serial port, is used for both programming and call detail recording. For this reason, the server must be sensitive to the current “operating state” of the port (i.e., whether it is currently being used for programming). As mentioned earlier, commands to the DBS programming port are represented in a manner analogous to the way one would program the switch from a digital key telephone.
To simplify the DBS serial programming for application use, I created a simple protocol to send the “change value”, “get value” and “clear value” requests which make up the conceptual heart of the operations used for programming the DBS from a telephone. These commands, and the specification of an address for the values being affected, make up the essential elements of my DBS programming protocol.
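As a sketch, the three request types and their address argument might be framed as simple text lines. The verbs and the `FF1-01` style address below are purely illustrative; the actual DBS programming addresses and framing are not reproduced here.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical wire encoding for the three programming requests
 * ("change value", "get value", "clear value").  Illustrative only;
 * not the published DBS programming protocol. */
enum dbs_op { DBS_GET, DBS_SET, DBS_CLEAR };

/* Format one request into buf; returns length, or -1 on overflow. */
int dbs_format_request(char *buf, size_t len, enum dbs_op op,
                       const char *address, const char *value)
{
    int n;
    switch (op) {
    case DBS_GET:   n = snprintf(buf, len, "GET %s\n", address); break;
    case DBS_SET:   n = snprintf(buf, len, "SET %s %s\n", address, value); break;
    case DBS_CLEAR: n = snprintf(buf, len, "CLR %s\n", address); break;
    default:        return -1;
    }
    return (n < 0 || (size_t)n >= len) ? -1 : n;
}
```

Keeping the request grammar this small mirrors the conceptual simplicity of programming the switch from a key telephone.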
The DBS also includes an optional API card. This card has a serial port used for call control and switching. Its low-level protocol is unidirectional and combines a message-packet format with an access-negotiation protocol. The interface is serial, with no provision for hardware flow control. Messages are exchanged between the host machine and the switch, much as with Fujitsu TCSI, Harris HIL or similar equipment. Later versions of the DBS API include two serial ports, one of which has extended features to support a NetWare TSAPI server.
With all these protocol interfaces to maintain, and the need to provide a single point of network entry, a monolithic server cannot be used. Since the points of connection (the API interface and the programming interface) can be shared as a resource by multiple applications simultaneously, the simple “fork()ing” server common in Unix is inappropriate. Instead, there is a set of protocol servers connected to a single “traffic cop”. This multi-protocol traffic cop binds to a single network port, providing universal access through a single network session, and acts as the primary TServer.
In the simplest approach, each protocol (interface) server is run as a child process using anonymous pipes. The TServer accepts and manages user application sessions. Anonymous pipes for the protocol server are very fast and require low overhead, especially with Linux.
In the DBS are several layers of servers for the API portion of the interface. This layering behaves a little like a stream driver stack, and it could probably be implemented as such under a streamable Unix.
At the lowest layer is the DBS packet link driver (papild). This driver negotiates the unidirectional packet exchange with the DBS and normalizes DBS messages to a single fixed-size packet. The reason for a fixed-size packet is to allow all message transfers between the protocol layers to occur through atomic read() and write() calls. This avoids having to track “split” messages on a non-blocking pipe whose buffer has filled.
The DBS packet channel state driver (papicsd) is a second stream layer built from pipes above papild. This layer is aware of the peculiar nature of the DBS wherein all message events are initiated with regard to one or more channels, which are virtual station ports associated with the DBS API card. Requests that may generate reply messages are queued within papicsd. If a request with an expected reply fails, the papicsd layer can generate timeout events. Similarly, the papicsd layer is responsible for glare resolution, a situation that occurs when attempting to take a station off hook at the same time that an incoming call occurs. Finally, the papicsd layer can trap ridiculous or meaningless commands, such as hanging up a port that is already known to be on hook, and return generated error messages to the application.
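The request queuing and timeout generation in papicsd might be tracked per channel along these lines. The channel count and the five-second reply timeout are assumptions for illustration:

```c
#include <time.h>

/* Per-channel pending-request tracking, as papicsd might keep it.
 * The channel count and timeout value are illustrative. */
#define PAPI_CHANNELS 24
#define REPLY_TIMEOUT 5              /* seconds to wait for a DBS reply */

struct pending { int active; time_t deadline; };
static struct pending pending[PAPI_CHANNELS];

/* Record that a request expecting a reply went out on a channel. */
void request_sent(int ch, time_t now)
{
    pending[ch].active = 1;
    pending[ch].deadline = now + REPLY_TIMEOUT;
}

/* Record that the expected reply arrived. */
void reply_seen(int ch)
{
    pending[ch].active = 0;
}

/* Called periodically: returns a channel whose reply has timed out
 * (clearing it, so a timeout event fires once), or -1 if none. */
int check_timeouts(time_t now)
{
    for (int ch = 0; ch < PAPI_CHANNELS; ch++)
        if (pending[ch].active && now >= pending[ch].deadline) {
            pending[ch].active = 0;
            return ch;
        }
    return -1;
}
```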
By having a separate process for papild and papicsd, they become easier to write and debug. Also, since the DBS has no flow control, papild can be dedicated to I/O-specific operations and, perhaps, even run under real-time scheduling constraints in Linux (2.0 and above). This, in conjunction with the non-blocking atomic pipe as a means to drive packets up and down the protocol layers, requires less overhead and works much more smoothly than some of Panasonic's original QNX application products for the DBS.
On the programming interface (labeled SMDR on the DBS), a single process is used. When no programming requests are active, it simply spools CDR data for call accounting purposes. When a programming request occurs, the server switches into a programming-port interface state and remains there until 30 seconds pass without any programming requests being made.
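That two-state behavior can be sketched as a small state machine; the type and function names are illustrative, not the actual server's:

```c
#include <time.h>

/* The SMDR port server's two operating states: spooling CDR records,
 * or serving programming requests.  It falls back to CDR mode after
 * 30 idle seconds, per the behavior described above. */
enum smdr_state { SMDR_CDR, SMDR_PROGRAMMING };

struct smdr_port {
    enum smdr_state state;
    time_t last_request;     /* time of the last programming request */
};

/* A programming request arrived: enter (or stay in) programming mode. */
void smdr_on_program_request(struct smdr_port *p, time_t now)
{
    p->state = SMDR_PROGRAMMING;
    p->last_request = now;
}

/* Call periodically; returns the (possibly updated) state. */
enum smdr_state smdr_tick(struct smdr_port *p, time_t now)
{
    if (p->state == SMDR_PROGRAMMING && now - p->last_request >= 30)
        p->state = SMDR_CDR;             /* idle timeout expired */
    return p->state;
}
```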
In addition to the TServer itself, there is one more protocol layer on the papi stack. This utility layer maps DBS API channel states to station card and trunk states based on the live DBS API data. This mapping layer, papiltu, is essentially an SPI-like layer, in that it represents the DBS's current state in an intermediate form, expressed as line and trunk activity, which may make more sense than the DBS API channel architecture. This layer also holds the DBS station directory map, which is learned during initialization from the programming interface. The information it maps and maintains is shared with other DBS-related protocol servers (such as the SMDI server) through a shared memory block.
By using the programming port and papiltu, the papi protocol stack is able to automatically configure itself by reading current DBS programming, without user application programming and setup. This replaces the clumsy manual approach of a “channel configuration” screen to separately program the API port configuration, as used in earlier Panasonic application products. The papiltu functionality was originally an integral part of the primary TServer application server, but with the restrictions on publication of DBS API material, it was made into a separate layer to facilitate publication of the primary TServer source code even if none of the papi stack sources can be published. This matter is still under discussion. (See comments at the very end of article.)
The TServer line/trunk state maps actually make the TServer behave a little like the SPI layer of TAPI or TSAPI, as it represents an intermediate form between a generic PBX model and the actual physical model. In this case, in addition to driving direct hardware interface protocols, our TServer can communicate with logical protocol drivers which are not directly connected to any physical hardware.
One such logical protocol driver is the DBS SMDI driver. Connected through anonymous pipes, like the papicsd and smdr layer drivers, the smdi protocol driver talks to the shared papicsd by going back through its connection to the TServer. Since it does not depend on a single shared hardware resource, as does a real (physical) driver, an instance of the SMDI driver is created (forked) for each client application. This driver inherits the TServer socket accepted for the client connection and the pipe back to TServer; hence, communication between the SMDI emulation driver and the client application is always direct (i.e., it incurs no TServer routing overhead).
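The fork-per-client pattern can be sketched as follows, with a trivial echo loop standing in for the real smdi driver logic; the function names are hypothetical:

```c
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Fork one driver instance per client: the child inherits the
 * accepted client socket and the pipe back to the TServer, so
 * client traffic never passes through TServer routing.  Sketch
 * only; the driver callback stands in for the real smdi loop. */
pid_t spawn_logical_driver(int client_fd, int tserver_pipe,
                           void (*driver)(int client, int tserver))
{
    pid_t pid = fork();
    if (pid == 0) {                    /* child: run the driver loop */
        driver(client_fd, tserver_pipe);
        _exit(0);
    }
    if (pid > 0)
        close(client_fd);              /* parent drops its socket copy */
    return pid;
}

/* A trivial stand-in driver: echo one message back to the client. */
static void echo_driver(int client, int tserver)
{
    char buf[128];
    ssize_t n = read(client, buf, sizeof buf);

    (void)tserver;                     /* unused in this stand-in */
    if (n > 0)
        write(client, buf, n);
}
```

Because the child owns the accepted socket outright, the TServer remains free to go back to accepting new sessions.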
Based on the SMDI example, one can easily create protocol modules and matching application interfaces to fully emulate TSAPI or TAPI services directly. One can also talk to a vendor-specific hardware protocol directly (such as the papi layers or SMDR interface) or create entirely new arbitrary integration protocols. The TServer, as a single point of contact traffic cop, allows the client to perform authentication and to select the interface protocol that the client application wishes to use.