How to Build LSB Applications
The Linux Standard Base (LSB) specifies an interface between an application and a runtime environment. Many distributions have achieved certification for their runtime environments. This article outlines the steps needed to build applications that adhere to the LSB interface.
The LSB Project was founded in 1997 to address the application compatibility problem that was beginning to emerge. Different distributions were using different versions of upstream software and building them with different options enabled. The result was that an application built on one distribution might not run on another distribution. Worse yet, the application often would not work on a different version of the same distribution.
Originally, the LSB was intended to create a common reference implementation for the base of a GNU/Linux system. In addition to the reference implementation, a written specification was to be developed. This idea wasn't well received by many of the distributions that had considerable investments in their own base software, which they perceived as being a competitive advantage.
After further discussion among the interested parties, the LSB Project underwent a fundamental shift in focus in order to achieve consensus among the entire community. The shift gave priority to the written specification over the implementation, and it defined the LSB as a behavioral specification instead of a list of upstream feature/version pairs. This new focus was realized as a three-pronged approach: a written specification, which defines the behavior of the system; a formal test suite, which measures an implementation against the specification; and a sample implementation, which provides an example of the specification.
The LSB Specification actually is made up of a generic portion, the gLSB, and an architecture-specific portion, archLSB. The gLSB contains everything that is common across all architectures; we try hard to define as much as possible in the gLSB. The archLSBs contain the things that are unique to each processor architecture, such as the machine instruction set and C library symbol versions.
As much as possible, the LSB builds on existing standards, including the Single UNIX Specification (SUS), which has evolved from POSIX, the System V Interface Definition (SVID) and the System V Application Binary Interface (ABI). The LSB uses the ELF definitions from the ABI and the interface behaviors from the SUS. It adds the formal listing of what interfaces are available in which library as well as the data structures and constants associated with them. See the “Linux Standard Base Libraries” sidebar for the list of libraries currently specified.
Linux Standard Base Libraries
As of LSB 1.3, the following shared libraries are specified in the LSB. All other libraries must be linked statically into the application.
Base libraries: libc, libm, libpthread, libpam, libutil, libdl, libcrypt, libncurses and libz.
Graphics libraries: libX11, libXt, libXext, libSM, libICE and libGL.
As the LSB continues to grow in future versions, so will this list of libraries.
In addition to the ABI portion, the LSB also specifies a set of commands that may be used in scripts associated with the application, and it mandates that the application adhere to the Filesystem Hierarchy Standard (FHS).
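As a sketch of what such a script might look like, the following install fragment uses only commonly available commands and FHS locations; the package name myapp and the DESTDIR staging variable are illustrative assumptions, not anything the LSB itself mandates:

```shell
#!/bin/sh
# Illustrative install script for a hypothetical package "myapp".
# The FHS places add-on software under /opt; DESTDIR is a staging
# prefix used here so the script can run without touching the real /opt.
DESTDIR=${DESTDIR:-}
install -d "$DESTDIR/opt/myapp/bin"
install -m 755 myapp "$DESTDIR/opt/myapp/bin/myapp"
```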
One additional component of the LSB is the packaging format. The LSB specifies the package file format to be a subset of the RPM file format. The LSB does not specify that the distribution has to be based on RPM, however, only that it has some way of correctly processing a file in the RPM format.
One final item to mention is the name of the program interpreter. The program interpreter is the first thing executed when an application is started, and it is responsible for loading the rest of the program and the shared libraries into the process address space. Traditionally, /lib/ld-linux.so.2 has been used, but the LSB specifies /lib/ld-lsb.so.1 instead on IA32; other architectures generally use /lib/ld-arch-lsb.so.1. This gives the operating system a hook early in process execution in case something special needs to be done to provide the correct runtime environment to the application. The program interpreter can be changed by passing a linker option through GCC.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem to always have the right tool for the job.
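The combination described above can be written as a single command; the path and search string here are placeholders:

```shell
# List every .log file under /home that contains the string "ERROR".
# find gathers the candidate files; grep -l prints only the names of
# files that match, rather than the matching lines themselves.
find /home -type f -name '*.log' -exec grep -l 'ERROR' {} +
```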
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantage of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide