Code in the Linux kernel can be executed in two basic ways. One is to be called by an interrupt, and the other is to be called from a user program (that's my required “white lie” for this column). User programs call code in the kernel through a system call, which is essentially an unusual type of function call.
Of course, when user code calls privileged kernel code, the kernel has to very carefully check the validity of its arguments in order to avoid accidentally doing harm of any sort. If the code is not safe for anyone but the superuser to execute, there are routines for checking that, too.
Creating a system call is more difficult than creating a normal C language function, but not too difficult. There is certainly more to it than declaring a function in a header file—and for system calls, the only change that is needed to a header file is not a function declaration.
The first thing that you need to do is either modify an existing file in the kernel, or create a new file to be compiled. If you create a new file, we will assume that you are able to add it to the appropriate Makefile and use the proper #include statements for the code you are writing. You will want to make sure that <linux/errno.h> is included, because system calls need to be able to return error codes, and those error codes are all defined in errno.h.
You will need to create a function called sys_name, where name is the name of the system call you are creating. The function must have the return specification asmlinkage int, and it may have any number of arguments between 0 and 5, inclusive. The arguments must all be the same size as a long; they may not be structures. (Or, at least, not structures larger than a long. It would not be wise to make structures the same size as a long because integer arithmetic is done on them. What is a “signed” structure? If you don't want to think about that question, do not use small structures. In truth, don't use them at all.)
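Putting that together, a minimal system call might look like the sketch below. The name sys_example and its argument are made up for illustration, and the two #defines stand in for kernel definitions so the sketch compiles on its own; in the kernel, asmlinkage comes from the kernel headers and EINVAL from <linux/errno.h>.

```c
#define asmlinkage      /* kernel linkage macro; empty stub here */
#define EINVAL 22       /* normally from <linux/errno.h> */

/* A hypothetical system call taking one long-sized argument. */
asmlinkage int sys_example(int flags)
{
        if (flags < 0)
                return -EINVAL;  /* errors go back as negated errno values */
        return flags + 1;        /* positive results are normal return values */
}
```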
The function returns errors as -ENAME, where ENAME is one of the error codes defined in errno.h (for example, -EINVAL for a bad argument). Negative numbers are treated as error values on return (we will see how later) and positive numbers are considered normal return values. This means that on systems with 32-bit long values, only 31 bits are available for passing back return values. On 64-bit systems like Linux/Alpha, only 63 bits are available. This makes it difficult to pass addresses in the high half of the range back to user programs.
There are two ways around this. One is to make one of the function's arguments the address of a user-space variable in which to place the return value. The other is to find some other way of returning an error and handle the return value specially. The first way is, to the best of my knowledge, always preferable, so I will not explain the second.
Before reading or writing any area in a user program from the kernel, the verify_area() function must be called. In normal use on a 486 or Pentium, it is less important for kernel stability than on the 386 (although it helps detect errors much more cleanly and avoids having processes die in kernel mode), but on the 386 it is absolutely essential to system stability, because the 386 does not honor memory protection when it is in “supervisor” mode, which is the mode the kernel runs in. This means, for instance, that the CPU will happily write to read-only user-space memory from the kernel.
The verify_area() function takes three arguments. First is one of VERIFY_READ or VERIFY_WRITE. Second is the address in the current user program that is to be verified. Third is the length of the memory area you wish to read or write. It returns 0 if the memory area is valid, and -EFAULT if it is not. A common idiom looks like this:
int error;

error = verify_area(VERIFY_WRITE, buf, len);
if (error)
        return error;
...
Please note that verify_area only verifies addresses in user memory space, not kernel memory space. Memory in kernel space is never swapped out, and is always readable and writable. On the x86 family, the fs segment register is used in the kernel to select the user-space memory of the current process. Other architectures do this differently. This functionality is abstracted out into a few useful functions, explained below.
Your work when writing your system call will be much easier if you do as much testing as possible before committing any resources to the task at hand. As a general rule, tests are done in this order:
1. Run all necessary verify_area tests.
2. Do (almost) all other tests in an appropriate order, including normal permission testing.
3. Do suser() or fsuser() tests if appropriate. These should only be called after the other tests have succeeded, because BSD-style root-privilege accounting may be added to the kernel at some point. See the comments in include/linux/kernel.h.
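This ordering can be sketched as follows. The call sys_setthing is hypothetical, and verify_area, suser(), and the superuser flag are stubbed so the sketch stands alone; in the kernel they come from the usual headers (and you would write the result with memcpy_tofs rather than a direct dereference).

```c
#define EFAULT 14
#define EPERM   1
#define VERIFY_WRITE 1

/* Stubs standing in for the real kernel functions. */
static int verify_area(int type, void *addr, int len)
{
        return addr ? 0 : -EFAULT;   /* stub: treat NULL as a bad address */
}

static int superuser;                /* stub state for the real suser() */
static int suser(void) { return superuser; }

int sys_setthing(int *result)
{
        int error;

        /* 1. verify_area tests first */
        error = verify_area(VERIFY_WRITE, result, sizeof(int));
        if (error)
                return error;

        /* 2. other tests (normal permission checks) would go here */

        /* 3. suser()/fsuser() tests last */
        if (!suser())
                return -EPERM;

        *result = 42;                /* stand-in for memcpy_tofs() */
        return 0;
}
```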
The suser() function is used to determine whether the process has root permissions for most activities. However, the fsuser() function must be used for all filesystem-related permission checks. This distinction allows servers to assume the file permissions of a user without “becoming” the user, even briefly. This is important because if the server exchanges uids such that it “becomes” the user for even a moment, the user can disturb the process in various ways, potentially breaching security. By simply using the fsuid and fsgid fields instead, the server avoids this nightmare. For this to work, all kernel filesystem permission testing must use the fsuser() function to test for superuser status, and must look at current->fsuid and current->fsgid for normal permissions on filesystem objects. (For more details on the current pointer, see the definition of task_struct in include/linux/sched.h.)
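A filesystem permission test that honors fsuid might look like the sketch below. The task and inode structures here are minimal stand-ins, and may_write is a hypothetical helper; in the kernel, current and the real structures come from <linux/sched.h>.

```c
/* Minimal stand-ins for the kernel's task and inode structures. */
struct task  { int uid, fsuid; };
struct inode { int i_uid; };

static struct task *current;

/* Superuser test for filesystem purposes: uses fsuid, not uid. */
static int fsuser(void)
{
        return current->fsuid == 0;
}

/* Hypothetical check: may the current process write this inode? */
static int may_write(struct inode *ino)
{
        return current->fsuid == ino->i_uid || fsuser();
}
```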
A good example of a program that needs this ability is the nfs server. Early versions of the nfs server were not able to use this functionality (because it didn't yet exist), and there were several security holes. The most common nuisance was users noticing that they could kill the server.
After you check permissions and any other possible error conditions, you probably want to actually get something done. Unless you simply want to return a value that fits in a 31-bit (or 63-bit for Linux/Alpha) return value, you will need to write to the user memory that you checked with the verify_area function at the beginning of the function. You can't just use the pointer to user-space memory as a normal pointer. Instead, you have to use a set of special functions to access it. And if you want to read any user-space memory in order to do your system call, you will need to use a similar set of functions to do so.
In older versions of Linux (through 1.2.x), you had to specify what kind of memory access you were making. There were 6 functions for single memory access: get_fs_byte, get_fs_word, get_fs_long, put_fs_byte, put_fs_word, and put_fs_long. These names (and names with the fs replaced with user) are still supported in newer kernels, but starting with Linux 1.3, they are deprecated. The get_user and put_user functions are to be used instead. They are easier to read and for the most part easier to use, but because they depend on the type of the pointer being passed to them, they are not tolerant of sloppy pointer use. (This is probably a good thing, since Linux now runs both on little- and big-endian computers, and big-endian computers are not tolerant of sloppy pointer use either.)
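The type-dependence is the key difference: with put_fs_byte and friends, the access size was in the function name, but put_user takes it from the type of the pointer you pass. The macros below are simplified stand-ins for illustration (the real kernel versions also switch segments to reach user space, and this uses the 1.3-era get_user form that returns the value):

```c
/* Simplified stand-ins: real versions also cross into user space. */
#define put_user(x, ptr) (*(ptr) = (x))
#define get_user(ptr)    (*(ptr))

char buf[4];
long lval;

void demo(void)
{
        put_user('A', &buf[1]);  /* pointer is char *, so one byte is written */
        put_user(1234L, &lval);  /* pointer is long *, so a whole long is written */
}
```

Pass put_user a sloppily typed pointer and it will silently read or write the wrong number of bytes, which is exactly why careful pointer types matter here.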
The memory block access routines have stayed the same since the earliest versions, even though their names still contain the letters “fs”; memcpy_tofs is used to copy a block of memory to user space, and memcpy_fromfs is used to copy a block of user memory to memory in kernel space.
All of the memory access routines are defined in include/asm/segment.h—even on architectures without segmentation. On the non-Intel architectures, the segment manipulation is unnecessary, so these functions reduce to ordinary memory accesses and copies.
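Putting the pieces together, here is a sketch of a complete (and hypothetical) sys_gettag call that copies a short kernel string out to a user buffer. To make the sketch compile standalone, verify_area is stubbed and memcpy_tofs is stubbed as a plain memcpy; in the kernel both come from the usual headers.

```c
#include <string.h>

#define EFAULT 14
#define EINVAL 22
#define VERIFY_WRITE 1

/* Stubs standing in for the kernel functions. */
static int verify_area(int type, void *addr, int len)
{
        return addr ? 0 : -EFAULT;
}

static void memcpy_tofs(void *to, const void *from, int n)
{
        memcpy(to, from, n);   /* stub: the real one crosses into user space */
}

static char tag[] = "v1.2.9";

/* Hypothetical call: copy the tag string into the user's buffer. */
int sys_gettag(char *buf, int len)
{
        int error;

        /* verify_area tests come first */
        error = verify_area(VERIFY_WRITE, buf, len);
        if (error)
                return error;
        if (len < (int)sizeof(tag))
                return -EINVAL;
        memcpy_tofs(buf, tag, sizeof(tag));
        return sizeof(tag);    /* positive return: bytes copied */
}
```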
Up to this point, you have simply implemented a new function in the kernel. Simply prepending the name with sys_ will not make it possible to call the function from user code.
You need to make two additions within the kernel. The first is in include/linux/unistd.h, right near the end. You need to look for the last line that starts with #define __NR and add your own:
#define __NR_name ###
where ### is the number one greater than the previous last system call number. In version 1.2.9, that would be 141.
The second change will have to be made in multiple files, one for each architecture that Linux runs on. Each file arch/*/kernel/entry.S will need an additional entry in its system call table. The system call table is kept at the end of the file, and you will simply need to add an entry at the end of the table before the .space line and change the .space formula at the very end to reflect the new number of system calls.
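On the i386, for example, the end of the table would look roughly like this after the change (the neighboring entry name and the count of 141 are illustrative; check your own kernel's entry.S for the real last entry and the exact .space formula):

```asm
        .long _sys_existing_last_call   /* previous final entry */
        .long _sys_name                 /* your new system call */
        .space (NR_syscalls-141)*4      /* one less slot of padding */
```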