Introduction to Multi-Threaded Programming
The purpose of this article is to provide a solid foundation in the basics of threaded programming using POSIX threads; it is not meant to be a complete reference on thread programming. It assumes the reader has a strong foundation in C programming.
A thread is sometimes referred to as a lightweight process. A thread will share all global variables and file descriptors of the parent process which allows the programmer to separate multiple tasks easily within a process. For example, you could write a multi-threaded web server, and you could spawn a thread for each incoming connection request. This would make the network code inside the thread relatively simple. Using multiple threads will also use fewer system resources compared to forking a child process to handle the connection request. Another advantage of using threads is that they will automatically take advantage of machines with multiple processors.
As I mentioned earlier, a thread shares most of its resources with the parent process, so it uses fewer resources than a separate process would. It shares everything except its program counter, stack and registers, which are private to each thread. Since each thread has its own stack, local variables are not shared between threads. Static variables, however, are stored in the process's data segment rather than on any thread's stack, so static variables inside thread functions are shared between threads. For this reason, functions like strtok, which keep state in a static variable, will not work properly inside threads without modification. Re-entrant versions are available for use in threads under names of the form oldfunction_r; thus, strtok's re-entrant version is strtok_r.
Since all threads of a process share the same global variables, a problem arises with synchronization of access to global variables. For example, let's assume you have a global variable X and two threads A and B. Let's say threads A and B will merely increment the value of X. When thread A begins execution, it copies the value of X into the registers and increments it. Before it gets a chance to write the value back to memory, this thread is suspended. The next thread starts, reads the same value of X that the first thread read, increments it and writes it back to memory. Then, the first thread finishes execution and writes its value from the register back to memory. After these two threads finish, the value of X is incremented by 1 instead of 2 as you would expect.
Errors like this will probably not occur all of the time and so can be very hard to track down. This becomes even more of a problem on a machine equipped with multiple processors, since multiple threads can be running at the same time on different processors, each of them modifying the same variables. The workaround for this problem is to use a mutex (mutual exclusion) to make sure only one thread is accessing a particular section of your code. When one thread locks the mutex, it has exclusive access to that section of code until it unlocks the mutex. If a second thread tries to lock the mutex while another thread has it locked, the second thread will block until the mutex is unlocked and is once more available.
In the last example, you could lock a mutex before you increment X, then unlock the mutex after you increment it. So let's go back to that example. Thread A locks the mutex, loads the value of X into a register and increments it. Again, before it gets a chance to write the value back to memory, thread B gets control of the CPU. It tries to lock the mutex, but thread A already holds it, so thread B must wait. Thread A gets the CPU again, writes the value of X from the register back to memory, then frees the mutex. The next time thread B runs and tries to lock the mutex, it succeeds, since the mutex is now free. Thread B increments X, writes the value back to memory and unlocks the mutex. Now, after both threads have completed, the value of X has been incremented by 2, as you would expect.
Now let's look at how to actually write threaded applications. The first function you will need is pthread_create. It has the following prototype:
int pthread_create(pthread_t *tid, const pthread_attr_t *attr, void *(*func)(void *), void *arg)
The first argument is the variable where the new thread's ID will be stored. Each thread has its own unique thread ID. The second argument contains attributes describing the thread; you can usually just pass a NULL pointer. The third argument is a pointer to the function you want to run as a thread. The final argument is a pointer to data you want to pass to the function. If you want to exit from a thread, you can use the pthread_exit function. It has the following syntax:
void pthread_exit(void *status)
This returns a pointer that can be retrieved later (see below). You cannot return a pointer to data local to the thread, such as a stack variable, since that data will be destroyed when the thread exits.
The thread function prototype shows that the thread function returns a void * pointer. Your application can use the pthread_join function to see the value a thread returned. The pthread_join function has the following syntax:
int pthread_join(pthread_t tid, void **status)
The first argument is the thread ID. The second argument is a pointer to the data your thread function returned. The system keeps track of return values from your threads until you retrieve them using pthread_join. If you do not care about the return value, you can call the pthread_detach function with its thread ID as the only parameter to tell the system to discard the return value. Your thread function can use the pthread_self function to return its thread ID. If you don't want the return value, you can call pthread_detach(pthread_self()) inside your thread function.
Going back to mutexes, the following two functions are available to us: pthread_mutex_lock and pthread_mutex_unlock. They have the following prototype:
int pthread_mutex_lock(pthread_mutex_t *mptr)
int pthread_mutex_unlock(pthread_mutex_t *mptr)
For a statically allocated mutex, you must first initialize it to the constant PTHREAD_MUTEX_INITIALIZER. For a dynamically allocated mutex, you can use the pthread_mutex_init function. It has the following prototype:
int pthread_mutex_init(pthread_mutex_t *mutex, const pthread_mutexattr_t *mutexattr)
Now we can look at actual code, as shown in Listing 1. I have commented the code to help the reader follow what is being done. I have also kept the program very basic: it does nothing truly useful, but it should help illustrate the idea of threads. All this program does is start 10 threads, each of which increments X until X reaches 4,000. You can remove the pthread_mutex_lock and unlock calls to further illustrate the need for mutexes.
A few more items need to be explained about this program. The threads on your system may run in the order they were created, and each may run to completion before the next thread starts. There is no guarantee as to what order the threads run in, or that a thread will run to completion uninterrupted. If you put “real work” inside the thread function, you will see the scheduler swapping between threads. You may also notice, if you take out the mutex lock and unlock calls, that the value of X still comes out as expected; it all depends on when threads are suspended and resumed. A threaded application may appear to run fine at first, but crash when run on a machine with many other things running at the same time. Finding these kinds of problems can be very cumbersome for the application programmer; this is why you must make sure that shared variables are protected with mutexes.
What about the value of the global variable errno? Suppose we have two threads, A and B, already running and at different points in their thread functions. Thread A calls a function that sets the value of errno. Then thread B wakes up and checks errno; it sees the value left behind by thread A's call, not a value from its own last library call. To get around this, we must define _REENTRANT when compiling. This changes errno to refer to a per-thread errno location, transparently to the application programmer. The _REENTRANT macro also changes the behavior of some of the standard C functions.
To obtain more information about threads, visit the LinuxThreads home page at http://pauillac.inria.fr/~xleroy/linuxthreads/. This page contains links to many examples and tutorials. It also has a link where you can download the thread libraries if you do not already have them. Downloading is necessary only if you have a libc5-based machine; if your distribution is glibc6-based, LinuxThreads should already be installed on your computer. The source code for a threaded application that I wrote, gFTP, can be downloaded from my web site at http://www.newwave.net/~masneyb/. This code makes use of all the concepts mentioned in this article.
Brian Masney is currently a student at Concord College in Athens, WV. He also works as a computer technician at a local computer store. In his spare time, he enjoys the outdoors and programming. He can be reached at email@example.com.