POSIX Thread Libraries

The authors have studied five libraries that can be used for multithreaded applications and present the results here.
Performance

This section describes the performance metrics used to evaluate the POSIX thread libraries, the results obtained for each library, and the hardware platform used in the evaluation.

Performance Metrics

Performance metrics are an essential element in the evaluation and use of a system. To this end, a set of performance metrics was defined to evaluate and compare the two main features of all POSIX thread libraries: thread management and synchronization management.

Thread Management

The thread management metrics were aimed at evaluating the efficiency of the creation and termination of threads. These metrics are:

  • Thread creation: time needed to perform the pthread_create operation.

  • Join a thread: time needed to perform the pthread_join operation on a terminated thread.

  • Thread execution: time needed to execute the first instruction of a thread. This time includes the thread creation time and the time to perform a sched_yield operation, as shown below:

        thread_1()
        {
            . . .
            start_time();
            pthread_create(...);
            sched_yield();
            . . .
        }

        thread_2()
        {
            end_time();     /* first instruction executed by the new thread */
            . . .
        }
  • Thread termination: time interval from when a pthread_exit operation is performed until the pthread_join operation on this thread completes, as shown below:

        thread_1()
        {
            . . .
            start_time();
            pthread_exit(...);      /* terminates thread_1 */
            . . .
        }

        thread_2()
        {
            . . .
            pthread_join(...);      /* waits for thread_1 to terminate */
            end_time();
            . . .
        }
  • Thread creation versus process creation: compares the time needed to create a process with the time needed to create a thread within a process.

  • Join a thread versus wait for a process: compares the time needed to perform a wait operation on a terminated process with the time needed to perform a pthread_join operation on a terminated thread. Both comparisons are sketched after this list.

  • Granularity of parallelism: the minimum number of iterations of a null loop that must be executed by n threads simultaneously before the time needed by the n threads is less than the time needed by a single thread to execute the total number of iterations by itself. The time for the n-thread case includes the time to create all n threads and wait for them to terminate. A programmer can use this number to decide when it is advantageous to divide a task into n pieces that can be executed simultaneously. This metric is reported for n equal to the number of processors on the machine; a minimal sketch of the measurement also appears after this list.
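
As a rough illustration of the two comparison metrics above, the following sketch times fork plus wait against pthread_create plus pthread_join. It is a minimal sketch, not the authors' benchmark code: the iteration count, the now_us timing helper, and the decision to time creation and wait/join together (rather than as separate phases) are assumptions.

        /* Assumed harness: process creation/wait vs. thread creation/join.
         * Compile with: cc -O2 compare.c -lpthread */
        #include <pthread.h>
        #include <stdio.h>
        #include <sys/wait.h>
        #include <time.h>
        #include <unistd.h>

        #define ITERATIONS 1000          /* assumed repetition count */

        static double now_us(void)       /* wall-clock time in microseconds */
        {
            struct timespec ts;
            clock_gettime(CLOCK_MONOTONIC, &ts);
            return ts.tv_sec * 1e6 + ts.tv_nsec / 1e3;
        }

        static void *empty_thread(void *arg)
        {
            return arg;                  /* thread terminates immediately */
        }

        int main(void)
        {
            double t0, t1;
            int i;

            /* Process creation + wait: fork an empty child and wait for it. */
            t0 = now_us();
            for (i = 0; i < ITERATIONS; i++) {
                pid_t pid = fork();
                if (pid == 0)
                    _exit(0);            /* child terminates immediately */
                waitpid(pid, NULL, 0);
            }
            t1 = now_us();
            printf("fork + wait:           %8.2f us\n", (t1 - t0) / ITERATIONS);

            /* Thread creation + join: create an empty thread and join it. */
            t0 = now_us();
            for (i = 0; i < ITERATIONS; i++) {
                pthread_t tid;
                pthread_create(&tid, NULL, empty_thread, NULL);
                pthread_join(tid, NULL);
            }
            t1 = now_us();
            printf("pthread_create + join: %8.2f us\n", (t1 - t0) / ITERATIONS);

            return 0;
        }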
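
The granularity-of-parallelism measurement could be approximated along the lines below, assuming the total work is divided evenly among the threads and the break-even point is found by doubling the loop size. The thread count N, the helper functions, and the search strategy are illustrative assumptions, not the original harness.

        /* Assumed sketch: smallest null-loop size at which running the work
         * in N threads beats one thread doing it all.
         * Compile with: cc -O2 granularity.c -lpthread */
        #include <pthread.h>
        #include <stdio.h>
        #include <time.h>

        #define N 2                              /* assumed processor count */

        static double now_us(void)
        {
            struct timespec ts;
            clock_gettime(CLOCK_MONOTONIC, &ts);
            return ts.tv_sec * 1e6 + ts.tv_nsec / 1e3;
        }

        static void *null_loop(void *arg)        /* spin for the requested count */
        {
            volatile long i, n = *(long *)arg;
            for (i = 0; i < n; i++)
                ;                                /* null loop body */
            return NULL;
        }

        /* Create N threads, each doing iters/N iterations, and join them. */
        static double threaded_time(long iters)
        {
            pthread_t tid[N];
            long share = iters / N;
            double t0 = now_us();
            for (int i = 0; i < N; i++)
                pthread_create(&tid[i], NULL, null_loop, &share);
            for (int i = 0; i < N; i++)
                pthread_join(tid[i], NULL);
            return now_us() - t0;
        }

        static double single_time(long iters)    /* one thread does everything */
        {
            double t0 = now_us();
            null_loop(&iters);
            return now_us() - t0;
        }

        int main(void)
        {
            /* Double the loop size until the N-thread version wins. */
            for (long iters = 1000; iters < 100000000L; iters *= 2) {
                if (threaded_time(iters) < single_time(iters)) {
                    printf("break-even at roughly %ld iterations for %d threads\n",
                           iters, N);
                    return 0;
                }
            }
            printf("no break-even point found in the range tested\n");
            return 0;
        }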

Synchronization Management

These metrics concentrate on the performance of the mutex and condition-variable operations. A minimal timing harness for the single-call metrics in this list is sketched after the list.

  • Mutex init: time needed to perform the pthread_mutex_init operation.

  • Mutex lock: time needed to perform the pthread_mutex_lock operation on a free mutex.

  • Mutex unlock: time needed to perform the pthread_mutex_unlock operation.

  • Mutex lock/unlock with no contention: time interval needed to call pthread_mutex_lock followed immediately by pthread_mutex_unlock on a mutex that is being used only by the thread doing the test. This test is shown below:

        thread()
        {
            . . .
            start_time();
            pthread_mutex_lock(...);
            pthread_mutex_unlock(...);
            end_time();
            . . .
        }
  • Mutex destruction: time needed to perform the pthread_mutex_destroy operation.

  • Condition init: time needed to perform the pthread_cond_init operation.

  • Condition destroy: time needed to perform the pthread_cond_destroy operation.

  • Synchronization time: measures the time it takes for two threads to synchronize with each other using two condition variables, as shown below:

        thread_1()
        {
            . . .
            start_time();
            pthread_cond_wait(c1,...);   /* wait until thread_2 signals c1 */
            pthread_cond_signal(c2);     /* let thread_2 continue */
            end_time();
            . . .
        }

        thread_2()
        {
            . . .
            pthread_cond_signal(c1);     /* wake thread_1 */
            pthread_cond_wait(c2,...);   /* wait for thread_1's signal on c2 */
            . . .
        }
  • Mutex lock/unlock with contention: time interval from when one thread calls pthread_mutex_unlock until another thread that was blocked in pthread_mutex_lock returns with the lock held, as shown below:

        thread_1()
        {
            . . .
            pthread_mutex_lock(...);
            start_time();
            pthread_mutex_unlock(...);   /* releases the mutex thread_2 is blocked on */
            . . .
        }

        thread_2()
        {
            . . .
            pthread_mutex_lock(...);     /* blocks until thread_1 calls unlock */
            end_time();
            pthread_mutex_unlock(...);
            . . .
        }
  • Condition variable signal/broadcast with no waiters: time needed to execute pthread_cond_signal and pthread_cond_broadcast if there are no threads blocked on the condition.

  • Condition variable wake up: time from when one thread calls pthread_cond_signal until a thread blocked on that condition variable returns from its pthread_cond_wait call. The condition variable and its associated mutex should not be used by any other thread. This test is shown below:

        thread_1()
        {
            . . .
            pthread_mutex_lock(...);
            start_time();
            pthread_cond_signal(...);    /* wake the thread blocked in pthread_cond_wait */
            pthread_mutex_unlock(...);
            . . .
        }

        thread_2()
        {
            . . .
            pthread_mutex_lock(...);
            pthread_cond_wait(...);      /* releases the mutex and blocks until signaled */
            end_time();
            pthread_mutex_unlock(...);
            . . .
        }
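
For the single-call metrics in this list (mutex init, lock, unlock, and destroy; condition init and destroy; signal and broadcast with no waiters), a generic harness along the following lines can be used. It is a minimal sketch under assumed conditions: the repetition count, the now_us helper, and the choice to time init and destroy as a pair are not part of the original test suite.

        /* Assumed harness: average cost of individual mutex and condition-variable
         * calls with no contention and no waiters.
         * Compile with: cc -O2 sync_ops.c -lpthread */
        #include <pthread.h>
        #include <stdio.h>
        #include <time.h>

        #define ITERATIONS 100000        /* assumed repetition count */

        static double now_us(void)
        {
            struct timespec ts;
            clock_gettime(CLOCK_MONOTONIC, &ts);
            return ts.tv_sec * 1e6 + ts.tv_nsec / 1e3;
        }

        int main(void)
        {
            pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
            pthread_cond_t  c = PTHREAD_COND_INITIALIZER;
            double t0;
            int i;

            /* Mutex lock/unlock with no contention (only this thread uses it). */
            t0 = now_us();
            for (i = 0; i < ITERATIONS; i++) {
                pthread_mutex_lock(&m);
                pthread_mutex_unlock(&m);
            }
            printf("lock/unlock, no contention: %.3f us\n",
                   (now_us() - t0) / ITERATIONS);

            /* Condition signal and broadcast with no threads waiting. */
            t0 = now_us();
            for (i = 0; i < ITERATIONS; i++)
                pthread_cond_signal(&c);
            printf("signal, no waiters:         %.3f us\n",
                   (now_us() - t0) / ITERATIONS);

            t0 = now_us();
            for (i = 0; i < ITERATIONS; i++)
                pthread_cond_broadcast(&c);
            printf("broadcast, no waiters:      %.3f us\n",
                   (now_us() - t0) / ITERATIONS);

            /* Init and destroy measured as a pair. */
            t0 = now_us();
            for (i = 0; i < ITERATIONS; i++) {
                pthread_mutex_t tmp;
                pthread_mutex_init(&tmp, NULL);
                pthread_mutex_destroy(&tmp);
            }
            printf("mutex init + destroy:       %.3f us\n",
                   (now_us() - t0) / ITERATIONS);

            return 0;
        }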