Embedding Python in Multi-Threaded C/C++ Applications

Python provides a clean, intuitive interface to complex, threaded applications.
Extending Python

When you embed Python within your application, it is often desirable to provide a small module that exposes an API related to your application so that scripts executing within the embedded interpreter have a way to call back into the application. This is done by providing your own Python module, written in C, exactly as you would write a normal Python extension module. The only difference is that your module will function properly only within the embedded interpreter.

Extending Python requires some understanding of how the Python interpreter manipulates objects from C. All function arguments and return values are pointers to PyObject structures, which are the C representation of real Python objects. You can make use of various function calls to manipulate PyObjects. Listing 2 is a simple example of a Python module extension written in C. This is the source to the Python crypt module, which provides one-way hashing used in password authentication.

Listing 2

All C implementations of Python-callable functions take two arguments of type PyObject *. The first argument is always “self”, the object whose method is being called (similar to the infamous “this” pointer in C++). The second argument contains all the arguments to the function. PyArg_Parse is used to extract values from a PyObject containing function arguments. You do this by passing the PyObject that contains the values, a format string describing the data types you expect to find, and one or more pointers to variables to be filled in with values from the PyObject. In Listing 2, the function takes two strings, represented by "(ss)". PyArg_Parse is similar to the C function sscanf, except that it operates on a PyObject rather than a character buffer. To return a string value from the function, call PyString_FromString. This helper function takes a char * value and converts it into a PyObject.
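For readers without Listing 2 in front of them, here is a minimal sketch in the spirit of that listing. It assumes the Python 2 C API of the article's era; the module name "spam" and the function names are illustrative, not the actual listing:

#include <Python.h>

extern char *crypt(const char *, const char *);

/* Python-callable wrapper: takes two strings, returns a string. */
static PyObject *
spam_crypt(PyObject *self, PyObject *args)
{
	char *word, *salt;

	/* Unpack two string arguments, matching the "(ss)" format above. */
	if (!PyArg_Parse(args, "(ss)", &word, &salt))
		return NULL;

	/* Convert the resulting char * back into a Python string object. */
	return PyString_FromString(crypt(word, salt));
}

static PyMethodDef spam_methods[] = {
	{"crypt", spam_crypt, METH_VARARGS, "One-way hash word with salt."},
	{NULL, NULL, 0, NULL}
};

/* Called by the interpreter when the module is first imported. */
void
initspam(void)
{
	Py_InitModule("spam", spam_methods);
}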

Python, C and Threads

C programs can easily create new threads of execution. Under Linux, this is most commonly done using the POSIX Threads (pthreads) API and the function call pthread_create. For an overview of how to use pthreads, see “POSIX Thread Libraries” by Felix Garcia and Javier Fernandez at http://www.linuxjournal.com/lj-issues/issue70/3184.html in the “Strictly On-line” section of LJ, February 2000. In order to support multi-threading, Python uses a mutex to serialize access to its internal data structures. I will refer to this mutex as the “global interpreter lock”. Before a given thread can make use of the Python C API, it must hold the global interpreter lock. This avoids race conditions that could lead to corruption of the interpreter state.
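As a quick refresher, a minimal pthread_create sketch looks like the following; the thread body here is only a placeholder for the Python-calling code developed below:

#include <pthread.h>
#include <stdio.h>

/* Placeholder thread body; the later examples acquire the global
 * interpreter lock here before touching the Python C API. */
void *worker(void *arg)
{
	printf("hello from thread %s\n", (const char *)arg);
	return NULL;
}

int main(void)
{
	pthread_t tid;

	pthread_create(&tid, NULL, worker, (void *)"one");
	pthread_join(tid, NULL);    /* wait for the thread to finish */
	return 0;
}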

The act of locking and releasing this mutex is abstracted by the Python functions PyEval_AcquireLock and PyEval_ReleaseLock. After calling PyEval_AcquireLock, you can safely assume your thread holds the lock; all other cooperating threads are either blocked or executing code unrelated to the internals of the Python interpreter, and you may now call arbitrary Python functions. Once you have acquired the lock, however, you must be certain to release it later by calling PyEval_ReleaseLock. Failure to do so will cause a deadlock and freeze all other Python threads.
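As a sketch (and assuming the calling thread already has a valid PyThreadState swapped in, which the next paragraph covers), the bracket looks like this:

PyEval_AcquireLock();     /* block until we hold the global interpreter lock */
/* ... it is now safe to call into the Python C API ... */
PyEval_ReleaseLock();     /* release it so other Python threads can run */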

To complicate matters further, each thread running Python maintains its own state information. This thread-specific data is stored in an object called PyThreadState. When calling Python API functions from C in a multi-threaded application, you must maintain your own PyThreadState objects in order to safely execute concurrent Python code.
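Putting the two together, a C thread that runs Python code typically creates its own PyThreadState once, swaps it in around each burst of Python work, and tears it down when it is finished. The following is a sketch of that pattern; it assumes mainThreadState is a PyThreadState * saved by the main thread after calling Py_Initialize and PyEval_InitThreads (and that the main thread has since released the lock):

/* one-time setup in the new thread */
PyEval_AcquireLock();
PyInterpreterState *mainInterpreterState = mainThreadState->interp;
PyThreadState *myThreadState = PyThreadState_New(mainInterpreterState);
PyEval_ReleaseLock();

/* each time the thread needs to run Python code */
PyEval_AcquireLock();
PyThreadState_Swap(myThreadState);       /* make our state current */
PyRun_SimpleString("print 'hello from a C thread'");
PyThreadState_Swap(NULL);                /* swap it back out */
PyEval_ReleaseLock();

/* one-time teardown when the thread is done with Python */
PyEval_AcquireLock();
PyThreadState_Clear(myThreadState);
PyThreadState_Delete(myThreadState);
PyEval_ReleaseLock();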

If you are experienced in developing threaded applications, you might find the idea of a global interpreter lock rather unpleasant. Well, it's not as bad as it first appears. While Python is interpreting scripts, it periodically yields control to other threads by swapping out the current PyThreadState object and releasing the global interpreter lock. Threads previously blocked while attempting to lock the global interpreter lock will now be able to run. At some point, the original thread will regain control of the global interpreter lock and swap itself back in.

This means that when you call PyRun_SimpleString, you are faced with the unavoidable side effect that other threads will have a chance to execute, even though you hold the global interpreter lock. In addition, making calls to Python modules written in C (including many of the built-in modules) opens the possibility of yielding control to other threads. For this reason, two C threads that execute computationally intensive Python scripts will indeed appear to share CPU time and run concurrently. The downside is that, due to the existence of the global interpreter lock, Python cannot fully utilize the CPUs on multi-processor machines using threads.

______________________

Comments


Still getting crashes...

Anonymous

Thanks for the article; it helped me understand the GIL a little more.
Since Python 2.3 you can do all the GIL locking with the PyGILState_Ensure and PyGILState_Release functions. Look at my code:

class CExecuteHandler {
public:
	CExecuteHandler(CHandler *, PyObject *);
	~CExecuteHandler();

	CHandler *handler; 
	/* a class where python functions are saved in a vector*/
	PyObject *args;
};

void _ExecuteHandler(void *_ExecHandler) {
	CExecuteHandler *ExecHandler = (CExecuteHandler *)_ExecHandler;
	CHandler *Handler = ExecHandler->handler;

	for( vector<PyObject *>::iterator j = Handler->m_PyFunctions.begin();
	     j != Handler->m_PyFunctions.end();
	     j++ ) {
		PyGILState_STATE gilState = PyGILState_Ensure();
		PyObject *result = PyObject_CallObject( *j, ExecHandler->args );
		if(!result) PyErr_Print();
		else Py_DECREF(result);
		PyGILState_Release(gilState);
	}

	delete ExecHandler;
#ifdef WIN32
	_endthread();
#else
	pthread_exit(NULL);
#endif
}

void ExecuteHandler(CHandler *i, PyObject *args) {
	CExecuteHandler *ExecHandler = new CExecuteHandler( i, args );

#ifdef WIN32
	_beginthread( _ExecuteHandler, 0, (void *)ExecHandler );
#else
	pthread_t thread;
	pthread_create( &thread, NULL,
	                (void *(*)(void *))_ExecuteHandler,
	                (void *)ExecHandler );
#endif
}

OK, that was the code, basically. So again, the handler class saves the Python function pointers for a certain event. E.g., if I want to call a Python function when someone sends a message to others (let's suppose you coded a chat program), I take the CHandler registered for "ChatMessage", build the arguments with Py_BuildValue("(ss)", playerName, message), and call ExecuteHandler(handler, args /* built with the BuildValue above */). The problem is that if someone spams excessively and many, many threads end up calling the function, the program eventually crashes.
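For example (a sketch under the assumptions of the code above; playerName, message and the chatHandler pointer are placeholders), the call site looks roughly like this:

/* Build the argument tuple and hand it, together with the handler
 * registered for "ChatMessage", to a new worker thread. */
PyObject *args = Py_BuildValue("(ss)", playerName, message);
ExecuteHandler(chatHandler, args);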

Full code can be seen at:
http://pyghost.googlecode.com

Using PyGILState_Ensure/PyGILState_Release

Gwang-Ho Kim

Constructor:
-----------
PyGILState_Ensure ONLY ensures that one thread uses the same PyThreadState;
if two threads call PyGILState_Ensure,
one thread might invalidate the PyThreadState of the other WITHOUT locking!
(See the Python source: Python/pystate.c)
PyGILState_Ensure:

tcur = (PyThreadState *)PyThread_get_key_value(autoTLSkey);
if (tcur == NULL) {
	/* Create a new thread state for this thread */
	tcur = PyThreadState_New(autoInterpreterState);
	if (tcur == NULL)
		Py_FatalError("Couldn't create thread-state for new thread");
	/* This is our thread state!  We'll need to delete it in the
	    matching call to PyGILState_Release(). */
	tcur->gilstate_counter = 0;
	current = 0; /* new thread state is never current */
}
else
	current = PyThreadState_IsCurrent(tcur);
if (current == 0)
	PyEval_RestoreThread(tcur);

Locking is done in PyEval_RestoreThread (see the source in Python/ceval.c),
which is called only if current == 0, i.e.,
this thread's PyThreadState is not the current one (_PyThreadState_Current in terms of pystate.c).
So one MUST call PyEval_SaveThread, not just PyEval_ReleaseLock!!!

mainThreadState = PyEval_SaveThread();

Destructor:
-----------
Since there is no explicit PyThreadState in the main thread (see the constructor above),
one MUST restore the PyThreadState of the main thread with PyEval_RestoreThread.
Otherwise there is a segmentation fault, because Py_Finalize uses the current PyThreadState!
(See the source of Python; Python/pythonrun.c)
Py_Finalize:

tstate = PyThreadState_GET();
interp = tstate->interp;        // <- crashes here if no thread state is current

So the destructor must restore the main thread's state before calling Py_Finalize:

PyEval_RestoreThread(mainThreadState);

Note that these form one pair: PyEval_SaveThread in the constructor
and PyEval_RestoreThread in the destructor.
There is another pair in PyGILState_Ensure (PyEval_RestoreThread) and
PyGILState_Release (PyEval_SaveThread).
The overall structure for multi-threaded Python/C API calls looks like this:
Main thread:

// Constructor
Py_Initialize();
PyEval_InitThreads();
PyThreadState*  mainThreadState = PyEval_SaveThread();

......
PyGILState_STATE        gilState = PyGILState_Ensure(); // PyEval_RestoreThread
// Call Python/C API...
PyGILState_Release(gilState);                           // PyEval_SaveThread
......

// Create new thread...

......
PyGILState_STATE        gilState = PyGILState_Ensure(); // PyEval_RestoreThread
// Call Python/C API...
PyGILState_Release(gilState);                           // PyEval_SaveThread
......

// Destructor
PyEval_RestoreThread(mainThreadState);
Py_Finalize();

New thread:

......
PyGILState_STATE        gilState = PyGILState_Ensure(); // PyEval_RestoreThread
// Call Python/C API...
PyGILState_Release(gilState);                           // PyEval_SaveThread
......

How does this code look if you use PyGILState_Ensure/Release?

freesteel

I wonder how you implement this example using the PyGILState API that was introduced in version 2.3? Does the PyGILState_Ensure replace this, for example:


...
#ifdef USE_GILSTATE
PyGILState_STATE state = PyGILState_Ensure();
#else
// get the global lock
PyEval_AcquireLock();
// get a reference to the PyInterpreterState
PyInterpreterState * mainInterpreterState = mainThreadState->interp;
// create a thread state object for this thread
PyThreadState * myThreadState = PyThreadState_New(mainInterpreterState);
// free the lock
PyEval_ReleaseLock();
#endif

and likewise


...
#ifdef USE_GILSTATE
PyGILState_Release(state);
#else
// grab the lock
PyEval_AcquireLock();
// swap my thread state out of the interpreter
PyThreadState_Swap(NULL);
// clear out any cruft from thread state object
PyThreadState_Clear(myThreadState);
// delete my thread state object
PyThreadState_Delete(myThreadState);
// release the lock
PyEval_ReleaseLock();
#endif // USE_GILSTATE
...

I also found that if you run the original example in version 2.4 and have python compiled with Py_DEBUG defined, you will get fatal errors in pystate.c. The reason is that we can't have more than one thread state per thread:
The fatal error is raised from
pystate.c, line 306:
Py_FatalError("Invalid thread state for this thread");

Has anybody else tried it?

agree, (PyGILState_*) is much simpler

vvk

This much simpler locking model (PyGILState_*) was introduced in Python 2.3.

In my app, each call into the embedded Python code is locked by an object of this class:

class PythonThreadLocker
{
	PyGILState_STATE state;
public:
	PythonThreadLocker() : state(PyGILState_Ensure())
	{}
	~PythonThreadLocker() {
		PyGILState_Release(state);
	}
};

It works safely. I must confess that at first I wrote a special singleton which stored interpreter states for each thread (with the API described in the article), and then I found this very handy PyGILState_Ensure/Release.
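A usage sketch (the function and the script it runs are placeholders) looks like this:

void callIntoPython()
{
	PythonThreadLocker locker;   /* constructor acquires the GIL */
	PyRun_SimpleString("print 'called with the GIL held'");
}                                    /* destructor releases the GIL here */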

I found this usage on koders.com while querying "PyGILState_Ensure"; thanks for the pointer :)

example.. missing?

F

Seems like the example code contains only the code snippets from the article. Am I missing something? :) (i.e., no sign of the mentioned "HTTP server with embedded Python" :))

Done to perfection

Anonymous

Thanks for this useful article. We're embedding into a Win32 C++ multi-threaded app.

Ditto on the above comment -- needed to add a step to shutdown: swap the main thread state back in before shutting down the interpreter.

extending instead of embedding

mathgenius

With python 2.2, I am using an audio library (portaudio) that uses callbacks for
audio buffer filling. This is extending rather than embedding.

First of all:

PyInterpreterState * mis;
PyThreadState * mts;
PyThreadState * ts;
mts = PyThreadState_Get();
mis = mts->interp;
ts = PyThreadState_New(mis); /* stored away somewhere */

Note: we don't need to PyEval_AcquireLock, as we already have the lock.

Inside the callback:

PyEval_AcquireLock();
PyThreadState_Swap(ts);
/* call python code here */
PyThreadState_Swap(NULL);
PyEval_ReleaseLock();

Finishing up:

PyThreadState_Swap(NULL);
PyThreadState_Clear(ts);
PyThreadState_Delete(ts);

Also, I found it necessary to do

PyEval_InitThreads();

before all the above.

Simon.

Re: extending instead of embedding

Anonymous

Thanks :-)


Re: Embedding Python in Multi-Threaded C/C++ Applications

Anonymous

excellent resource!

Good tutorial, forgot swap to main before Finalize

Anonymous

Shutting down the interpreter should have

// shut down the interpreter
PyEval_AcquireLock();
PyThreadState_Swap(mainThreadState);
Py_Finalize();

otherwise you get this error message and segfault

Fatal Python error: PyThreadState_Get: no current thread

Thanks.

Re: Embedding Python in Multi-Threaded C/C++ Applications

Anonymous

Excellent!

Re: Embedding Python in Multi-Threaded C/C++ Applications

Anonymous

This article is so useful to make sense out of Python's involvement with threads that it should be added to the standard documentation shipping with the language.

It just helped me to solve a problem that I had been wrestling with for 24 hours.

Regards,
Fabien.

Re: Embedding Python in Multi-Threaded C/C++ Applications

Anonymous

thank you - it was straightforward to create an extension with a separate thread and a callback...

it saved quite some time.

Very good article. Helped me

Anonymous

Very good article. Helped me solve a problem I had been investigating for two days.
