Mongoose: an Embeddable Web Server in C

Mongoose provides a Web server that can be embedded in your application; it consists of a single C source file (plus a header file) that you compile and link with your application code. If you need a simple Web server, Mongoose may be the solution.

Web services are all the rage in development circles, but full-featured application servers like JBoss are terrible overkill for small system solutions. In many situations, simple RESTful interfaces suffice without the need for complex containers. Additionally, embedded single file implementations may be preferred over collections of scripts that rely on an external interpreter, such as Python or PHP.

Mongoose is an MIT-licensed, embeddable Web server implemented as a single C module that can be linked into a program to provide basic Web services. Its lightweight approach belies the power of a fully threaded system capable of serving both static and dynamic content over multiple ports using standard and secure HTTP. It supports CGI and SSI, access control lists and digest authentication. File transfers also can be supported by borrowing code from the example server implementations included in the package.

Along with the embeddable C library, the Mongoose package includes a front end that turns the module into a full-fledged server capable of serving files from a user-specified document root. The ready-to-run server supports all configurable options from the command line as well as from a text configuration file. Although this configuration provides a method to bring Mongoose up quickly, Mongoose's power comes from writing custom front ends to the Mongoose library with callbacks for specific REST-styled URIs.

In this article, I introduce the Mongoose library API, show how it can be configured for multiple uses and provide an example implementation that serves up static pages utilizing digest authentication. This article is aimed at developers with a knowledge of C programming. Although not required, familiarity with HTML and CSS also is useful.

Structure of a Mongoose-Based Server

A Mongoose-based server provides a main() function that parses command-line arguments, initializes the Mongoose library and then spins in a loop waiting for an exit event. The command-line arguments are server-specific but typically represent values to be passed to the Mongoose library through the mg_set_option() function. Mongoose initialization requires creating an initial Mongoose context, setting library options and establishing callback functions for authorization, error and URI handling. The main function then spins forever while the Mongoose master thread handles incoming connections on the configured network ports. The front-end code is responsible for handling signals to support shutdown operations, including notifying the Mongoose library to stop gracefully.
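Put together, such a front end might look like the following sketch. It assumes the Mongoose 2.x embedding API (mg_start(), mg_set_option(), mg_stop()) and a user-supplied uriHandler callback, covered below; error checking is omitted for brevity:

```c
/* Sketch of a minimal Mongoose-based front end.  Assumes the
 * Mongoose 2.x embedding API and a user-supplied uriHandler(). */
#include <signal.h>
#include <stdlib.h>
#include <unistd.h>
#include "mongoose.h"

/* URI callback, defined elsewhere by the application. */
extern void uriHandler(struct mg_connection *conn,
                       const struct mg_request_info *ri, void *userData);

static volatile sig_atomic_t running = 1;

static void sigHandler(int sig)
{
  (void) sig;
  running = 0;                     /* ask the main loop to exit */
}

int main(int argc, char *argv[])
{
  struct mg_context *ctx;
  const char *port = (argc > 1) ? argv[1] : "8080";

  signal(SIGINT, sigHandler);      /* shut down gracefully on Ctrl-C */
  signal(SIGTERM, sigHandler);

  ctx = mg_start();                /* master thread starts here */
  mg_set_option(ctx, "ports", port);
  mg_set_uri_callback(ctx, "/*", &uriHandler, NULL);

  while (running)                  /* spin until a signal arrives */
    sleep(1);

  mg_stop(ctx);                    /* notify Mongoose to stop gracefully */
  return EXIT_SUCCESS;
}
```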

The Mongoose library is threaded and extremely simple to use. The initial context starts the master thread, which waits for incoming connections. New connections are queued, and worker threads are started to handle them. Processing of the connection occurs in the worker thread, where the connection request is analyzed to determine how processing will proceed. For performance's sake, worker threads are started as needed and persist after a connection closes, processing any additional queued requests without the overhead of starting new threads (Figure 1).

Figure 1. Green boxes are user functions, and all the rest are implemented in the Mongoose library.

In the worker thread, the incoming request is analyzed to determine what should happen next. Mongoose supports various HTTP requests, such as PUT, POST and DELETE. However, for simple REST services, the most important feature Mongoose supports is callbacks for URIs, error handling and authentication.

Hello, World: Initialization and Callbacks

Mongoose initialization is handled within a single user-defined function called from the main() function. This initialization function performs three mandatory operations: start the Mongoose initial context (mg_start), set options on that context (mg_set_option) and specify URI callbacks (mg_set_uri_callback):

void mongooseMgrInit(const char *port)
{
  struct mg_context *ctx;

  ctx = mg_start();
  mg_set_option(ctx, "ports", port);
  mg_set_uri_callback(ctx, "/*", &uriHandler, NULL);
}
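The uriHandler callback registered above must match the signature mg_set_uri_callback() expects: a connection pointer, request information and the user-data pointer passed at registration. A minimal handler might look like the following sketch; the greeting is illustrative, and note that in this API the callback is responsible for writing the HTTP status line and headers itself:

```c
/* Illustrative URI callback for the Mongoose 2.x API.  The handler
 * writes the full HTTP response, headers included, with mg_printf(). */
static void uriHandler(struct mg_connection *conn,
                       const struct mg_request_info *ri,
                       void *userData)
{
  (void) userData;                 /* unused: NULL was registered */

  mg_printf(conn,
            "HTTP/1.1 200 OK\r\n"
            "Content-Type: text/plain\r\n"
            "\r\n"
            "Hello from %s\n", ri->uri);
}
```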

The Mongoose API documentation is sparse, and the available options are not obvious. Fortunately, the man page for the default front end documents its command-line options, which map directly to most of the options that can be set with mg_set_option(). The mapping is not exact in either direction: the -A option to the default server, used to edit a digest authentication file, is handled by the front end and is not part of the Mongoose library, while the command line exposes most, but not all, of the library's options. The authoritative list is the known_options array in mongoose.c.

All arguments to mg_set_option() options are character strings. Mongoose converts them to appropriate formats as needed. For example, the port number for the ports option must be specified as one or more character strings separated by commas, with SSL ports identified with the letter s appended to the port number. Some options can be disabled at compile time. To disable SSL options, define NO_SSL. To disable CGI options, define NO_CGI.
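As an illustration of the option syntax, the following hypothetical call configures plain HTTP on port 8080 and SSL on port 8443; for the SSL port to work, Mongoose must be built without NO_SSL and a certificate must also be configured:

```c
/* Illustrative only: "8443s" marks 8443 as an SSL port, per the
 * comma-separated ports syntax described above. */
mg_set_option(ctx, "ports", "8080,8443s");
```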

