At the Forge - Thinking about APIs
People are surprising and unpredictable. In the computer industry, you hear this in nearly every interview with the designer of a popular software package. For example, Perl originally was designed for network programming and text processing. This made it an obvious choice for Web programming, but Larry Wall certainly didn't know or expect that when he first wrote the language, years before the Web was invented.
Users of a software package almost always will push its limits. That's why some of the most successful and popular programs are those that encourage users to go beyond the author's imagination. In the early days of the software industry, such add-ons and plugins didn't exist, which meant that the only way to get new functionality was to lobby the authors and then hope the next release would include the needed features. In the world of open-source software, anyone is theoretically able to add new features, but between the generally small number of core developers and the famously loud debates that sometimes erupt, it can take surprisingly long to add a new feature. (And although it is always possible to fork a project, this has social and technical drawbacks that often outweigh the benefits.)
Some programs have a long tradition of encouraging add-ons. GNU Emacs is best known as a text editor, but it comes with a full-fledged version of the Lisp programming language. You can create just about anything you want in Emacs, and people have done so, including mail-reading programs, Web browsers and an extremely sophisticated calendar/diary. Photoshop caught on among graphic designers not only because of its powerful image editing features, but also because of the large number of plugins that were developed (and sold) for the platform. Microsoft Office, much as it might be reviled by many Linux and open-source advocates, became popular because of its built-in programming language (VBA), as much as for its built-in features. And, of course, the Firefox browser wouldn't be half as useful to me if it weren't for the half-dozen plugins that I have added to my software.
So, users push software to the limits, and software publishers have been able to make their offerings more useful by making it possible to extend their programs. How does this translate into an era of Web-based software? And, what does this mean to us as Web developers?
The answer is the increasingly ubiquitous application programming interface, or API. If you want your Web site to be taken seriously as a platform, and not just an application, you need to offer an API that lets users create, modify and extend your application. APIs are becoming an increasingly standard part of Web-based applications, but despite everyone's use of the term API, that acronym means different things to different people.
Starting next month, I'm going to look at the latest batch of APIs that sites such as Facebook are offering. But this month, I want to take a step back and consider the different types of APIs that software publishers offer. This is useful if you intend to extend, use and work with those APIs. Web development is increasingly a matter of tying together existing functionality from other sites, and understanding these APIs can be quite useful.
It's also important for Web developers to understand the nature of APIs. If you want to create the next Facebook or Google, you're going to need to create more than a winning product. You're going to need an ecosystem of developers and third-party software around your core product. One of the best ways to do this is to create and promote APIs, letting people use your application as a platform, rather than a standalone program. By looking around and seeing what others have done, we can get a better sense of just what the possibilities are and how we might use them.
In the beginning, when Tim Berners-Lee invented the Web, he imagined it as a read-write medium. But for most people who used the Web during the first decade, it was a read-only medium. You could view Web sites with your browser, fill out forms with your browser, and that was about it. There was no API for reading Web sites; if you wanted to read the content of a site programmatically—for example, in order to create a searchable index of all Web content—you needed to create your own “spider” program, as well as teach it to take apart the HTML.
This changed in the late 1990s, when a number of developers (most prominently, but not exclusively, Dave Winer) created RSS, an acronym variously expanded as Really Simple Syndication or RDF Site Summary. In either case, the idea was to create a machine-readable, frequently updated summary of a site's contents. By checking a site's RSS feed, you could learn whether there had been any updates. More significantly, RSS feeds were formatted in a standard way, with standard tags, making it fairly easy for programs to poll a feed and extract only the information they wanted.
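To make this concrete, here is a short sketch in Python of how a program might dissect a feed it has polled. The feed below is an invented sample; a real program would first fetch the XML over HTTP, but the extraction logic is the same:

```python
import xml.etree.ElementTree as ET

# A minimal RSS 2.0 document, of the sort a site's feed might return.
# The posts and URLs here are invented for illustration.
RSS_SAMPLE = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item>
      <title>First post</title>
      <link>http://example.com/first</link>
      <pubDate>Mon, 01 Oct 2007 12:00:00 GMT</pubDate>
    </item>
    <item>
      <title>Second post</title>
      <link>http://example.com/second</link>
      <pubDate>Tue, 02 Oct 2007 12:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>"""

def extract_items(rss_text):
    """Return (title, link) pairs for every item in an RSS 2.0 feed."""
    root = ET.fromstring(rss_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in extract_items(RSS_SAMPLE):
    print(title, "->", link)
```

Because every RSS 2.0 feed uses the same channel/item structure, these same few lines work against any site's feed, which is precisely the point of a standard syndication format.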
Unfortunately, the term RSS became both the generic term for syndication and the name for several incompatible (but similar) syndication formats. A separate group of developers created Atom, a competing syndication format (with an accompanying publishing protocol), which many people believe is superior to all of the various RSS formats.
RSS and Atom are still popular today. The most common use of these syndication feeds is for blog and news updates, allowing users to keep track of which sites have updated their content. But, RSS and Atom can be used in other ways as well, providing a simple, reliable and machine-readable version of various types of data from a Web site. If you are looking to broadcast regularly updated data, RSS and Atom probably are going to be a good starting point.
For example, the well-known development company 37Signals provides an Atom feed of recent activity in its Highrise contact management system. As helpful as it might be to look at your own feed, it would be even more helpful to aggregate multiple people's feeds into a single viewer, allowing, for example, a manager to get a sense of what (and how much) employees are getting done each day.
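As a sketch of how such an aggregator might work, suppose each employee's Highrise feed has already been parsed into (timestamp, description) pairs, oldest first; the names and activity entries below are invented. Merging the feeds into one chronological stream is then a one-liner:

```python
import heapq

# Per-employee activity, already extracted from each person's Atom
# feed and sorted by time.  All of the sample data is invented.
alice = [("2007-10-01T09:00", "Added a note to Acme Corp"),
         ("2007-10-01T11:30", "Logged a call with Widgets Inc")]
bob   = [("2007-10-01T10:15", "Created a new contact"),
         ("2007-10-01T12:00", "Scheduled a follow-up")]

# heapq.merge interleaves already-sorted iterables, so the combined
# stream stays in timestamp order without re-sorting everything.
activity = list(heapq.merge(alice, bob))
for when, what in activity:
    print(when, what)
```

A manager's "team dashboard" is then just this merged list rendered as a page, regenerated each time the individual feeds are polled.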
The idea that a Web site could provide a regularly updated, machine-parseable version of its content whetted the appetite of many developers for more. Many developers wanted a method to add and modify data, as well as retrieve it.
This came in several different forms, all of which still are used today. The first was XML-RPC, a simple RPC protocol that used HTTP to send an XML-encoded function invocation on a remote server. The server turned the XML-RPC request into a local function call and sent the result of that call (or an error message) in an XML-encoded response. The good news was (and is) that XML-RPC is simple to understand and use, that there are implementations in many different languages, and that they are generally compatible and interoperable.
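Python's standard library happens to include an XML-RPC implementation, which makes it easy to see what actually travels over the wire. Here is a sketch that encodes a call to a hypothetical sample.sum method, then decodes it again just as a server would:

```python
import xmlrpc.client

# Encode a call to a hypothetical remote method, sample.sum(2, 3),
# as an XML-RPC request.  This is the XML that would be POSTed
# to the server over HTTP.
request_xml = xmlrpc.client.dumps((2, 3), methodname="sample.sum")
print(request_xml)

# The server uses the same library to turn the XML back into a
# method name and arguments, invokes the local function and
# encodes the result as an XML-RPC response.
params, method = xmlrpc.client.loads(request_xml)
response_xml = xmlrpc.client.dumps((sum(params),), methodresponse=True)
print(response_xml)
```

The round trip is the whole protocol in miniature: a function call goes out as XML, and the return value (or a fault) comes back as XML.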
At the same time, XML-RPC was unable to handle some of the more complex data types that people wanted to use. Plus, it didn't have the official seal of approval or complete standard that would have been necessary for it to enter the corporate arena. So, some of the original XML-RPC developers created SOAP (originally known as the Simple Object Access Protocol, but now an acronym that doesn't expand). SOAP is more sophisticated and complete than XML-RPC, but it has had a great many issues with compatibility, implementation and complexity. Today, there are SOAP implementations for many languages, and it continues to be used in a variety of ways, despite some compatibility issues.
But, at the same time that XML-RPC and SOAP were being touted as the foundations for a new type of interactive, machine-parseable Web, along came Roy Fielding, who described the current state of affairs as unnecessarily complex. In his doctoral dissertation, he described an architectural style he called Representational State Transfer (REST). Instead of wrapping every request in an XML envelope, a RESTful service lets the URL identify the resource being acted upon and uses standard HTTP methods to describe the action, without any additional XML markup or payload in the request. The response, of course, could be in XML or any other format.
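The contrast with XML-RPC is easy to demonstrate: in the REST style, the request needs no envelope at all, because everything lives in the URL. The book-search service below is invented, but the mechanics are real:

```python
from urllib.parse import urlencode, urlparse, parse_qs

# A REST-style request to a hypothetical search service: the resource
# and all parameters are expressed entirely in the URL.
base = "http://api.example.com/books"
query = urlencode({"author": "Wall", "format": "xml"})
url = base + "?" + query
print(url)

# The server can recover every parameter from the URL alone,
# with no XML parsing required on the request side.
params = parse_qs(urlparse(url).query)
print(params)
```

An ordinary HTTP GET of that URL is the entire request; only the response body carries XML (or whatever format the service chooses).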
The idea of published Web services, procedures invoked via HTTP and URLs that transferred data in XML and other formats, soon became widespread. Creating and using Web services became the biggest thing, with every company talking about how it would take advantage of such Web services. Many standards, such as WSDL and UDDI, were proposed for describing and finding Web services; for all I know, these standards still exist, but for the average programmer, they don't, and I'm not sure if and when they ever will.
Given two read-only protocols and three read-write protocols, it was a matter of time before people started to create applications that would take advantage of these. Amazon was one of the first companies to do so, opening up its catalog in a set of Web services now known as Amazon E-Commerce Services, or ECS. Amazon made its entire catalog available via ECS, and it allowed programmers to choose between SOAP and REST. Over the years, ECS has become an increasingly sophisticated and capable system, making it possible to retrieve particular slices of Amazon's catalog and pricing information.
But, retrieving information from Amazon is only half the story: Amazon also makes it possible to manage a shopping cart via ECS and even has some facilities for managing third-party products for sale. Amazon has made a huge commitment to ECS, and a large community of developers and third-party software vendors now exist around this facility. By turning Amazon into a platform for software development, rather than a simple on-line store, Amazon simultaneously has made a community of people dependent on ECS and has created opportunities for the creation of software applications that otherwise never would have been built.
eBay, Google and Yahoo! (among others) also have provided a number of APIs via Web services, which developers can use and access using various protocols. I've read reports, which I can't confirm but that I'm willing to believe, claiming the majority of requests submitted to eBay's servers are through its Web services. Given that most eBay users are not technically sophisticated enough to create their own HTTP clients, we may assume there are a number of software development shops that see eBay as a distribution platform, much as others might see Windows or Linux as their platform.
Google also has exposed a number of its applications to read-write APIs. Rather than use one of the existing protocols, Google uses a version of Atom for both requests and responses, along with a data format it calls GData. There are read-write APIs for a number of Google's applications, including the calendar, Blogger and the spreadsheet program. Programmers no longer are limited by the interface that Google provides to their spreadsheet data; they may create their own programs that use the spreadsheet for storage and retrieval. (One slightly far-fetched example would be the creation of a distributed database server that depended on Google's spreadsheet for locking and version control.)
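As a rough sketch of what such a request body looks like, here is a minimal Atom entry of the kind a GData-style API accepts via HTTP POST. The title and content are invented, and a real GData request would carry additional Google-specific elements:

```python
import xml.etree.ElementTree as ET

ATOM_NS = "http://www.w3.org/2005/Atom"
ET.register_namespace("", ATOM_NS)

# Build a bare-bones Atom entry; a GData-style write operation
# POSTs a document like this to the service's feed URL.
entry = ET.Element("{%s}entry" % ATOM_NS)
title = ET.SubElement(entry, "{%s}title" % ATOM_NS)
title.text = "Quarterly budget"
content = ET.SubElement(entry, "{%s}content" % ATOM_NS)
content.set("type", "text")
content.text = "A new spreadsheet row"

payload = ET.tostring(entry, encoding="unicode")
print(payload)
```

The elegance of the approach is its symmetry: the same Atom format used to read data back from the service is also the format you use to create and modify it.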
Although new APIs of this sort are constantly being rolled out, the trend seems clear: make your data easily available and downloadable by users, in a variety of formats, and make it possible for them to interact with your Web-based application through your Web site, from the command line or via their own home-grown applications.
Facebook, the social networking site started by Mark Zuckerberg, has become an extremely popular application on the Web. Facebook users can connect with friends, join groups of like-minded individuals and send messages to others. Early in 2007, Facebook became popular among developers, as well as users, for creating a developer API that goes far beyond the APIs I have described above. In a nutshell, Facebook invited developers to create and deploy new applications that are seamlessly integrated into the full Facebook experience.
Facebook isn't the only site that lets you incorporate your own code into the site. However, the style and methods of this integration are deeper on Facebook than I have seen elsewhere. In the Facebook model, your Web application still resides on your server, but its output is displayed inside the user's Facebook page, alongside other Facebook applications. This is something new and exciting; few other Web sites make it possible for an independent developer to distribute code that integrates into the site itself. The fact that you can use whatever language and platform you prefer, so long as you communicate with Facebook via its defined protocols, marks the beginning of a new kind of API, one in which developers can affect the Web service as seen by all users, not just one particular user. The only other site I can think of in this camp is Ning, Marc Andreessen's build-your-own-social-network site.
Moreover, Facebook has taken a page from Amazon and eBay, telling developers that they can go wild, using the Facebook network for commercial as well as nonprofit reasons. Google has had a long-standing policy of allowing access to its maps, for example, but only for publicly accessible Web sites and reasons. It remains to be seen whether Facebook's API will continue to be free of charge and open to all.
Something this sophisticated cannot use any one of the protocols that I mentioned above. Rather, Facebook uses a combination of protocols and techniques to communicate with your Web application, making it possible for your programs to display their output alongside other Facebook applications. Moreover, Facebook makes it possible for your application to grab certain pieces of the user's Facebook data, so even though your application doesn't have access to the back-end Facebook database, it still can know (and display) something about the user's friends. Your application even can send messages and notifications to the user's friends, although Facebook has discovered that this can lead to spamming, so it remains to be seen exactly what happens on this front.
Web sites used to be nothing more than an electronic method for publishing and reading basic information encoded in HTML. But, Web sites evolved into applications, which spawned the first generation of APIs that made it possible to read and write your data. Facebook is the first of the new generation of Web sites that look at themselves as a platform more than an application.
And, although Amazon, Google and eBay have demonstrated the importance and potential of a platform-centric view, Facebook is pioneering the incorporation of third-party applications. True, most Facebook applications created to date are simple or trivial. But, we can expect that these applications will become increasingly sophisticated and useful over time. Facebook's willingness to open up to third-party developers is good for everyone—except for competing sites, such as MySpace and LinkedIn, which still appear to see themselves as standalone sites, rather than platforms for new applications.
This month, I explained why I find Facebook's API to be new and exciting. Next month, we'll look at how you can create your own Facebook applications. Even if you aren't interested in creating applications for Facebook, you owe it to yourself to see how the latest generation of Web applications allow themselves to be modified, not just queried.
Reuven M. Lerner, a longtime Web/database developer and consultant, is a PhD candidate in learning sciences at Northwestern University, studying on-line learning communities. He recently returned (with his wife and three children) to their home in Modi'in, Israel, after four years in the Chicago area.