Amazon Web Services
Back when I was in college, there weren't many options for buying technical books. I could buy them new at the high-priced campus bookstore, I could buy them from a high-priced competitor around the corner, or I could buy used copies from other students, who advertised their wares at the end of every semester. Regardless, my ability to buy books was dictated by my location, coupled with my ability to learn what was available.
So, it probably won't surprise you to learn that I was an early customer of on-line bookstores, patronizing both Bookpool and Amazon before the summer of 1995. The combination of excellent prices, wide selection and convenience was a dream come true. Much as I might hate to admit it, I probably spent just as much on books from on-line stores as I would have at their brick-and-mortar counterparts. However, although my book-buying budget was unchanged, the number and variety of books I could buy were unparalleled in the physical world.
Things got even better when Amazon opened its doors to third-party booksellers. Now I could not only compare new book prices from the comfort of my living room, but I could browse and buy used books as well. The number of interesting books available for less than $1 US (plus shipping) has turned me into something of a book-buying monster; the shelves of my graduate-school office are filled with books that I hope will be useful in my research, but that I bought largely because the opportunity existed. When I hear about an interesting book, my first instinct now is to check at Amazon—or even better, at isbn.nu, which compares prices across multiple sites.
Over the years, Amazon has assembled a huge database of information about books. I'm sure that this database of books, buyers and sellers continues to be an important source for Amazon's decision-makers. But a few years ago, Amazon did something surprising: it opened part of its internal database to third-party developers, in a program known as Amazon Web Services (AWS). Using AWS, developers can perform nearly every task they normally would do on the Amazon site, but from a client-side program rather than a Web browser. AWS also includes a number of features aimed at booksellers, for pricing and inventory management.
In the latter half of 2005, Amazon unveiled a number of new initiatives under its “Web services” umbrella, only some of which relate directly to selling and buying books. At about the same time, eBay announced that it would no longer charge developers to use its Web services, making it possible to query two of the largest databases of sales data at no cost. And, of course, Google has long offered Web services of its own; although the data is currently limited to its main search index, that alone makes it a valuable resource.
This month, we begin to explore the world of commercial Web services, looking especially at ways in which we can integrate data from external Web services into our own applications. Along the way, we'll see some of the different ways in which we can invoke Web services, some of the different offerings that are available to us and how we might be able to build on existing Web services to create new and interesting applications.
During its first decade or so, the Web was designed mostly for human interaction. That is, most HTTP clients were Web browsers, and most of the content downloaded by those browsers was HTML-formatted text intended for people to read.
At a certain point, developers began to consider the possibility that they could use HTTP for more than just transmitting human-readable documents. They began using HTTP to transmit data between programs. The combination of HTTP as a transmission protocol and XML as a data format led to XML-RPC. Because XML and HTTP are platform-neutral, the client and server programs don't have to be written in the same language or even run on the same operating system. XML-RPC thus provides a means for cross-platform RPC (remote procedure calls), with far less overhead than other approaches to the same problems (such as CORBA middleware) might require.
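To make this concrete, here is a minimal sketch of an XML-RPC client in Python, using the standard xmlrpc.client module. The server URL and the getPrice method are hypothetical, stand-ins for whatever a real service might expose:

```python
# Minimal XML-RPC client sketch; the endpoint URL and the getPrice method
# are hypothetical, standing in for whatever a real service exposes.
import xmlrpc.client

proxy = xmlrpc.client.ServerProxy("http://bookstore.example.com/server")

# The call below is serialized into an XML document and sent via HTTP POST;
# the server's XML response is parsed back into ordinary Python values.
price = proxy.getPrice("0596000278")   # look up a price by ISBN
print(price)
```

Because everything crosses the wire as XML over HTTP, the server behind /server could just as easily be written in Perl, Java or C.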
XML-RPC was and is a good, clean and lightweight protocol, but it lacked some of the sophistication, error handling and data types that many developers wanted. Thus, SOAP (originally short for Simple Object Access Protocol) introduced a number of extensions to make the protocol more formal, including a separation between the message envelope and body.
XML-RPC and SOAP both assume that the server will be listening for method calls at a particular URL. Thus, a site might have an XML-RPC or SOAP server listening at /server, /queries or some such URL. The client is then responsible for indicating, in the request itself, which method it wants to invoke; in XML-RPC, this is done with the methodName tag. Parameters and metadata are all passed in the XML envelope, which is sent as part of an HTTP POST submission.
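If you're curious about what actually travels over the wire, Python's xmlrpc.client module can build the request document without sending it. The following sketch, again using a hypothetical getPrice method, prints the methodCall envelope, including the methodName tag and the parameters:

```python
import xmlrpc.client

# Serialize a hypothetical getPrice("0596000278") call into XML-RPC's
# request format, without contacting any server.
payload = xmlrpc.client.dumps(("0596000278",), methodname="getPrice")
print(payload)

# The output is a <methodCall> document containing a <methodName> tag and
# a <params> section; this payload is what gets POSTed to the server's URL.
```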
A different technique, known as REST (Representational State Transfer), identifies the method call in the URL itself, passing parameters as in a standard GET request. REST has a number of nice features, especially its simplicity of implementation and use. Debugging REST is also easy, because you can enter the URLs into a Web browser rather than a specialized program. However, a large number of people still use SOAP and XML-RPC, especially when working with complex data structures.
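By contrast, a REST-style request is nothing more than a URL, with the operation and its parameters encoded in the query string. Here is a sketch of the same hypothetical price lookup done REST-style; the host name and parameter names are invented for illustration:

```python
from urllib.parse import urlencode
from urllib.request import urlopen

# REST-style request: the operation and its arguments live in the URL itself.
# The host and parameter names here are made up for illustration.
params = urlencode({"method": "getPrice", "isbn": "0596000278"})
url = "http://bookstore.example.com/queries?" + params

print(url)   # the same URL can be pasted into a browser for easy debugging

response = urlopen(url)          # an ordinary HTTP GET
print(response.read()[:200])     # the first part of the server's reply
```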
Web services form the core of what is increasingly known as service-oriented architecture, or SOA, in the high-tech world. A Web service brings together all of the advantages of the Web—platform independence, language independence and the ability to upgrade and change the service without having to distribute a new version.
SOA makes it possible to create new services, or even to unveil new versions of existing services, either by replacing an existing implementation or by unveiling a new implementation in parallel with the old one. Those who use Web services can benefit from improved speed and efficiency, or from completely new APIs, without having to worry about incompatibilities or installation problems. In addition, as long as developers follow the service's published specification, they can use whatever language and platform they want, creating anything from an interactive desktop application to an automated batch job that crunches through gigabytes of data.