Amazon Web Services
Back when I was in college, there weren't many options for buying technical books. I could buy them new at the high-priced campus bookstore, I could buy them from a high-priced competitor around the corner, or I could buy used copies from other students, who advertised their wares at the end of every semester. Regardless, my ability to buy books was dictated by my location, coupled with my ability to learn what was available.
So, it probably won't surprise you to learn that I was an early customer of on-line bookstores, patronizing both Bookpool and Amazon before the summer of 1995. The combination of excellent prices and wide selection, along with convenience, was a dream come true. Much as I might hate to admit it, I probably spent just as much on books from on-line stores as I would have at their brick-and-mortar counterparts. However, although my book-buying budget was unchanged, the number of books I could buy, as well as the variety that was available, was unparalleled in the physical world.
Things got even better when Amazon opened its doors to third-party booksellers. Now I could not only compare new book prices from the comfort of my living room, but I could browse and buy used books as well. The number of interesting books available for less than $1 US (plus shipping) has turned me into something of a book-buying monster; the shelves of my graduate-school office are filled with books that I hope will be useful in my research, but that I bought largely because the opportunity existed. When I hear about an interesting book, my first instinct now is to check at Amazon—or even better, at isbn.nu, which compares prices across multiple sites.
Over the years, Amazon has assembled a huge database of information about books. I'm sure that this database of books, buyers and sellers continues to be an important source for Amazon's decision-makers. But a few years ago, Amazon decided to do something surprising—they opened part of their internal database to third-party developers, in a program known as Amazon Web Services (AWS). Using AWS, developers can perform nearly every task they would normally be able to do on the Amazon site, using a client-side program rather than a Web browser. AWS also includes a number of features aimed at booksellers, for pricing and inventory management.
In the latter half of 2005, Amazon unveiled a number of new initiatives that fit under its “Web services” umbrella, only some of which are related directly to selling and buying books. At about the same time, eBay announced that it would no longer charge developers to use its Web services, making it possible to query two of the largest databases of sales data. And, of course, Google has long offered Web services of its own; although the data it exposes is currently limited to its main search index, it is a valuable resource nonetheless.
This month, we begin to explore the world of commercial Web services, looking especially at ways in which we can integrate data from external Web services into our own applications. Along the way, we'll see some of the different ways in which we can invoke Web services, some of the different offerings that are available to us and how we might be able to build on existing Web services to create new and interesting applications.
During the Web's first decade or so, it was mostly designed for user interaction. That is, most HTTP clients were Web browsers, and most of the content downloaded by those browsers was HTML-formatted text intended for people to read.
At a certain point, developers began to consider the possibility that they could use HTTP for more than just transmitting human-readable documents. They began using HTTP to transmit data between programs. The combination of HTTP as a transmission protocol and XML as a data format led to XML-RPC. Because XML and HTTP are platform-neutral, one did not have to write both the client and server programs in the same language, or even use the same operating system. XML-RPC thus provides a means for cross-platform RPC (remote procedure calls), with far less overhead than other approaches to the same problems (such as CORBA middleware) might require.
XML-RPC was and is a good, clean and lightweight protocol, but it lacked some of the sophistication, error handling and data types that many developers wanted. Thus, SOAP (originally short for the Simple Object Access Protocol) introduced a number of extensions to make it more formal, including a separation between the message envelope and body.
XML-RPC and SOAP both assume that the server will be listening for method calls at a particular URL. Thus, a server might have an XML-RPC or SOAP server listening at /server, or /queries, or some such URL. The client is then responsible for indicating which method it needs in the request. In XML-RPC, we use the methodName tag. Parameters and metadata are all passed in the XML envelope, which is sent as part of an HTTP POST submission.
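To make the envelope concrete, here is a short sketch using Python's standard xmlrpc.client module. The method name (lookupBook) and its parameter are hypothetical, not part of any real service; the point is simply to show that the method is named inside the XML body rather than in the URL:

```python
import xmlrpc.client

# Serialize a call to a hypothetical "lookupBook" method.
# dumps() builds the XML envelope that a client would send
# as the body of an HTTP POST request.
request_xml = xmlrpc.client.dumps(("0131103628",), methodname="lookupBook")

# The envelope names the method in a methodName tag:
assert "<methodName>lookupBook</methodName>" in request_xml

# A server would decode the same envelope back into the
# method name and its parameters:
params, method = xmlrpc.client.loads(request_xml)
print(method, params)   # lookupBook ('0131103628',)
```

Because both sides exchange nothing but XML over HTTP, the client and server here could just as easily be written in different languages on different operating systems.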
A different technique, known as REST, identifies the method call in the URL itself, with parameters passed in the query string of a standard GET request. REST has a number of nice features, especially its simplicity of implementation and use. And, debugging REST is easy, because you can enter the URLs into a Web browser instead of a specialized program. However, a large number of people are still using SOAP and XML-RPC, especially when working with complex data structures.
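The same hypothetical book lookup, expressed in REST style, collapses into a single URL. The endpoint and parameter names below are invented for illustration; the construction uses only the standard urllib.parse module:

```python
from urllib.parse import urlencode

# In REST style, the method and its parameters live in the URL
# itself rather than in an XML envelope.  Endpoint and parameter
# names here are hypothetical.
params = {"method": "lookupBook", "isbn": "0131103628"}
url = "https://example.com/services?" + urlencode(params)

print(url)
# https://example.com/services?method=lookupBook&isbn=0131103628
```

A URL like this can be pasted directly into a browser's location bar, which is exactly what makes REST services so convenient to debug.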
Web services form the core of what is increasingly known as service-oriented architecture, or SOA, in the high-tech world. A Web service brings together all of the advantages of the Web—platform independence, language independence and the ability to upgrade and change the service without having to distribute a new version.
SOA makes it possible to create new services, or even to unveil new versions of existing services, either by replacing an existing implementation or by unveiling a new implementation in parallel with the old one. Those who use Web services can benefit from improved speed and efficiency, or from completely new APIs, without having to worry about incompatibilities or installation problems. In addition, as long as developers follow the service's published specification, they can use whatever language and platform they want, creating anything from an interactive desktop application to an automated batch job that crunches through gigabytes of data.