At the Forge - Creating Mashups
Last month, we started to look at the Google Maps API, which allows us to embed dynamic (and Ajax-enabled) maps into our Web applications. That article demonstrated how easy it is to create such maps, with markers on the screen.
This month, we try something far more ambitious. Specifically, we're going to join the ranks of those creating mashups: combinations of Web services that often (but not always) have a mapping component. A mashup combines two or more Web APIs in a novel way, making the information more accessible and informative than it would be on its own.
One of the first mashups I saw was the Chicago crime map. The Chicago Police Department publishes a regular bulletin of crimes that have taken place within the city, and their approximate locations. Using this map, you can determine how safe your block is from crime, as well as look for patterns in other areas of the city. This mashup took information from the Chicago Police Department's public information and displayed it on a Google Maps page.
I was living in Chicago at the time it came out, and (of course) used the listing to find out just how safe my neighborhood was. The information had always been available from the police department, but it was only in the context of a mapping application that I was really able to understand and internalize this data. And indeed, this is one of the important lessons mashups have taught us: that the synthesis of information with an accessible graphical display can make a great deal of difference to end users.
This month, I demonstrate a simple mashup of Google Maps with Amazon's used-book service. The application will be relatively simple. A user will enter an ISBN, and a Google map of the United States will soon be displayed. Markers will be placed on the map indicating several of the locations where used copies of the book are available. Thus, if copies of a book are available in New York City, Chicago and San Francisco, we will see three markers on the map, one in each city. In this way, we'll see how two different Web APIs, from two different companies, can be brought together to create an interesting and useful display for end users.
This month's code examples assume you already have signed up for an Amazon Web services ID, as well as for a Google Maps ID. Information on where to acquire these IDs is available in the on-line Resources for this article.
Our first challenge is to create a map that contains one graphic marker for each location in a list. We already saw how to do this last month using PHP. This month, we begin by converting the program to ERB, an ASP- or PHP-style templating system that embeds Ruby rather than another language. You can see the file, mashup.rhtml, in Listing 1.
Listing 1. mashup.rhtml, the First (Simple) Version of Our Map
One way to parse ERB files correctly on a server is to run Ruby on Rails, which uses ERB as its default templating mechanism. But for a small mashup like this, using Rails would be overkill. So, I decided to use ERB (Embedded Ruby, a system for HTML-Ruby templates) by itself.
To make this work, I installed eruby in the cgi-bin directory of my server (see Resources). I then told Apache that any file with an .rhtml extension should be parsed with eruby:
AddType application/x-httpd-eruby .rhtml
Action application/x-httpd-eruby /cgi-bin/eruby
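To check that Apache is handing .rhtml files to eruby, it helps to drop a trivial template into the document root. This is just a sanity-check sketch; the filename (test.rhtml) and its contents are arbitrary:

<html>
<body>
<p>Hello from eruby! The time is now <%= Time.now %>.</p>
<p>2 + 2 = <%= 2 + 2 %></p>
</body>
</html>

If the browser shows the current time and "4" rather than the literal <%= ... %> tags, the handler is working as expected.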
To demonstrate that we can indeed do this for two fixed points, the ERB file defines an array of two longitudes, both within a short distance of my home in Skokie, Illinois:
<% array = [-87.740070, -87.730000] %>
Next, we iterate over the elements of this array, using the each_with_index method to get both the array element and the index within the array that we are currently on:
<% array.each_with_index do |item, index| %>
var myMarker<%= index %> = new GMarker(new GPoint(<%= item %>, 42.037030));
map.addOverlay(myMarker<%= index %>);
<% end %>
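For the two-element array above, the loop expands into JavaScript along these lines (a sketch of the generated output; note that Ruby prints -87.740070 as -87.74007):

var myMarker0 = new GMarker(new GPoint(-87.74007, 42.037030));
map.addOverlay(myMarker0);
var myMarker1 = new GMarker(new GPoint(-87.73, 42.037030));
map.addOverlay(myMarker1);

Because the index is part of each variable name, every marker gets its own JavaScript variable, and none of them overwrites another.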
The myMarkerX variable is then defined to be a new instance of GMarker (that is, a marker on the Google map), located at a point defined by the longitude (the item variable) and latitude (a fixed value, 42.037030).
Finally, so that the user can see exactly where all of the points are, we print some text at the bottom of the page. The result is a map with two markers on it, and the location of each marker is listed in text.
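Putting the pieces together, a template along the lines of mashup.rhtml might look roughly like the following. This is only a sketch: it assumes the version 1 Google Maps JavaScript API, and the API key, the map's dimensions and the zoom level are all placeholders.

<% array = [-87.740070, -87.730000] %>
<html>
<head>
<script src="http://maps.google.com/maps?file=api&v=1&key=YOUR_GOOGLE_MAPS_KEY"
        type="text/javascript"></script>
</head>
<body onload="load()">
<div id="map" style="width: 800px; height: 500px"></div>

<script type="text/javascript">
function load() {
    // Create the map and center it near Skokie, Illinois
    var map = new GMap(document.getElementById("map"));
    map.centerAndZoom(new GPoint(-87.740070, 42.037030), 6);

    // Add one marker per longitude in the array
    <% array.each_with_index do |item, index| %>
    var myMarker<%= index %> = new GMarker(new GPoint(<%= item %>, 42.037030));
    map.addOverlay(myMarker<%= index %>);
    <% end %>
}
</script>

<p>Markers were placed at:</p>
<ul>
<% array.each do |item| %>
<li>longitude <%= item %>, latitude 42.037030</li>
<% end %>
</ul>
</body>
</html>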