At the Forge - Integrating with Facebook Data
For the past few months, we've been looking at the Facebook API, which makes it possible to integrate third-party applications into the popular social-networking site. Facebook is remarkable to users for the number of people already using it, as well as for the rapid pace at which new people are joining. But, it also is remarkable for software developers, who suddenly have been given access to a large number of users, into whose day-to-day Web experience they can add their own applications.
The nature of Facebook means that most developers are writing applications that are more frivolous than not. Thus, it's easy to find name-that-celebrity games, extensions to built-in Facebook functionality (such as “SuperWall”) and various applications that ask questions, match people together and so forth. I expect we eventually will see some more serious applications created with the Facebook API, but that depends on the developer community. I would argue that the continued growth of Facebook applications depends on the ability of developers to profit from their work, but that is a business issue, rather than a technical one.
Regardless of what your application does, it probably will be quite boring if you cannot keep track of information about your users. This might strike you as strange—after all, if you are writing a Facebook application, shouldn't Facebook take care of the storage for you?
The answer is no. Although Facebook handles user authentication, gives you the ability to deploy your application within the Facebook site and even provides access to certain data about the currently logged-in user, it does not store data on your behalf. This means any data you want to store must be kept on your own server, in your own database.
This month, I explain how to create a simple application on Facebook that allows us to retrieve data from a user's Facebook profile or from our local relational database seamlessly. The key to this is the user's Facebook ID, which we will integrate into our own user database. Retrieving information about our user, or about any of their friends, will require a bit of thinking about where the data is stored. However, you will soon see that mixing data from different sources is not as difficult as it might sound at first, and it can lead to far more interesting applications.
Our application is going to be simple—a Facebook version of the famous “Hello, world” program that is the first lesson in oh-so-many books and classes. However, we'll add two simple twists: first, we will display the number of times that the user has visited our application to date. (So, on your fifth visit, you will be reminded that this is your fifth visit.) Moreover, you will be told how many times each of your friends has visited the site.
In a normal Web/database application, this would be quite straightforward. First, we would define a database to keep track of users, friends and visits. Then, we would write some code to keep track of logins. Finally, we would create a page that displayed the result of a join between the various tables to show how often people had visited. For example, we could structure our database tables like this:
CREATE TABLE People (
    id                  SERIAL    NOT NULL,
    email_address       TEXT      NOT NULL,
    encrypted_password  TEXT      NOT NULL,

    PRIMARY KEY(id),
    UNIQUE(email_address)
);

CREATE TABLE Visits (
    person_id   INTEGER    NOT NULL  REFERENCES People,
    visited_at  TIMESTAMP  NOT NULL  DEFAULT NOW(),

    UNIQUE(person_id, visited_at)
);

CREATE TABLE Friends (
    person_id  INTEGER  NOT NULL  REFERENCES People,
    friend_id  INTEGER  NOT NULL  REFERENCES People,

    UNIQUE(person_id, friend_id),
    CHECK(person_id <> friend_id)
);
Our first table, People, contains only a small number of columns, probably fewer than you would want in a real system. We keep track of the users' primary key (id), their e-mail addresses (which double as their login) and their encrypted passwords.
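By way of illustration, adding a user might look like the following. (The e-mail address is made up, and I use PostgreSQL's built-in md5() function here purely as a placeholder; a real application would want a proper salted password scheme.)

```sql
-- Add a new user; id is assigned automatically by the SERIAL column
INSERT INTO People (email_address, encrypted_password)
    VALUES ('alice@example.com', md5('secret'));
```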
We keep track of each visit someone makes to our site in a separate table. We don't need to do this; it would be a bit easier and faster to have a number_of_visits column in the People table and then just increment that with each visit. But, keeping track of each visit means we have more flexibility in the future, from collecting usage statistics to stopping people from using our system too much.
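Recording a visit is then a one-row INSERT, and counting someone's visits is a simple aggregate query. (The ID of 1 here is just an example value.)

```sql
-- Record a visit for the user whose ID is 1;
-- visited_at gets its DEFAULT value of NOW()
INSERT INTO Visits (person_id) VALUES (1);

-- How many times has user 1 visited?
SELECT COUNT(*) FROM Visits WHERE person_id = 1;
```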
Finally, we indicate friendship in our Friends table. Keeping track of friends is a slightly tricky business, because you want to assume that if A is a friend to B, then B also is a friend to A. We could do this, but it's easier in my book simply to enter two rows in the database, one for each direction. To retrieve the friends of A, whose ID is 1, we look in the Friends table for all of the values of friend_id where person_id = 1.
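So, to make users 1 and 2 friends of each other, we insert two rows, one in each direction. The per-friend visit counts we want to display then come from a join between Friends, People and Visits. (The IDs are, again, arbitrary examples.)

```sql
-- Friendship is symmetrical, so we insert one row in each direction
INSERT INTO Friends (person_id, friend_id) VALUES (1, 2);
INSERT INTO Friends (person_id, friend_id) VALUES (2, 1);

-- Count the visits made by each of user 1's friends;
-- the LEFT JOIN keeps friends who have never visited (count of 0)
SELECT P.email_address, COUNT(V.visited_at) AS visit_count
    FROM Friends F
    JOIN People P ON (P.id = F.friend_id)
    LEFT JOIN Visits V ON (V.person_id = F.friend_id)
    WHERE F.person_id = 1
    GROUP BY P.email_address;
```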