We have now defined three tables: People, Appointments and Salaries. What happens if I want to create a table listing all appointments scheduled with people, along with their salary histories? (This would be useful to have before annual salary meetings.) The query in Listing 11 retrieves exactly this information, listing appointments in chronological order.
Rather than creating this query from scratch each time, we can define it as a “view”. A view is a dynamically generated table with a name attached, and it can largely be treated as a read-only table. We can create a view of appointment and salary information, as shown in Listing 10.
Notice how little difference there is between the generic select statement and the select used to create a view. Indeed, the only change we made was to drop the order by clause, because PostgreSQL has not implemented order by within views as of version 7.0.2. We can still list all appointments in chronological order by applying order by when we select from the view (see Listing 11).
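Since the listings themselves are not reproduced here, the following is a minimal sketch of what such a view might look like. The table names (People, Appointments, Salaries) come from the article, but the column names are assumptions for illustration only:

```sql
-- Hypothetical sketch: column names are assumed, not taken from the listings.
CREATE VIEW ApptAndSalaryView AS
    SELECT P.person_id, P.first_name, P.last_name,
           A.appointment_time,
           S.salary, S.start_date
      FROM People P, Appointments A, Salaries S
     WHERE A.person_id = P.person_id
       AND S.person_id = P.person_id;

-- PostgreSQL 7.0 does not allow ORDER BY inside a view definition,
-- so the ordering is applied when selecting from the view:
SELECT * FROM ApptAndSalaryView ORDER BY appointment_time;
```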
Views have a number of advantages over simple select statements:
They move more of the processing onto the database server. Because database servers are typically high-end machines with direct access to the data, they can do this work efficiently, and the client spends less time generating SQL statements dynamically.
Views have their own permission structure. For example, the personnel department at a company should have access to salary information, but no one else should. With views, it is possible to hide such information: access to the base table can be restricted to particular users or IP addresses, while a view that omits the sensitive columns remains open to the general public.
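As a sketch of this arrangement (the group and column names here are assumptions, not taken from the article's listings), the base table can be locked down while a sanitized view stays public:

```sql
-- Hypothetical sketch: "personnel" is an assumed group name.
-- Restrict the base table to the personnel group...
REVOKE ALL ON Salaries FROM PUBLIC;
GRANT SELECT ON Salaries TO GROUP personnel;

-- ...while a view that omits salary figures stays open to everyone.
CREATE VIEW PublicPeopleView AS
    SELECT person_id, first_name, last_name
      FROM People;
GRANT SELECT ON PublicPeopleView TO PUBLIC;
```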
Perhaps most importantly, views let us think at a higher level of abstraction than basic tables. Views can perform a variety of calculations and manipulations on the values from our table, just as a simple select can. If you know that you will have to multiply all of the values in a particular column by three, you can create a new view which automatically performs the calculation. You no longer need to perform the calculation in the select statement, nor in the program that retrieves the values.
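For instance, the multiply-by-three case above might be sketched like this (again, the column names are assumptions for illustration):

```sql
-- Hypothetical sketch: the view performs the calculation once,
-- so neither the select statement nor the client program has to.
CREATE VIEW TripledSalaries AS
    SELECT person_id, salary * 3 AS tripled_salary
      FROM Salaries;

SELECT * FROM TripledSalaries;
```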
Views cannot take variable arguments, meaning that you cannot create a view that sometimes retrieves user names beginning with “A” and sometimes retrieves those beginning with “B”. To perform such an operation, you will need to create a procedure. PostgreSQL supports a variety of programming languages in which procedures can be written, including PL/pgSQL (which is similar to Oracle's PL/SQL language), PL/Perl (procedures written in Perl) and PL/Tcl (procedures written in Tcl).
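A minimal PL/pgSQL sketch of such a procedure might look like the following. The function name, column name and quoting style (PostgreSQL 7.0 predates dollar-quoting, so single quotes inside the body are doubled) are assumptions, and the language must first have been installed with createlang:

```sql
-- Hypothetical sketch: unlike a view, a function can take an argument.
CREATE FUNCTION count_names_starting(text) RETURNS integer AS '
DECLARE
    how_many integer;
BEGIN
    SELECT COUNT(*) INTO how_many
      FROM People
     WHERE first_name LIKE $1 || ''%'';
    RETURN how_many;
END;
' LANGUAGE 'plpgsql';

SELECT count_names_starting('A');
SELECT count_names_starting('B');
```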
Once you have created procedures, you can then create “triggers”. A trigger is a procedure that is automatically executed when something happens in the system. For example, you can use a trigger to ensure that when a user is deleted from the People table, rows in the Salaries and Appointments tables that refer to that user are also deleted. Without this, it is possible for a row in Salaries or Appointments to refer to a person_id that no longer exists. Triggers can be activated whenever someone performs an insert, update or delete on a table, and can operate either before or after the action occurs.
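The deletion example above might be sketched as follows. The function and trigger names are assumptions, and the opaque return type reflects the PostgreSQL 7.0-era convention for trigger functions:

```sql
-- Hypothetical sketch: remove dependent rows before a person is deleted.
CREATE FUNCTION delete_dependents() RETURNS opaque AS '
BEGIN
    DELETE FROM Salaries     WHERE person_id = OLD.person_id;
    DELETE FROM Appointments WHERE person_id = OLD.person_id;
    RETURN OLD;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER people_delete_trigger
    BEFORE DELETE ON People
    FOR EACH ROW
    EXECUTE PROCEDURE delete_dependents();
```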
Finally, views are normally read-only objects, since they are simply aliases for select queries. However, PostgreSQL has a sophisticated rule system that makes it possible to rewrite queries that fit certain criteria. Using rules, you can intercept an insert, update or delete that is targeted at a view, and rewrite it as a series of operations on one or more tables. Thus, an insert into ApptAndSalaryView could not sensibly be rewritten, since required columns, such as the user's e-mail address (from People), do not appear in the view. However, an update would certainly make sense, and could be rerouted to modify the People, Appointments or Salaries tables as necessary.
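A rule that reroutes an update in this way might be sketched as follows (the rule name and the columns being mapped are assumptions for illustration):

```sql
-- Hypothetical sketch: an UPDATE aimed at the view is rewritten
-- into an UPDATE on the underlying Salaries table.
CREATE RULE appt_salary_update AS
    ON UPDATE TO ApptAndSalaryView
    DO INSTEAD
       UPDATE Salaries
          SET salary = NEW.salary
        WHERE person_id = OLD.person_id;
```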
While PostgreSQL is rapidly growing in popularity, it does have problems. The PostgreSQL development team is well aware of these problems and seems to be dealing with them quickly.
The most pressing issue is probably the 8KB limitation on each tuple, or database row. This means that no row can contain more than 8KB of data. This affects many parts of the database's operation, from limiting support for BLOBs (binary large objects) to preventing developers from storing even moderately large values in a single row.
Other issues, such as the inability to combine views with unions, bit me (and my clients) on a recent programming project. This, combined with the lack of outer joins, has meant a number of workarounds in some recent projects. It is possible to get around most or all of these at the application level, but I am anxiously awaiting the addition of these features.
Working with MySQL has spoiled me somewhat, since the number of built-in functions is large and allows me to create database applications without creating my own functions. PostgreSQL has fewer built-in functions but, as we saw earlier, does allow me to create any that I might like, in a variety of programming languages. However, the installation and use of these languages is poorly documented; while they might be very powerful, it takes a while to get started with them.
Despite some benchmarks that were recently released by a PostgreSQL consulting group, it's safe to say that PostgreSQL is slower than MySQL. At the same time, I was pleasantly surprised to discover that the speed difference was not as great as I expected. Of course, this speed difference exists because PostgreSQL includes transactions and referential integrity, both of which require more processing and record keeping than MySQL's table-level locks.
A final drawback to PostgreSQL is that not as many web-hosting services offer it. This may be unimportant when working with dedicated servers, but some of my clients only have the budget to rent a virtual server. Database capabilities should be a consideration when looking for a web server, but, in my experience, developers often have less say in the tools used for a project than they might like. Still, if you are interested in something closer to a commercial database, including transactions and integrity constraints, PostgreSQL is your best choice.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem to always have the right tool for the job.
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers.
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future.