PostgreSQL—the Linux under the Databases
Time and date values are handled in a very flexible manner. I will demonstrate this on a database that I use for tracking my telephone costs for connecting to my Internet provider. The structure of my database and a sample query look like this:
create table telephone (
    datum       abstime,
    online_secs int4,
    units       int2,
    costs       float8
);

select * from telephone where datum >= 'yesterday'::abstime;

datum                           |online_secs|units|costs
--------------------------------+-----------+-----+-----
Tue Jul 08 00:16:22 1997 MET DST|        486|    3| 0.36
Tue Jul 08 20:18:52 1997 MET DST|       1476|   10|  1.2
Wed Jul 09 01:06:33 1997 MET DST|       3317|   14| 1.68
This query returns my on-line events from yesterday at 00:00:00 until now. The term 'yesterday'::abstime is an explicit type conversion that makes sure the parser gets the right type for literal values. In this case it is not necessary, but sometimes the input type might be guessed wrong and an error is returned. Explicit notation avoids such errors.
It is also possible to get aggregates of data. From my telephone costs table, I can get the costs for the last 30 days by giving the following query:
select sum(online_secs) as seconds, sum(units) as units,
       sum(costs) as costs
from telephone
where datum > ('now'::abstime - '30 days'::reltime);
The keyword as sets the column titles to the specified values; otherwise, they would all read sum.
Also supported is the SQL date type which accepts dates in the form mm/dd/yyyy. There is even a mechanism to change the date format for foreign languages (currently only European and US formats are supported). Use the set command as follows:
set datestyle to 'european';
set datestyle to 'us';
The first command changes the date format to dd/mm/yyyy, the second changes it back to mm/dd/yyyy. In v6.1 a comma had to follow the keywords "european" and "us"; in later releases this bug was fixed.
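To see the effect of the switch, a short session along these lines should do (a sketch; the exact output format depends on your PostgreSQL version):

```sql
set datestyle to 'european';
select '07/08/1997'::date;  -- read as 7 August 1997
set datestyle to 'us';
select '07/08/1997'::date;  -- read as July 8, 1997
```

The same literal is interpreted differently depending on the current setting, so it is wise to set the style explicitly in scripts that load date data.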
Another nifty feature is the sequence command. A frequent demand to a database is that some column contains a sequence of numbers which is automatically increased when new records are added. To do this, create sequences by giving the following statements:
create sequence id start 1000 increment 10;
insert into table values (nextval('id'),...);
This example creates a sequence, with the name id, which starts at 1000 and increases by 10. Every time it is called, the function nextval() returns the next unique number from the sequence.
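To make the sequence's behavior concrete, here is a short sketch using a hypothetical calls table (my own example, not from the article):

```sql
create sequence id start 1000 increment 10;

-- each call to nextval('id') advances the sequence
insert into calls values (nextval('id'), 'first record');   -- gets 1000
insert into calls values (nextval('id'), 'second record');  -- gets 1010

-- currval('id') repeats the value most recently handed out
-- in this session, without advancing the sequence
select currval('id');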
The next example shows how to implement a user-defined function. This technique can be used to overcome a limitation of PostgreSQL: the database system is not able to process subqueries. Queries that contain another query, usually in a where clause, are called subqueries. PostgreSQL supports them only indirectly, i.e., when the subquery is hidden in a function.
-- I will use the cities table again
create function big_city() returns setof cities
    as 'select * from cities* where population > 700000'
    language 'sql';
select name(big_city()) as highpop;
In the example, the function returns a tuple from the cities table. It is also possible to return only single values by writing e.g., returns int4. The language command in the example tells the parser that this is a query-language function. With programming-language functions (e.g., language 'c'), you can program your own functions in C and load them as dynamic modules directly into the back end. This mechanism is also used to create user-defined types. (Examples can be found in the tutorial directory of the source tree.) A function is permanently stored in the same way as a table. So, when it is no longer needed, it must be explicitly deleted by giving the drop function command.
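A function that returns a single value, and its later removal, might look like this (a sketch reusing the cities table from above; the function name is my own):

```sql
-- a scalar query-language function: one int4, not a set of tuples
create function max_population() returns int4
    as 'select max(population) from cities'
    language 'sql';

select max_population();

-- delete the function once it is no longer needed
drop function max_population();
```

Because the function body is just a query string, it can hide any select statement, which is exactly the trick used above to stand in for a subquery.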
I have presented only some highlights of the SQL language here, with an emphasis on special features of PostgreSQL. I also omitted examples for joining data from different tables, for creating views and for enclosing your operations in transaction blocks to ensure that either all actions are performed or none at all. I also omitted mentioning cursors, which allow you to limit the number of tuples returned from a query. These and other features can be learned best from a book about the SQL query language. For my learning, I used the book A Visual Introduction to SQL by J.H. Trimble, D. Chappell and others, published by Wiley in 1989. It is aimed at novices and easy to understand.
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to always seem to have the right tool for the job.
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers.
- Stunnel Security for Oracle
- SourceClear Open
- Murat Yener and Onur Dundar's Expert Android Studio (Wrox)
- SUSE LLC's SUSE Manager
- My +1 Sword of Productivity
- Managing Linux Using Puppet
- Non-Linux FOSS: Caffeine!
- Google's SwiftShader Released
- Doing for User Space What We Did for Kernel Space
- Parsing an RSS News Feed with a Bash Script
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantage of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future.