PostgreSQL Performance Tuning

Tweak your hardware to get the most from this open-source database.

PostgreSQL is an object-relational database developed on the Internet by a group of developers spread across the globe. It is an open-source alternative to commercial databases like Oracle and Informix.

PostgreSQL was originally developed at the University of California, Berkeley. In 1996, a group began development of the database on the Internet. They used e-mail to share ideas and file servers to share code. PostgreSQL is now comparable to proprietary databases in terms of features, performance and reliability. It has transactions, views, stored procedures and referential integrity constraints. It supports a large number of programming interfaces, including ODBC, Java (JDBC), Tcl/Tk, PHP, Perl and Python. PostgreSQL continues to improve at a tremendous pace thanks to a talented pool of Internet developers.

Performance Concepts

There are two aspects of database-performance tuning. One is improving the database's use of the CPU, memory and disk drives in the computer. The second is optimizing the queries sent to the database. This article talks about the hardware aspects of performance tuning. The optimization of queries is done using SQL commands like CREATE INDEX, VACUUM, VACUUM ANALYZE, CLUSTER and EXPLAIN. These are discussed in my book, PostgreSQL: Introduction and Concepts at www.postgresql.org/docs/awbook.html [see also Stephanie Black's review on page 76].

To understand hardware performance issues, it is important to understand what is happening inside the computer. For simplicity, a computer can be thought of as a central processing unit (CPU) surrounded by storage. On the same chip with the CPU are several CPU registers, which store intermediate results and various pointers and counters. Surrounding this is the CPU cache, which holds the most recently accessed information. Beyond the CPU cache is a large amount of random-access memory (RAM), the main memory, which holds executing programs and data. Beyond this main memory are disk drives, which store even larger amounts of information. Disk drives are the only permanent storage area, so anything to be kept when the computer is turned off must be placed there (see Table 1). Figure 1 shows the storage areas surrounding the CPU.

Table 1. Types of Computer Storage

Figure 1. Storage Areas

You can see that storage areas increase in size as they get farther from the CPU. Ideally, a huge amount of permanent memory could be placed right next to the CPU, but this would be too slow and expensive. In practice, the most frequently used information is stored next to the CPU, and less frequently accessed information is stored farther away and brought to the CPU as needed.

Keeping Information Near the CPU

Moving information between various storage areas happens automatically. Compilers determine which information should be stored in registers. CPU chip logic keeps recently used information in the CPU cache. The operating system controls which information is stored in RAM and shuttles it back and forth from the disk drive.

CPU registers and the CPU cache cannot be tuned effectively by the database administrator. Effective database tuning involves increasing the amount of useful information in RAM, thus preventing disk access where possible.

You might think this is easy to do, but it is not. A computer's RAM contains many things, including executing programs, program data and stack, PostgreSQL shared buffer cache and kernel disk buffer cache. Proper tuning involves keeping as much database information in RAM as possible while not adversely affecting other areas of the operating system.

PostgreSQL Shared Buffer Cache

PostgreSQL does not directly change information on disk. Instead, it requests that data be read into the PostgreSQL shared buffer cache. PostgreSQL backends then read and write these blocks, and finally flush them back to disk. Backends that need to access tables first look for needed blocks in this cache. If they are already there, the backends can continue processing right away. If not, an operating system request is made to load the blocks. The blocks are loaded either from the kernel disk buffer cache or from disk. These can be expensive operations.
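
The lookup path just described can be sketched as a toy in-memory cache. This is an illustrative model only, not PostgreSQL's actual buffer manager; `read_block_from_os` is a hypothetical stand-in for the operating system request:

```python
# Toy model of the shared buffer cache lookup described above.
# A real buffer manager also handles pinning, dirty blocks and eviction.

shared_buffers = {}  # block number -> block contents

def read_block_from_os(block_no):
    # Hypothetical stand-in for the OS request, which may be satisfied
    # from the kernel disk buffer cache or, more slowly, from disk.
    return f"contents of block {block_no}"

def get_block(block_no):
    """Return a block, loading it into the shared cache on a miss."""
    if block_no in shared_buffers:        # cache hit: no OS request needed
        return shared_buffers[block_no]
    block = read_block_from_os(block_no)  # cache miss: potentially expensive
    shared_buffers[block_no] = block
    return block
```

Tuning aims to make the first branch, the cache hit, the common case.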

The default PostgreSQL configuration allocates 64 shared buffers. Each buffer is eight kilobytes. Increasing the number of buffers makes it more likely that backends will find the information they need in the cache, thus avoiding an expensive operating system request.
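
As a back-of-the-envelope check (the 64-buffer and eight-kilobyte figures come from the paragraph above; the 4,096-buffer value is only an illustration):

```python
# Default shared buffer cache size: 64 buffers of 8 kB each.
DEFAULT_BUFFERS = 64
BUFFER_SIZE_KB = 8

def cache_size_kb(num_buffers, buffer_kb=BUFFER_SIZE_KB):
    """Total shared buffer cache size in kilobytes."""
    return num_buffers * buffer_kb

print(cache_size_kb(DEFAULT_BUFFERS))   # 512 kB -- only half a megabyte
print(cache_size_kb(4096) / 1024)       # 32.0 MB with 4,096 buffers
```

The default works out to only half a megabyte of cache, which is why raising the buffer count can make such a difference.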


Comments


Found this article useful


I found this updated PostgreSQL Tuning article very useful.

This guide was very helpful.


This guide was very helpful. Please ignore the ignorant posts saying otherwise. I'd be scared to have those people managing my data.

A brief overview for database system


This article is performance tuning for newbies, nothing to be used as a day-to-day reference. But one point is important: nobody who has to tune PostgreSQL is really a newbie about database systems. ;-(

Breno Leitao

Re: PostgreSQL Performance Tuning


Hello,
After reading this great article, which gives the reader a very useful basic understanding of the data path in the PostgreSQL backend, I must say I was a little shocked by the comments I saw from people who are just waiting for others to do what they should be trying to do themselves. These kinds of comments are no encouragement to Bruce, who deserves a lot of credit for the simplicity of his writing and, above all, for his willingness to help others.
Thanks Bruce, your article helps explain some of the basic background needed to tune a PostgreSQL database wisely.

Paul

Re: PostgreSQL Performance Tuning


HELLO! CAN SOMEONE TELL ME WHAT THIS ARTICLE IS ABOUT? I EXPECTED TO OPTIMIZE MY DATABASE AFTER READING THIS ARTICLE, BUT IT SEEMS THAT I WAS BETTER OFF WITHOUT READING IT!

Re: PostgreSQL Performance Tuning


I was hoping to learn what the vacuum and vacuum analyze commands do, maybe read about the performance benefits of using them or find out other methods to make my queries run faster. I guess the guy just wanted to advertise his book. Thanks for nothing.

Re: PostgreSQL Performance Tuning


Not only is it not really about PostgreSQL specifically, it also has incorrect links.

Re: PostgreSQL Performance Tuning


This document should be called General Overview of Performance Tuning. It provides almost no technical detail or insight into the tuning process. I would like to see an article which delves deeper into the issue and provides some real numbers from real applications on some real hardware. After reading the article, one is no more enabled to do any tuning on a PostgreSQL DB than if one didn't read the article at all. I am disappointed.

