The Past, Present and Future of GIS: PostGIS 2.0 Is Here!
Knowing that your features are safely stored in a database is nice, but you may want to use them for purposes other than later retrieval. PostGIS functions let you interact with spatial objects and explore their relationships.
Functions known as constructors build geometry from definitions in several formats; they act like translators. You used ST_GeomFromText before, and others, such as ST_GeomFromGeoJSON, enable translations from other popular formats. Output functions enable the inverse translation, and functions like ST_GeometryType check fundamental properties of a geometry. You can interact with a geometry's vertices with ST_NumPoints, which retrieves the total number of vertices; ST_PointN, which gets the vertex at a given position; and ST_RemovePoint, which removes the vertex at the position you pass to the function. Function names often are self-explanatory, as with ST_Distance, which measures the minimum distance between two geometry objects. Like many others, this function is overloaded; the exact definitions are:

float ST_Distance(geometry g1, geometry g2);
float ST_Distance(geography gg1, geography gg2);
float ST_Distance(geography gg1, geography gg2, boolean use_spheroid);
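The third signature deserves a note: use_spheroid defaults to true, and passing false tells PostGIS to compute the distance on a simple sphere instead of the spheroid, trading a little accuracy for speed. A quick sketch of both calls, using two arbitrary points (San Francisco and Denver):

```sql
-- Distance computed on the spheroid (the default, most accurate)
SELECT ST_Distance(
    'SRID=4326;POINT(-122.440 37.802)'::geography,
    'SRID=4326;POINT(-104.987 39.757)'::geography,
    true);

-- Distance computed on a sphere (faster, slightly less accurate)
SELECT ST_Distance(
    'SRID=4326;POINT(-122.440 37.802)'::geography,
    'SRID=4326;POINT(-104.987 39.757)'::geography,
    false);
```

The difference between the two results is small over this distance, but on large datasets the sphere-based calculation can save noticeable time.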
The returned distance is measured along a Cartesian plane for geometry, and along a spheroid/sphere for the geography type. If you are querying objects that are relatively close to each other, the question of which type to use may seem moot, but think about measuring the distance from San Francisco to Denver:

SELECT to_char(round(ST_Distance(
    ST_GeomFromText('POINT(-122.440 37.802)',4326)::geography,
    ST_GeomFromText('POINT(-104.987 39.757)',4326)::geography
)),'999,999,999');

  1,529,519
About 1,530 km is quite a long way to go, and going straight from San Francisco to Denver may be a real challenge, so there's room for extra mileage. But if you try to measure the same distance on a printed map, you may find a different result. As you learned in primary school, the Earth's shape is almost a sphere. When a map represents a wide portion of the planet on the surface of a plane (yes, curved monitors are yet to come), it has to distort the real shape and distance. By passing two geography objects to ST_Distance, you are asking it to perform the distance calculation over the sphere's surface. Now let's use geometry instead, so the calculation is performed on a Cartesian plane. To get a result in meters, comparable to the previous one, you need to add the ST_Transform function to change the SRS on the fly to the Web Mercator projection used by most Web mapping systems:

SELECT to_char(round(ST_Distance(
    ST_Transform(ST_GeomFromText('POINT(-122.440 37.802)',4326),3857),
    ST_Transform(ST_GeomFromText('POINT(-104.987 39.757)',4326),3857)
)),'999,999,999');

More than 1,900 km! Hey, Mr. Mercator, where are you taking me?
You've learned how you can process spatial data in many ways inside PostGIS, but how do you get the data into the database? If you are familiar with PostgreSQL, you know it ships with psql, a command-line tool, or you probably have been using pgAdmin III if you prefer to interact with a GUI. Neither is specialized in dealing with spatial data, but you can execute SQL code that performs data loading.
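For instance, here is a minimal sketch of loading a feature with plain SQL, using the constructors described earlier (the table and the point are made up for illustration):

```sql
-- Hypothetical table using a PostGIS 2.0 typed geometry column
CREATE TABLE poi (
    id serial PRIMARY KEY,
    name text,
    geom geometry(Point, 4326)
);

-- Insert one point feature from its WKT definition
INSERT INTO poi (name, geom)
VALUES ('Golden Gate Bridge',
        ST_GeomFromText('POINT(-122.478 37.819)', 4326));
```

This works fine for a handful of features, but hand-writing INSERT statements clearly doesn't scale to real datasets.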
If you search on the Internet, you quickly will realize that a lot of data is available in shapefiles, a binary proprietary format that is the de facto standard in spatial data exchange. Are you wondering how you can transform that binary format into an SQL script? Don't worry; since its early releases, PostGIS has included tools that read shapefiles and load them into the database.
shp2pgsql and pgsql2shp are command-line tools that make your data go in and out. Not surprisingly, shp2pgsql loads the data. In fact, shapefiles are not really loaded by shp2pgsql but are translated into a form that psql can load for you. So, you just have to pipe the output to psql:
$ shp2pgsql -s 4269 -g geom -I ~/data/counties.shp public.counties | psql -h localhost -p 5432 -d postgisDB -U gisuser
The basic set of parameters required is -s to set the spatial reference system, -g to name the geometry column (useful when appending data) and -I to create a spatial index. There are quite a few other parameters that make it a flexible tool. As usual, -? is your friend if you need to execute less-trivial data loading. Apart from creating a new table, the default option, you may append data to an existing table, drop and re-create it, or just create an empty table modeling its structure according to the shapefile data.
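Those loading modes correspond to dedicated flags. A sketch of the three non-default modes, assuming the same counties shapefile (check -? for the authoritative list on your version):

```shell
# -a: append features to an existing table instead of creating it
shp2pgsql -a -s 4269 -g geom ~/data/counties.shp public.counties | psql -d postgisDB

# -d: drop the table if it exists, then re-create and load it
shp2pgsql -d -s 4269 -g geom ~/data/counties.shp public.counties | psql -d postgisDB

# -p: prepare mode, create the empty table only, loading no records
shp2pgsql -p -s 4269 -g geom ~/data/counties.shp public.counties | psql -d postgisDB
```

Prepare mode is handy when you want to tweak the generated schema before loading any data.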
pgsql2shp lets you dump your data into a shapefile:

$ pgsql2shp -f ~/data/rivers -h localhost -p 5432 -u postgres postgisDB public.rivers
The source of the data can be a table or a view, but you also can filter data at extraction time to export only a portion of a table:
$ pgsql2shp -f ~/data/california_counties -h localhost -p 5432 -u postgres postgisDB "SELECT * FROM public.counties WHERE statefp = '06'"
As declared in its name, shp2pgsql-gui is a graphical version of shp2pgsql. Release 2.0 introduced some interesting features. Despite the name, you now can use it both for loading shapefiles and for exporting them, and although earlier versions processed one shapefile at a time, now you can add as many files as you need to load and then run it once.
Figure 1. Shapefile Loader GUI
Storing and processing raster data in PostGIS is analogous to working with vector data. Aerial imagery and satellite scenes, like those visible in Google Maps, are common examples, but other types may be far more useful inside PostGIS. Indeed, the real value of having raster data inside PostGIS is the possibility of performing analysis, and you also can mix raster and vector data in your analysis. The digital elevation model, a raster where an elevation value is associated with each pixel, is commonly used by geologists to perform terrain analysis. A raster data type has been added to support this kind of data. You can create a table for raster storage in the same way that you did for a vector:
CREATE TABLE myraster(rid integer, rast raster);
A raster is divided into regular tiles, and each tile is loaded as a record in the table. For example, if you have an imagery.tif file whose size is 4096x3072 pixels, and you choose a tile size of 256x256 pixels, after loading it you will have a table with 192 records.
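As a taste of raster analysis, once a digital elevation model is loaded, you can sample the elevation under a point with ST_Value. A sketch, assuming a dem table with tiles stored in WGS 84 (the table and column names are made up):

```sql
-- Read the elevation value of the pixel under a point
SELECT ST_Value(rast,
                ST_SetSRID(ST_MakePoint(-104.987, 39.757), 4326))
FROM dem
WHERE ST_Intersects(rast,
                    ST_SetSRID(ST_MakePoint(-104.987, 39.757), 4326));
```

The ST_Intersects filter restricts the scan to the tile that actually contains the point, which is exactly why tiling the raster pays off.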
Loading raster data from the SQL prompt is not easy. As with vectors, a command-line utility exists, raster2pgsql:
$ raster2pgsql -s 4326 -t 256x256 -I -C /home/postgis/data/imagery.tif imagery | psql -d postgisDB -h localhost -p 5432 -U gisuser
The parameters are very similar, except that you use -t to set the tile size, and -C to set the standard set of constraints on the raster.
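One benefit of -C is that registering the constraints lets PostGIS populate the raster_columns metadata view, so you can verify what was loaded, for example:

```sql
-- List registered raster columns and their basic metadata
SELECT r_table_name, r_raster_column, srid, blocksize_x, blocksize_y
FROM raster_columns;
```

If you skip -C at load time, the view shows far less information about your rasters.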
This article is merely a brief exploration of what PostGIS can do. Consider that there are about 700 specialized functions for dealing with spatial data. I hope you found it interesting and want to give it a try. Among experts, PostGIS always has been considered to be a hard horse to ride. I think it requires a little humility and a willingness to read the manual. Once you start using it, however, you soon will find yourself asking why people are spending big bucks for commercial spatial databases.
EnterpriseDB Downloads: http://www.enterprisedb.com/downloads/postgres-postgresql-downloads
The Shapefile Format: http://en.wikipedia.org/wiki/Shapefile
Official Whitepaper from ESRI about Shapefiles: http://www.esri.com/library/whitepapers/pdfs/shapefile.pdf
The Main Reference for EPSG Codes: http://epsg-registry.org
PostGIS 2.0 Presentation (you can find details about new serialization on pages 5–13): http://s3.cleverelephant.ca/foss4gna2012-postgis2.pdf
PostGIS Users Wiki: http://trac.osgeo.org/postgis/wiki/UsersWikiMain
PostGIS Official Documentation: http://www.postgis.org/documentation
Stefano Iacovella is a longtime GIS developer and consultant. He strongly believes in open source and constantly tries to spread the word, not only in the GIS sector. When not playing with polygons and linestrings, he loves reading travel books, riding