Open Database Connectivity
Odbcinst is a shared library that implements many useful Microsoft extensions and a number of unixODBC extensions. It provides an API for reading and writing ODBC system information (see Figure 4), which essentially means reading and writing the various INI files and environment variables. unixODBC also provides a command-line interface to this library; this command-line tool (of the same name) can be used by driver programmers when creating their install script/RPM. The Odbcinst library is used heavily by ODBCConfig, the Driver Manager and drivers, but is seldom used by other applications.
The drivers, and their data source-specific client libraries, typically implement the vast majority of the ODBC functionality. As important as the Driver Manager is, it is the driver that carries out most of the work. The unixODBC Driver Manager has been designed to be binary compatible with any compliant drivers compiled on Linux. unixODBC includes several drivers (and their sources) as examples. unixODBC also includes a driver template for those wishing to write a new driver. MiniSQL, MySQL and PostgreSQL are examples of drivers included in unixODBC. unixODBC maintains a driver-certification process which can be found at its web site (see Resources).
unixODBC also includes a library for working with INI files, a LOG library and a command-line tool for executing SQL commands. The INI library is used by Odbcinst for reading/writing the ODBC system information. The LOG library is used by the Driver Manager and Odbcinst to log ODBC activity, such as errors, and implement ODBC tracing. The isql command-line tool allows the user to submit SQL commands in a terminal window. isql can also be used in batch mode, which can be quite useful because it can return results wrapped in an HTML table or delimited with a chosen character. It supports piping and redirection.
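Batch mode can be sketched like this, using isql's -b (batch), -w (wrap in an HTML table) and -dx (delimit with character x) options; the data source name, user ID, password and table below are placeholders you would replace with your own:

```shell
# Batch mode: read SQL from stdin with no interactive prompt (-b),
# wrapping the result set in an HTML table (-w).
echo "SELECT id, name FROM mytable" | isql -b -w DataSourceName MyID MyPWD > results.html

# The same query, with columns delimited by a chosen character (here, a comma).
echo "SELECT id, name FROM mytable" | isql -b -d, DataSourceName MyID MyPWD > results.csv
```

Because isql reads from standard input and writes to standard output in this mode, it drops naturally into shell pipelines and cron jobs.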
You can download the source distribution from the unixODBC website. Unpack the file using tar, then follow the instructions in the README file. You will want the Qt development libraries to build the GUI components, and you will need the database-specific client libraries for any drivers you decide to build; the README covers this in more detail. You must have at least one driver section in /etc/odbcinst.ini. You can add this section using the Odbcinst command-line tool or your favourite editor. Normally, odbcinst.ini is updated by driver install scripts using the Odbcinst command-line tool, but at this time, you may have to edit the file directly. A driver section in odbcinst.ini should look something like this:
[MiniSQL 2.x]
Description = MiniSQL ODBC Driver
Driver      = /usr/lib/libodbcmini.so.1.0.0
Setup       = /usr/lib/libodbcminiS.so.1.0.0
FileUsage   = 2
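A driver section like the one above can also be registered with the odbcinst command-line tool rather than by hand. The sketch below assumes the section has been saved to a template file (the filename is a placeholder) and uses odbcinst's -i (install), -d (driver) and -f (template file) options:

```shell
# Save the driver section to a template file, then register it
# system-wide (run as root so /etc/odbcinst.ini can be written):
odbcinst -i -d -f minisql_template.ini

# Query the installed drivers to confirm the section was added:
odbcinst -q -d
```

This is exactly what a driver's install script or RPM would typically do, which is why hand-editing odbcinst.ini is usually only needed at this early stage.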
Now run ODBCConfig to add, edit or remove data sources. You will need to be root to work with system data sources, but any user can add, edit and remove user data sources. Now run the DataManager, and you should see all ODBC data sources. When you try to expand the TreeView below a data source, you will be prompted for login information to connect to the data source. You can also try to connect to your data source using the isql command-line tool. Simply execute the command:
$ isql DataSourceName MyID MyPWD
Most database access can be accomplished using a simple handful of ODBC function calls. In fact, it is good practice to keep it simple, because each driver implements its own level of compliance and completeness. An application can expect to be used with very modest drivers from time to time. The isql command-line tool is an example of an application that uses only a small, simple set of ODBC functions. I will not get too deep into ODBC APIs, since you can pick up excellent reference material elsewhere. The typical sequence of events goes something like this:
Connect to a data source.
Create and execute an SQL statement.
Fetch any results.
Disconnect from the data source.