Open Database Connectivity
Open Database Connectivity (ODBC) is an open specification for providing application developers with a predictable application programming interface (API) with which to access data sources. Data sources can be just about anything, provided someone has created an ODBC driver for it. The most common data source is an SQL server.
The two major advantages of coding an application with the ODBC API are portable data access code and dynamic data binding.
The ODBC API (also known as the CLI, or call-level interface), as outlined by X/Open and ISO, is available on all major platforms. Microsoft platforms include many enhancements to this specification; the current version from Microsoft is 3.51. The idea is that a programmer using the ODBC API is likely to produce data access code that is portable to other platforms. The same code will also be portable across different data sources. For example, the data for an accounting application can reside on a lightweight SQL server during development and then be moved to a heavier-duty SQL server simply by linking to a different ODBC driver. ODBC delivers platform and data source portability.
Dynamic binding allows the user or the system administrator to easily configure an application to use any ODBC-compliant data source. This is the single biggest advantage both of coding an application to the ODBC API and of purchasing such an application. Dynamic binding lets the end user pick a data source, e.g., an SQL server, and use it for all data applications. Applications do not have to be recompiled or recoded for a new target data source. This is achieved by the ODBC Driver Manager, which passes the ODBC calls through to the user's ODBC driver without the code having to be relinked. ODBC enables the user to choose where the data will be stored.
The unixODBC Project's goal is to develop and promote unixODBC as the definitive standard for ODBC on the Linux platform. This includes Microsoft extensions, where they make sense, and GUI clients. The unixODBC team is pursuing this objective by providing the best technical solution to ODBC demands on the Linux platform. All unixODBC development is released under the GPL or LGPL.
The components of this project are the Driver Manager, DataManager, ODBCConfig, Odbcinst, drivers and other utilities.
Driver Manager

This shared library is the hub of most ODBC activity, but its function is simple. Ninety percent of the Driver Manager's job is to validate arguments, load and unload drivers, and pass calls to the driver in a manner consistent with the ODBC specification. Normally, an application links only to this shared library to get all the ODBC support it requires (see Figure 1). The Driver Manager loads/unloads the appropriate driver and passes calls to it.
DataManager

This is a GUI-client utility. The current version is based upon Troll Tech's Qt class library (http://www.troll.no/). The DataManager allows the user to browse and manage data sources (see Figure 2). The right side of the TreeView contains a sizeable canvas which can be extended to include properties for any TreeView selection. An example of this has been implemented for the data source TreeViewItem. When a data source is selected, the canvas becomes a handy editor which can be used to submit SQL, review results and save/load either the SQL or the results. Table designers and data editors could easily be added to the DataManager using the same techniques. The DataManager is an easy way to manage ODBC data-source resources.
ODBCConfig

This is another GUI-client utility. It has been created to be user compatible with the Microsoft ODBC administration utility (see Figure 3). ODBCConfig makes it easy, even for non-technical users, to configure their data sources. ODBCConfig uses the Odbcinst library to read/write ODBC system information, and it will make use of any installed driver configuration libraries to present a list of driver-specific options to edit. ODBCConfig's functionality is an excellent candidate for the KDE (http://www.kde.org/) Control Center. ODBCConfig makes it easy to configure ODBC data sources.
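The ODBC system information that ODBCConfig reads and writes through the Odbcinst library is ordinary text: odbcinst.ini lists installed drivers, and odbc.ini lists data sources. A hand-edited equivalent might look like the following, where the driver paths and the "accounts" DSN are examples only:

```ini
; odbcinst.ini -- one section per installed driver
[PostgreSQL]
Description = PostgreSQL ODBC driver
Driver      = /usr/lib/libodbcpsql.so
Setup       = /usr/lib/libodbcpsqlS.so

; odbc.ini -- one section per data source (DSN)
[accounts]
Description = Accounting data
Driver      = PostgreSQL
Database    = accounts
Servername  = localhost
```

Because this is where the DSN-to-driver binding lives, retargeting an application at a different database is a matter of editing these files (or clicking through ODBCConfig), with no change to the application itself.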