IBM's Universal Database
The information I provide in this section is based on a TurboLinux version 3.6 Base Workstation installation. If you installed a different installation type on your Linux workstation, you may have to add some of the required packages.
There are some known problems getting DB2 to run on a workstation running TurboLinux. Download a fix from the Web at ftp://ftp.software.ibm.com/ps/products/db2/tools/. The fix is called tl36_instfix.tar.Z; note that the l is the letter “l”, not the number “1”. All the information you require to implement this fix is in the README file called tl36_instfix.readme.txt.
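As a rough sketch, fetching and unpacking the fix from a shell prompt might look like this (the use of wget and the working directory are assumptions; any FTP client will do):

```shell
# Download the TurboLinux install fix from the IBM FTP site listed above.
wget ftp://ftp.software.ibm.com/ps/products/db2/tools/tl36_instfix.tar.Z

# The .tar.Z file is in compress(1) format; GNU tar unpacks it with -Z.
tar -xZvf tl36_instfix.tar.Z

# Read the README before applying the fix.
more tl36_instfix.readme.txt
```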
After you have downloaded the fix, you need to add the pdksh package, which is not part of the Base Workstation installation. This package is available on the TurboLinux CD-ROM, in the /TurboLinux/RPMS directory.
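Installing the package from the CD-ROM might look like the following (the /mnt/cdrom mount point and the wildcard in the package filename are assumptions; run these commands as root):

```shell
# Mount the TurboLinux CD-ROM.
mount /dev/cdrom /mnt/cdrom

# Install pdksh; the version number in the filename varies by release.
rpm -ivh /mnt/cdrom/TurboLinux/RPMS/pdksh-*.rpm
```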
Once you have completed these tasks, your TurboLinux version 3.6 workstation is ready for a DB2 installation.
The information I provide in this section is based on a SuSE version 6.3 Network Oriented System installation. These instructions also apply to a workstation running SuSE version 6.2. If you installed a different installation type on your Linux workstation, you may have to add some of the required packages.
The biggest problem with installing DB2 on a workstation running SuSE Linux is the naming convention that SuSE uses for its packages. For example, SuSE calls the required glibc package shlibs. This causes problems when you try to install DB2 because the DB2 installation utility fails to recognize the existence of the required glibc package. To get around this problem, you have to install a dummy package called glibc-2.0.7-0.i386.rpm. This package is located in the /db2/install/dummyrpm directory on your DB2 product CD-ROM.
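Installing the dummy package might look like this (the /mnt/cdrom mount point is an assumption; run as root):

```shell
# Mount the DB2 product CD-ROM.
mount /dev/cdrom /mnt/cdrom

# Install the dummy glibc package so the DB2 installer finds a glibc
# entry in the RPM database (SuSE ships the real C library as shlibs).
rpm -ivh /mnt/cdrom/db2/install/dummyrpm/glibc-2.0.7-0.i386.rpm
```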
SuSE Linux version 6.1 ships with a beta copy of the DB2 for Linux version 5.2 code. Consequently, this beta code causes problems with the default users when you install DB2. To make things even stranger, I noticed that when I installed the Network Oriented System installation, which was not supposed to include DB2, the default DB2 users were created anyway. To make matters worse, I could not find any information about the passwords for the DB2 users that SuSE creates (they are not the default DB2 passwords), and some of the settings that SuSE implements do not work for DB2. In the end, the simplest solution is to remove the users (db2inst1, db2as, and db2fenc1) that the SuSE installation creates. For more information on SuSE user management, refer to your product's documentation.
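Removing the leftover users can be done with userdel as root; a minimal sketch (the -r flag, which also removes each user's home directory, is an assumption about how clean a removal you want):

```shell
# Remove the DB2 users created by the SuSE installation (run as root).
# The -r flag removes each user's home directory as well.
userdel -r db2inst1
userdel -r db2as
userdel -r db2fenc1
```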
Once you have completed these tasks, your SuSE workstation is ready for a DB2 installation.
The information I provide in this section is based on a Red Hat version 6.0 Server installation. These instructions also apply to a workstation that is running Red Hat version 5.2, though the packages may be at a different version level. If you installed a different installation type on your Linux workstation, you may have to add some of the required packages.
Both the Red Hat version 5.2 and version 6.0 installations are easy to prepare for a DB2 installation. Both are missing the pdksh package, which is required to run the DB2 Installer. This package is located in the /RedHat/RPMS directory on the Red Hat CD-ROM.
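Installing the missing package from the Red Hat CD-ROM might look like this (the /mnt/cdrom mount point and the wildcard in the filename are assumptions; run as root):

```shell
# Mount the Red Hat CD-ROM.
mount /dev/cdrom /mnt/cdrom

# Install pdksh; the version in the filename depends on your release.
rpm -ivh /mnt/cdrom/RedHat/RPMS/pdksh-*.rpm

# Confirm the package is installed.
rpm -q pdksh
```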
If you are trying to install DB2 on a workstation that is running Red Hat version 6.1, you aren't going to get very far, due to a problem between this version of Red Hat and DB2. You can download the fix from ftp://ftp.software.ibm.com/ps/products/db2/tools/. The fix you need depends on where you got your DB2 code. If you are installing the copy of DB2 bundled with Red Hat 6.1, download the file db2rh61fix.tgz. If you are installing any other DB2 code, download the db2rh61gafix.tgz file.
After you download the appropriate fix, unpack it by entering the tar xvzf filename command, where filename is the name of the downloaded fix file. After unpacking this file, you will see three files in the directory. One of them is a README file called readme.txt, which gives complete and detailed instructions on how to implement the fix.
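For example, unpacking the bundled-copy fix might look like the following (substitute db2rh61gafix.tgz if that is the file you downloaded):

```shell
# Unpack the fix into the current directory.
tar xvzf db2rh61fix.tgz

# List the three extracted files, then read the instructions.
ls -l
more readme.txt
```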
Once you have completed these tasks, your Red Hat version 6.1 workstation is ready for a DB2 installation.