A Beginner's Guide to Compiling Source Code
One of the first things a newcomer to Linux often does is search the Internet for interesting and useful programs to run, quickly discovering that many programs are available only in the form of a source-code tree (a form that can be intimidating if one isn't a programmer). For this reason, the new Linux user needs to become comfortable with the compilation process, as it truly is a necessary skill for anyone running Linux.
I'm not a programmer myself, but I do have a basic knowledge of how source code becomes an executable file. Possibly my non-programming status will enable me to bring up information which might seem “too obvious for words” to the experienced programmer. A good introduction to the subject is chapter six of Running Linux by Matt Welsh and Lar Kaufman (O'Reilly, 1995).
The GNU programming utilities, including the gcc compiler, the make program, the linker and a slew of related tools (many of which you don't necessarily need to know about) are an integral part of most Linux distributions. The Slackware distribution has a menu-driven installation during which you are given the option of having the GNU programming tools installed. If you elected not to install these packages, you will have to start up the pkgtool utility and have them copied to your hard disk.
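A quick way to see whether these tools actually made it onto your system is to query each one from the shell. This is a minimal sketch; exact package names and what `ld --version` prints vary by distribution:

```shell
# Quick check that the GNU development tools are on the system;
# each prints a version banner if its package was installed.
for tool in gcc make ld; do
    if command -v "$tool" >/dev/null; then
        echo "$tool: $("$tool" --version 2>/dev/null | head -n 1)"
    else
        echo "$tool not found -- install the development packages"
    fi
done
```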
There are other free compilers out there, but it is advisable to stick with the GNU tools, as they are well-maintained and of high quality. Most Linux software seems to be written with gcc in mind as well, and the less editing of the supplied Makefiles you have to do the better off you'll be.
Applications written in the popular Tcl/Tk programming languages don't generally use the GNU tools; if they do, the C-language components are subsidiary to the Tcl/Tk components. You need the Tcl and Tk libraries and executables installed on your system in order to install this type of application. These applications aren't compiled in the usual sense; installation consists of copying the Tcl and Tk files to directories specified in the makefile. These programs are completely dependent on access to an existing installed base of Tcl/Tk files, one of the most important of which is the Tk “wish” executable.
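Before installing such an application, it is worth verifying the Tcl/Tk base from the shell. A sketch; the version-reporting one-liner assumes tclsh will read a script from standard input, which a stock Tcl installation does:

```shell
# Check for an installed Tcl/Tk base before installing a Tcl/Tk
# application; "wish" is the Tk windowing shell mentioned above.
if command -v wish >/dev/null; then
    echo "wish found at $(command -v wish)"
else
    echo "wish not found -- install the Tk binaries or libraries first"
fi
# tclsh will report its version if asked (Tcl's 'info patchlevel'):
if command -v tclsh >/dev/null; then
    echo 'puts [info patchlevel]' | tclsh
fi
```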
As recently as a couple of months ago, it was difficult to maintain a current Tcl/Tk installation; development was rapid, binaries weren't always available and the packages could be difficult to compile successfully. Some of the newer applications required the beta libraries to function. The situation has stabilized recently with the release of the non-beta Tcl-7.5 and Tk-4.1 in both binary and source versions. For these programs most users are better off installing the binaries since, in my experience, they can be difficult to compile from source.
Note that even if you have a reasonably current Linux distribution, the Tcl/Tk versions included may very well be out of date. If you want to run the latest versions of such exemplary applications as TkDesk and TkMan it is well worthwhile to upgrade your Tcl/Tk files.
FTP sites can't really be called user-friendly or inviting to newcomers. The file names are often cryptic, and navigating through seemingly infinite levels of a directory tree can be frustrating, especially if you don't know where a file is located. These sites aren't designed for casual browsing, but the maintainers of the large archive sites (e.g., ftp://sunsite.unc.edu and its mirrors) provide index files, sometimes in HTML format, which list the available files with brief descriptions. Usually a file called NEW lists recent arrivals. The complete index files can be very large, but they are worth downloading so you can use a text editor with a good search facility to find keywords or file names that interest you.
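Once an index file is on your disk, grep works as well as any editor for keyword hunting. A sketch, using a made-up two-line index; real index files, and their names, vary from archive to archive:

```shell
# A couple of lines in the style of an archive index file
# (the entries here are illustrative, not real archive contents):
cat > INDEX <<'EOF'
ncftp-2.0.0.tar.gz     A comfortable full-screen FTP client
tkdesk-1.0.tar.gz      Tk-based desktop/file manager
EOF
# Case-insensitive keyword search, with line numbers for context:
grep -in 'ftp client' INDEX
# Several keywords at once:
grep -in -e 'desktop' -e 'editor' INDEX
```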
A file named filename.tar.gz is usually a source-code directory tree, tarred and gzipped. Binary distributions usually follow a naming pattern such as filename.BIN.tar.gz or filename.BIN-ELF.tar.gz.
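The unpacking steps for such an archive can be sketched as follows; here a throwaway archive named pkg.tar.gz stands in for a real download, so the commands have something to work on:

```shell
# The usual unpacking steps, demonstrated on a throwaway archive
# ("pkg" stands in for a real downloaded package name).
mkdir -p pkg && echo "demo" > pkg/README
tar czf pkg.tar.gz pkg           # how such archives are made
rm -rf pkg                       # pretend we only have the tarball
# Modern tar unpacks a .tar.gz in one step:
tar xzvf pkg.tar.gz
# Older tars lack the z flag; the traditional pipeline is:
# gzip -dc pkg.tar.gz | tar xvf -
ls pkg                           # look for README, INSTALL, Makefile
```

After unpacking, the README or INSTALL file in the new directory is normally the place to start.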
Usenet postings in the various Linux newsgroups often contain locations of various packages.
I recommend using NcFtp as an FTP client. This well-written interface to the command-line FTP program has many convenient features, such as remembering every site you visit in a bookmarks file, including the directory you were last in. This feature meshes well with NcFtp's “reget” function, which resumes interrupted file transfers at the point the connection was broken.
Another handy resource is a recent release of a CD-ROM containing a snapshot of one of the major Linux FTP archive sites. Several companies market these CD-ROMs and they are reasonably priced. Linux software changes so quickly that the files on even the most recent of these CD-ROMs will probably be back-level by a version or two, but if you have a sudden desire to compile Xemacs or the Andrew User Interface System, a CD-ROM will save you a lengthy download.