Beachhead - Beneath the Surface
I was walking along the beach with one of the Pollywogs when I saw a small tidal pool. I stopped to wade through it and look at some of the life under the rocks.
Most people never look under the rocks in a tidal pool or in a freshwater stream, but there is a lot of very interesting and necessary life to be found there: life forms that fill a very important role in the world. Most people see only the glossy surface of the ocean or the stream, simply because they never look any deeper.
The same is true with Linux. I have noticed that recently there has been a lot of work on graphical user interfaces, with translucent windows and different ways of displaying multiple desktops—all of this is good.
In my opinion, however, the real power of Linux comes from the command-line interface that resides below this glossy surface and allows people to write very powerful programs to manipulate huge amounts of data.
I do not expect that everyone will want to learn every type of command-line interface or small language, but if you do not learn at least one or two, you will never know how powerful your system can be.
Many years ago, the company where I was working needed to get a new piece of software out to its customers. However, the customers who were supposed to receive the software were represented by two different printouts from two different systems, and my company was planning on having a clerk evaluate the two reports to accomplish this task. Estimated time for the clerk to do this was nine months, which meant that the software would be almost a year old before the customers received it.
I asked if this process could somehow be automated, because the customers were waiting for the software. “No”, I was told, “it can't be done”, because the databases were incompatible and on different machines. There was no program that could reach across the systems to coordinate the data.
I had the managers put the printout into two files, and put both files on my (at that time) UNIX system. In less than a quarter of a day, using the stream editor sed(1), the pattern matching program grep(1) and the pattern matching, scanning and processing language awk(1), I was able not only to correlate the data but also to print out mailing labels for the shipping boxes along with an indication of the proper software to go in each one. The managers could not believe it.
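The reports from that story are long gone, but the technique can be sketched. The file names, layout and customer IDs below are all hypothetical; the point is that awk(1) can read one report into an associative array and then match the second report against it:

```shell
# Hypothetical stand-ins for the two printouts:
# report_a.txt holds "ID NAME", report_b.txt holds "ID PACKAGE".
cat > report_a.txt <<'EOF'
101 Acme
102 Bolt
EOF
cat > report_b.txt <<'EOF'
101 pkg-foo
103 pkg-bar
EOF
# While reading the first file (NR==FNR), store each name by ID;
# while reading the second, print only the IDs found in both.
awk 'NR==FNR { name[$1] = $2; next }
     $1 in name { print name[$1] ": " $2 }' report_a.txt report_b.txt
# Only customer 101 appears in both reports, so this prints "Acme: pkg-foo".
```

The real job also involved sed and grep for cleaning up the two formats, but the correlation itself is this one idiom.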
Some people think that it takes a lot of study in order to “know” command-line programming. However, if you approach the task systematically, you can learn it over time, taking advantage of each learning cycle.
The first thing you probably should do is get a book on Linux commands. Linux in a Nutshell: A Desktop Quick Reference by Figgins, Weber and Siever (O'Reilly) is a good start. Another good one is Linux Pocket Guide by Barrett, also from O'Reilly. Finally, Linux For Dummies Quick Reference by Hughes and Navratilova (Wiley) also is a good reference.
Read the book you choose, but do not obsess over memorizing the capabilities of each command. After you have read the book, think about some task you have to do repeatedly and what it would take to automate that task. You probably will find some Linux command-line programs that would help make things easier.
When you log in to your Linux system, execute a terminal emulator program, such as xterm or one of the others. Stay away from superuser (root) mode for the present, as you are trying to learn and sometimes things go astray.
Practice with some commands, such as grep, sed, ls, cd and others, simply by typing them into the command line and feeding them data according to what the command requires. Or, create a file of ASCII characters that you would like to use the commands to search, sort, filter or otherwise change.
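As a sketch of that kind of practice session (the file name and its contents here are just made-up examples), you might create a small ASCII file and run a few commands over it:

```shell
# Create a small practice file; any short ASCII text will do.
cat > practice.txt <<'EOF'
apple
banana
cherry
apricot
EOF
grep 'ap' practice.txt    # print the lines containing "ap": apple, apricot
sort practice.txt         # print the four lines in alphabetical order
wc -l practice.txt        # count the lines in the file: 4
```

Each command reads the same data, so you can compare what each one does to it.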
Then, start putting the commands together using the pipe symbol (|). Note that this is neither the lowercase letter l nor the uppercase letter I. It is typically found along with some of the other special characters on ASCII keyboards, usually above the Enter key.
For example, start by putting together the ls and grep commands:
ls | grep 'e'
This will show you every visible file in your directory with the letter e in its name.
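To see how a pipeline grows one stage at a time, here is a small self-contained sketch; the scratch directory and file names are invented for the demonstration:

```shell
dir=$(mktemp -d)         # a scratch directory so nothing else interferes
cd "$dir"
touch apple.txt berry.txt cocoa.txt
ls                       # lists all three names
ls | grep 'e'            # only the two names containing "e"
ls | grep 'e' | wc -l    # add one more stage to count them: prints 2
```

Each additional pipe stage takes the output of the previous one as its input, which is the whole idea.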
Another area of study should be the concept of regular expressions—ways of describing strings of data that typically are used for searching or matching with other strings of characters. The aforementioned books also cover issues of regular expression creation, which can be quite tricky, but also quite powerful.
Although different programs may use slightly different flavors of regular expressions, they tend to follow the same principles, and generally you can use the same special characters with each command.
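For instance, the caret (^), which anchors a match to the start of a line, behaves the same way in grep, sed and awk. A small sketch with made-up log data:

```shell
# Made-up data to match against.
cat > messages.txt <<'EOF'
error: disk full
warning: low memory
error at line 12
EOF
# The same anchored expression works in all three tools:
grep '^error' messages.txt       # print lines starting with "error"
sed -n '/^error/p' messages.txt  # the same result from the stream editor
awk '/^error/' messages.txt      # the same result from awk
```

All three commands print the same two lines, because the regular expression, not the tool, is doing the matching.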
I was working for Bell Laboratories in 1977, trying to be a system administrator for this interesting system called “UNIX”. For several months I had been frustrated by trying to learn this operating system that had seemingly millions of tiny little commands, multiple directories holding them and “cryptic” names for them. One night I was trying to modify a text file with the interactive text editor, ed(1), and I could see that it would take me hours to modify the file using ed, if not all night.
I remember suddenly thinking, “I do not know that there is a command in UNIX for doing this easily, but I am willing to bet there is one.” So, I started going through the manual looking only at the description of each command given in the “Name” line for the command. Fairly soon, I came across cut and its partner program paste, which allowed me to do exactly what I needed to do in two commands. From that time on, I followed the philosophy of first looking for the right command, and although that philosophy was sometimes wrong, more times than not, the philosophy was right, and a suitable command did exist.
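cut(1) and paste(1) are still on every Linux system, so the pair is easy to try; the file contents below are hypothetical, but the shape of the job is the same:

```shell
# A hypothetical colon-separated file, one record per line.
cat > people.txt <<'EOF'
alice:engineering
bob:sales
EOF
cut -d: -f1 people.txt > names.txt   # extract the first field: alice, bob
cut -d: -f2 people.txt > depts.txt   # extract the second field
paste -d' ' depts.txt names.txt      # glue the columns back, swapped
# Prints "engineering alice" and "sales bob".
```

cut slices columns out of each line; paste joins files back together line by line, which between them covers a surprising amount of report reshuffling.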
To start learning the command line with only on-line resources, make sure that you have loaded the on-line manual and info pages from your distribution. You can then type in man intro to read the introduction section of the man(1) command, then type man <command-name>—for example, man ls—to learn more about the ls(1) command. The (1) after the command name ls means that it is a user-level command, rather than a programming interface, system administrator command or other specialized function.
If you like a graphical, mouse-based reader, rather than a command-line reader, there is xman. Once you have invoked xman by typing xman, click Help in the little window and read the first section of the help page. You then can click manual page in the little control window, and when the text window pops up, select show both screens from the Options menu at the top. This lets you see both the index of all the manual commands in the top section and the actual manual page itself in the bottom section. Click on the program of interest in the top section, and its manual page will be formatted in the bottom section. An example of an interesting command is less(1).
I can't touch on all the issues and needs for learning the power of the command line in one column, but perhaps I've piqued your interest in discovering why many Linux users do not use a graphical windowing system at all, preferring only the command line, while others (myself included) make heavy use of both.
And, perhaps you will look beneath the surface to see the power of the underlying currents.
Jon “maddog” Hall is the Executive Director of Linux International (www.li.org), a nonprofit association of end users who wish to support and promote the Linux operating system. During his career in commercial computing, which started in 1969, Mr Hall has been a programmer, systems designer, systems administrator, product manager, technical marketing manager and educator. He has worked for such companies as Western Electric Corporation, Aetna Life and Casualty, Bell Laboratories, Digital Equipment Corporation, VA Linux Systems and SGI. He is now an independent consultant in Free and Open Source Software (FOSS) Business and Technical issues.