Beachhead - Beneath the Surface
I was walking along the beach with one of the Pollywogs when I saw a small tidal pool. I stopped to wade through it and look at some of the life under the rocks.
Most people never look under the rocks in a tidal pool or in a freshwater stream, but there is a lot of interesting and necessary life to be found there—life forms that fill an important niche in the world. Most people see only the glossy surface of the ocean or the stream, simply because they never look any deeper.
The same is true with Linux. I have noticed that recently there has been a lot of work on graphical user interfaces, with translucent windows and different ways of displaying multiple desktops—all of this is good.
In my opinion, however, the real power of Linux comes from the command-line interface that resides below this glossy surface and allows people to write very powerful programs to manipulate huge amounts of data.
I do not expect that everyone will want to learn every type of command-line interface or small language, but if you do not learn at least one or two, you will never know how powerful your system can be.
Many years ago, the company where I was working needed to get a new piece of software out to its customers. However, the list of customers who were supposed to receive the software existed only as two different printouts from two different systems, and my company was planning on having a clerk reconcile the two reports by hand. The estimated time for the clerk to do this was nine months, which meant that the software would be almost a year old before the customers received it.
I asked if this process could somehow be automated, because the customers were waiting for the software. “No”, I was told, “it can't be done”, because the databases were incompatible and on different machines. There was no program that could reach across the systems to coordinate the data.
I had the managers put the two printouts into files and load both onto my (at that time) UNIX system. In less than a quarter of a day, using the stream editor sed(1), the pattern-matching program grep(1) and the pattern-matching, scanning and processing language awk(1), I was able not only to correlate the data but also to print out mailing labels for the shipping boxes, along with an indication of the proper software to go in each one. The managers could not believe it.
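The exact commands are lost to history, but the core of such a job is small. Here is a minimal sketch of the idea, with invented file names and field layouts: two reports keyed on a customer ID in the first field, merged with join(1) and formatted with awk(1):

```shell
# Hypothetical sample data standing in for the two printouts:
# customer ID and name from one system, customer ID and product
# from the other (field layouts are invented for illustration).
printf 'C001 Acme\nC002 Zenith\n'       > report_a.txt
printf 'C001 payroll\nC002 inventory\n' > report_b.txt

# join(1) requires its inputs to be sorted on the join field.
sort -k1,1 report_a.txt > a.sorted
sort -k1,1 report_b.txt > b.sorted

# Merge the records on customer ID, then format a shipping label
# with awk: the customer name, the package to ship, a separator.
join a.sorted b.sorted |
awk '{ printf "SHIP TO: %s\nPACKAGE: %s\n---\n", $2, $3 }'
```

The same pattern—sort, join, reformat—scales from two toy files to two multi-thousand-line printouts without any change to the pipeline.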
Some people think that it takes a lot of study in order to “know” command-line programming. However, if you approach the task systematically, you can learn it over time, taking advantage of each learning cycle.
The first thing you probably should do is get a book on Linux commands. Linux In A Nutshell: A Desktop Quick Reference by Figgins, Weber and Siever (O'Reilly) is a good start. Another good one is Linux Pocket Guide by Barrett, also from O'Reilly. Finally, Linux For Dummies Quick Reference by Hughes and Navratilova (Wiley) also is a good reference.
Read the book you choose, but do not obsess with memorizing the capabilities of each command. After you have read the book, think about some task you have to do repeatedly and what it would take to automate that task. You probably will find some Linux command-line programs that would help make things easier.
When you log in to your Linux system, execute a terminal emulator program, such as xterm or one of the others. Stay away from superuser (root) mode for the present, as you are trying to learn and sometimes things go astray.
Practice with some commands, such as grep, sed, ls, cd and others, simply by typing them into the command line and feeding them data according to what each command requires. Or, create a file of ASCII text that you can then search, sort, filter or otherwise change with those commands.
Then, start putting the commands together using the pipe symbol (|). Note that this is neither the lowercase letter l nor the uppercase letter I. It is typically found along with some of the other special characters on ASCII keyboards, usually above the Enter key.
For example, start by putting together the ls and grep commands:
ls | grep 'e'
This will show you every visible file in your directory with the letter e in its name.
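Once that works, longer pipelines follow the same pattern, adding one command at a time. A small sketch, using a practice directory with invented file names so the results are predictable:

```shell
# Create a small practice directory so the results are predictable.
mkdir -p pipe-demo && cd pipe-demo
touch apple berry fig grape

# Count how many visible file names contain the letter "e".
ls | grep 'e' | wc -l

# Show those same names in reverse alphabetical order.
ls | grep 'e' | sort -r
```

Each stage reads the previous stage's output, so a pipeline can be built and tested incrementally: get `ls | grep 'e'` right first, then bolt on the next command.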
Another area of study should be the concept of regular expressions—ways of describing strings of data that typically are used for searching or matching with other strings of characters. The aforementioned books also cover issues of regular expression creation, which can be quite tricky, but also quite powerful.
Although different programs may use different methods of regular expressions, they tend to follow the same principles, and generally you can use the same type of special characters with each command.
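A few of those shared special characters can be tried directly with grep against a small sample file (the file and its contents are invented for illustration):

```shell
# Sample data to match against.
printf 'cat\ncot\ncart\ndog\n' > words.txt

# "." matches any single character: matches cat and cot,
# but not cart (no single character sits between c and t).
grep 'c.t' words.txt

# "^" anchors the match to the start of a line: matches dog.
grep '^d' words.txt

# "$" anchors the match to the end of a line: matches cat, cot and cart.
grep 't$' words.txt
```

The same `.`, `^` and `$` work in sed, awk and most other tools that accept regular expressions, which is why learning the notation once pays off across many commands.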
I was working for Bell Laboratories in 1977, trying to be a system administrator for this interesting system called “UNIX”. For several months I had been frustrated by trying to learn this operating system that had seemingly millions of tiny little commands, multiple directories holding them and “cryptic” names for them. One night I was trying to modify a text file with the interactive text editor, ed(1), and I could see that it would take me hours to modify the file using ed, if not all night.
I remember suddenly thinking, “I do not know that there is a command in UNIX for doing this easily, but I am willing to bet there is one.” So, I started going through the manual looking only at the description of each command given in the “Name” line for the command. Fairly soon, I came across cut and its partner program paste, which allowed me to do exactly what I needed to do in two commands. From that time on, I followed the philosophy of first looking for the right command, and although that philosophy was sometimes wrong, more times than not, the philosophy was right, and a suitable command did exist.
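What cut(1) and paste(1) do is easy to demonstrate on a small invented file: cut extracts columns, and paste glues columns back together side by side:

```shell
# Hypothetical colon-delimited file: login name, placeholder, numeric ID.
printf 'alice:x:1001\nbob:x:1002\n' > users.txt

# cut pulls out field 1 (the login name), using ":" as the delimiter.
cut -d: -f1 users.txt > names.txt

# cut pulls out field 3 (the numeric ID).
cut -d: -f3 users.txt > ids.txt

# paste joins the two files line by line, tab-separated by default.
paste names.txt ids.txt
```

Rearranging or dropping columns of a text file this way takes seconds, where editing the same file line by line in an interactive editor could indeed take hours.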
To start learning the command line with only on-line resources, make sure that you have loaded the on-line manual and info pages from your distribution. You can then type man intro to read the introduction to the user-command section of the manual, then type man <command-name>—for example, man ls—to learn more about the ls(1) command. The (1) after the command name means that ls is a user-level command, rather than a programming interface, system-administration command or other specialized function.
If you prefer a graphical, mouse-based reader to a command-line one, there is xman. Once you have invoked xman by typing xman, click Help in the little window and read the first section of the help page. You then can click manual page in the little control window, and when the text window pops up, select show both screens from the Options menu at the top. This lets you see both the index of all the manual pages in the top section and the actual manual page itself in the bottom section. Click on a program of interest in the top section, and its manual page will be formatted in the bottom section. An example of an interesting command is less(1).
I can't touch on all the issues and needs for learning the power of the command line in one column, but perhaps I've piqued your interest in discovering why a lot of Linux users do not use a graphical windowing system at all, preferring to use only the command line, while others (myself included) heavily use both the windowing system and the command line.
And, perhaps you will look beneath the surface to see the power of the underlying currents.
Jon “maddog” Hall is the Executive Director of Linux International (www.li.org), a nonprofit association of end users who wish to support and promote the Linux operating system. During his career in commercial computing, which started in 1969, Mr Hall has been a programmer, systems designer, systems administrator, product manager, technical marketing manager and educator. He has worked for such companies as Western Electric Corporation, Aetna Life and Casualty, Bell Laboratories, Digital Equipment Corporation, VA Linux Systems and SGI. He is now an independent consultant in Free and Open Source Software (FOSS) Business and Technical issues.