grep: Searching for Words
Within Linux (or any other UNIX), many people make use of filters, small programs (black boxes) that read input from standard input (stdin), do something with this input, and return the result to standard output (stdout).
Linux has many filters. Some examples are:
wc: print the number of bytes, words and lines in a file
tr: translate or delete characters
grep: print lines matching a pattern
sort: sort lines in a file
cut: cut selected fields from a file
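Since each filter reads from stdin and writes to stdout, you can chain them together with pipes. As a small sketch (notes.txt here is just a hypothetical text file), the following pipeline lowercases a file with tr, keeps only the lines mentioning “linux” with grep and counts those lines with wc:
tr 'A-Z' 'a-z' < notes.txt | grep linux | wc -l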
The easiest way to learn these filters is to use them. This may seem daunting at first, since you may not know all the capabilities of these filters. I will describe the functions of grep so that you can benefit from its power.
I will be using this article (article.txt) as the input file for all the examples.
The syntax of the grep command is as follows:
grep [ -[[AB] ]num ] [ -[CEFGVBchilnsvwx] ] [ -e pattern | -f file ] [ files... ]
I use GNU grep Version 2; if you're using another version, you may have slightly different options. I will touch on only those options I use most. To learn more about the grep command, see the man page. Variants of the grep command are egrep and fgrep. grep includes flags to simulate these commands: -E for egrep and -F for fgrep.
The simplest form of the command is:
grep flip article.txt
This will search for the word “flip” in the file article.txt and will display all lines containing the word “flip”.
grep also accepts regular expressions as patterns, but the * in the next example is not one of them: it is a shell wildcard, which the shell expands to all file names in the current directory before grep ever runs. So, to search for “flip” in all files in the directory, the following command can be given:
grep flip *
All lines in all files which contain the word “flip” will be displayed, preceded by the file name. Thus, the first line of the output will look like this:
article.txt:grep flip article.txt
The line begins with the name of the file containing the word “flip”, followed by a colon, then the matching line itself.
Sometimes you may want to search for a pattern containing spaces or characters that are special to the shell. To do this, put the expression between quotes so that the whole pattern reaches grep as a single argument. The command would then look like this:
grep -e "is the" article.txt
I put the -e option (take the next argument as the search pattern) in this example just for demonstration purposes. It is not strictly necessary, since grep treats its first non-option argument as the pattern anyway; -e really earns its keep when the pattern itself begins with a dash, which grep would otherwise mistake for an option.
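A hypothetical example: to search for lines containing the literal string “-v”, you need -e, because a bare -v would be parsed as grep's own -v option.
grep -e -v article.txt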
To see the line numbers in which the pattern is found, use the -n option. The output will look like that shown above, with the file name replaced by the line number before the colon.
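As a sketch of the output (the line number here is illustrative), searching for “flip” with -n might print:
grep -n flip article.txt
15:grep flip article.txt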
Another option which provides us with a number is the -c option. Strictly speaking, -c counts matching lines, not individual occurrences of the word: a line containing “flip” twice still counts only once. The pattern “flip” appears on 10 lines of this article:
> grep -c flip article.txt
10
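If you do want the total number of occurrences rather than the number of matching lines, newer versions of GNU grep (2.5 and later, so not the version 2 I use here) offer the -o option, which prints each match on its own line; piping that through wc -l counts the matches:
grep -o flip article.txt | wc -l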
You may now be able to think of many ways in which you might use grep. For any command you use often, speed is important. Normally, grep can do its job quickly. However, if the search is being done over many large files, the results will be slower to return. In this case, you can speed up the process by using either fgrep or egrep. fgrep is used only for finding strings, and egrep is used for complicated regular expressions.
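A quick sketch of the difference (both patterns are hypothetical): fgrep treats its pattern as a fixed string, so the dots below match only literal dots, while egrep understands extended regular expressions such as alternation:
fgrep 'e.g.' article.txt
egrep 'fli(p|ng)' article.txt
The first command finds the literal string “e.g.”; the second finds both “flip” and “fling”.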
File names, words, sentences and numbers can all be found quickly using grep. In addition, using the grep command together with other filters can be very powerful and prove to be of great value. For example, you could search a statistics file and sort the output by piping it through the sort and cut commands (see man pages):
grep ... | sort ... | grep ... | cut ... > result
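To make this concrete, here is a hypothetical version of such a pipeline, assuming a space-separated statistics file stats.txt whose second field is a count: select the lines mentioning “error”, keep only the count field and sort the counts numerically into a result file.
grep error stats.txt | cut -d' ' -f2 | sort -n > result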
This has been a quick introduction to get you started and rouse your curiosity to learn more about grep and other filters.