My Favorite bash Tips and Tricks

Save a lot of typing with these handy bash features you won't find in an old-fashioned UNIX shell.

The inner command, rpm -qa | grep httpd, lists all the packages that have httpd in the name. The outer command, rpm -ql, lists all the files in each package.
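
Combined into a single command with bash-style command substitution, that looks like this:

$ rpm -ql $(rpm -qa | grep httpd)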

Now, those of you who have experience with the Bourne shell might point out that you could perform command substitution by surrounding a command with back quotes, also called back-ticks. Using Bourne-style command substitution, the date assignment from above becomes:

$ today2=`date +%d-%b-%Y`

$ echo $today2
12-Mar-2004

There are two important advantages to using the newer bash-style syntax for command substitution. First, it can be nested more easily. Because the opening and closing symbols are different, the inner symbols don't need to be escaped with back slashes. Second, it is easier to read, especially when nested.
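
To see the difference, here is the same nested substitution written both ways; each prints the name of the current directory (a small illustrative example):

$ echo $(basename $(pwd))
$ echo `basename \`pwd\``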

Even on Linux, where bash is standard, you still encounter shell scripts that use the older, Bourne-style syntax. This is done to provide portability to various flavors of UNIX that do not always have bash available but do have the Bourne shell. bash is backward-compatible with the Bourne shell, so it can understand the older syntax.

Redirecting Standard Error

Have you ever looked for a file using the find command, only to learn the file you were looking for is lost in a sea of permission denied error messages that quickly fill your terminal window?

If you are the administrator of the system, you can become root and execute find again as root. Because root can read any file, you don't get that error anymore. Unfortunately, not everyone has root access on the system being used. Besides, it's bad practice to be root unless it's absolutely necessary. So what can you do?

One thing you can do is redirect your output to a file. Basic output redirection should be nothing new to anyone who has spent a reasonable amount of time using any UNIX or Linux shell, so I won't go into detail regarding the basics of output redirection. To save the useful output from the find command, you can redirect the output to a file:

$ find / -name foo > output.txt

You still see the error messages on the screen but not the path of the file you're looking for. Instead, that is placed in the file output.txt. When the find command completes, you can cat the file output.txt to get the location(s) of the file(s) you want.

That's an acceptable solution, but there's a better way. Instead of redirecting the standard output to a file, you can redirect the error messages to a file. This can be done by placing a 2 directly in front of the redirection angle bracket. If you are not interested in the error messages, you simply can send them to /dev/null:

$ find / -name foo 2> /dev/null

This shows you the location of file foo, if it exists, without those pesky permission denied error messages. I almost always invoke the find command in this way.

The number 2 represents the standard error output stream. Standard error is where most commands send their error messages. Normal (non-error) output is sent to standard output, which can be represented by the number 1. Because most redirected output is the standard output, output redirection works only on the standard output stream by default. This makes the following two commands equivalent:

$ find / -name foo > output.txt
$ find / -name foo 1> output.txt
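
Because the two streams are independent, you also can capture them in separate files, for example (the filenames here are arbitrary):

$ find / -name foo 1> found.txt 2> errors.txt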

Sometimes you might want to save both the error messages and the standard output to file. This often is done with cron jobs, when you want to save all the output to a log file. This also can be done by directing both output streams to the same file:

$ find / -name foo > output.txt 2> output.txt

This mostly works, but because the file is opened twice with two independent file offsets, the streams can overwrite each other's output. There's a better way to do it. You can tie the standard error stream to the standard output stream using an ampersand. Once you do this, the error messages go to wherever you redirect the standard output:

$ find / -name foo > output.txt 2>&1

One caveat about doing this is that the tying operation must go at the end of the command generating the output. This matters when piping the output to another command. This line works as expected:

find -name test.sh 2>&1 | tee /tmp/output2.txt

but this line doesn't, because there the 2>&1 applies to tee rather than to find, so find's error messages still go to the terminal:

find -name test.sh | tee /tmp/output2.txt 2>&1

______________________

Comments

oldie but a goodie

Gavin:

Hey Prentice! Great article. Still doing any writing?

prenticeb:

Hey Gavin, thanks! No, I haven't done any more writing in a while, but I should. I've been using a couple other bash "tricks" lately. Might have enough for a sequel.

eh?

Mr UNIX:

instead of

  for file in *; do echo $file; done

to show files in a directory without invoking
any subprocesses, you could have just done

  echo .* *

in fact, yours doesn't work properly if filenames
contain characters such as asterisks or question marks
(among others)

try it

  mkdir test
  cd test
  touch \*
  touch a
  for file in *; do echo $file; done

prints items twice; you need to surround your
echo argument with quotes

Anonymous:

Reading the contents of a file FILE using bash builtins:

while read line; do echo "$line"; done < FILE

Another example of brace expansion

Anonymous:

I needed to create directories to hold some batch files on Windows.
Right clicking and creating "batch1", "batch2" etc got boring once I got to "batch3".
I simply opened up my cygwin shell and did:
mkdir batch{1,2,3,4,5,6,7,8}

:)

brace expansions iterate too

Mathx:

instead of batch{1,2,3,4} etc., try batch{1..100}

win.

Inquiry on sorting a field in a file

Lee:

Hi,

Thanks for the article. However, I am trying to solve the following problem:

Given the file content:

Count Line
0 1
1 1
2 10
3 1
2 3

If I would like to map each line with the count and also add up the count for each line, what command should I use? I tried a for loop with cut for the field in the file, but I do not know how to extract these arrays according to line number.

Please advise. Thanks

-Lee
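
(One possible reading of Lee's question is that the Count column should be totaled for each distinct Line value. Assuming whitespace-separated columns with a header row, and with file.txt standing in for the actual file, an awk sketch might look like this:

awk 'NR > 1 { sum[$2] += $1 } END { for (l in sum) print l, sum[l] }' file.txt | sort -n

For the sample data above, this prints 1 4, 3 2 and 10 2, one pair per line.)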

Anonymous:

This is helpful for me, thanks. Some other bash commands can be found here: http://24hnews.net/General/25-Linux-UNIX-commands-that-you-must-know.html

Anonymous:

> Now, can anyone suggest a way to display the contents of a file using
> only bash built-ins?

# source /path/to/filename

Chris:

Quote:
Can anyone tell me how to list the contents of a file using only the bash built-ins?

My answer:
Take advantage of the catalog command and pipe it into awk that parses for (in this example) double quotes and returns the zeroth result, hence printing the entire file.

cat /filename | awk -F "\"" '{ print $0 }'

VOILÀ, file contents printed.

I use a variant of this command on my webserver to parse logs for the names of webbots to keep the d@$n things from eating all my bandwidth.
HellMINTH aka christopmj

Anonymous:

Oh, it is tempting to leave the solution using only builtins off the replies, as it got eaten by the server and its hunger for <

But here it is:

exec 5<file

while read -u5 k; do echo $k; done

mathx:

nice trick but why not while read a; do echo "$a"; done < file ?

remember to put "" around your var, or bash will word-split the contents of $k and destroy the spacing

Anonymous:

cat is not actually the catalog/catalogue command, but in fact the concatenate command; man cat will show you this.

So, really cat's main purpose is to join two or more files together.

To output a file (list the contents of a file) using bash built-ins:

Bash out-of-memory hacks

allenp:

The for loop technique for listing a directory is nice, but this is
fewer keystrokes and shows more files if you can't pipe to more:

echo *

To display the contents of a file, the simplest command I see is

echo $(<file)

According to strace(1), the shell clones itself, opens and reads
the file in the subprocess, and then the main process writes the
file's contents to stdout. This could fail for lack of a process
table slot or if it needed more memory and brk(2) failed, but it
doesn't exec anything else.

Paul Allen

another 'classic'

gongoputch:

One of my personal favorites: turn any command into a (sort of) full
screen monitor, e.g. :

while [ 1 ]; do clear; w; sleep 3; done

Very simplistic (doesn't check for output > 25/24, etc) but good
enough for common purposes.

I also like subshells a lot :
(cd /somewhere && do something) | \
(cd /somewhere/else && read from something)

Maybe a 'My Favorite bash Tips and Tricks - Part II' is in order?

instead of "while [ 1 ]; do

Anonymous's picture

instead of "while [ 1 ]; do clear; w; sleep 3; done"
"watch -n 3 w" is preferable if you want to watch the output of a command over time

GarthWick:

Yep, but watch won't work for more complex commands. For the example that was written, monitoring w, it is okay, but for something with pipes or redirections in it, it will not work!

Patagonicus:

It works just fine; all you have to do is escape the special characters. For example:
watch df -h \; echo \; dmesg \| tail
This prints the output of df, a blank line and the last lines of dmesg. You can also use && and || (which have to be written as \&\& or '&&').

Bernisys:

or put the whole thing into double quotes, like:

watch "ls -x ; df ."

but one disadvantage of "watch" is that it strips the coloring codes etc. from the output,
so if you want to observe output that contains any highlighting, you will end up with plain uniform text, sadly. And as far as I know, the while ... sleep construct is the only usable workaround here. Sometimes I also add "date" as the first command, just to be sure when the snapshot was taken :)

Give me a hint if you find any better approach :)

But there are also some major advantages to using watch:
the -d parameter automatically diffs the output against the previous one,
screen formatting is done automatically,
and it shows the timestamp with each output.

Emacs?

grigora:

Thanks for a very helpful and useful article. One thing I would suggest adding is that the Ctrl-R trick is the Emacs shortcut for reverse incremental search. And, in general, many Emacs shortcuts (Ctrl-A, Ctrl-E, Alt-d, Alt-<DEL>) can be used to edit commands at the bash prompt.
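
For reference, here is what those bindings do at the bash prompt (assuming readline's default emacs editing mode):

  Ctrl-A     move to the beginning of the line
  Ctrl-E     move to the end of the line
  Alt-d      delete the word after the cursor
  Alt-<DEL>  delete the word before the cursor
  Ctrl-R     reverse incremental search through history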

Display file contents using bash built-ins

remalone:

To display the contents of a file (file.txt in this case) using BASH built-ins you could use:
while read line; do echo "$line"; done <file.txt

Rick Malone
Systems Engineer Technician
Simulation Training Centre
Petawawa, ON Canada

more as a script

sstock:

Here is the result of a discussion I had about how to implement more using just a shell script (almost ten years ago :-). The goal was just to have fun, but the simplistic result is handy when the terminal (such as a Sparc10 console) doesn't have a scrollback buffer. Sadly I haven't been able to find the other person involved, but here ya go:

shmore() {
    LINES=""
    while read line
    do
        echo "$line"
        LINES=".$LINES"
        if [ "$LINES" == "......................." ]; then
            echo -n "--More--"
            read < /dev/tty
            LINES=""
        fi
    done
}

Now you can load it up, . more.sh, and use it: shmore < somefile

Steve Stock

Anonymous:

cat.sh
---cut---
#!/bin/sh

if [ $# = 0 ]
then
    while read -r; do echo "$REPLY"; done
else
    for F in "$@"
    do
        while read -r; do echo "$REPLY"; done < "$F"
    done
fi
---cut---
