My Favorite bash Tips and Tricks

Save a lot of typing with these handy bash features you won't find in an old-fashioned UNIX shell.

and neither does this one:


find -name test.sh 2>&1 > /tmp/output.txt

Here, bash processes the redirections from left to right: 2>&1 duplicates stderr onto stdout while stdout still points at the terminal, and only then is stdout redirected to /tmp/output.txt, so the error messages never reach the file.

I started this discussion on output redirection using the find command as an example, and all the examples used the find command. This discussion isn't limited to the output of find, however. Many other commands can generate enough error messages to obscure the one or two lines of output you need.

Output redirection isn't limited to bash, either. All Bourne-derived UNIX/Linux shells (sh, ksh, zsh and others) support output redirection using the same syntax.
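For the record, writing the redirections in the opposite order does capture both streams, because bash processes them left to right; a minimal sketch using the same example command and file:

```shell
# stdout is redirected to the file first, then stderr is duplicated onto
# stdout, so both normal output and error messages land in /tmp/output.txt
find -name test.sh > /tmp/output.txt 2>&1
```

The shorthand &> /tmp/output.txt does the same thing in bash, though it is not portable to plain sh.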

Searching the Command History

One of the greatest features of the bash shell is command history, which lets you step back through past commands with the up and down arrow keys. This is fine if the command you want to repeat is within the last 10–20 commands you executed, but it becomes tedious when the command is 75–100 commands back in your history.

To speed things up, you can search interactively through your command history by pressing Ctrl-R. After doing this, your prompt changes to:

(reverse-i-search)`':

Start typing a few letters of the command you're looking for, and bash shows you the most recent command that contains the string you've typed so far. What you type is shown between the ` and ' in the prompt. In the example below, I typed in htt:

(reverse-i-search)`htt': rpm -ql $(rpm -qa | grep httpd)

This shows that the most recent command I typed containing the string htt is:

rpm -ql $(rpm -qa | grep httpd)

To execute that command again, I can press Enter. If I want to edit it, I can press the left or right arrow key. This places the command on the command line at a normal prompt, and I now can edit it as if I just typed it in. This can be a real time saver for commands with a lot of arguments that are far back in the command history.

Using for Loops from the Command Line

One last tip I'd like to offer is using loops from the command line. The command line is not the place to write complicated scripts that include multiple loops or branching. For small loops, though, it can be a great time saver. Unfortunately, I don't see many people taking advantage of this. Instead, I frequently see people use the up arrow key to go back in the command history and modify the previous command for each iteration.

If you are not familiar with creating for loops or other types of loops, many good books on shell scripting discuss this topic. A discussion on for loops in general is an article in itself.

You can write loops interactively in two ways. The first way, and the method I prefer, is to separate each line with a semicolon. A simple loop to make a backup copy of all the files in a directory would look like this:

$ for file in * ; do cp $file $file.bak; done

Another way to write loops is to press Enter after each line instead of inserting a semicolon. bash recognizes that you are creating a loop from the use of the for keyword, and it prompts you for the next line with a secondary prompt. It knows you are done when you enter the keyword done, signifying that your loop is complete:

$ for file in *
> do cp $file $file.bak
> done
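One caveat worth noting: if any filenames contain spaces, the unquoted $file above will be split into separate words. A variant of the same backup loop with the variable quoted (and a -- guard so names starting with a dash aren't taken as options) handles those names safely:

```shell
# Same backup loop, with quotes so filenames containing spaces
# are passed to cp as single arguments
for file in *; do cp -- "$file" "$file.bak"; done
```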

And Now for Something Completely Different

When I originally conceived this article, I was going to name it “Stupid bash Tricks”, and show off some unusual, esoteric bash commands I've learned. The tone of the article has changed since then, but there is one stupid bash trick I'd like to share.

About five years ago, a Linux system I was responsible for ran out of memory. Even simple commands, such as ls, failed with an insufficient memory error. The obvious solution to this problem was simply to reboot. One of the other system administrators wanted to look at a file that may have held clues to the problem, but he couldn't remember the exact name of the file. We could switch to different directories, because the cd command is part of bash, but we couldn't get a list of the files, because even ls would fail. To get around this problem, the other system administrator created a simple loop to show us the files in the directory:

$ for file in *; do echo $file; done

This worked when ls wouldn't, because echo is part of the bash shell and so was already loaded into memory. It's an interesting solution to an unusual problem. Now, can anyone suggest a way to display the contents of a file using only bash built-ins?

______________________

Comments


oldie but a goodie

Posted by Gavin

Hey Prentice! Great article. Still doing any writing?

Hey Gavin, thanks! No, I

Posted by prenticeb

Hey Gavin, thanks! No, I haven't done any more writing in a while, but I should. I've been using a couple other bash "tricks" lately. Might have enough for a sequel.

eh?

Posted by Mr UNIX

instead of

  for file in *; do echo $file; done

to show files in a directory without invoking
any subprocesses, you could have just done

  echo .* *

in fact, yours doesn't work properly if filenames
contain characters such as asterisks or question marks
(among others)

try it

  mkdir test
  cd test
  touch \*
  touch a
  for file in *; do echo $file; done

prints items twice; you need to surround your
echo argument with quotes

Reading the contents of a

Posted by Anonymous

Reading the contents of a file FILE using bash builtins:

while read line; do echo "$line"; done < FILE

Another example of brace expansion

Posted by Anonymous

I needed to create directories to hold some batch files on Windows.
Right clicking and creating "batch1", "batch2" etc got boring once I got to "batch3".
I simply opened up my cygwin shell and did:
mkdir batch{1,2,3,4,5,6,7,8}

:)

brace expansions iterate too

Posted by Mathx

instead of batch{1,2,3,4} etc try this batch{1..100}

win.
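The {1..100} form is a sequence expression: bash expands the range into the full word list before the command ever runs, so mkdir simply receives every name as an argument. A quick sketch:

```shell
# {1..8} expands to 1 2 3 4 5 6 7 8 before mkdir runs,
# creating batch1 through batch8 in one command
mkdir batch{1..8}
```

Note that sequence expressions require bash 3.0 or later; in older shells you are stuck with the comma-separated form.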

Inquire on sorting field in file

Posted by Lee

Hi,

Thanks for the article. However, i am trying to solve the below problem:-

Given the file content is:-

Count Line
0 1
1 1
2 10
3 1
2 3

If I would like to map each line with the count and also add the count for each line, what command should I use? I tried a for loop with cut for the field in the file, but I do not know how to extract these arrays according to line number.

Please advise. Thanks

-Lee

This is helpful for me,

Posted by Anonymous

This is helpful for me, thanks. Some other bash commands can be found here: http://24hnews.net/General/25-Linux-UNIX-commands-that-you-must-know.html

> Now, can anyone suggest a

Posted by Anonymous

> Now, can anyone suggest a way to display the contents of a file using
> only bash built-ins?

# source /path/to/filename

Quote: Can anyone tell me

Posted by Chris

Quote:
Can anyone tell me how to list the contents of a file using only the bash built-ins?

My answer:
Take advantage of the catalog command and pipe it into awk that parses for (in this example) double quotes and returns the zeroth result, hence printing the entire file.

cat /filename | awk -F "\"" '{ print $0 }'

Voilà, file contents printed.

Use a variant of this command on my webserver to parse logs for the names of webbots to keep the d@$n things from eating all my bandwidth.
HellMINTH aka christopmj

oh it is tempting to leave...

Posted by Anonymous

oh it is tempting to leave the solution of only using builtins off the replies as it got eaten by the server and its hunger for <

But here it is:

exec 5<file

while read -u5 k; do echo $k; done

nice trick but why not while

Posted by mathx

nice trick but why not while read a; do echo "$a"; done < file ?

remember to put "" around your var, or you'll have bash parsing the contents of $k and destroying spaces

cat is not actually the

Posted by Anonymous

cat is not actually the catalog/catalogue command, but in fact the concatenate command; man cat will show you this.

So, really cat's main purpose is to join two or more files together.

To output a file (list the contents of a file) using bash built-ins:

while read -r line; do echo "$line"; done < file

Bash out-of-memory hacks

Posted by allenp

The for loop technique for listing a directory is nice, but this is
fewer keystrokes and shows more files if you can't pipe to more:

echo *

To display the contents of a file, the simplest command I see is

echo $(<file)

According to strace(1), the shell clones itself, opens and reads
the file in the subprocess, and then the main process writes the
file's contents to stdout. This could fail for lack of a process
table slot or if it needed more memory and brk(2) failed, but it
doesn't exec anything else.

Paul Allen

another 'classic'

Posted by gongoputch

One of my personal favorites: turn any command into a (sort of) full
screen monitor, e.g. :

while [ 1 ]; do clear; w; sleep 3; done

Very simplistic (doesn't check for output > 25/24, etc) but good
enough for common purposes.

I also like subshells a lot :
(cd /somewhere && do something) | \
(cd /somewhere/else && read from something)

Maybe a 'My Favorite bash Tips and Tricks - Part II' is in order?

instead of "while [ 1 ]; do

Posted by Anonymous

instead of "while [ 1 ]; do clear; w; sleep 3; done"
"watch -n 3 w" is preferable if you want to watch the output of a command over time

yep but watch won't work for

Posted by GarthWick

yep, but watch won't work for more complex commands. For the example that was written, monitoring 'w', it is okay, but for something with pipes or redirections in it, it will not work!

It works just fine, all you

Posted by Patagonicus

It works just fine; all you have to do is escape the special characters. For example:
watch df -h \; echo \; dmesg \| tail
This prints the output of df, a blank line and the last lines of dmesg. You can also use && and || (which have to be written as \&\& or '&&').

or put the whole thing into

Posted by Bernisys

or put the whole thing into double quotes, like:

watch "ls -x ; df ."

but one disadvantage with "watch" is that it strips coloring codes etc. from the output.
So if you want to observe output that contains any highlighting, you will end up with plain uniform text, sadly. And as far as I know, the while ... sleep construct is the only usable workaround here. Sometimes I also add "date" as the first command, just to be sure when the snapshot was taken :)

Hint me if you find any better approach :)

But there are also some major advantages to using watch:
The -d parameter automatically diffs the output against the previous run.
Screen formatting is done automatically.
Plus, it shows the timestamp with each output.

Emacs?

Posted by grigora

Thanks for a very helpful and useful article. One thing I would suggest adding is that the Ctrl-R trick is the Emacs shortcut for reverse incremental search, and in general that many Emacs shortcuts (Ctrl-A, Ctrl-E, Alt-d, Alt-<DEL>) can be used to edit commands at the bash prompt.
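These shortcuts work because bash uses readline's emacs editing mode, which is normally the default. A one-line sketch, in case it has been switched off (for example by an earlier set -o vi):

```shell
# Enable emacs-style line editing explicitly (usually already the default);
# readline then honors Ctrl-A, Ctrl-E, Ctrl-R and friends at the prompt
set -o emacs
```

Putting this line in ~/.bashrc makes the setting stick across sessions.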

Display file contents using bash built-ins

Posted by remalone

To display the contents of a file (file.txt in this case) using BASH built-ins you could use:
while read line; do echo "$line"; done <file.txt

Rick Malone
Systems Engineer Technician
Simulation Training Centre
Petawawa, ON Canada

more as a script

Posted by sstock

Here is the result of a discussion I had about how to implement more using just a shell script (almost ten years ago :-). The goal was just to have fun, but the simplistic result is handy when the terminal (such as a Sparc10 console) doesn't have a scrollback buffer. Sadly I haven't been able to find the other person involved, but here ya go:

shmore() {
    LINES=""
    while read line
    do
        echo "$line"
        LINES=".$LINES"
        if [ "$LINES" == "......................." ]; then
            echo -n "--More--"
            read < /dev/tty
            LINES=""
        fi
    done
}

Now you can load it up, . more.sh, and use it: shmore < somefile

Steve Stock

Posted by Anonymous

cat.sh
---cut---
#!/bin/sh

if [ $# = 0 ]
then
    while read -r; do echo "$REPLY"; done
else
    for F in "$@"
    do
        while read -r; do echo "$REPLY"; done < "$F"
    done
fi
---cut---
