"Dogs" of the Linux Shell

Could the command-line tools you've forgotten or never knew save time and some frustration?

The fmt command is a simple text formatter that focuses on making textual data conform to a maximum line width. It accomplishes this by joining and breaking lines around white space. Imagine that you need to maintain textual content that was generated with a word processor. The exported text may contain lines whose lengths vary from very short to much longer than a standard screen width. If such text is to be maintained in a text editor (like vi), fmt is the command of choice to transform the original text into a more maintainable format. The first example below shows fmt being asked to reformat file contents as text lines no greater than 60 characters long.

# (8) No more than 60 char lines
$ fmt -w 60 README.txt > NEW_README.txt
# (9) Force uniform spacing:
#     1 space between words, 2 between sentences
$ echo "Hello   World. Hello Universe." | \
fmt -u -w80 
Hello World.  Hello Universe.
fold--Break Up Input

fold is similar to fmt but is typically used to format data that will be consumed by other programs, rather than to make the text more readable to the human eye. The commented examples below are fairly easy to follow:

# (10) Format text in 3-column-width lines
$ echo oxoxoxoxo | fold -w3
oxo
xox
oxo
# (11) Parse by triplet-char strings -
#      search for 'xox'
$ echo oxoxoxoxo | fold -w3 | grep "xox"
xox
# (12) One way to iterate through a string of chars
$ for i in $(echo 12345 | fold -w1)
> do
> ### perform some task ...
> print $i
> done

tr is a simple pattern translator. Its practical application overlaps a bit with other, more complex tools, such as sed and awk [with larger binary footprints]. tr is quite useful for simple textual replacements, deletions and additions. Its behavior is dictated by "from" and "to" character sets provided as the first and second arguments. The general usage syntax of tr is as follows:

# (12)  tr usage
tr [options] "set1" ["set2"] < input > output

Note that tr does not accept file arguments; it reads from standard input and writes to standard output. When two character sets are provided, tr operates on the characters contained in "set1" and performs some amount of substitution based on "set2". Listing 1 demonstrates some of the more common tasks performed with tr.

Listing 1. Common Tasks with tr


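Listing 1 itself is not reproduced here. As a stand-in, the following sketch shows the kinds of simple translations, deletions and squeezes tr is typically used for; the examples are illustrative rather than the listing's actual contents:

```shell
# Translate lowercase to uppercase
echo "hello world" | tr '[:lower:]' '[:upper:]'   # HELLO WORLD

# Delete every digit from the input
echo "abc123def456" | tr -d '[:digit:]'           # abcdef

# Squeeze runs of spaces down to a single space
echo "too    many     spaces" | tr -s ' '         # too many spaces

# Turn colons into newlines (handy for reading $PATH)
echo "$PATH" | tr ':' '\n'
```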
pr shares features with simpler commands like nl and fmt, but its command-line options make it ideal for converting text files into a format that's suitable for printing. pr offers options that allow you to specify page length, column width, margins, headers/footers, double line spacing and more.

Aside from being the formatter best suited to printing tasks, pr also offers other useful features, including viewing multiple files in adjacent columns and columnizing a list in a fixed number of columns (see Listing 2).

Listing 2. Using pr

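Listing 2 is likewise not reproduced here. The sketch below illustrates the pr features just described; report.txt, file1.txt and file2.txt are hypothetical file names used only for illustration:

```shell
# Paginate for printing: 60-line pages with a custom header
pr -h "Quarterly Report" -l 60 report.txt

# View two files side by side in adjacent columns
pr -m -w 80 file1.txt file2.txt

# Columnize a one-item-per-line list into 4 columns,
# suppressing the default header/trailer with -t
ls /usr/bin | pr -4 -t -w 80
```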

The following two commands are specialized parsers used to pick apart file path pieces.


The basename and dirname commands are useful for presenting portions of a given file path. Quite often in scripting situations, it's convenient to be able to parse and capture a file name or the containing-directory name portions of a file path. These commands reduce this task to a simple one-line command. (There are other ways to approach this using the Korn shell or sed "magic", but basename and dirname are more portable and straightforward).

basename is used to strip off the directory and, optionally, the file suffix parts of a file path. Consider the following trivial examples:

# (21) Parse out the Java class name
$ basename /usr/local/src/java/TheClass.java .java
TheClass
# (22) Parse out the file name
$ basename srcs/C/main.c
main.c

dirname is used to display the containing directory path, using as much of the path as is provided. Consider the following examples:

# (23) absolute and relative directory examples
$ dirname /homes/curly/.profile
/homes/curly
$ dirname curly/.profile
curly
# (24) From any Korn shell script, the following
#  line will assign the directory from where
#  the script was launched
SCRIPT_HOME="$(dirname $(whence $0))"
# (25)
# Okay, how about a non-trivial practical example?
#  List all directories (under $PWD) that contain a
#  file called 'core'.
$ for i in $(find $PWD -name core)
> do
> dirname $i
> done | sort -u
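Putting basename and dirname together, a short fragment can split any path into its pieces (the path below is just for illustration):

```shell
# Split a path into directory, file name and suffix-stripped name
FILE=/usr/local/src/java/TheClass.java

DIR=$(dirname "$FILE")            # /usr/local/src/java
NAME=$(basename "$FILE")          # TheClass.java
CLASS=$(basename "$FILE" .java)   # TheClass

echo "$DIR $NAME $CLASS"
```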



Re: recall things in depth


I was just involved in a project doing a data conversion.

Old Windows software can print reports/data to a file as file.prn, but the output is dirty: each record is split across multiple lines, with one blank line used as the record separator.

head and tail are just right for capturing it:

while [ startLine -lt totalLine ]; do

parse using wc -c to check for empty lines

use cat -A and sed to strip the trailing ^M (carriage return) chars

use >> and > to join the 3/4 lines back into one record


then port it to a MySQL database.

Many thanks to the article's author.

Re: recall things in depth


You're quite welcome. Glad it helped!

--- Louie Iacona

Re: [...]Linux Shell, Cygwin anyone?


Just a reminder to those Linux/UNIX enthusiasts who have to suffer the Microsoft command line at work... Check out Cygwin for the coolest shell (and X) stuff that runs on Windows.


Re: One-level Deep Directory Listing


Here's a super simple command line thingy that I use all the time to see the contents of the current directory and one level down:

daemonbox [1]: ls -AF `ls -A`

I've aliased it to "l1" for convenience.

note - this is on NetBSD-1.6: YMMV in Linux

One-level Deep Directory Usage


What I also find quite useful these days is:

du -h --max-depth=1

which shows me how much disk space is being used by the subdirectories of the current folder (or whatever argument is added), and I've aliased it to 'd1'.

I've also used this as `d1|grep M` which will show me all the results that are 1 MB or greater (or contain "M" in their name :-)), for quick answers. And to sort `ls -l` by date, I've sometimes used `ls -l|sort -k6,7`.

Re: One-level Deep Directory Usage


Try grepping for "[MG]" instead to also catch files that are 1 GB or larger. If you grep only for M, a file over 1000 MB (listed with a G suffix) won't display.

Re: One-level Deep Directory Usage


Ever tried "du -s *"?

OK, that lists files too, but it's quicker to write! Yay!

Re: One-level Deep Directory Listing


in zsh you can simply do this

ls *(/)

Re: "Dogs" of the Linux Shell

DrScriptt's picture

Now this is a GREAT article!!! I really would like to see more articles like this one.

I've been using Linux for 3+ years now and I LOVE it. I cut my teeth on DOS batch files using DATE, FC, and TIME to do a LOT of what was done here. It was VERY hard; I ended up creating temporary files all over the place that had to be subsequently cleaned up. Unices, on the other hand, make it SO easy. I really do enjoy seeing all the CLI tools that are out there and knowing that people are using them. To me, using tools like these is what makes us unix people, no matter how experienced or inexperienced (me) we may be. Using the system to its potential is what it's there for. Try doing some of these tasks and more (combine them...) in Windows with what is provided with the OS.






Re: Excellent article.


Found your site from Linux Today.

My linux tips page:


Re: use seq, not fold, for iteration


The iteration example is less than convincing. Try iterating over 10 elements. Oops. Try 1000. Huh? ...

for i in $(echo 12345|fold -w1); do print $i; done

should be

for i in `seq 5`; do print $i;done

seq(1) lets you define start, stop, step and more.
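For example, a minimal sketch assuming GNU seq:

```shell
# Count from 1 to 9 in steps of 2: seq FIRST INCREMENT LAST
for i in $(seq 1 2 9); do echo "$i"; done
# prints 1 3 5 7 9, one per line
```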

Re: use seq, not fold, for iteration


Thank You very much!!
I was looking for this exact feature for my script.

Re: use seq, not fold, for iteration


Hi - the examples were not designed to convey a message of, "this is absolutely the BEST way to accomplish the given task". (Although that might be true for some examples ;-) )

The examples are mainly intended to show basic functionality - what the tool generally does - the output given a certain input.

Louie Iacona

Re: use seq, not fold, for iteration


Plus I think it is always much more fun doing it the hard way.

I remember when we used to have competitions to see how many different ways one could cat a file without using cat..

GREAT article!

Re: line numbering


If you don't need anything complicated, cat -n somefile > somefile.numbered can do the trick with numbering lines.

Re: line numbering


Hi - yes, that would work - however, nl provides format options that 'cat -n' does not. nl and pr are generally used to number lines of text since they're 'option rich' around that kind of formatting.

Good observation though - I should have included that in the column ...

--- Louie Iacona

Re: line numbering


nl isn't installed in FreeBSD by default. Command-line tools should be available everywhere. Of course you can download/compile/install it yourself, but that's a lot of work; you might as well just write the awk/Perl script at that point.



What is the Unix equivalent of Windows' "dir /s"? "dir /s" is like 'ls' but it looks recursively in all subdirectories too. I know 'find' can do something like this, but its man page is practically unreadable.. <:-



`ls -R` ;)

regards, elybis

Re: dir /s


If you want to display just the directories/subdirectories in the current directory as you would do with the DOS/Windows command "dir /AD" you might try:
ls -alp | grep '^d'
find -type d -maxdepth 1
ls -d */

Re:dir /s


If you know the filename, try locate - you might be surprised by the output ;-)

Re:dir /s


or even if you're just close to the file name

Re: Other tricks: DU and DF


Heh, I misread your question initially. Even though you said Windows, I saw "dir /s" and thought of VMS, where that provides subtotals.

$ du -s *

works as a basic equivalent of that. (Yeah, I know I'm off topic and not answering the real initial question.)

Another favorite of mine is

$ df -k

which shows mounted disks and how much space each has, how much is used, and LIES ABOUT HOW MUCH IS FREE. It's intentionally off by five percent. Note this seems to be true in every un*x I've used, not just Linux flavors.

Re: Other tricks: DU and DF


> $ df -k
>
> which shows mounted disks and how much space it has, how much is used, and LIES ABOUT HOW MUCH IS FREE. It's intentionally off by five percent.

That is because Unices reserve 5% on each partition, which can only be used by root. This means that if a user fills a partition, it does not stop the system from working, and root can still operate normally to correct it.

Re: Other tricks: DU and DF


The difference you are noticing is disk space reserved for root. I think 5% is the default amount reserved for root when you create a file system on most Unix boxes. The amount of free disk space reported by 'df' is the remaining disk space available to non-root users.



Simple way to use find:

find dirname -ls

(where dirname is the directory to list -- use . if you want the current directory.) The output format will look like ls -ali but it will list all files and directories recursively.

You can also do:

ls -alR

But the format kind of sucks.

zsh: ls **/*.txt


If you want to search only one directory deep, try

ls -hal */*.txt

and, here is the good part, IF you are using the zsh shell (free and comes with all Linux distributions) you can use

ls -hal **/*.txt

to search recursively directly in the shell! (Since this is shell-expanded, it works with ALL commands, but you can't have more than a couple of thousand files, or the expansion gets too large and you have to use 'find'.)

Re:dir /s equiv


ls -la * is pretty close

Re:dir /s equiv


the closest replacement (if you are using gnu find)

$ find . -name 'pattern' -ls

ie: pattern would be something like '*.txt'

it provides output that looks like ls "long format"

I suppose without gnu find you could

$ find . -name 'pattern' -exec ls -l "{}" \;

but that would be _slow_

find is very useful if a file pattern expands to a string larger than the command line, because with find the pattern is quoted and so is not expanded by the shell.

ex: to delete a very large directory of files. ...

$ find . -name '*' -type 'f' -maxdepth 1 -exec rm "{}" \;

instead of rm *.


Re:dir /s equiv


There is also

find . -name '*.txt' -print

if you only want to list the names and not size, date, etc. I believe this may be more portable than the '-ls' option.



ls -lR

That recurses through subdirectories.

Add some ls tweaks to make things more interesting. For instance, to sort directory listings from largest file size to smallest:

ls -lRS

To sort directory listings from most recently altered to "oldest":

ls -lRt

on and on and on...



try "tree", "du", or "find ." (the dot means current directory).

easy find options are: -type f (regular files only) -type d (directories only).

for example

find . -type f |xargs grep 'nvidia'

will show you all the files under the current directory containing the string nvidia. (xargs works kinda like the backquotes ("`").)

have fun!

Re: find


find . -type f -name '*nvidia*'

would be a better example of how to use find. It would find all files whose _name_ contains nvidia.

xargs deserves a section and explanation of its own.



'ls -R' perhaps?



thanks, that was too easy.. <:-)

Re: dir /s


try "ls -R" or "ls -Rl"



Is `ls -R` what you're looking for?

Re: recursive dir for UNIX/Linux


Depending on what you want your output to look like, try

ls -R /

It will display the contents of / (root) in a:

file1 file2

type of format.

The find command is easier to use than the man page would lead you to believe. Try:

find / -type f -print

This produces a more flat/linear list. Depends on what you're doing - one will be more suitable than the other. These commands are pretty much the only game in town for this sort of thing.

Oh, on the clarity of the man page, try typing:

info find

at your shell prompt. It's more verbose, but more clear - I think.

--- Louie Iacona

Re: ``dir /s


One Anonymous asked:

``What is the Unix equivalent of Windows' "dir /s"?"

Try ``find $DIR -name $FILE_NAME"

where $DIR is the name of the top directory you want to look in (typing just ``." works fine), & $FILE_NAME is the name of the file you are looking for.

Enclose $FILE_NAME in quotes if you are using wildcards.

But take the time to read the manpage & learn how find works. It is a truly useful command.




I don't know what dir /s does.

ls -R lists all files in the directory and all subdirectories.



Here's something I use now and again:

find / -type f -exec grep -icH 'regex' '{}' \; | sed -e '/0$/ d' | sed 's/\(.*:\)\([0-9]*\)/\2\1/' | sort -n > results.txt

What this does is search every regular file on your system, greps it for a regex, pipes the output through sed a couple of times to remove results with zero hits and to put the number of hits at the front, sorts them by number, then puts them in a file.

Useful when trying to find out how a particular distribution sets stuff for programs; be warned though, it can take a while to complete :-) but that shouldn't be a problem if you need a coffee!

Re: Cool, but...


You might try the --recursive option to GNU grep. ;-)

Re: Faster Modification (I think)


By looking at your command string it seems that an instance of grep is run for every single file on your system. If this could be avoided then the scan could be completed much quicker.

I think this should work faster:

find / -type f -print0 | xargs --null grep -icH 'regex' | sed -e '/0$/ d; s/\(.*\):\([0-9]*\)/\2 \1/' | sort -n

Or the two command version (Better for low memory machines because of the sort command):

find / -type f -print0 | xargs --null grep -icH 'regex' > results_prev

cat results_prev | sed -e '/0$/ d; s/\(.*\):\([0-9]*\)/\2 \1/' | sort -n > results

It should work faster because xargs will run the grep command with batches of input files. I also combined the sed expression, removed the ':' at the end of each line, and added a space between the number of times regex appears in the file and the name of the file. Note that the -print0 in the find command, and the --null in xargs is to avoid problems with files that contain spaces.


Jason B.

j bowman mydotmanager.com

Re: Faster Modification (I think)


"By looking at your command string it seems that an instance of grep is run for every single file on your system. If this could be avoided then the scan could be completed much quicker. "

Absolutely :-) Most of the time I limit the search to /etc when trying to find which obscure configuration file the parameters for xyz are located in. The / was more a proof of concept.

I'll try it with the xargs and the print0. Thanks :-)





Linux Sort files


I want to sort files by created/modified time in ascending order.


Use ls -altr


Use ls -altr