Features of the TCSH Shell
In this article, I will describe some of the main features of TCSH, which I believe make it worth using as the primary login shell. This article is not meant to persuade bash users to change. I've never used bash, so I know very little about it.
As some of you know, I've created a configuration tool called The Dotfile Generator, http://www.imada.ou.dk/~blackie/dotfile/, which can configure TCSH. I believe that this tool is very handy for getting the most out of TCSH without having to read the manual a couple of times. Therefore, I'll refer to this tool several times throughout this article to show how it can be used to set up TCSH.
The shell is your interface for executing programs, managing files and directories, etc. Though few people are aware of it, they use the shell constantly in daily work—for example, when completing file names, using history substitution or expanding aliases. The TCSH shell offers all of these features and a few more, which the average user seldom takes full advantage of.
With a good knowledge of your shell's power, you can decrease the time you spend in the shell and increase the time spent on your actual tasks.
An important feature used by almost all users of a shell is command line completion. With this feature you don't need to type all the letters of a file name—just the ambiguous ones. This means that if you wish to edit a file called file.txt, you may need to type only fi and press the TAB key, and the shell will type the rest of the file name for you.
By default, completion works only on file and directory names. This means that you cannot complete host names, process IDs, options for a given program, etc. Nor can you restrict completion to directory names alone when typing the argument to the command cd.
In TCSH the completion mechanism is enhanced, so that it is possible to tell TCSH which list to use to complete a particular command. For example, you can tell TCSH to complete from a list of host names for the commands rlogin and ping. An alternative is to tell it to complete only on directories when the command is cd.
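In TCSH such rules are written with the built-in complete command. As a minimal sketch (the exact rules depend on your setup), the two examples above could look like this in a ~/.tcshrc, assuming $hosts is a shell variable set elsewhere in the file:

```shell
# Sketch of ~/.tcshrc completion rules.
complete cd     'p/1/d/'       # complete the first argument of cd with directories only
complete rlogin 'p/1/$hosts/'  # complete from the list in the $hosts variable
complete ping   'p/1/$hosts/'
```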
To configure user-defined completion using The Dotfile Generator (TDG), go to the TDG page completion -> userdefined; this will bring up a page which looks like Figure 1.
For the command name, you tell TDG which command you wish to define a completion for. In this example it is rm.
Next you have to tell TDG to which arguments of the command this completion applies. To do this, press the button labeled Position definition. This brings up a page split into two parts, as shown in Figures 2 and 3.
In the first part, you give TDG a position definition based on the index of the argument being completed (i.e., the one where the TAB key is pressed). Here you can specify that the completion applies to the first argument, to all arguments except the first one, and so forth.
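At the tcsh level, these position definitions correspond to the position selector in a complete rule. A hedged sketch (check the tcsh man page for the exact range syntax your version supports):

```shell
# Position selectors in tcsh completion rules (sketch):
#   p/1/.../   applies to the first argument only
#   p/2-/.../  applies from the second argument onward
#   p/*/.../   applies to every argument
complete rm 'p/*/f/'   # hypothetical rule: complete every rm argument with file names
```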
Figure 3. TDG Pattern Definition Page
The alternative to “position-dependent completion” is “pattern-dependent completion”. This means you can tell TDG that the completion should apply only if the current word, the previous word or the word before the previous word matches a given pattern.
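In tcsh's own notation, pattern-dependent completion is expressed with c-, n- and N-type rules, corresponding to the current word, the previous word and the word before that. The find and cc rules below are the kind of examples the tcsh manual gives; treat them as a sketch:

```shell
#   c/pat/list/  applies when the current word begins with pat
#   n/pat/list/  applies when the previous word matches pat
#   N/pat/list/  applies when the word before the previous one matches pat
complete find 'n/-user/u/'  # after "find ... -user", complete with user names
complete cc   'c/-I/d/'     # complete a -I option to cc with directory names
```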
Now you have to tell TDG which list to complete from. To do this, press the button labeled List. This will bring up a page where you can select from a lot of different lists, e.g., aliases, user names or directories.
Four of the lists you can select from are Commands, Directories, File names and Text files. If you select one of these, only elements of that kind are offered as completions.
There are two ways to specify completion from a predefined list. One is to mark the option predefined list and type every element of the list directly.
This solution is a bad idea if the list is used in several places (e.g., a list of host names). In that case, tell TDG instead that the list is located in a variable, and set that variable in the .tcshrc file.
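In the generated .tcshrc, this amounts to keeping the list in a shell variable and referring to it from each completion rule. A sketch, with made-up host names:

```shell
# The host names here are placeholders -- substitute your own.
set hosts = (localhost ftp.funet.fi www.imada.ou.dk)
complete rlogin 'p/1/$hosts/'   # both commands share the same list
complete ping   'p/1/$hosts/'
```

Changing the variable in one place then updates the completion for every command that uses it.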