Introduction to Gawk
It seems impossible to have such ease of use together with speed; there must be a trade-off, and run-time performance is the area in which gawk suffers. This is not to say that gawk is terribly slow, but since it is interpreted rather than compiled, it cannot compete with compiled languages for speed of execution. (It is also somewhat slower than a comparable program written in Perl.) However, if your main concern is getting a working program written as quickly as possible, you probably do not want to wrestle with C or C++ for a week to perfect the most efficient algorithm. By trading the speed and control of C (or another compiled language) for ease of use, gawk lets you get the job done quickly and relatively painlessly.
If, however, execution speed is critical, gawk still makes an excellent tool for implementing and testing a prototype before you start to code in C. And when the prototype is complete, you may find that the gawk version is fast enough to meet your needs.
gawk offers the programmer a simple, somewhat C-like syntax, automatic file handling, associative arrays, and powerful pattern matching: features that can help you create a program much more quickly than with a more traditional language. gawk also has many other useful and powerful features, such as user-defined functions, recursion, many built-in functions, regular expressions, multidimensional arrays, formatted output using printf and sprintf, and even the ability to set variables on the command line. These features are beyond the scope of this article.

Without doubt, gawk's interpreter will produce a slower-running final product than a C compiler, or even a Perl interpreter. But this slower execution speed (it certainly is not slow!) is more than compensated for by the speed and ease of program development and testing. When you need a program to perform a task and you need it right now, whether it is a quick-and-dirty, use-once program or one that will see plenty of use, gawk may prove to be the right language for the task.
Ian Gordon (email@example.com) is a support programmer at Hyprotech Ltd. in Calgary, Alberta. He discovered the joys of Linux 15 months ago, a discovery which has taken up much of his free time.