How Fast Can You Type? Develop a Tiny Utility in Bash to Find Out
If you spend most of your time typing on your keyboard (and I hope you don't use that mouse very frequently, if you care about your wrists, that is), practicing to become a better and faster typist is well worth the time and effort. And measuring something is the first step to improving it.
There are tons of applications that test your typing abilities and help you improve them, but wouldn't it be nice to get a basic idea of your typing performance using nothing but good old Bash? After all, this is about the DIY (Do It Yourself) approach and having fun, two notions that Linux Journal readers know very well.
The idea is actually very simple: measuring typing speed basically means measuring how many words you typed in a given amount of time. One of the most popular units is wpm (words per minute). It may not be very accurate or scientific, but we're aiming for a ballpark figure here, so an approximate measure is fine for our purpose. Based on this information, we can write the formula as:
typing_speed_in_wpm = num_words / ( (end_time - start_time) / 60 )
Now that we have our theoretical framework set up, it is time to build the practical computational part of the project. Before diving into code, let’s break down the above formula into pieces and see which GNU/Linux utilities can help us achieve various tasks:
- date: This well-known utility, when used with the %s format specifier, returns the "seconds since 1970-01-01 00:00:00 UTC". So if you run date +%s once at the beginning of your typing session and once at the end of it, you'll have start_time and end_time.
- wc: This is yet another well-known utility, one that gives you the number of words in a file when invoked with the -w option. And remember, in GNU/Linux almost everything is a file, including your input from the keyboard.
- cat: Officially, it concatenates files and prints them on the standard output. Practically, it can grab your input from the keyboard and, via a pipe, send it to wc. In other words, all we need to do to count the number of words we just typed is to issue the following command:
cat | wc -w
- bc: Officially, it is an arbitrary precision calculator language. Practically, it is a very handy utility for doing calculations on the command line. But you have to be careful and read its manual page. Why? Well, if you try this:
$ echo 1 + 1 | bc
2
everything seems fine, but if you try the following:
$ echo 1 / 2 | bc
0
That's not what you'd expect from a computer. Why doesn't it return the correct answer, that is, 0.5? According to its manual page, "scale defines how some operations use digits after the decimal point. The default value of scale is 0." Apparently, that is not a very sensible default for the division operation we'll use later. The solution, then, is to tell bc what scale to use before doing the operation, e.g.:
$ echo "scale=2; 1 / 2" | bc
.50
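To see how date +%s and bc fit together before assembling the full script, here is a minimal sketch in which sleep stands in for the actual typing session:

```shell
#!/bin/bash
# Minimal sketch: measure elapsed time with date +%s, then compute with bc.
start_time=$(date +%s)
sleep 1                       # stand-in for the actual typing session
end_time=$(date +%s)
elapsed=$((end_time - start_time))
echo "Elapsed: $elapsed seconds"
echo "scale=2; $elapsed / 60" | bc   # elapsed time expressed in minutes
```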
That's much better. We have all the components in place, and now it is time to glue them together using our favorite application development environment, Bash:
#!/bin/bash
# speed.sh: a very tiny utility to measure typing speed.
prompt="Start typing a piece of text. Press Ctrl-d twice to finish."
printf '\n%s\n\n' "$prompt"
start_time=$(date +%s)
words=$(cat | wc -w)
end_time=$(date +%s)
speed=$(echo "scale=2; $words / ( ($end_time - $start_time) / 60 )" | bc)
printf '\n\nYou have a typing speed of %s words per minute.\n' "$speed"
If you save the above shell script as speed.sh and make it executable, you are ready to measure your typing speed. Oh, I forgot one thing: a piece of text to type. It is always good to have some text ready so that you'll know what to type. In this case, I prefer the first few lines of the Usenet message in which Linus Torvalds announced the birth of Linux:
Start typing a piece of text. Press Ctrl-d twice to finish.

Hello everybody out there using minix - I'm doing a (free) operating system (just a hobby, won't be big and professional like gnu) for 386(486) AT clones. This has been brewing since april, and is starting to get ready.

You have a typing speed of 43.33 words per minute.
Well, 43.33 words per minute is not a world record for sure (and I definitely have had better scores, believe me!). According to the relevant Wikipedia article, "as of 2005, writer Barbara Blackburn was the fastest English language typist in the world, according to The Guinness Book of World Records. Using the Dvorak Simplified Keyboard, she has maintained 150 words per minute (wpm) for 50 minutes, and 170 wpm for shorter periods. She has been clocked at a peak speed of 212 wpm." I don't know, maybe it is time to switch to the Dvorak Simplified Keyboard, but I have my doubts.
The tiny utility above gave me a rough idea of my typing speed, but it certainly lacks some important features. It would be nice if:
- It included various texts and showed them in a random order so that the average performance of different typing sessions can be calculated. A single measurement is hardly a reliable indicator when it comes to this kind of benchmarking.
- It had the option of getting sample texts from files.
- It took into account the number of errors made by the typist. This calls for a small function that compares the sample text with the typist's input. A performance of 1,000 wpm does not mean much if 90% of the words contain terrible errors, typos, etc.
The three points above are left as an exercise to the Linux Journal reader.
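Without spoiling the exercise completely, here is one possible starting point for the error-counting part: split both texts into one word per line and let diff report the mismatches. The file names sample.txt and typed.txt are placeholders, and note that this rough approach counts a substituted word twice (once per side of the diff):

```shell
#!/bin/bash
# Rough error count: compare sample and typed text word by word.
# sample.txt and typed.txt are placeholder file names.
tr -s '[:space:]' '\n' < sample.txt > /tmp/sample_words
tr -s '[:space:]' '\n' < typed.txt  > /tmp/typed_words
# Count diff lines that start with < or >, i.e., words present on only one side.
errors=$(diff /tmp/sample_words /tmp/typed_words | grep -c '^[<>]')
echo "Mismatched words: $errors"
```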
Happy hacking and typing.
Emre Sevinç currently works as a software developer and researcher. He's been involved with GNU/Linux since 1994 when he first met it at the math department of Istanbul Technical University.