The Perl Debugger
Now that we've got the recursive call debugged, let's play with the calling stack a bit. The command T displays the current calling stack: the list of subroutines that have been called between the beginning of execution and the current point in it. In other words, if the main portion of the code executes subroutine “a”, which in turn executes subroutine “b”, which calls “c”, then entering T while in the middle of subroutine “c” outputs a list going from “c” all the way back to “main”.
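As a quick illustration (a toy script of my own, not the program we've been debugging), the following uses three nested subroutines, named outer, middle and inner in place of “a”, “b” and “c”. Run it under perl -d, enter b inner to break on entry to the innermost subroutine, then c to continue and T for the trace; the trace runs from inner back through middle and outer to the main program:

sub inner  { my ($x) = @_; return $x + 1; }   # innermost: break here with "b inner"
sub middle { return inner(@_); }              # middle calls inner
sub outer  { return middle(@_); }             # outer calls middle

print outer(41), "\n";                        # main calls outer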
Start up the program and enter the following commands (omit the second one if you have fixed the bug we discovered in the last section):
b 34 ($_ =~ /file2$/)
a 34 $_ = "$dir/$_"
c
The first command sets a breakpoint at line 34 that stops execution only when the value of the variable $_ ends with the string file2; the second attaches the fix from the last section as an action on that line; and c continues the run. Effectively, this halts execution at seemingly arbitrary points in the program, wherever a file named file2 turns up. Enter T and you'll get this:
@ = main::searchdir('./dir1.0/file2') called from file '../p2.pl' line 45
@ = main::searchdir('.') called from file '../p2.pl' line 10
Enter c, then T again:
@ = main::searchdir('./dir1.0/dir1.1/file2') called from file '../p2.pl' line 45
@ = main::searchdir(undef) called from file '../p2.pl' line 45
@ = main::searchdir('.') called from file '../p2.pl' line 10
Do it once more:
@ = main::searchdir('./dir2.0/file2') called from file '../p2.pl' line 45
@ = main::searchdir('.') called from file '../p2.pl' line 10
You can go on, if you so desire, but I think we have enough data from the arbitrary stack dumps we've taken.
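For reference, the traces above come from a recursive directory search. The listing of p2.pl isn't reproduced here, but a subroutine along these lines would produce them (the name searchdir and the call sites are taken from the traces; the body is my own sketch, with the fix from the a 34 action already applied):

sub searchdir {
    my ($dir) = @_;
    return $dir unless -d $dir;          # a plain file: just report it
    my @found;
    opendir my $dh, $dir or return;
    for (readdir $dh) {
        next if $_ eq '.' or $_ eq '..';
        $_ = "$dir/$_";                  # the path fix applied by the 'a 34' action
        push @found, searchdir($_);      # recursive call (line 45 in the traces)
    }
    closedir $dh;
    return @found;
}

print "$_\n" for searchdir('.');         # initial call (line 10 in the traces)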
We see here which subroutines were called, the debugger's best guess of the arguments passed to each one, and which line of which file each call was made from. Since the lines begin with @ =, we know that searchdir was called in list context and will return a list. A call in scalar context would show $ = instead, and a call in void context is marked with . =.
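Here is a quick way to see the context indicator change (my own example): the same subroutine called once in list context and once in scalar context. A T taken inside whoami() would begin with @ = for the first call and $ = for the second:

sub whoami {
    # wantarray reports the context in which the sub was called
    return wantarray ? 'list' : 'scalar';
}

my @l = whoami();   # list context: the trace line would begin '@ = '
my $s = whoami();   # scalar context: the trace line would begin '$ = '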
I say “best guess of what arguments were passed” because in Perl, the arguments to a subroutine are placed in the magic array @_, and manipulating @_ (or $_) in the body of the subroutine is allowed and even encouraged. When a T is entered, the stack trace is printed, with the current value of @_ shown as the arguments to each subroutine. So once @_ has been changed, the trace no longer reflects what was actually passed to the subroutine.
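For example (again, my own sketch), a subroutine that shifts its arguments off @_ leaves a misleading trace: a T taken after the shifts reports the subroutine as having been called with no arguments at all:

sub greet {
    my $greeting = shift;   # each shift removes an element from @_
    my $name     = shift;
    # a breakpoint here, followed by T, would show main::greet()
    # called with no arguments, even though two were passed
    print "$greeting, $name!\n";
}

greet('Hello', 'world');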
By now you must be thinking, “Gosh, this Perl debugger is so keen that with it I can end world hunger, learn to play the piano and increase my productivity by 300%!” That is exactly the right attitude: you are now displaying the third programmer's virtue, hubris. However, some warnings are in order.
Race conditions are the scourge of the programmer. They are bugs that occur only under certain circumstances, and those circumstances usually involve the timing of certain events relative to others. Using the debugger on them is not always possible, because the act of using the debugger can change the timing of events in the program: a symptom that occurs in a normal run may disappear while the debugger is in use. The bug isn't gone; it just isn't being “tickled”.
There really isn't any stock method for getting rid of race conditions. Usually, an intense analysis of the algorithms is necessary. Finite-state diagrams may also be useful, if you have the patience for them.
When writing code that involves more than one process (for example, code that uses a fork system call or its equivalent), using the debugger becomes very difficult. When the fork occurs, you are left with two (or more) processes, all running under the debugger, and since the debugger is interactive, you must deal with each process individually, controlling each one's execution. All the processes want to read debugging commands from the controlling terminal, but only one at a time can do so; the others block until the first finishes with the terminal, and then another gets its turn. Incidentally, we can't know for sure which process will go first, which is itself an example of the race conditions just mentioned.
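To see the problem for yourself, here is a minimal script of my own (not one of the article's examples) to run under perl -d. Step past the fork and both the parent and the child are stopped in the debugger, contending for the same terminal:

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {
    print "in the child:  pid $$\n";     # the child process
    exit 0;
} else {
    print "in the parent: pid $$, child is $pid\n";
    waitpid($pid, 0);                    # reap the child
}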