Best of Technical Support
How does one read a core dump file? Occasionally, a machine will crash and a core dump file is output. When I try to read them (using the more command) they are full of meaningless characters. I have yet to find anything on how to read these files except for a debugger for debugging the programs that caused the dump—I never know which program caused the core dump. Any ideas on other avenues of determining what happened? —G. Hendricks
A core dump file is a snapshot of the state of the process that died. When a process terminates on one of various signals (such as SIGSEGV, the segmentation violation, which typically indicates a memory-related bug in the program) and the process owner's ulimit (see your shell's man page) allows core files, a core dump is created. It contains information such as the entire set of memory allocated to the program, where the program was when it died and what it was doing.
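Whether a core file is written at all depends on that ulimit setting; a quick check, assuming a Bourne-style shell (csh users would use limit coredumpsize instead):

```shell
# Show the current core-file size limit (0 means no core files are written)
ulimit -c
# Remove the limit for processes started from this shell
ulimit -c unlimited
# Confirm the change
ulimit -c
```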
A core dump is an invaluable tool to Unix programmers. By using it in conjunction with a debugger, a programmer can see what went wrong with his or her program.
To examine one of these files, you typically need two things. First, the program must be compiled and linked using gcc with the -g switch, which instructs the compiler to place debugging information in the executable. Any program can produce a core file, but the core file can tell a programmer where in the program the fault occurred and what values certain variables held only if this debugging information is available.
The second tool that is required is the debugger. If you have installed the development kit, chances are you already have this. The standard Linux debugger is gdb (the GNU Debugger) and is part of the gcc development kit. A programmer might then use this command to look at a core file:
gdb programname core
Core files are typically useful only to programmers, and a debugger is not a very friendly program (gdb is certainly no exception). If you have no programming experience, you will probably not increase your knowledge of what went wrong by examining a core file in this way. —Chad Robinson, BRT Technical Services Corporation firstname.lastname@example.org
I have loaded Linux and have all the settings for a full Internet connection. I can telnet to and from my computer and can send mail out. I have not been able to configure the system to receive mail. Any suggestions? —Jay Melton
Most likely you just don't have sendmail running as a daemon. You can start up sendmail as a daemon with a command like:
sendmail -bd -q15m
If that doesn't cause any odd errors, you'll want to add that command to your startup scripts. Check to make sure /etc/rc.d/init.d/sendmail.init exists. If it does, use the run level editor to make it start in run levels 2, 3 and 5 and stop in run levels 0, 1 and 6. —Steven Pritchard, President Southern Illinois Linux Users Group email@example.com
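If your distribution lacks that init script, a minimal fallback is to start the daemon from a local startup script; a sketch of an rc.local fragment, assuming sendmail is installed in /usr/sbin:

```shell
# /etc/rc.d/rc.local fragment: run sendmail as a daemon (-bd)
# and process the mail queue every 15 minutes (-q15m)
/usr/sbin/sendmail -bd -q15m
```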
With what package and how can you mirror your favorite software site? —Andreas J. Bathe
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem to always have the right tool for the job.
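The .log-searching combination described above can be strung together in a single command line (the directory and search string are examples):

```shell
# Find every .log file under /home and print matching lines,
# with each match prefixed by its file name (-H)
find /home -name '*.log' -exec grep -H 'error' {} +
```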
Cron traditionally has been considered another such tool, one for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. Register Now!
- Stunnel Security for Oracle
- SourceClear Open
- Murat Yener and Onur Dundar's Expert Android Studio (Wrox)
- SUSE LLC's SUSE Manager
- My +1 Sword of Productivity
- Managing Linux Using Puppet
- Non-Linux FOSS: Caffeine!
- Google's SwiftShader Released
- Tech Tip: Really Simple HTTP Server with Python
- Parsing an RSS News Feed with a Bash Script
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn’t consider the total cost of ownership, and it doesn’t consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide