Best of Technical Support
How does one read a core dump file? Occasionally, a machine will crash and output a core dump file. When I try to read these files (using the more command), they are full of meaningless characters. I have yet to find anything on how to read them other than with a debugger for debugging the program that caused the dump, and I never know which program that was. Any ideas on other avenues of determining what happened? —G. Hendricks
A core dump file is a snapshot of the state of the process that died. When a process terminates on one of various signals (such as SIGSEGV, the segmentation violation, typically indicating a memory-related bug in the program) and the process owner's ulimit (see your shell's man page) allows core files, a core dump will be created. It contains information such as the entire set of memory allocated to the program, where the program was when it died and what it was doing.
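If no core file appears when a program crashes, the shell's core-file size limit may be set to zero. In bash or sh, the following command raises it (csh users would use limit coredumpsize instead; again, see your shell's man page):

ulimit -c unlimited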
A core dump is an invaluable tool to Unix programmers. By using it in conjunction with a debugger, a programmer can see what went wrong with his or her program.
To examine one of these files, you typically need two things. First, the program must be compiled and linked using gcc with the -g switch, which instructs the compiler to place debugging information in the executable. Although any program can produce a core file, the core file can tell a programmer where in the program the fault occurred and what values certain variables held only if this debugging information is available.
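For example, a program could be built with debugging information like this (myprog.c here is a stand-in for your own source file):

gcc -g -o myprog myprog.c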
The second requirement is a debugger. If you have installed the development kit, chances are you already have one. The standard Linux debugger is gdb (the GNU Debugger), which is distributed alongside the gcc development tools. A programmer might then use this command to look at a core file:
gdb programname core
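Once inside gdb, the bt (backtrace) command prints the call stack at the moment of the crash, info locals shows the variables in the current stack frame, and quit exits the debugger:

(gdb) bt
(gdb) info locals
(gdb) quit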
Core files are typically useful only to programmers, and a debugger is not a very friendly program (gdb is certainly no exception). If you have no programming experience, you will probably not increase your knowledge of what went wrong by examining a core file in this way. —Chad Robinson, BRT Technical Services Corporation email@example.com
I have loaded Linux and have all the settings for a full Internet connection. I can telnet to and from my computer and can send mail out. I have not been able to configure the system to receive mail. Any suggestions? —Jay Melton
Most likely you just don't have sendmail running as a daemon. You can start up sendmail as a daemon with a command like:
sendmail -bd -q15m
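(The -bd switch tells sendmail to run as a daemon, listening for incoming SMTP connections, and -q15m tells it to process the mail queue every 15 minutes.)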
If that doesn't cause any odd errors, you'll want to add that command to your startup scripts. Check to make sure /etc/rc.d/init.d/sendmail.init exists. If it does, use the run level editor to make it start in run levels 2, 3 and 5 and stop in run levels 0, 1 and 6. —Steven Pritchard, President Southern Illinois Linux Users Group firstname.lastname@example.org
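If your distribution doesn't include a run level editor, the System V-style links can be made by hand. A sketch, assuming the usual /etc/rc.d/rcN.d layout (the S80 and K30 priority numbers here are typical but arbitrary):

ln -s /etc/rc.d/init.d/sendmail.init /etc/rc.d/rc3.d/S80sendmail
ln -s /etc/rc.d/init.d/sendmail.init /etc/rc.d/rc0.d/K30sendmail

Repeat for the remaining start (2, 5) and stop (1, 6) run levels.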
With what package and how can you mirror your favorite software site? —Andreas J. Bathe