Best of Technical Support
I have upgraded my PC's memory from 16MB to 72MB. Although the BIOS and Win95 recognize the extra memory, Linux appears not to see it. Utilities such as free and top show a total of only 16MB of available memory, and my PC still performs as it did when I had only 16MB installed.
Is there something I must do for Linux to recognize the new memory?
—George Tankoski, Slackware 1.3.2
LILO's configuration file may be explicitly setting the available memory to 16MB. To see if this is true, check the file /etc/lilo.conf for a line that looks like this:

append = "mem=16M"

As root, change this line to read:

append = "mem=72M"
Then, run /sbin/lilo (also as root) to make LILO reread the edited configuration file and reboot.
If you don't see the “16m” line in /etc/lilo.conf, back up your system, then try rebooting and typing the “72m” line directly at the LILO prompt. If your system boots and appears to be stable, you can then permanently enshrine the “72m” line in /etc/lilo.conf.
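For reference, here is what the temporary test at the boot prompt might look like; the kernel label "linux" is an assumption and may be different on your system:

```
LILO boot: linux mem=72M
```

If that boots cleanly, the equivalent permanent line in /etc/lilo.conf is append = "mem=72M" under the matching image section, followed by a run of /sbin/lilo.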
—Scott Maxwell, firstname.lastname@example.org
I have a modem connection at work to an MS Windows NT 4 server. When I try to connect with PPP, I never get the login prompt. Once the dial-up is done (using seyon), the connection hangs up. I guess NT is using RAS rather than regular PPP. Do you know of any way to set up the connection with NT?
Thanks for your help.
—Jacques Milman, Red Hat 4.1
The NT server uses MS-CHAP (Microsoft's encrypted variant of CHAP) authentication. Check your pppd configuration to be sure it was compiled with MS-CHAP support.
Here is an example of what can happen if your pppd does not include MS-CHAP support. First, the server asks for MS-CHAP:
pppd: rcvd [LCP ConfReq id=0x0 <asyncmap 0x0> <auth chap 80> <magic 0x307f> <pcomp> <accomp>]
pppd rejects the request:
pppd: sent [LCP ConfRej id=0x0 <auth chap 80>]
so NT closes the line.
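In case it helps, here is a sketch of the pieces involved once pppd has MS-CHAP support compiled in; every name and secret below is hypothetical and must be replaced with your own NT account details:

```
# /etc/ppp/options (fragment)
name jmilman          # local name sent to the peer (your NT account)
remotename NTSERVER   # arbitrary tag matching the chap-secrets entry

# /etc/ppp/chap-secrets
# client    server     secret        IP addresses
jmilman     NTSERVER   "ntpassword"  *
```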
—Pierre Ficheux, email@example.com
Let me start from the beginning. One day I booted my computer and received a message saying "remove and insert new disk" or something similar. I played around with the CMOS settings and checked the hardware to make sure no wires had fallen out. I ended up formatting my C: drive, and up until this time, I could still use Linux. I received the same message, but I could load DOS from a boot disk.
The next time I tried to load Linux at the LILO prompt, it said “loading........” and hung. I then tried to use a Linux boot disk and got the message “cannot initiate console”.
I am finding this to be a big problem, as I cannot access the files on either my Windows 95 partition or my Linux partition. If you can give me any help with fixing my Linux problem, I will be very grateful.
—Jamie Gamble, Slackware 3.2
The process of booting Linux on the PC platform is a bit intricate, mainly because of the peculiarities of the platform.
LILO loads the kernel using a list of disk blocks it built beforehand (when you ran /sbin/lilo, the map installer). After loading the blocks, it jumps to the kernel image; but if you moved the kernel after running /sbin/lilo, the loader will jump to nonsense program code, thus hanging the system.
Boot floppies, on the other hand, come in different flavours. The message “cannot open an initial console” means the kernel was loaded just right; it mounted a root file system, but couldn't open /dev/tty1 or /dev/ttyS0. I've seen this happen when mounting the /home partition as the root file system (there was no /dev directory).
Restoring a Linux installation is not trivial, especially if you have no other Linux box around. Short of finding a local Linuxer, try the /usr/doc/lilo*/README or my article “Booting the Kernel” in the June 1997 LJ.
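Broadly, the recovery described in those documents amounts to booting from a rescue disk, mounting your root partition, and re-running the map installer; the device name below is hypothetical and must be replaced with your actual Linux root partition:

```
# after booting from a rescue floppy, as root:
mount /dev/hda2 /mnt       # your Linux root partition
chroot /mnt /sbin/lilo     # rebuild the boot map in place
```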
—Alessandro Rubini, firstname.lastname@example.org
When I'm recompiling my kernel, I get a virtual memory exhausted error. It seems to happen when the compile is around the floppy.o section. When I use a different boot kernel (bare), it gives me a fatal signal 13.
—Wes Horn, Slackware 3.2
There are two common causes of this kind of problem:
1. Hardware, perhaps a bad cache or bad RAM
2. Running out of swap space
You did not specify how much RAM or swap space you have. Compiling the kernel is a CPU- and memory-intensive task, so if you do not have enough physical memory, your system will start using swap space. If the swap is used up, strange things, such as the errors you mentioned, can happen.
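As a quick sketch of how to check both, and to add a temporary swap file before retrying the compile (the 32MB size and the /swapfile path are only examples):

```
free                       # show physical memory and swap usage
cat /proc/swaps            # list active swap areas
# as root, create and enable a 32MB swap file:
dd if=/dev/zero of=/swapfile bs=1024 count=32768
mkswap /swapfile
swapon /swapfile
```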
—Mario Bello Bittencourt, email@example.com
Practical Task Scheduling Deployment
July 20, 2016 12:00 pm CDT
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem to always have the right tool for the job.
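The find-plus-grep combination described above can be written as a single command; the demo directory below stands in for /home, and the pattern ERROR is arbitrary:

```shell
# build a small demo tree standing in for /home
mkdir -p demo/user1 demo/user2
echo "ERROR: disk full" > demo/user1/messages.log
echo "all quiet"        > demo/user2/daemon.log

# list every .log file that contains the pattern
find demo -name '*.log' -type f -exec grep -l 'ERROR' {} +
```

The `-exec ... +` form batches file names into as few grep invocations as possible, which is faster than spawning grep once per file.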
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to upgrade your job-scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers.
- Murat Yener and Onur Dundar's Expert Android Studio (Wrox)
- SUSE LLC's SUSE Manager
- My +1 Sword of Productivity
- Non-Linux FOSS: Caffeine!
- Managing Linux Using Puppet
- Tech Tip: Really Simple HTTP Server with Python
- Parsing an RSS News Feed with a Bash Script
- Google's SwiftShader Released
- SuperTuxKart 0.9.2 Released
- Doing for User Space What We Did for Kernel Space
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn’t consider the total cost of ownership, and it doesn’t consider the advantage of real processing power, high-availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future.