Best of Technical Support
I'm trying to configure my Red Hat 6.0 system to allow clients to access CD-ROM images from my Linux server hard drives. After looking at network file-sharing options such as Samba and NFS, and commands such as MAKEDEV, vnconfig, mount and smbmount, I'm getting confused about which combination of commands to use. —Mark J. Foucht, firstname.lastname@example.org
Red Hat 6.0 uses knfsd, which works somewhat differently from the old userland NFS server. One big difference is that you have to export each mounted file system explicitly in order for clients to see it (with the old server, you could just export /, and clients would have a view of all your file systems).
In your case, if your CD-ROM is mounted under /mnt/cdrom, put the following in your /etc/exports file:
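The exports entry itself appears to have been lost in editing; a minimal read-only line, matching the "(everyone)" export shown below (the wildcard host and the ro option are assumptions, so tighten them to taste), would be:

```
/mnt/cdrom  *(ro)
```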
Then, type the following to migrate the entry to /var/lib/nfs/xtab:
moremagic:~# exportfs -av
exporting :/mnt/cdrom
To see if it worked, type:
moremagic:~# showmount -e localhost
Export list for localhost:
/mnt/cdrom (everyone)
To mount from another machine, type:
mkdir /mnt/remotecd
mount remotemachinename:/mnt/cdrom /mnt/remotecd
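To make that mount persistent across client reboots, an /etc/fstab entry on the client could look like this (the hostname and mount point follow the example above; read-only is an assumption since it's a CD-ROM):

```
remotemachinename:/mnt/cdrom  /mnt/remotecd  nfs  ro  0 0
```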
—Marc Merlin, email@example.com
If your CD-ROM will be used by Windows computers, you should use Samba. Here is the entry you can add to your /etc/smb.conf file:
[CDROM]
comment = CDROM
path = /mnt/cdrom
read only = yes
guest ok = yes
case sensitive = no
mangle case = yes
preserve case = yes
You should restart Samba after modifying the file. Just type as root:
/etc/rc.d/init.d/smb restart
If you want to make it accessible to NFS users (UNIX computers), you should add the line /mnt/cdrom to your /etc/exports and restart your NFS daemon by using /etc/rc.d/init.d/nfs restart. —Pierre Ficheux, firstname.lastname@example.org
I am currently running Caldera OpenLinux 1.3 on a Compaq Presario CDS 526 (486, 66MHz). Believe it or not, I had no trouble getting it loaded on my machine. I do have a problem with my RAM, though. When I run the free command, it shows I have only 15MB of memory, when I actually have 36MB. Why is this? Is it a problem that has been corrected in a more current kernel, or is it more of a hardware problem?
My next question concerns the world of parallel computing. I have a new computer on order (P3 500MHz), and when I get it, I will be installing Linux on it as well as the one mentioned above. I am interested in hobbying in the world of parallel computing, and I wondered if it would do any good trying to run parallel with a 500MHz machine and a 66MHz machine, or will the whole thing run slower? Thanks for your help. —John, email@example.com
This 36MB you've mentioned is a rather “non-standard” amount of memory. Please run dmesg (for example, dmesg | grep -i memory) to see how much memory the kernel found at boot time. —Mario Bittencourt, firstname.lastname@example.org
There is a kernel option that limits memory to 16MB; it may be enabled in your current kernel. You should recompile a new kernel without this option, found under “General Setup”:
Limit memory to low 16MB (CONFIG_MAX_16M) [N/y/?] N
—Pierre Ficheux, email@example.com
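If recompiling is inconvenient, another common workaround from that era is to tell the kernel the memory size explicitly at boot time via the mem= parameter. In /etc/lilo.conf (36M matches this machine; adjust for yours, and verify the figure against what the BIOS reports):

```
append="mem=36M"
```

Then rerun /sbin/lilo and reboot.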
On your second question, it will depend on how you do it. Think of a job jar, representing a problem decomposed into independent jobs. Each CPU grabs a job out of the jar when it's finished with the previous job. With good choices of job sizes, you win. If the jobs are too small, the extra communication and coordination overhead negates the gains from adding the slower CPU. If too large, the faster CPU will finish first and have to wait for the slower one to finish—and it may end up waiting longer than if it had done all the work itself. You may have to experiment to find good job sizes, though the obvious computation based on the two systems' relative speeds should get you in the right neighborhood.
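As a rough illustration of that "obvious computation", here is a sketch that splits a pile of jobs between the two machines in proportion to their clock speeds (the job count is made up, and clock speed is only a crude proxy for real throughput):

```shell
# Hypothetical first-cut job split between the two CPUs mentioned above.
fast=500   # MHz, the new P3
slow=66    # MHz, the 486
jobs=1000  # total independent jobs in the "job jar"

# Give each machine work in proportion to its clock speed.
fast_share=$(( jobs * fast / (fast + slow) ))
slow_share=$(( jobs - fast_share ))

echo "fast CPU: $fast_share jobs, slow CPU: $slow_share jobs"
```

In practice you would measure each machine on a sample job and split by measured throughput rather than by megahertz.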
I'd recommend you start by looking into PVM, the Parallel Virtual Machine system. Find PVM at www.epm.ornl.gov/pvm/pvm_home.html. —Scott Maxwell, firstname.lastname@example.org
Practical Task Scheduling Deployment
One of the best things about the UNIX environment (aside from being stable and efficient) is the vast array of software tools available to help you do your job. Traditionally, a UNIX tool does only one thing, but does that one thing very well. For example, grep is very easy to use and can search vast amounts of data quickly. The find tool can find a particular file or files based on all kinds of criteria. It's pretty easy to string these tools together to build even more powerful tools, such as a tool that finds all of the .log files in the /home directory and searches each one for a particular entry. This erector-set mentality allows UNIX system administrators to seem to always have the right tool for the job.
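The find-plus-grep combination described above can be sketched in a few lines. This self-contained version builds its own scratch directory so it can run anywhere; the real invocation would point find at /home, and the search string "ERROR" is just an assumed example:

```shell
# Build a scratch directory with one matching and one non-matching log.
dir=$(mktemp -d)
echo "ERROR: disk full" > "$dir/app.log"
echo "all quiet"        > "$dir/other.log"

# Find every .log file under $dir and list those containing "ERROR".
matches=$(find "$dir" -name '*.log' -exec grep -l 'ERROR' {} +)
echo "$matches"

rm -rf "$dir"
```

Against a real system, the one-liner from the text is simply: find /home -name '*.log' -exec grep -l 'ERROR' {} +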
Cron traditionally has been considered another such tool for job scheduling, but is it enough? This webinar considers that very question. The first part builds on a previous Geek Guide, Beyond Cron, and briefly describes how to know when it might be time to consider upgrading your job scheduling infrastructure. The second part presents an actual planning and implementation framework.
Join Linux Journal's Mike Diehl and Pat Cameron of Help Systems.
Free to Linux Journal readers. View Now!
With all the industry talk about the benefits of Linux on Power and all the performance advantages offered by its open architecture, you may be considering a move in that direction. If you are thinking about analytics, big data and cloud computing, you would be right to evaluate Power. The idea of using commodity x86 hardware and replacing it every three years is an outdated cost model. It doesn't consider the total cost of ownership, and it doesn't consider the advantages of real processing power, high availability and multithreading like a demon.
This ebook takes a look at some of the practical applications of the Linux on Power platform and ways you might bring all the performance power of this open architecture to bear for your organization. There are no smoke and mirrors here—just hard, cold, empirical evidence provided by independent sources. I also consider some innovative ways Linux on Power will be used in the future. Get the Guide