Best of Technical Support
I want to use my ZIP drive as well as my printer with Linux. My friend suggested using kerneld to make modules of the ZIP drive and my printer so that I can load the ZIP drive, access it, then unload it to use my printer. How would I go about doing this?
—Scott Bell Red Hat 4.2
There is no need to do this. All the modules you need are already available from the stock install. To access your parallel-port ZIP drive, simply run:

modprobe ppa

as root. You will then have your ZIP drive available as /dev/sda if you don't have any other SCSI devices on your system. When done with the ZIP drive, make sure everything is unmounted, then run:

rmmod ppa

as root. You can then unplug it, plug your printer in and use the printer as you normally would (kerneld should load the printer module automatically). If you have further questions or need help, install the kernel-source RPM and see /usr/src/linux/drivers/scsi/README.ppa and the SCSI-HOWTO for more details.
—Donnie Barnes, Red Hat firstname.lastname@example.org
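The load-use-unload cycle described above can be sketched as a pair of small shell functions. This is only a sketch: the module name ppa comes from the answer, but the partition (/dev/sda4, the usual one on a factory-formatted ZIP disk), the mount point /mnt/zip and the module_loaded helper are assumptions to adjust for your own setup.

```sh
#!/bin/sh
# Sketch of the ZIP-drive workflow above. Run the functions as root.

# Helper (hypothetical): is a module currently loaded?
# Reads the kernel's list of loaded modules from /proc/modules.
module_loaded() {
    grep -q "^$1 " /proc/modules 2> /dev/null
}

# Load the parallel-port SCSI driver and mount the ZIP disk.
zip_start() {
    modprobe ppa
    mount /dev/sda4 /mnt/zip     # assumed partition and mount point
}

# Unmount everything first, then the module can be removed.
zip_stop() {
    umount /mnt/zip
    module_loaded ppa && rmmod ppa
}
```

With the module unloaded, the parallel port is free again for the printer.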
The gremlins at work are killing me. I have a few co-workers that keep sending me xmelt, xroach and xsnow. Unfortunately, I need to keep all my network connections open. Is there a way to find and kill processes that are sent to my display? Also, is it possible to reroute these processes back to the display they came from?
—Ray Banez Red Hat 4.2
Type ps -ax to get the process ID, which is shown in the left-hand column of the output, then type kill -9 <process ID>. To keep this from happening again, make use of the xhost command. It lets you deny X connections from hosts you don't want (xhost -unwanted_machinename).
—Mark Bishop, Vice President Southern Illinois Linux Users Group email@example.com
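The find-and-kill recipe above can be tried out safely without waiting for the next prank. In this sketch, a background sleep stands in for an unwanted xroach or xsnow process; in real life you would read the PID from the ps -ax output instead of capturing it with $!.

```sh
#!/bin/sh
# Sketch: find a process ID, then kill -9 it.
sleep 300 &
pid=$!                             # stand-in; normally read from 'ps -ax'
ps -p "$pid" > /dev/null && echo "running"
kill -9 "$pid"                     # SIGKILL cannot be caught or ignored
wait "$pid" 2> /dev/null || true   # reap the dead child
ps -p "$pid" > /dev/null || echo "gone"

# And to stop it happening again, deny that host access to your display:
#   xhost -unwanted_machinename
```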
You should restrict access to your display. If you access your display locally you could just deny remote access to shell accounts on the workstation; otherwise, look at the docs about xauth and use it to authenticate graphic programs. This way only your own programs will be able to use your graphic display. If you share your account with other people, on the other hand, there's no solution to the problem.
Routing the processes back to the display they came from is not possible. You must block intruders before they get in or kill them afterwards.
—Alessandro Rubini firstname.lastname@example.org
What is the kcore file in /proc directory? The file size is growing out of control. Can I delete it?
—Kai Lien, Pharm.D. Red Hat 4.1
This is a “virtual” file, reflecting your memory. Like everything else in /proc, it doesn't exist on your hard disk at all, so it can't, and shouldn't, be deleted!
—Ralf W. Stephan email@example.com
Perhaps a short explanation of the /proc file system is in order. The /proc file system is a “virtual” file system. It doesn't actually reside on any sort of physical device. /proc is a means of examining what is going on inside the Linux kernel without having to resort to a lot of programming. /proc/kcore is actually an image of all of the memory in use on your system. Even if you could delete it, you wouldn't want to. Don't worry about the “size” of kcore; it isn't taking up space on any of your drives.
I strongly encourage you to explore the files in the /proc file system, preferably as a non-root user. By looking inside these files, you can learn a lot about how your system is configured, what it is doing and how certain things work. As long as you aren't poking around as root, it's very difficult to mess anything up.
—Keith Stevenson firstname.lastname@example.org
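A few concrete starting points for that exploration, all safe to read as a non-root user (file contents vary by kernel, but these entries exist on any Linux system):

```sh
#!/bin/sh
# Safe, read-only peeks into the /proc virtual file system:
cat /proc/version             # kernel version and build information
grep MemTotal /proc/meminfo   # physical RAM -- on a real system, roughly
                              # the size ls reports for kcore
ls -l /proc/kcore             # looks huge, yet occupies no disk space
```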
I am currently trying to migrate an ISP's radius authentication server from FreeBSD to Linux (not distribution specific, but using Debian). The /etc/passwd file from FreeBSD is using MD5 encryption. The default scheme for Linux is the DES-like scheme. FreeBSD states, correctly, that the scheme may be switched to MD5 by changing the sym-links in /usr/lib from libcrypt to libscrypt. I cannot find a solution of this nature for any Linux distribution, though at this level there should not be any distribution specificity. I know of at least one ISP with a similar problem between BSD and Linux. I am not alone.
—Michael Roark Generic
Transferring passwords between different operating systems can be a major problem. The difficulty lies in the fact that passwords are encrypted in a one-way fashion: the hashes cannot be decrypted at all, so you would have to know the original password to re-create its hash under a different scheme. Here is what I have done in a similar situation.
1) Write a program wrapper around your login program that will capture the userid and password of your users before passing the information to your authentication server.
2) Use this file of clear-text userid and password combinations to set up the authentication database on the new system.
If you are unable or unwilling to do this, find out whether FreeBSD and Linux use the same crypt function for storing passwords. If so, set FreeBSD to use the crypt function instead of MD5. Accelerate the rate of password expirations so all of your users have to change their passwords (assuming you use password expiration). After all of the passwords have been changed, simply copy the encrypted passwords from the FreeBSD box to the appropriate place on the Linux box. This will not work unless FreeBSD and Linux use the same crypt function.
You may also want to take a look at Red Hat Linux. I think that the PAM security system bundled with it may support MD5 passwords. If so, you can copy them directly.
—Keith Stevenson email@example.com
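The copy-the-hashes trick above only works when both systems use the same scheme, and you can tell the schemes apart from the shape of the hash itself: traditional DES crypt output is exactly 13 characters, while the FreeBSD-style MD5 scheme produces hashes beginning with $1$. A minimal sketch (the hash_type helper is hypothetical):

```sh
#!/bin/sh
# Classify a crypt(3) password hash by its format.
hash_type() {
    case "$1" in
        '$1$'*)          echo MD5 ;;      # FreeBSD-style MD5 crypt
        ?????????????)   echo DES ;;      # exactly 13 characters
        *)               echo unknown ;;
    esac
}

hash_type 'abJnggxhB/yWI'        # a 13-character DES-style hash
hash_type '$1$salt$somehashtext' # an MD5-style hash
```

Running this over the second field of each /etc/passwd entry tells you quickly whether a wholesale copy has any chance of working.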