Best of Technical Support
I've got my wireless NIC working in Fedora Core 2, and I'm able to
communicate with the network after I configure it with iwconfig. But,
after I reboot, all the settings are lost, and I have to enter all the
information again. Is there something I'm not doing, or is this problem
something else?
You should use the system configuration tools to set up the device:
Task Bar→System Settings→Network.
The wireless-tools command-line utilities (iwconfig, iwspy, iwpriv) do not persist across a reboot, as you have experienced. The solution is to put the settings in the appropriate configuration file. In the case of Fedora, you probably can make them permanent by adding wireless-specific options to your ifcfg file (/etc/sysconfig/network-scripts/ifcfg-ethX, where ethX is the Ethernet device name assigned to your wireless card). For example, on my laptop, which connects over an 802.11b interface to the router/firewall/DHCP server I hacked together in my house, the file is very simple:
DEVICE=eth1
BOOTPROTO=dhcp
ONBOOT=no
MODE=Ad-Hoc
CHANNEL=1
KEY=XXXXXXXXXX
The MODE, CHANNEL and KEY options were all I needed for my setup;
yours are sure to be different, but anything you can do via the
command line should be available as an option in the file. For a list
of all available settings, see the iwconfig man page.
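For comparison, the equivalent one-off settings, the kind that are lost at reboot, would be entered on the command line like this (eth1 and the key value are placeholders for your own device name and key):

```shell
# One-off wireless settings via iwconfig; these do NOT survive a reboot.
# eth1 and the key are placeholders for your own device and WEP key.
iwconfig eth1 mode Ad-Hoc channel 1 key XXXXXXXXXX
```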
The following page contains specific instructions
on how to set wireless networking on the Fedora
Core Linux distribution:
Felipe Barousse Boué
In Embedded Linux Journal, issue 9, there was an
article titled “Update on Single-Board Computers” that
contained a picture captioned “An EBX Form Factor
PowerPC-based SBC from Motorola”. What I wanted to
know is which company manufactures such a board—an
EBX Form Factor PowerPC-based SBC.
We have a system-defined structure called MACHINE_STATIC for
finding machine details, including the IP address, processor speed,
OS and so on. I need a similar structure on Linux to extract this
machine information. Do you know if one is available?
Rajesh Kumar Patnaik
Not a structure, per se, although certain ioctl() calls can provide many
of these details. But you should be looking toward the new mechanisms for
extracting this information—procfs and the upcoming sysfs. Note that sysfs
is new, and not many systems implement it yet. The files in /proc, when read,
will provide you with many relevant system data elements. For example,
/proc/cpuinfo provides CPU data, /proc/net/route provides network routes (IP
addresses are in hex) and /proc/version provides the kernel version.
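As a minimal sketch of reading such data, the following Python snippet parses a procfs file of "key : value" lines into a dictionary (parse_proc_keyvals is a helper name introduced here, and the paths are Linux-specific):

```python
def parse_proc_keyvals(path):
    """Parse a procfs file of 'key : value' lines into a dict.

    Note: files such as /proc/cpuinfo repeat keys once per CPU;
    this simple sketch keeps only the last occurrence of each key.
    """
    info = {}
    try:
        with open(path) as f:
            for line in f:
                key, sep, value = line.partition(":")
                if sep:
                    info[key.strip()] = value.strip()
    except OSError:
        info = {}  # not running on Linux, or file unreadable
    return info

if __name__ == "__main__":
    cpu = parse_proc_keyvals("/proc/cpuinfo")
    print("CPU:", cpu.get("model name", "unknown"))
```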
Does anyone know what tools are used in Red Hat 8 to find and
repair disk and file problems—scandisk, scan registry?
These tools are not specific to a distribution; they are specific to a filesystem type. Like other distributions, Red Hat supports a number of filesystems, including ext2, ext3, xfs, ReiserFS and many others. The “see also” section in the man page for fsck lists the file checking utilities for the most common filesystems.
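Because the right checker depends on the filesystem type, a useful first step is to identify that type. One quick way, assuming GNU coreutils and awk are available, is:

```shell
# Identify the filesystem type of the root volume; with df -T,
# the second column of the output is the filesystem type.
fstype=$(df -T / | awk 'NR==2 {print $2}')
echo "root filesystem type: $fstype"
# The matching checker is typically named fsck.$fstype; run it only
# on an unmounted (or read-only) volume.
```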
The recovery options available are a key criterion for
choosing a filesystem in the first place. New Linux users are often
baffled by the array of choices and seek guidance regarding filesystem
selection. I always recommend that the availability of recovery tools
be part of this evaluation.
I am using Red Hat 9 and SuSE 9 and really like both of
them as well as OpenOffice.org software. However,
I am puzzled by the fact that I cannot locate any
quality database programs similar to Microsoft Access.
I would like to build a database that will work on
a small intranet consisting of three or four machines.
If you want an open-source product, take a look at Rekall, which was
recently open-sourced by TheKompany.com. For a proprietary but inexpensive
alternative, take a look at Adabas D, part of Sun's StarOffice. Both
products are younger than Access and do not implement every function
provided by it, so if you are looking to migrate existing databases,
complex applications may not be 1:1 portable. However, both products
do provide significant functionality and may be suitable for your
environment, especially if your needs are focused on building new
applications rather than migrating existing ones.
OpenOffice.org can work with almost any ODBC or
JDBC database, including PostgreSQL, MySQL and even
Access. Go to Tools→Options→Data Sources to set
up a connection.
I suspect you are interested in the MS Access-like
user interface. For that, there are many front ends
for PostgreSQL, ranging from pgaccess (www.pgaccess.org)
to OpenOffice.org's database tools, which interface with
PostgreSQL and other RDBMSes. As a side note,
many companies use open-source databases, from
small companies with small Web sites to Fortune
500-sized firms; one notable case is the dot
ORG registry, which relies on PostgreSQL for managing
the overall .org domain on the Internet.
Felipe Barousse Boué
If your application's interface is a simple form-based
one, consider doing it as a Web application instead.
You won't have to maintain software on the client,
you won't have to learn different tools for internal
and customer-facing applications and users already
know how to use a browser.
My company uses a mixture of Linux (Red Hat AS 2.1) and
Microsoft Windows servers. I want to set up a central
authentication server for both platforms. We use
Active Directory, and it has been suggested that we
might be able to use AD for Linux. Is it possible to
use AD as a central authentication server for Linux,
and what's the best way to do it? Or, would we be
better off with a Kerberos or LDAP server?
On Linux, look into the security layer known as Pluggable Authentication Modules, or PAM. With the appropriate module, it allows you to authenticate users logging in locally against your AD servers.
For Apache, take a look at mod_auth_ldap, which allows you to do the same. Alternatively, you can use mod_auth_pam to have Apache share your Linux server's PAM setup. This is worthwhile if you intend to have multiple applications use this data, because it reduces your setup time. However, if Apache is your only application (and this is not uncommon), you might want to stick with a direct mod_auth_ldap configuration, as it involves fewer steps.
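As a sketch, a PAM stack that tries LDAP first and falls back to local UNIX accounts might look like the fragment below. Exact module names, options and file locations vary by distribution; this is illustrative, not a tested configuration for your AD environment:

```
# /etc/pam.d/<service>  (illustrative; adjust for your distribution)
auth     sufficient  pam_ldap.so
auth     required    pam_unix.so try_first_pass
account  sufficient  pam_ldap.so
account  required    pam_unix.so
```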
In the long run, you'll likely be happier with
a cross-platform “single sign-on” plan based on
LDAP, as described in “OpenLDAP Everywhere” in the
December 2002 issue of Linux Journal. It works on Linux and Microsoft
Windows and is more flexible and future-proof than
a vendor-specific solution. But if you do plan to
authenticate against Microsoft Active Directory using
Kerberos and PAM, Tim Fredrick has some helpful notes.
Dear Bill—I just saw your question in Linux Journal [July 2004] about setting up a Red Hat 9 and Windows 2003 server on the same PC, and I wanted to add something to the replies you already received. Namely, 128MB of RAM is too little for a Red Hat 9 or Fedora Core installation if you use the GNOME desktop (the standard choice); you need at least 256MB of RAM for that.

If you do not want to add memory (although it's pretty cheap these days) and you want to stay with Red Hat, you will need to set up an alternative, lightweight window manager, such as IceWM. That involves extra trouble for a beginner on Red Hat, where IceWM is not installed by default.
SuSE 9.1 is a distribution that makes it easy to choose between KDE or
GNOME (with heavy RAM requirements) or IceWM or other lightweight
window managers. You can buy SuSE 9.1 directly from the SuSE Web site.
They have a Personal edition for about $40, but it doesn't include server
software. The Professional edition for about $90
does include it.