Running Linux and Netfilter on Nokia IP Series Hardware
Now that we have Linux installed on the original Nokia disk, we can begin customizing the installation to run on the Nokia hardware. The first step is to download and compile a custom kernel (see Resources). Boot into Linux on the desktop PC, log in as root, bring up networking and download the latest stable kernel (2.4.20 as of this writing) from kernel.org. Extract the kernel tar archive and run make menuconfig (possible because we installed the ncurses library) to configure the kernel build. It is important that only the necessary portions of the kernel code are compiled into the resulting kernel binary. To this end, only the following features should be compiled in:
Processor type and features:
PCI device name database
System V IPC
Kernel support for ELF binaries
Enhanced IDE disk support
CMD640 chipset bugfix/support
RZ1000 chipset bugfix/support
Include IDE/ATA-2 disk support
Use multi-mode by default
Generic PCI IDE chipset support
Sharing PCI IDE interrupts support
Generic PCI bus-master DMA support
Intel PIIXn chipsets support
PIIXn Tuning support
Network packet filtering (replaces ipchains)
UNIX domain sockets
IP: Netfilter Configuration:
IP tables support
Connection state match support
Connection tracking match support
MASQUERADE target support
LOG target support
Network device support:
EtherExpressPro (eepro100, Becker driver)
Standard/generic serial support
Support for console on serial port
Ext3 journaling filesystem support
Virtual memory filesystem support
/proc filesystem support
Second extended fs support
After compiling the kernel with the standard make dep && make clean && make bzImage, our shiny new kernel should be around 610KB in size. Copy it to the /boot partition, configure LILO to see the new kernel binary and run lilo -t && lilo to reinstall LILO in the MBR.
By default, the LILO boot loader does not send kernel boot messages, init messages or system log messages over the serial port. Once we reinstall the drive in the IP330, however, the serial port is the only way we have to interact with the machine. To configure LILO to send messages over the serial port, add the following line to /etc/lilo.conf, just before the timeout=50 line:
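The directive itself appears to have been dropped from this copy of the article; given the settings described below (first serial port, 9600 baud, no parity), the standard LILO form would be:

```
# /etc/lilo.conf excerpt -- LILO's serial=<port>,<bps><parity><bits>:
# port 0 (ttyS0) at 9600bps, no parity, 8 data bits
serial=0,9600n8
timeout=50
```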
This instructs LILO to send messages out of /dev/ttyS0, which corresponds to serial port 0, at 9600 baud with 8 data bits, no parity and one stop bit (see Resources). There also is no need to have LILO display the fancy semi-graphical boot message, so remove the message=/boot/message line. Now that we have finished editing /etc/lilo.conf, run lilo -t && lilo once more.
Configuring LILO to send messages over the serial port would not be of much use if, after the machine boots and init has run, there were no way to log in. Therefore, we require init to spawn a getty process on /dev/ttyS0. getty processes are spawned by init based on the /etc/inittab configuration file. The default Red Hat inittab instructs init to start getty processes on ttys 1 through 6:
# Run gettys in standard runlevels
1:2345:respawn:/sbin/mingetty tty1
2:2345:respawn:/sbin/mingetty tty2
3:2345:respawn:/sbin/mingetty tty3
4:2345:respawn:/sbin/mingetty tty4
5:2345:respawn:/sbin/mingetty tty5
6:2345:respawn:/sbin/mingetty tty6
Because there is no way to attach a keyboard to a Nokia IP330, all of these should be replaced with the following single line:
1:2345:respawn:/sbin/agetty -h ttyS0 9600 vt102
agetty, in contrast to mingetty, does not reference any configuration files; it takes all of its configuration from the command line (the -h flag enables hardware flow control). mingetty also is not suitable for use on serial lines, according to its man page.
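As a sketch, that edit can be scripted with sed. The commands below run against a scratch copy of the file so they can be tried safely; on the IP330 the target would be /etc/inittab itself.

```shell
# Scratch copy standing in for /etc/inittab (illustrative path).
INITTAB=/tmp/inittab.demo
printf '%s\n' \
  '1:2345:respawn:/sbin/mingetty tty1' \
  '2:2345:respawn:/sbin/mingetty tty2' \
  '3:2345:respawn:/sbin/mingetty tty3' \
  '4:2345:respawn:/sbin/mingetty tty4' \
  '5:2345:respawn:/sbin/mingetty tty5' \
  '6:2345:respawn:/sbin/mingetty tty6' > "$INITTAB"

# Remove the six virtual-console mingetty lines...
sed -i '/mingetty tty[1-6]/d' "$INITTAB"

# ...and add the single serial-console agetty entry from the article.
echo '1:2345:respawn:/sbin/agetty -h ttyS0 9600 vt102' >> "$INITTAB"
```

After editing the real file, telling init to reread it with init q (or rebooting) puts the change into effect.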