First Look at an Apple G4 with the Altivec Processor
If you are like me, you might get paranoid about changing any of these values. After a bit of research on http://developer.apple.com/, I came across some interesting technical notes, in particular Technotes 2000-2004. One benefit of having a full-featured interpreter with the power of an operating system is the ability to view files, run programs and display hardware information for debugging. Much of this information is too detailed to copy down by hand, so there is the notion of a “two-machine” mode (TN 2004), in which you can redirect the OF output to a serial port. The G4 PPC doesn't come with a serial port, but Apple's OF includes a Telnet dæmon. I'm not entirely sure that you couldn't use the USB ports as output (after all, “serial” is in the acronym), but I do know that the Telnet dæmon works. I also don't know whether minicom can be used with a USB port.
The dæmon is easily configured. First, from the OF prompt enter the following command:
0 > " enet:telnet,192.168.2.20" io
Observe the space after the leading quote, then press Return; OF has now created a Telnet dæmon awaiting a Telnet client. This command configures the Ethernet interface with the IP address 192.168.2.20. You may want to choose a different address depending on your own network configuration. You will need another machine on the same physical network segment as your PPC; if you don't have a segment, a crossover Ethernet cable will do.
From your client machine, Telnet to your target (PPC) machine. You should be presented with the same “0 >” prompt as displayed from the Mac. Now you have the ability to capture all of the output from printenv, devalias, etc., to a file. This helps if you screw things up so badly that you have to return to your default configuration.
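To capture the session on a Linux or UNIX client, you can wrap the Telnet client in script(1); a sketch, where the IP address matches the one configured above and the log filename is my own choice:

```
$ script of-session.log
Script started, file is of-session.log
$ telnet 192.168.2.20
0 > printenv
0 > devalias
```

Everything printed at the “0 >” prompt, including the full printenv output, lands in of-session.log; after you close the Telnet session, type exit to stop recording.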
Okay, let's install Linux. Insert the YDL CD into your DVD-ROM drive and hold the C key down while you boot; this is the method to boot from the CD. You'll be presented with the installation screen for YDL. You can follow the YDL installation guide for the most part, but a word of caution about partitioning: unless you've installed Linux on your Mac before, you'll need to create some partitions. You're no longer creating ext2 partitions; now you'll be creating partitions of type Apple_UNIX_SVR2. Also, you'll be using pdisk rather than fdisk to create your partitions. Use the p command to display the partitions. If you've followed my advice above, you should see nine partitions. These are created by default, and if you intend to leave some form of running system (recommended), leave them alone.
Now you need to create the partitions for your normal partitioning scheme. I've chosen to create partitions for the mount points /, /usr, /opt, /home and a swap partition. Yours may differ, but the scheme I've created is shown in Table 1.
After you write the partition map using the w command and quit out of pdisk (q command), reboot the system; the new partitions will not be recognized until a reboot. Begin the installation anew by holding down the C key, indicate your newly created mount points, and you can begin selecting packages as you would on a normal Red Hat Linux installation. After you have completed these steps, you're going to have to reboot again. This time, don't hold down any keys, as you want to boot the Mac OS.
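For reference, creating one of the new partitions in pdisk looks roughly like this; the device name, sizes and prompt wording are a sketch from memory and may differ slightly on your system:

```
# pdisk /dev/hda
Command (? for help): p             <- print the current partition map
Command (? for help): C             <- create a partition, specifying its type
First block: 2p                     <- start right after an existing partition
Length in blocks: 500m
Name of partition: root
Type of partition: Apple_UNIX_SVR2
Command (? for help): w             <- write the map to disk
Command (? for help): q             <- quit, then reboot
```

The lowercase c command also creates partitions but hard-codes the type; C lets you enter Apple_UNIX_SVR2 explicitly.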
Now, back to the Mac OS. Open the yaboot.conf you copied to the system folder and take a look. Mine looks like Listing 2.
Notice the label for “linux”. The yaboot.conf that comes on the CD has an error; you need to prepend the extra “\\” to yaboot. Reboot again, this time using the key sequence Command-Option-O-F to get to OF. When you again see the “0 >” prompt, enter the following:
0 > boot hd:,\\yaboot
After some flickering, you'll be presented with a LILO-like prompt. Linux should begin to boot. Success! You should now see the power of Open Firmware; the command above allows you to execute a file from your hard drive, and you haven't even booted an operating system yet!
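In fact, yaboot isn't special in this regard; OF can list and load arbitrary files from an HFS volume. A couple of commands worth trying, assuming the default hd devalias is in place (the \\ shorthand expands to the “blessed” system folder):

```
0 > dir hd:,\            \ list the root of the boot volume
0 > dir hd:,\\           \ list the blessed system folder
0 > boot hd:,\\yaboot    \ load and execute yaboot from that folder
```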
After you log on as user root, you should edit the file /etc/modules.conf and add the following:
alias sound dmasound
This allows you to use /dev/dsp to play audio. However, in its present form, dmasound is write-only; you can't use it to record data from an external microphone.
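A quick way to confirm playback works (a hedged sketch; the sample file path is illustrative, so substitute any raw audio or .au file you have on hand):

```
# modprobe sound                               <- loads dmasound via the alias
# cat /usr/share/sndconfig/sample.au > /dev/dsp
```

If you hear the sample, the alias and the driver are both in order; because dmasound is write-only, reading from /dev/dsp will simply fail.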
I configured X (XFree86 3.3.6) using the XConfigurator that runs during the Linux installation. I chose values for 1024 x 768 with a 24-bit color depth. In yaboot.conf I added the line:
so that the kernel would correctly recognize the installed ATI graphics card. Then I edited /etc/X11/XF86Config and added DefaultBitsPerPixel 24 in the “Screen” section so that I didn't have to pass the bits per pixel to startx when I ran it.
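A “Screen” section with that addition might look something like this; the Device and Monitor identifiers are placeholders that must match the names in your own Device and Monitor sections, and only the DefaultBitsPerPixel line is the change:

```
Section "Screen"
    Driver      "accel"
    Device      "My ATI Card"        # matches your Device section
    Monitor     "My Monitor"         # matches your Monitor section
    DefaultBitsPerPixel 24
    Subsection "Display"
        Depth   24
        Modes   "1024x768"
    EndSubsection
EndSection
```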