Economy Size Geek - Installation Toolkit
I don't know if it's because I have a column in Linux Journal or because of the release of Karmic Koala, but either way, I seem to be installing Linux a lot lately. If there is one lesson I have learned from time spent on other hobbies, it's that things are always easier if you have the right tools handy. In this case, that means taking a step back from the install process to see what tools might help.
ISO 9660 is the standard for storing images of CD-ROM filesystems. Although I vaguely remember trying to install Slackware from floppy disks, and once in a while I'll see a DVD-ROM ISO for a distro, most stick to the 650MB CD image. As a result, I have lots of them. These days, having the right ISO is useful for a fresh desktop install (once you finish burning it), or it can be used directly if you are creating a virtual machine. This is pretty much the entry level of installation tools. My only piece of advice: when you burn an ISO, burn its contents, not the file itself. If you mount the disc and see ubuntu-9.10-desktop-amd64.iso sitting on it as a file, you missed a step.
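One way to avoid that mistake is to verify the image before burning and eyeball the disc afterward. A minimal sketch, assuming GNU coreutils and example filenames (ubuntu-9.10-desktop-amd64.iso and a published SHA256SUMS list; many distros of that era shipped MD5SUMS instead):

```shell
# verify_iso IMG SUMFILE -- check a downloaded image against the distro's
# published checksum list (sha256sum is from GNU coreutils).
# The filenames used below are examples.
verify_iso() {
  grep "$(basename "$1")" "$2" | sha256sum -c -
}

# verify_iso ubuntu-9.10-desktop-amd64.iso SHA256SUMS
#
# After burning, mount the disc and look at what's on it:
#   sudo mount -o ro /dev/sr0 /mnt && ls /mnt
# You should see directories like casper/ and isolinux/, not the .iso file.
```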
Another option for installation media is the thumbdrive. Prices have dropped, capacities have skyrocketed, motherboard support has expanded, and tools have improved. All that adds up to making this a really great option.
Ubuntu ships with a tool called usb-creator. It's a very straightforward tool for converting your thumbdrive into a bootable utility. However, I prefer UNetbootin (unetbootin.sourceforge.net). This handy tool does the same thing, but it adds a helpful hand by offering to auto-download a variety of Linux distributions.
Both tools make it incredibly easy to make your thumbdrive bootable. One thing to keep in mind: in most cases, you need only 650MB, but when I wrote this, it was cheaper on Amazon to buy 2GB thumbdrives than 1GB ones. Manufacturers constantly are chasing the biggest capacities, which means the sweet spot in pricing often is just behind the leading edge—much like hard drives (have you priced an 80GB hard drive lately?). I ended up buying a three-pack of 2GB thumbdrives just for this purpose. They are loaded with the current x86 version of Ubuntu, SystemRescueCD and Clonezilla. I am contemplating adding the x64 version of Ubuntu (as I seem to be choosing that more often) and Darik's Boot and Nuke (which comes in handy when you decommission equipment). The nice thing about the thumbdrive form factor is that I can keep them on a key chain in my laptop bag. I don't have to worry about scratching them, and when updates come out, I can simply re-image them.
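Both usb-creator and UNetbootin are graphical; if you just want to get an image onto a stick from a script, plain dd also works, provided the image is a hybrid ISO that boots from USB as-is. A minimal sketch—the device name /dev/sdX is a placeholder, and dd will happily destroy whatever disk you point it at, so double-check with lsblk first:

```shell
# write_iso IMG DEV -- copy an image byte-for-byte onto a thumbdrive.
# DEV is destructive: everything on the target device is overwritten.
# Assumes GNU dd (bs=4M and conv=fsync syntax).
write_iso() {
  dd if="$1" of="$2" bs=4M conv=fsync && sync
}

# Example (placeholder device name -- verify with lsblk before running):
# sudo write_iso ubuntu-9.10-desktop-amd64.iso /dev/sdX
```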
CDs and thumbdrives work great, but if you are going to be doing a lot of installing, there is another tool to add to your arsenal—PXE booting. PXE (pronounced “pixie”) stands for Preboot eXecution Environment. I've used it a lot at hosting companies I've worked at, but I never have gotten around to setting it up on my home network.
PXE allows you to boot a computer off your network. In this case, I am going to set it up so I can boot the installation environment and then switch back to booting locally. More work is involved if you want to make thin clients (meaning, if you want a computer to boot off the network and not have any local storage).
In order for this to work, you need a server on your network to host the PXE, DHCP and other required services. The target computer has to be connected to the same network, and its BIOS must support PXE (or, as I learned later, you can use a gPXE ISO and a CD drive). The good news is that most modern motherboards support PXE (usually labeled as something like “boot from LAN”). You also may be able to tell the computer to offer a boot menu on startup, which lets you boot off the network one time without modifying your BIOS settings.
I sat down to start the process. I have a file server (keg) that will handle all the PXE services. PXE also expects DHCP. Many of the guides I found on-line assume the PXE server also will handle DHCP. In my case, all the networking is handled by my main DD-WRT router (co2). That means I will have to modify it as well to make things work.
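On DD-WRT, DHCP is served by dnsmasq, so pointing PXE clients at a separate TFTP server comes down to adding a dhcp-boot line to the router's dnsmasq options. A sketch of what that addition might look like—the pxelinux.0 filename is the usual syslinux convention, and the hostname and address for keg are examples, not necessarily what your network uses:

```
# Hypothetical extra dnsmasq options on the router (co2): hand out the
# boot filename plus the name and address of the TFTP server (keg)
# with each DHCP lease. Format: dhcp-boot=filename,servername,server-address
dhcp-boot=pxelinux.0,keg,192.168.1.10
```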