Talking Point: Should Distros Stick to CD-R Size?
It's starting to look like the end of an era for Ubuntu users, as Canonical mulls the creation of an ISO that won't fit onto a CD-R. The question is, does it matter?
Canonical owes at least part of its success with Ubuntu Linux to the unique way that it has been distributed. From the start, it has been available as a downloadable ISO image and as a free CD, posted at no cost to the user. This was great news for people who wanted to install Linux but did not have the luxury of a decent Internet connection. In a sense, installing from a CD-R image has always worked a bit like a cache, in that you move part of the content you need onto permanent storage rather than pulling it through the network connection.
Things have changed since Ubuntu made its debut in 2004, and far more people now have a decent Internet connection. In addition, the CD-R format itself is beginning to fall out of favor. The majority of computers that are suitable for use as an Ubuntu-powered desktop are capable of booting from a flash drive, a more flexible, higher-capacity medium.
So, should Canonical (and other creators of Linux distros) make an extra effort to squeeze Ubuntu 12.04 onto a CD-R?
Some have argued that adhering to the size limit of a CD-R forces the developers into a disciplined approach that resists bloat. Once the 700MB limit for the basic install is breached, what should the new limit be, and does it matter? Within reason, a large percentage of the potential install base for distros like Ubuntu can fetch a boot medium of almost any size. The next convenient milestone would be around 4GB, as it's a common size for smaller flash drives and close to the limit for a single-layer DVD-R.
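That ceiling is easy to check mechanically. As a minimal sketch (the image path here is a placeholder, standing in for a real ISO), an 80-minute CD-R holds 360,000 sectors of 2,048 bytes, or 737,280,000 bytes:

```shell
#!/bin/sh
# Capacity of an 80-minute CD-R:
# 360,000 sectors x 2,048 bytes = 737,280,000 bytes (~703MiB).
CDR_LIMIT=737280000

# Stand-in file for illustration; substitute a real ISO path.
ISO=example.iso
dd if=/dev/zero of="$ISO" bs=1M count=10 2>/dev/null

SIZE=$(stat -c %s "$ISO")   # GNU stat; on BSD, use: stat -f %z
if [ "$SIZE" -le "$CDR_LIMIT" ]; then
    echo "$ISO fits on a CD-R"
else
    echo "$ISO needs a DVD or flash drive"
fi
```

The same comparison against roughly 4,700,000,000 bytes would cover the single-layer DVD-R case.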
As for the people who still have a slow connection, there are better solutions than the traditional one of downloading an ISO and then burning it to a CD, such as having the installation medium sent through the mail or setting up an organization-wide cache for network-based installation.
As Shawn pointed out recently, a smaller, but incomplete, installation medium such as an Ubuntu or Debian Netinstall carries a few advantages, such as allowing you to begin with an up-to-date set of packages. Such a way of working may even involve less network traffic than booting from a full CD and then updating to replace some of the packages.
Another option would be for Canonical to offer an Ubuntu Lite version with a minimal desktop and few major applications. However, this approach probably clashes with the overall Ubuntu ethos of shipping a complete, standardized desktop.
In conclusion, I wonder if a few of the major distros will soon drop the familiar 700MB ISO entirely. The number of people who want to install standard Ubuntu but can't manage a download any bigger than the normal ISO, or who can't boot from any medium other than a CD-R, is going to be pretty small these days.
UK-based freelance writer Michael Reed writes about technology, retro computing, geek culture and gender politics.