How to Deploy a Server

When I write my column, I try to stick to specific hacks or tips you can use to make life with Linux a little easier. Usually, I describe with pretty specific detail how to accomplish a particular task including command-line and configuration file examples. This time, however, I take a step off this tried-and-true path of tech tips and instead talk about more-general, high-level concepts, strategies and, frankly, personal opinions about systems administration.

In this article, I discuss the current state of the art when it comes to deploying servers. Through the years, the ways that sysadmins have installed and configured servers have changed as they have looked for ways to make their jobs easier. Each change has brought improvements based on lessons learned from the past but also new flaws of its own. Here, I identify a few different generations of server deployment strategies and talk about what I feel are the best practices for sysadmins.

The Beginning: by Hand

In the beginning, servers were configured completely by hand. When a Web server was needed, for instance, a sysadmin would first go through a Linux OS install one question at a time. When it came to partitioning, the sysadmin would labor over just how many partitions there should be and how much space /, /home, /var, /usr and /boot truly would need for this specific application. Once the OS was installed, the sysadmin either would download and install Apache packages via the distribution's package manager (if feeling lazy) or more likely would download the latest stable version of the source code and run through the ./configure; make; make install dance with custom compile-time options. Once all of the software was installed, the sysadmin would pore over every configuration file and tweak and tune each option to order.
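For anyone who never lived through it, that source-install dance looked roughly like the following sketch (the version number and configure flags here are purely illustrative; every sysadmin had favorites of their own):

    # fetch and unpack whatever the latest stable release was at the time
    tar xzf httpd-2.2.22.tar.gz
    cd httpd-2.2.22

    # hand-picked compile-time options, different on every server
    ./configure --prefix=/usr/local/apache2 --enable-so --enable-rewrite
    make
    make install

    # ...followed by hours of hand-editing /usr/local/apache2/conf/httpd.conf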

Even the server's hostname was labored over with names chosen specifically to suit this server's particular personality (although it probably was named after some Greek or Roman god at some point in the sysadmin's career—sysadmins seem to love that naming scheme). In the end, you would have a very custom, highly optimized, tweaked and tuned server that was more like a pet to the sysadmin who created it than a machine. This server was truly a unique snowflake, and a year down the road, when you wanted a second server just like it, you might be able to get close if the original sysadmin was still there (and if he or she could remember everything done to the server during the past year); otherwise, the poor sysadmin who came next got to play detective. Worse, if that server ever died, you had to hope there were good backups, or there was no telling how long it would take to build a replacement.

The fact is, plenty of sysadmins still deploy servers this way today, and that's fine if you are responsible for only a handful of servers, or if your company can afford one administrator for every ten servers or so (the old recommendation many years ago). For the most part though, administrators have moved on from configuring servers completely by hand to one of the following three generations of server deployment automation.

First Generation: Images

Sysadmins started to realize that deploying servers completely by hand wasn't sustainable for large numbers of servers, especially if you needed multiple servers of a certain type. In response, administrators would lovingly go through all of the steps to craft a new server from scratch, and once that work was done, they would create a complete disk image of that server to lock in its fresh install state. When they needed another server just like it, they simply would apply that image to the new hardware using software like Ghost or even dd, then go in and change a few of the server-specific settings like hostname and network information (perhaps via a script if they wanted to automate it even further), and the server would be ready. Instead of days or weeks to deploy a server, they could have it up and running in a few hours. When sysadmins wanted a Web server, they would just locate the Web server image they had created before, apply it on top of bare metal, and in an hour or so in many cases, they would have a new functioning Web server.
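As a rough sketch (the device names and image paths here are made up for illustration), the dd version of that workflow might look something like this:

    # capture a "golden" image from the hand-crafted server's disk
    dd if=/dev/sda bs=4M | gzip > /mnt/images/webserver-golden.img.gz

    # write that image onto a new machine's disk
    gunzip -c /mnt/images/webserver-golden.img.gz | dd of=/dev/sda bs=4M

    # boot the clone, then fix the server-specific settings by hand or script
    vi /etc/sysconfig/network /etc/hosts    # hostname, IP address and so on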

The problem with images ultimately became the maintenance. Whenever you decided to upgrade the software on your servers, you were faced with a dilemma: either go through the painful steps to create a new image with the upgraded software or deploy the old image and run through any software upgrades by hand afterward. Either way, you still had to figure out what to do with existing servers in the field. Do you re-image them with an updated image and go through the hassle of backing up and restoring any unique data created after the image was taken, or do you manually apply the changes you just made to your image? In addition, you might end up with two servers that were mostly the same but different enough to justify separate images, and eventually you found yourself maintaining an ever-growing library of large disk images even though they might all share 90% of the same software.

Second Generation: the Post-Install Script

In response to all of the hassles with maintaining server images, some administrators realized they could bypass the pain of regenerating disk images, because they were installing the same base OS on all of their machines and applying any machine-specific changes only afterward. It was out of this realization that this next generation—the automated install with the post-install script—was born.

With an automated install (like kickstart for Red Hat-based distros or preseeding for Debian-based distros), administrators could create a configuration file with all of those install-time options they used to pick by hand, feed it to the installer at boot time, go get some coffee, and when they returned, the server would have gone through the complete install without them. If administrators wanted a Web server, they would just select the installer configuration file for Web servers, which listed the set of distribution packages (including the Web server software) for the installer to select and install automatically.
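To give a feel for it, a minimal kickstart file might capture those install-time answers like this (the mirror URL, password hash and package choices are purely illustrative):

    # webserver.ks -- illustrative kickstart answers
    install
    url --url=http://mirror.example.com/centos/6/os/x86_64/
    lang en_US.UTF-8
    keyboard us
    timezone --utc America/Los_Angeles
    rootpw --iscrypted $6$replacewitharealhash
    network --bootproto=dhcp
    clearpart --all --initlabel
    autopart
    reboot

    %packages
    @base
    httpd
    mod_ssl
    %end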

Of course, an automated installer generally just left you with a base OS with some extra packages installed but left unconfigured. The real magic in these automated installers was in their post-install script. Simply stated, the post-install script was a shell script the installer would execute on the system after the base install was complete. What the post-install script became was an automation dream for sysadmins. If you could describe all of the commands and configuration file changes you wanted to make to a system inside a shell script, you could put it in a post-install script and have a completely automated server install.
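In kickstart, for instance, that shell script lives in a %post section at the end of the same file. Here's a hedged sketch of what one might contain (the site name and user are invented for illustration):

    %post
    # this runs as a shell script inside the freshly installed system
    # drop in a site configuration and make sure Apache starts on boot
    echo "ServerName www.example.com" > /etc/httpd/conf.d/site.conf
    chkconfig httpd on

    # any other one-time setup: users, ssh keys, monitoring agents and so on
    useradd -m deploy
    %end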

The benefits to post-install scripts compared to images became apparent pretty quickly. Whenever you wanted to change the installer, all you had to do was change either the installer config file or your post-install script—there was no image to regenerate. These files were text and took up very little space on your disk. The files were easy to change, although unlike with images, when you changed a post-install script, usually you would need to run through a complete automated install to make sure you didn't introduce a bug.

The fact is, automated installs customized with post-install scripts can be an effective way to automate server deployments, and it's a method that's still in wide use today. That said, it isn't without its own problems. The main problem with the post-install script method is that the automation stops the moment the server is first created. Any improvements you make to your Web server post-install script help only new servers; any servers created before those improvements will be different. You will be faced with the dilemma of trying to back-port improvements to your existing servers or completely rebuilding them based on the new install scripts. Although it's easier just to try to apply any improvements to existing servers by hand, you never will be confident that the server you set up six months ago and the server you set up today are identical. At one point, what I did to try to resolve this dilemma was put all of my configuration file changes into packages that I hosted on a local package repository and then installed on any relevant servers.
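A rough sketch of that packaging approach (the package names and hosts here are invented for illustration): build the configuration files into a package, publish it with createrepo, and point each server at the repository.

    # on the build host: publish the custom packages as a yum repository
    mkdir -p /var/www/html/repo
    cp mycompany-httpd-config-1.0-1.noarch.rpm /var/www/html/repo/
    createrepo /var/www/html/repo

Each relevant server then gets a small repository definition, for example /etc/yum.repos.d/local.repo:

    [local]
    name=Local configuration packages
    baseurl=http://repo.example.com/repo
    gpgcheck=0

followed by a yum install -y mycompany-httpd-config on every server that needs it.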

Third Generation: Central Configuration Management

The final generation of server deployment attempts to address the main problem with post-install scripts: any changes to the configuration apply only to newly installed servers; therefore, new and old servers tend to fall out of sync with each other. To solve that problem, administrators now are turning to configuration management systems like Puppet and Chef. With centralized configuration management, any changes you need to make are made on the configuration management server and then deployed to all relevant servers, whether they have been around for a year or were just created today. As long as you make your changes through the central server, you can be confident your servers' configurations are identical.

With centralized configuration management, automated installs and post-install scripts aren't thrown away, they just become more generic. Instead of all configuration being done via a post-install script, the automated install just installs the bare essentials for the operating system, and the post-install script just does whatever it needs to do so the configuration management software can check in. The configuration management system takes over from there and makes any changes it needs to make including package installs and configuration file changes to make the server ready for use. Because you can be more confident that a new server will match an old one, you end up being less fearful about any individual server going down—after all, why worry if you can re-create it in a few minutes?
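As a hedged sketch of that division of labor (the module and file names here are invented), the Puppet side of a Web server role might be little more than a manifest like this:

    # modules/webserver/manifests/init.pp
    class webserver {
      package { 'httpd':
        ensure => installed,
      }

      file { '/etc/httpd/conf.d/site.conf':
        ensure  => file,
        source  => 'puppet:///modules/webserver/site.conf',
        require => Package['httpd'],
        notify  => Service['httpd'],
      }

      service { 'httpd':
        ensure  => running,
        enable  => true,
        require => Package['httpd'],
      }
    }

Change that manifest (or the configuration file it serves) on the central server, and every node that includes the webserver class, whether it was built a year ago or an hour ago, converges to the same state on its next run.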

Hopefully this article has given you some ideas for ways to improve your server deployment strategies or otherwise has validated the server deployment decisions you've already made. Just be careful; this automation is powerful stuff, and if you aren't careful, you may go into work one day to find you've replaced yourself with a shell script.


______________________

Kyle Rankin is a director of engineering operations in the San Francisco Bay Area, the author of a number of books including DevOps Troubleshooting and The Official Ubuntu Server Book, and is a columnist for Linux Journal.

Comments

Ansible is the quickest way to do configuration management

ansible_geek

I have used Puppet and looked at Chef, but Ansible is the tool of choice for me. It is about as close as you can get to managing your servers over plain ssh, but in better ways.

Thoughts on various config mgmt systems from Martin Krafft

Tom McNeely

I've never used any configuration management system and I know nothing about them. But in the course of stalking Martin Krafft, author of "The Debian System: Concepts and Techniques" (I'm hoping for a second edition), I periodically check his blog, and the following two recent posts are relevant to this discussion:

http://madduck.net/blog/2012.10.19:configuration-management/
http://madduck.net/blog/2013.02.01:a-botnet-for-configuration-management/

Open Source Puppet.

Jon Brouse

The advantages of Puppet reach beyond initial server deployment, since the agent can receive updates from the master every 30 minutes by default. DevOps teams rely on a standardized architecture for their Eng, QA and Staging stacks with multiple servers. Changes to the architecture, and there will be plenty, must be applied consistently across all stacks. By leveraging Puppet's environment option and source control, you can be assured none of those new bugs are related to deployment issues.

On a related note about next-generation deployment, check out "docker" and its use of Linux containers.

Anonymous

About the final generation, I have to say that it takes a lot of time to get going, and it's probably not worth the effort unless you have a great many servers to maintain.

Anonymous

Agreed - we're at the Kickstart stage, with a couple of recent starters humming about Puppet. Rather than saying "this is WHAT you should use!", how about an article on HOW you do it?

Puppet is too complicated?

Anonymous

Puppet is SIMPLY too complicated. Period.
As old sysadmins, we hold to the old "Keep It Simple, Stupid" motto. When a tool becomes too intricate, it's not worth it.

Anonymous

That's extremely subjective, and, in my opinion, extremely wrong.

Puppet can be as simple or as complicated as you make it.

Automation

Colin

>> We're currently exploring options to take this level of automation to our other servers as well. Generation++.

I want to start mastering one of these systems. Naturally Kickstart is great, but it only allows you to stand up a system; you can't easily upgrade packages, apply fixes and so on afterward. I want to know whether people out there are mostly using Puppet, Salt or Chef, and which is the best one to pick.

bbl

There is no best or worst tool... It's about the tool meeting your needs. All of them are similar in some ways; it's the small differences that make a tool great or bad for you and your infrastructure :-) Try them and have fun with all of them... Puppet, Chef, CFEngine, Salt and others :-)

Final Generation?

Anonymous

I've been building post-install-based builds since the mid-'90s. When I started the Linux team at my current big co., I used AutoYaST/Kickstart scripts to deploy systems, including our CFEngine client. Puppet and Chef are bloated wannabes, IMHO.

For our DataNodes in Hadoop, we don't even install an OS. Just PXE-boot to an NFS/GlusterFS root fs. They all look the same; only the hostnames change. :)

We're currently exploring options to take this level of automation to our other servers as well. Generation++.
