Lifting the Fog from Cloud Computing
Back in August 2008, at LinuxWorld in San Francisco, the big buzzword was "Cloud Computing". It's a neat concept, but after a week of hearing folks talk about "in the cloud", I was about at the end of my rope. To add insult to injury, it seemed that the San Francisco fog confused many folks, and "Cloud Computing" started to be used synonymously with "Grid Computing", "Clustered Virtualization" and "My Company Is Cool".
For clarity's sake, I thought a brief vocabulary lesson was in order. Cloud computing is indeed a viable, exciting idea—but it helps if we all know what we're talking about.
The idea behind cloud computing is that services, not servers, are offered to the end user. If people need a Web server, they buy Web services from the "cloud" and have no idea what is actually doing the serving. The "cloud" essentially hides the server infrastructure from the client and, ideally, scales on the fly. Much of the confusion in terminology arises because the cloud of services almost always is powered by a grid of computers in the background. Cloud computing itself, however, is just the abstraction of services away from the servers themselves.
The advantage is that a vendor can offer more reliable, diverse and scalable services to a user without the cost of dedicating hardware to each user. This allows for more graceful handling of temporary traffic spikes (Slashdot, Digg and so on), while not letting servers sit idle during quiet periods. Because the back end is transparent to the user, the actual grids of computers in the background can be geographically diverse, and oftentimes virtualized for easy migration, all without any end-user interaction. Ideally, cloud computing offers a reliable "service" to the end user at a lower cost, and gives vendors flexibility in the back end, so they can manage servers in the most efficient way possible.
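The "services, not servers" abstraction described above can be sketched in a few lines of code. This is a hypothetical illustration, not any real cloud API: the client talks to a single service endpoint, and a round-robin dispatcher decides which interchangeable back-end server actually answers. Adding capacity never changes what the client sees.

```python
import itertools

class CloudService:
    """Illustrative front end that hides a pool of back-end servers.

    The client only ever sees the service endpoint; which server
    handles a given request is an internal detail.
    """

    def __init__(self, backends):
        self._backends = list(backends)
        self._cycle = itertools.cycle(self._backends)

    def handle(self, request):
        # Round-robin dispatch: the client never learns which server ran it.
        backend = next(self._cycle)
        return f"{backend} served {request}"

    def scale_out(self, backend):
        # Adding capacity is invisible to clients: same endpoint, more servers.
        self._backends.append(backend)
        self._cycle = itertools.cycle(self._backends)

service = CloudService(["server-a", "server-b"])
print(service.handle("GET /index.html"))
service.scale_out("server-c")
print(service.handle("GET /index.html"))
```

Real clouds replace the round-robin loop with load balancers, virtualization and geographic failover, but the client-facing contract is the same: a service name, not a machine.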
Most people don't realize that cloud computing ultimately is shared hosting. Vendors avoid terms like "shared hosting", because that implies multiple people sharing a single computer. By its strictest definition, however, cloud computing certainly could be run from a single back-end server. With current scalability and virtualization technologies, vendors have much more robust ways to serve to the "cloud", and the traditional hang-ups with shared hosting are largely eliminated. Still, it's important to understand what cloud computing really is, so you don't get fooled into buying more or less than what you truly need. Here are a few questions worth asking a potential vendor:
- What sort of back-end servers are you running?
- Do you have the ability to fail over to a secondary data center behind the cloud, transparent to me?
- How do you differ from traditional shared hosting? (This one should spark some heated retorts!)
- How well do you scale, and how does pricing work for occasional spikes?