Behind the Cloud Redux
Cloud computing is the hot buzz phrase. But as both Shawn Powers and I have pointed out, cloud computing is not a new technology, or even a new implementation of existing technology. That does not mean it is well understood, either by those who are designing it or by those who are crying out for it as they follow the yellow-brick road (or the latest issue of PC Week). As several anonymous (and not-so-anonymous) commentators have said, there is a lot more to cloud computing than just hardware, some good data links and some smart coding.
Because cloud computing means different things to different people (and at different times of the day), we need to be clear about our terms. Our friends at Wikipedia define cloud computing as:
Cloud computing is Internet-based computing, whereby shared servers provide resources, software, and data to computers and other devices on demand...
This definition works well enough for me, so let us investigate it a little more deeply. By definition, anything that is Internet-based tends not to have a physical or geographic location associated with it. For example, when I go to the Linux Journal web page, I am not thinking about Houston, Texas, where the magazine is officially located. In fact, because so few of us actually live in Houston, I think of the Linux Journal page as not really being anywhere. This is further emphasized by the vast array of comments from around the world in response to our musings.
Similarly, when you do a Google search, a good example of cloud computing, you are more likely to be hitting a server cluster in a data center closer to your physical (IP-based) location than you are to be hitting servers in California (and I am assuming Google has servers in a data center in California). In both of these examples, the data we are talking about - search results and generic web pages - is pretty innocuous. It really does not matter where the servers are located, and there are no great crushing legal issues related to them.
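One way to glimpse this geographic routing is to resolve a well-known hostname and look at the addresses your local resolver hands back. The sketch below is a minimal illustration, assuming the name uses location-aware DNS (www.google.com is used only as a familiar example; the helper name `resolve` is mine):

```python
import socket

def resolve(hostname):
    """Return the list of IPv4 addresses the local resolver gives for hostname."""
    try:
        _, _, addrs = socket.gethostbyname_ex(hostname)
        return addrs
    except socket.gaierror:
        # No network or resolution failure: return an empty list
        return []

if __name__ == "__main__":
    print(resolve("www.google.com"))
```

Run the same script from machines in different countries (or against different DNS resolvers), and you will typically see different address sets for the same name - the "cloud" you reach depends on where you are standing.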
But when, for example, the federal government (we will use the US one, since that is where I am, but any federal government faces the same set of issues) or, more importantly, your company decides it is going to embark on cloud computing, then we as IT professionals not only need to be part of the process, but also need to be asking the tough questions at the beginning, not the day before the switch is thrown.
In cloud computing, location matters. And so does ownership. Lawyers need to be involved, and a lot of careful planning. Here are some things to consider:

- Who owns the cloud you are going to utilize? Are you contracting with a third party for storage, or are you building it from scratch?
- If local law enforcement shows up with a writ demanding the data be turned over, who is responsible for turning it over? When? Under the laws of which state (or country)?
- Who owns the data paths? Is there traffic shaping? How will it affect all the nodes in the cloud?
- If you work for an international company, is response in, say, Singapore going to be the same as response in Virginia or the United Kingdom? Is it supposed to be? Who is responsible for ensuring it is?
- Can data be uploaded (or downloaded) the same way in different countries? (If you think the answer is yes, you need to look hard at your data and be sure. There is a lot of material that cannot be exported.)
- Is your data sharing disk with someone else's data? Is there a chance that someone else can get to your data with (or without) your knowledge, and what is the exposure of your data if this happens?
- And then there are the usual service-level agreement questions about access, up-time, backup and recovery, passwords, password recovery, management statistics and the other day-to-day minutiae you need to keep the system running.
Behind the cloud, it is still just computers - not the Great and All-Powerful Oz - along with data, data connections and us IT professionals, but there is certainly a lot more that needs to be considered before connecting to it.