Behind the Cloud Redux
Cloud computing is the hot buzz phrase. But as both Shawn Powers and I have pointed out, cloud computing is not a new technology, or even a new implementation of one. That does not mean it is well understood, though, either by those who are designing it or by those who are clamoring for it as they follow the yellow-brick road (or the latest issue of PC Week). As several Anonymous (and not-so-anonymous) commentators have said, there is a lot more to cloud computing than just hardware, some good data links and some smart coding.
Because cloud computing means different things to different people (and at different times of the day), we need to be clear about our terms. Our friends at Wikipedia define cloud computing as:
Cloud computing is Internet-based computing, whereby shared servers provide resources, software, and data to computers and other devices on demand...
This definition works well enough for me, so let us investigate it a little deeper. By definition, anything that is Internet-based tends not to have a physical or geographic location associated with it. For example, when I go to the Linux Journal web page, I am not thinking about Houston, Texas, where the magazine is officially located. In fact, because so few of us actually live in Houston, I think of the Linux Journal page as not really being anywhere. This is further emphasized by the vast array of comments from around the world to our musings.
Similarly, when you do a Google search, a good example of cloud computing, you are more likely to be hitting a server cluster in a data center local to your physical (IP-based) location than you are to be hitting Google's servers in California (and I am assuming Google has servers in a data center in California). In both of these examples, the data we are talking about, search results and generic web pages, is pretty innocuous. It really does not matter where the servers are located, and there are no great crushing legal issues attached to them.
But when, for example, the federal government (we will use the US one, since that is where I am, but any federal government faces the same set of issues) or, more important, your company decides it is going to embark on cloud computing, then we as IT professionals need not only to be part of the process, but also to ask the tough questions at the beginning, not the day before the switch is thrown.
In cloud computing, location matters. And so does ownership. Lawyers need to be involved, and so does a lot of careful planning. Here are some things to consider:

- Who owns the cloud you are going to use? Are you contracting with a third party for storage, or are you building it from scratch?
- If local law enforcement shows up with a writ demanding the data be turned over, who is responsible for turning it over? When? Under the laws of which state (or country)?
- Who owns the data paths? Is there traffic shaping? How will it affect all the nodes in the cloud?
- If you work for an international company, is response in, say, Singapore going to be the same as response in Virginia or the United Kingdom? Is it supposed to be? Who is responsible for ensuring it is?
- Can data be uploaded (or downloaded) the same way in different countries? (If you think the answer is yes, you need to look at your data closely and be sure. There is a lot of material that cannot be exported.)
- Is your data sharing disk with someone else's data? Is there a chance that someone else can get to your data with (or without) your knowledge, and what is the exposure of your data if that happens?

And then there are the usual service-level-agreement questions about access, uptime, backup and recovery, passwords, password recovery, management statistics and the other day-to-day minutiae you need to keep the system running.
Behind the cloud, it is still just computers (plus data, data connections and us IT professionals), not the Great and Powerful Oz, but there is certainly a lot more that needs to be considered before connecting to it.