Behind the Cloud, It's Still Just Computers
I had the pleasure of joining a video conference with our local Drupal expert, Katherine Druckman, and Barry Jaspan, one of the system architects from Acquia, a company that specializes in Drupal. Although Acquia does many things for clients interested in Drupal, we were specifically talking about cloud computing. Acquia offers a highly available, fully scalable, managed Drupal environment. For the end user, that means you pay a set amount every month, and your website is always there. It's always updated, it's always backed up, and it's always available. For that magic to happen, however, the folks at Acquia have to play the man behind the curtain and become the Wizard of Oz. That's where the focus of our talk was, and it was fascinating to hear some inside info on running a cloud service.

The folks at Acquia actually use Amazon's cloud services to build their own cloud platform. Perhaps that's a cloud within a cloud, but nonetheless, it's an economical way to leverage other people's datacenters in order to provide a service of your own. Using Amazon's EC2 service to spin up servers as you need them is great, but it does come with some frustrating hindrances too. Here are the advantages and disadvantages we discussed:
- As client needs grow or shrink, adding or removing servers takes literally seconds.
- Hardware maintenance is "farmed out" to Amazon. No more failed hard drives, faulty network cards, or power concerns.
- As a vendor, you only pay for the server power you need. No need to over-buy servers in case you need them.
- While spinning up new servers is instant, you're limited (mostly) to spinning up servers. There is no SAN, which can be a real problem.
- Creating failover support must be done completely with software. Not having actual hardware is an advantage, until you wish you had specialized hardware.
- Scaling can be challenging. Granted, Amazon offers different-size instances, but load balancing and scaling still aren't magic. MySQL, for instance, still runs on a single server.
That's Where the Magic Happens
At Acquia, they've created a complex backend. Clients don't see anything other than a working service, but they've solved some complicated issues rather nicely. For example:
Drupal's Data Store
Drupal stores local files. Yes, most of the content is generated dynamically from the database, but there are some things Drupal expects to find on a filesystem. Images, documents and other files are stored on a local filesystem and accessed by the web server. In a highly available environment without a shared SAN, that can be challenging. Acquia currently uses a filesystem that is spanned across several servers in such a way that if one server fails, the others keep the filesystem going. It works a bit like a network-based RAID 5 array. Unfortunately, it doesn't always work the way the Acquians (can I call them that?) would like. Rather than leave a mostly working system as their default, the architects are working toward something better. By using Varnish to cache the files, it likely will be possible to fail over seamlessly to a mirrored backup rather than depend on a network-based distributed filesystem. How it all will come together is still being planned, but it's exciting to see software solutions to hardware limitations.
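To make the Varnish idea concrete, here's a minimal sketch of the general technique, not Acquia's actual configuration: a fallback director health-checks a primary file server and shifts traffic to a mirror when the primary goes down. This assumes Varnish 4.x with its bundled directors vmod, and the backend names and addresses are made up for illustration.

```vcl
vcl 4.0;
import directors;

# Hypothetical health check: poll each backend every 5 seconds.
probe healthcheck {
    .url = "/";
    .interval = 5s;
    .timeout = 2s;
    .window = 5;
    .threshold = 3;
}

# Placeholder addresses for a primary file server and its mirror.
backend primary {
    .host = "10.0.0.10";
    .port = "80";
    .probe = healthcheck;
}

backend mirror {
    .host = "10.0.0.11";
    .port = "80";
    .probe = healthcheck;
}

sub vcl_init {
    # A fallback director always uses the first healthy backend in order.
    new fb = directors.fallback();
    fb.add_backend(primary);
    fb.add_backend(mirror);
}

sub vcl_recv {
    set req.backend_hint = fb.backend();
}
```

While the primary is healthy, the mirror sits idle; when the probe marks the primary sick, requests flow to the mirror with no client-visible interruption, and cached objects keep being served throughout.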
MySQL, Your SQL…
If you were hoping for a magic bullet for scaling MySQL, unfortunately, that's just not in the cards. Oh, there are ways to make it more efficient, with things like memcached, but in the end, an instance of MySQL requires a server to run on. Thankfully, Amazon offers some pretty beefy server instances if you need them. Ultimately, expertise is what really makes the database back end able to handle heavy loads. If a site is busy enough to overtax MySQL, even on a huge server, it's time to tune your Drupal install. If every page load makes 11,837 database calls, there's no MySQL server in the world that can help.
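The memcached trick is worth a quick sketch. The idea is a read-through cache: check the cache first and hit MySQL only on a miss, so repeated requests for the same data stop piling onto the database. In this illustration, a plain dictionary stands in for memcached and a counter stands in for real SQL queries; none of the names below come from Drupal or Acquia.

```python
db_calls = 0   # counts trips to the "database"
cache = {}     # stand-in for memcached

def query_db(key):
    """Pretend to hit MySQL; in reality, this would run a SQL query."""
    global db_calls
    db_calls += 1
    return "row-for-" + key

def cached_query(key):
    """Check the cache first; fall through to the database only on a miss."""
    if key in cache:
        return cache[key]
    value = query_db(key)
    cache[key] = value  # a real memcached set() also would take an expiry time
    return value

# Simulate page loads that request the same three rows over and over.
for _ in range(100):
    for key in ("node:1", "node:2", "node:3"):
        cached_query(key)

print(db_calls)  # 3 cold misses instead of 300 queries
```

The pattern doesn't make MySQL scale past one server, but it can turn thousands of queries per page into a handful, which is often the difference between needing a bigger instance and not.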
Clouds Don't Make Geeks Obsolete
Cloud computing is really great for end users looking to farm out the management of their servers. It's also really great for a company that wants to offer services but doesn't want to invest in a datacenter. At the end of the day, however, cloud computing isn't a magic bullet. It still requires geeks behind the curtain keeping things running. The exciting part, at least for me, is that solving problems in the cloud is drastically different from solving them in a traditional datacenter. I'm an old dog that likes to learn new tricks, and solving application issues within the limitations of virtual servers in the cloud is a whole new game. So go spin up that EC2 instance, and start solving problems. You'll become a pretty valuable employee to someone!