Python in the Cloud
The next step is to put the image to use by launching an instance or instances
of it. For this, see the run_instances() line in Listing 2, where the
AMI previously created is used as the template for the instances.
The min_count and max_count parameters need some explaining. AWS has a default
limit of 20 instances that can be running per region per account (this can be
increased by filling out a request form). This is where the min/max count comes
into play. If I already had 18 instances running in a region and set
min_count to 2 and
max_count to 5, only two new instances would be launched.
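The arithmetic behind that behavior can be sketched as a small helper function (my own illustration of the rule, not part of boto):

```python
def instances_to_launch(running, limit, min_count, max_count):
    """How many instances a launch request would start, given the
    account's per-region limit; 0 means the request fails because
    even min_count cannot be satisfied."""
    headroom = limit - running
    if headroom < min_count:
        return 0
    return min(headroom, max_count)

# 18 of 20 instances already running, min_count=2, max_count=5:
print(instances_to_launch(18, 20, 2, 5))  # 2
```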
The key name is an SSH keypair generated by Amazon that is used for SSH access to the instance(s).
instance_type is set to the EC2 micro instance, and it is slotted to run in the
us-east-1d availability zone by the placement parameter.
Let's take a brief side trip on
AWS availability zones. Availability zones are relative to your account. In
other words, my
us-east-1d may not represent the same physical availability zone as your
us-east-1d.
The final parameter,
disable_api_termination, provides a lock-out mechanism. It prevents the
termination of an instance using API termination
calls. An instance with this protection in force has to have that parameter
first changed to False and then be given a termination command in order for it
to go away. See the
modify_instance_attribute() line for how to undo the
termination protection on a running instance.
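Putting those parameters together, the launch and the later unlocking might look like the following sketch. The connection object is assumed to be a boto EC2 connection, and the AMI ID, keypair name and function names are placeholders of mine, not values from Listing 2:

```python
def launch_protected_instance(conn, ami_id):
    """Start one micro instance from ami_id with termination
    protection turned on; returns the boto Reservation."""
    return conn.run_instances(
        ami_id,
        min_count=1, max_count=1,
        key_name='my-keypair',           # placeholder SSH keypair name
        instance_type='t1.micro',
        placement='us-east-1d',
        disable_api_termination=True)    # the lock-out parameter

def force_terminate(conn, instance_id):
    """Undo the protection, then terminate the instance."""
    conn.modify_instance_attribute(instance_id,
                                   'disableApiTermination', False)
    conn.terminate_instances(instance_ids=[instance_id])
```

With real boto, the connection would come from something like boto.ec2.connect_to_region('us-east-1').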
Assuming a successful launch,
run_instances() will return a boto reservation class that represents the
reservation created by AWS. This class can then be used to find the
instances that are running, assuming it was captured and iterated over right
away. A more generic approach is to use the
get_all_instances() function to
return a list of reservation classes that match the criteria given. In
Listing 2, I use the filters parameter to limit my search to instances
created from the AMI created above that are actually running.
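That filtered lookup can be sketched as follows; the filter keys are the AWS API names, and the wrapper function is my own:

```python
def running_from_ami(conn, ami_id):
    """Return the reservation classes for running instances that
    were launched from the given AMI."""
    return conn.get_all_instances(
        filters={'image-id': ami_id,
                 'instance-state-name': 'running'})
```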
So, where do you find what filters are available? The boto documentation does not list the filters, but instead points you at the source, which is the AWS developer documentation for each resource (see Resources). Drilling down in that documentation gets you to an API Reference where the options to the various actions are spelled out. For this particular case, the information is in the EC2 API (see Resources) under Actions/DescribeInstances. There is not a complete one-to-one correspondence between the boto function names and the AWS API names, but it is close enough to figure out.
Having run the function, I now have a list of reservations. Using this list, I iterate through the reservation classes in the list, which yields an instance list that contains instance classes. This, in turn, is iterated over, and various instance attributes are pulled out (Figure 1).
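That double iteration, reservations to instances to attributes, can be sketched like this; the attribute subset is representative of Figure 1, and the function name is mine:

```python
def instance_report(reservations):
    """Walk each reservation's instance list and pull out a few
    instance attributes."""
    rows = []
    for reservation in reservations:
        for inst in reservation.instances:
            rows.append({'id': inst.id,
                         'state': inst.state,
                         'type': inst.instance_type,
                         'zone': inst.placement})
    return rows
```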
Figure 1. EC2 Instance Information
Having created an AMI and launched instances using that image, now what? How about backing up the information contained in the instance? AWS has a feature whereby it is possible to create a snapshot of an EBS volume. What is nice about this is that the snapshots are incremental in nature. Take a snapshot on day one, and it represents the state of the volume at that time. Take a snapshot the next day, and it represents only the differences between the two days. Furthermore, snapshots, even though they represent an EBS volume, are stored as S3 data. The plus to this is that although monthly charges for EBS are based on the size of the volume, space used or not, S3 charges are based on actual space used. So, if you have an EBS volume of 40GB, you are charged for 40GB/month. Assuming only half of that is used, the snapshot will accrue charges for roughly 20GB/month. The final feature of a snapshot is that it is possible to create an EBS volume from it that, in turn, can be used to create a new AMI. This makes it relatively easy to return to some point in the past.
To make the snapshot procedure easier, I will create a
tag on the instance that has the EBS volume I want to snapshot, as well as the
volume itself. AWS supports user-defined tags on
many of its resources. The
create_tags() function is a generic one that applies a tag to the requested
resource(s). The first parameter is a list of resource IDs; the second is a
dictionary where the tag name is the key and the tag value is the dictionary
value. Knowing the Name tag for the instance, I use
get_all_volumes() with a
filter to retrieve a volume class. I then use the volume class to create the
snapshot, giving the snapshot a Description equal to the volume Name tag plus the
word Snap. Though
create_snapshot() will return a snapshot ID fairly
quickly, the snapshot may not finish processing for some time. This is where the
SNS service comes in handy.
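The tag-then-snapshot sequence could be sketched as follows; the wrapper functions are my own around the boto calls named above:

```python
def tag_resources(conn, resource_ids, name):
    """Apply the same Name tag to a list of resources, e.g. an
    instance ID and its volume ID."""
    conn.create_tags(resource_ids, {'Name': name})

def snapshot_by_name(conn, name):
    """Look up the EBS volume whose Name tag equals name and
    snapshot it, with a Description of the name plus 'Snap'."""
    volume = conn.get_all_volumes(filters={'tag:Name': name})[0]
    return volume.create_snapshot(description=name + ' Snap')
```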
SNS is exactly that, a simple way of sending out notifications. The notifications can go out as e-mail, e-mail in JSON format, HTTP/HTTPS or to the AWS Simple Queue Service (SQS). I don't cover SQS in depth here; just know it is a more robust, full-featured way of sending messages from AWS.
The first step is to set up a notification topic. The easiest way is to use the SNS tab in the AWS management console. A topic is just an identifier for a particular notification. Once a topic is created, subscriptions can be tied to the topic. The subscriptions do not take effect until they are confirmed. In the case of subscriptions going to e-mail (what I use here), a confirmation e-mail is sent out with a confirmation link. Once the subscription is confirmed, it is available for use.
As I alluded to earlier, I demonstrate monitoring the
snapshot creation process with a notification upon completion
(Listing 2). The function
check_snapshot() takes a snapshot class returned by
create_snapshot() and checks in on its progress at regular intervals. Note
snap.update(). The AWS API is Web-based and does not maintain a persistent
connection. Unless the snapshot returns a status of
"completed" immediately, the
while loop will run forever without the
update() method to refresh the status.
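A minimal version of such a polling loop, assuming a boto snapshot object with a status attribute and an update() method, might look like this (a sketch; Listing 2's version may differ):

```python
import time

def check_snapshot(snap, interval=30, sleep=time.sleep):
    """Poll until the snapshot reports 'completed'. Without the
    update() call, status would never change and the loop would
    spin forever."""
    while snap.status != 'completed':
        sleep(interval)
        snap.update()   # re-query AWS; there is no persistent connection
    return snap
```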
Once the snapshot is completed, a message is constructed using the snap attributes. The message then is published to the SNS topic, which triggers the subscription to be run, and an e-mail should show up shortly. The ARN shown stands for Amazon Resource Name. It is created when the SNS topic is set up and represents the system address for the topic. Note that the simple part of SNS is that there is no delivery confirmation. The AWS API will throw an error if it can't do its part (send the message), but receiving errors are not covered. That's why one of the sending options for SNS is SQS. SQS will hold a message and resend a message for either a certain time period or until it receives confirmation of receipt and a delete request, whichever comes first.
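The publish step might be sketched like this; the function name is mine, and the topic ARN shown in the usage would be a placeholder for the real one from the console:

```python
def notify_snapshot_done(sns_conn, topic_arn, snap):
    """Build a message from the snapshot's attributes and publish
    it to the SNS topic identified by topic_arn."""
    message = ('Snapshot %s of volume %s: status %s'
               % (snap.id, snap.volume_id, snap.status))
    sns_conn.publish(topic_arn, message, subject='Snapshot complete')
    return message
```

With real boto, the connection would come from boto.sns.connect_to_region('us-east-1'), and topic_arn would be the arn:aws:sns:... string created with the topic.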
So, there you have it—a basic introduction to using the Python boto library to interact with AWS services and resources. Needless to say, this barely scratches the surface. For more information and to keep current, I recommend the blog at the Alestic site for general AWS information and Mitch Garnaat's blog for boto and AWS information.
Resources

boto Web Site: http://code.google.com/p/boto
What is a boto? http://www.acsonline.org/factpack/Boto.htm
boto Documentation: http://boto.cloudhackers.com
boto Blog: http://www.elastician.com
AWS Web Site: http://aws.amazon.com
Using Public AMIs: http://aws.amazon.com/articles/0155828273219400
Creating Secure AMIs: http://alestic.com/2011/06/ec2-ami-security
AWS Free Tier: http://aws.amazon.com/free
AWS Developer Documentation: http://aws.amazon.com/documentation
Adrian Klaver, having found Python, is on a never-ending quest to explore just how far it can take him.