blog.jarpy.net


AWS Auto Scaling with boto.

The boto library is awesome for Python people who want to automate Amazon Web Services, but I had a minor struggle with the Auto Scaling interface recently.

Amazon deprecated the old “triggers” mechanism for handling dynamic scaling and replaced it with CloudWatch “alarms”. While boto supports the new mechanism, working examples are hard to find online.

UPDATE: This example is now improved and merged into the official boto documentation.

Below is my attempt to define the minimal procedure for creating a dynamically scaled cluster of instances with boto. It’s derived from the excellent work of Liam Friel.
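The procedure assumes the Auto Scaling group already exists. For readers starting from scratch, the sketch below shows roughly how a matching group could be created with boto; the AMI ID, key pair, zone and size limits are illustrative placeholders rather than values from this post, and the final submission calls (shown as comments) need an authenticated connection.

```python
from boto.ec2.autoscale import LaunchConfiguration, AutoScalingGroup

# A launch configuration describes the instances the group will start.
# 'ami-xxxxxxxx' and 'my-key' are placeholders; substitute your own.
launch_config = LaunchConfiguration(
    name='my-launch-config', image_id='ami-xxxxxxxx',
    key_name='my-key', instance_type='m1.small')

# The group itself: where to run, plus the size limits that the
# scaling policies below will move the group between.
group = AutoScalingGroup(
    group_name='as_group_0', availability_zones=['us-east-1a'],
    launch_config=launch_config, min_size=1, max_size=4)

# On an authenticated connection, submit both to AWS:
#   conn = boto.connect_autoscale()
#   conn.create_launch_configuration(launch_config)
#   conn.create_auto_scaling_group(group)
```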

import boto
from boto.ec2.autoscale import ScalingPolicy
from boto.ec2.cloudwatch import MetricAlarm

autoscale = boto.connect_autoscale()
cloudwatch = boto.connect_cloudwatch()

# Let's assume you already have an Auto Scaling group.
# Setting one up is well documented elsewhere.
autoscaling_group = 'as_group_0'

# Define some Scaling Policies. These tell Auto Scaling _how_ to scale
# the group, but not _when_ to do it. (We'll define that later.)

# We need one policy for scaling up...
scale_up_policy = ScalingPolicy(
    name='scale_up', adjustment_type='ChangeInCapacity',
    as_name=autoscaling_group, scaling_adjustment=1, cooldown=180)

#...and one for scaling down again.
scale_down_policy = ScalingPolicy(
    name='scale_down', adjustment_type='ChangeInCapacity',
    as_name=autoscaling_group, scaling_adjustment=-1, cooldown=180)

# The policy objects are now defined locally.
# Let's submit them to AWS.
autoscale.create_scaling_policy(scale_up_policy)
autoscale.create_scaling_policy(scale_down_policy)

# Now that the policies have been digested by AWS, they have extra properties
# that we aren't aware of locally. We need to refresh them by requesting them
# back again.
# Specifically we'll need the Amazon Resource Name (ARN) of each policy.
scale_up_policy = autoscale.get_all_policies(
    as_group=autoscaling_group, policy_names=['scale_up'])[0]

scale_down_policy = autoscale.get_all_policies(
    as_group=autoscaling_group, policy_names=['scale_down'])[0]

# Now we'll create CloudWatch alarms that will define _when_ to run the
# Auto Scaling policies.

# We want to measure the average CPU usage across the whole Auto Scaling
# group, rather than individual instances. We can define that as a CloudWatch
# "Dimension".
alarm_dimensions = {"AutoScalingGroupName": autoscaling_group}

# One alarm for when to scale up...
scale_up_alarm = MetricAlarm(
        name='scale_up_on_cpu', namespace='AWS/EC2',
        metric='CPUUtilization', statistic='Average',
        comparison='>', threshold='70',
        period='60', evaluation_periods=2,
        alarm_actions=[scale_up_policy.policy_arn],
        dimensions=alarm_dimensions)
cloudwatch.create_alarm(scale_up_alarm)

# ...and one for when to scale down.
scale_down_alarm = MetricAlarm(
        name='scale_down_on_cpu', namespace='AWS/EC2',
        metric='CPUUtilization', statistic='Average',
        comparison='<', threshold='40',
        period='60', evaluation_periods=2,
        alarm_actions=[scale_down_policy.policy_arn],
        dimensions=alarm_dimensions)
cloudwatch.create_alarm(scale_down_alarm)
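A side note on why the policies had to be fetched back from AWS before building the alarms: an alarm references its actions purely by ARN. The offline sketch below (using a made-up ARN in place of the real `policy_arn` returned by `get_all_policies`) shows that the alarm carries nothing but that string; when the alarm fires, CloudWatch invokes whatever the ARN names.

```python
from boto.ec2.cloudwatch import MetricAlarm

# Hypothetical ARN, standing in for scale_up_policy.policy_arn as
# returned by get_all_policies(). The account ID and policy ID are fake.
policy_arn = ('arn:aws:autoscaling:us-east-1:123456789012:scalingPolicy:'
              'abcd1234:autoScalingGroupName/as_group_0:policyName/scale_up')

alarm = MetricAlarm(
    name='scale_up_on_cpu', namespace='AWS/EC2',
    metric='CPUUtilization', statistic='Average',
    comparison='>', threshold='70',
    period='60', evaluation_periods=2,
    alarm_actions=[policy_arn],
    dimensions={'AutoScalingGroupName': 'as_group_0'})

# The alarm stores only the ARN string, which is why the locally-defined
# policy objects (with no ARN yet) were not enough.
assert policy_arn in alarm.alarm_actions
```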