
Using Vagrant to Test Galaxy Roles

Last summer, I wrote a post about how we were using Vagrant to test Ansible roles across AWS and the datacenter. This has worked well with a single AWS account, but it has proven a little trickier in our account layout, which uses a centralized account and STS roles. Initially, I had an assume_role script that we had written to fetch temporary credentials and set the right values for the Vagrantfile to work, but it wasn’t very elegant.

While working on some Ansible roles recently, I decided to take an afternoon and see what I could come up with to make everything a little easier. I’m pretty happy with the results. It’s much more streamlined and easier to run and maintain.
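
For illustration, here is one way the assume-role step can be folded directly into a Vagrantfile using the aws-sdk gem’s STS client. This is only a sketch of the general idea, not necessarily how the sts branch does it, and ROLE_ARN is a placeholder variable name I am using for the role in the target account:

require 'aws-sdk-core'

# Sketch: assume a role in the target account and export the temporary
# credentials so the AWS provider can pick them up further down the file.
sts = Aws::STS::Client.new(region: ENV.fetch('AWS_DEFAULT_REGION', 'us-east-1'))
creds = sts.assume_role(
  role_arn: ENV['ROLE_ARN'],                   # placeholder for your own role ARN
  role_session_name: 'vagrant-ansible-testing'
).credentials

ENV['AWS_ACCESS_KEY_ID']     = creds.access_key_id
ENV['AWS_SECRET_ACCESS_KEY'] = creds.secret_access_key
ENV['AWS_SESSION_TOKEN']     = creds.session_token

The vagrant-aws provider also accepts a session_token setting, so temporary credentials like these can be passed straight through to the builder.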

If you would like to try it out yourself, you can start by cloning (or forking first) the repo to your local system:

git clone git@github.com:MarsDominion/vagrant-ansible-testing.git

Once you have it cloned, change into the directory and check out the sts branch:

cd vagrant-ansible-testing
git checkout sts

From here, you will need to create an env.rb file at the top level of the repository and add the environment variables you want to use:

ENV['AWS_ACCESS_KEY_ID'] = 'XXXXXXXXXXXXXXXXXXXX'
ENV['AWS_SECRET_ACCESS_KEY'] = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
ENV['AWS_KEYPAIR_NAME'] = 'my-keypair'
ENV['MY_PRIVATE_AWS_SSH_KEY_PATH'] = '/Users/me/.ssh/my-keypair.pem'
ENV['AWS_SUBNET'] = 'subnet-xxxxxxxx'
ENV['AWS_SG'] = 'sg-xxxxxxxx' 

You can also optionally define the following variables (defaults are listed):

ENV['AWS_DEFAULT_REGION'] = 'us-east-1'
ENV['AWS_INSTANCE_TYPE'] = 't2.micro'
ENV['AWS_AMI'] = 'ami-9be6f38c' #(aws-linux)
ENV['AWS_EC2_USER'] = 'ec2-user'
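
For context, the Vagrantfile consumes these variables through the vagrant-aws provider. A simplified sketch of that wiring, assuming the vagrant-aws plugin (the actual Vagrantfile in the repo is the source of truth), looks roughly like this:

# Simplified sketch of how a Vagrantfile can consume env.rb.
# Option names come from the vagrant-aws plugin; the fallbacks mirror the
# optional variables listed above.
env_file = File.expand_path('env.rb', File.dirname(__FILE__))
load env_file if File.exist?(env_file)

Vagrant.configure('2') do |config|
  config.vm.box = 'dummy' # vagrant-aws uses a placeholder box

  config.vm.provider :aws do |aws, override|
    aws.access_key_id     = ENV['AWS_ACCESS_KEY_ID']
    aws.secret_access_key = ENV['AWS_SECRET_ACCESS_KEY']
    aws.keypair_name      = ENV['AWS_KEYPAIR_NAME']
    aws.subnet_id         = ENV['AWS_SUBNET']
    aws.security_groups   = [ENV['AWS_SG']]
    aws.region            = ENV.fetch('AWS_DEFAULT_REGION', 'us-east-1')
    aws.instance_type     = ENV.fetch('AWS_INSTANCE_TYPE', 't2.micro')
    aws.ami               = ENV.fetch('AWS_AMI', 'ami-9be6f38c')

    override.ssh.username         = ENV.fetch('AWS_EC2_USER', 'ec2-user')
    override.ssh.private_key_path = ENV['MY_PRIVATE_AWS_SSH_KEY_PATH']
  end
end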

After you have saved the env.rb file, you can update requirements.yml and playbook.yml with your Ansible code. Vagrant runs the ansible-galaxy command with the -f (force) option on the up and provision subcommands, so your roles are refreshed on every run; a sketch of that wiring is shown after the commands below. Once you have your Ansible files the way you want them, all that is left is to run the vagrant command:

% vagrant up

You can iterate on your Ansible code by re-running the provisioner:

% vagrant provision
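
For reference, the forced role refresh described above maps onto Vagrant’s built-in Ansible provisioner options. A minimal sketch, assuming the stock ansible provisioner rather than a custom hook:

# Sketch of the provisioner wiring: Vagrant installs the roles listed in
# requirements.yml before running playbook.yml. Its default galaxy_command
# already passes --force, which is what refreshes the roles on every run.
Vagrant.configure('2') do |config|
  config.vm.provision 'ansible' do |ansible|
    ansible.playbook          = 'playbook.yml'
    ansible.galaxy_role_file  = 'requirements.yml'
    ansible.galaxy_roles_path = 'roles'
  end
end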

Once you have completed your Ansible testing, you can destroy the environment:

% vagrant destroy

That’s it. A quick and easy way to test your Ansible roles against an AWS server.

Testing Ansible Galaxy Roles

With the push to move our roles to Ansible Galaxy as much as possible, we needed a good way to test the roles as we write them. Up until now, we would build and test them entirely within Ansible against the specific system type we planned to run on. While that works well enough for the focused roles we were writing, it doesn’t work well for generalized roles that are expected to run on the many different Linux distributions we run at Blackbaud.

To solve this, we have come up with a Vagrant configuration that allows us to test against multiple OSes, both locally (via VirtualBox or VMware) and in the cloud (AWS). You can check out the code here. To get started, simply clone the project to your local machine.

git clone git@github.com:MarsDominion/vagrant-ansible-testing.git

The Vagrantfile in the master branch provides three test environments: aws-linux, centos7, and ubuntu. The aws-linux environment builds an Amazon Linux host in AWS, while the centos7 and ubuntu environments are vmware_desktop-based boxes pulled from Atlas. This gives us a way to test our roles against both cloud and local instances. If you don’t have VMware Fusion or Workstation, you can change the provider from vmware_desktop to virtualbox and they should work as well.
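
For a sense of how that is laid out, here is a trimmed-down sketch of a multi-machine Vagrantfile in the same spirit (the box names are placeholders I picked for illustration, not necessarily the ones in the repo):

# Trimmed-down sketch: one machine definition per test environment.
Vagrant.configure('2') do |config|
  config.vm.define 'centos7' do |centos7|
    centos7.vm.box = 'bento/centos-7.2'    # any CentOS 7 box will do
    centos7.vm.provider 'vmware_desktop'   # change to 'virtualbox' if needed
  end

  config.vm.define 'ubuntu' do |ubuntu|
    ubuntu.vm.box = 'bento/ubuntu-16.04'   # any Ubuntu box will do
    ubuntu.vm.provider 'vmware_desktop'
  end

  config.vm.define 'aws-linux' do |awslinux|
    awslinux.vm.box = 'dummy'              # vagrant-aws placeholder box
    awslinux.vm.provider :aws do |aws|
      aws.ami = 'ami-9be6f38c'             # Amazon Linux (example AMI)
    end
  end
end

Because each environment is its own machine definition, vagrant up <name> maps directly onto one of these blocks.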

Before launching the instances, you need to download the Ansible roles you want to test. This is done with the ansible-galaxy command:

% ansible-galaxy install blackbaud.linux-hardening

And then update your playbook to include the roles:

- hosts: all
  become: true
  roles:
    - blackbaud.linux-hardening

Finally, set some environment variables so Vagrant can connect to your AWS environment:

export AWS_ACCESS_KEY_ID=KIAI3XQCPIPKSDJHSVQ
export AWS_SECRET_ACCESS_KEY=onX5HfdsIpasdH6+E+JJCgNxIfzJWY1btZgU4LfQ
export AWS_KEYPAIR_NAME=test_key
export MY_PRIVATE_AWS_SSH_KEY_PATH=$HOME/.ssh/test_key.pem

Now we are ready to test the roles:

vagrant up
# Brings up all three instances and tests

vagrant up <aws-linux|centos7|ubuntu>
# Brings up the specified instance and tests

Vagrant will launch each instance, run the Ansible playbook against it, and show you the results. It jumps right into the next node as soon as the previous one completes, so keep an eye on the output. When you are done, you can simply destroy the nodes:

vagrant destroy -f