Problems with Terraform

In my last post, I discussed using Terraform to build out the base components of an AWS environment. While running that code to build the base environment has worked the way I intended, I have run into some pretty major issues with building out the next layer, which consists of a group of private subnets.

I ran into two key problems that I haven’t been able to solve. The first is passing counts from one environment to the next. In my base environment I set them as outputs and then import the state file as a data source, but when I try to use one of those outputs as a count, I get the error “value of count cannot be computed.”
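For context, here is a rough sketch of the pattern that triggers the error; the backend settings, variable names, and output names are hypothetical stand-ins for my actual configuration:

data "terraform_remote_state" "base" {
    backend = "s3"
    config {
        bucket = "example-terraform-state"
        key    = "base/terraform.tfstate"
        region = "us-east-1"
    }
}

# Driving count with an output imported from the base statefile is
# what produces "value of count cannot be computed"
resource "aws_subnet" "private-1a" {
    count             = "${data.terraform_remote_state.base.private_subnet_count}"
    vpc_id            = "${data.terraform_remote_state.base.vpc_id}"
    cidr_block        = "${var.private-1a_subnet_cidr}"
    availability_zone = "us-east-1a"
}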

The second issue is a little more complicated, but it comes down to setting variables in the module section of the main.tf file when the data doesn’t exist in the base statefile. Essentially, if I don’t create a second NAT gateway in the base setup, no output for it shows up in the statefile. When I run the second set of Terraform scripts, I would like them to ignore the missing value or fall back to a default, rather than error.
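A rough illustration of the second problem, with hypothetical module and output names: if the base run only created one NAT gateway, the second output is simply absent from the statefile, and the reference below errors out instead of falling back to anything.

module "private-subnets" {
    source            = "./modules/private-subnets"
    vpc_id            = "${data.terraform_remote_state.base.vpc_id}"
    nat_gateway_1a_id = "${data.terraform_remote_state.base.nat_gateway_1a_id}"
    # Fails when the base environment never created a second NAT gateway,
    # because this output does not exist in the base statefile
    nat_gateway_1b_id = "${data.terraform_remote_state.base.nat_gateway_1b_id}"
}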

At this point, I am pretty frustrated with it. I have decided to circle back and take another look at CloudFormation, now that it supports YAML and cross-stack references, and see if I can do everything that I want to do. I’ll post details later this week.

Terraform to Buildout AWS

I started playing with Terraform a few months ago when I needed to spin up a prototype environment to evaluate the open source version of Cloud Foundry. One of the results was some Terraform code that could bring up the essentials of an AWS VPC, which included the VPC itself, three public subnets, three NAT gateways, three Elastic IPs (EIPs), and a Route53 hosted zone. While it might seem like overkill to use this many Availability Zones (AZs) for a prototype environment, one of the things we needed to test was how Cloud Foundry’s new multi-AZ support worked.
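To give a sense of the base layout, here is a trimmed sketch of one EIP/NAT gateway pair; the resource names are illustrative rather than lifted directly from the repo:

# Elastic IP for the NAT gateway in us-east-1a
resource "aws_eip" "nat-1a" {
    vpc = true
}

# NAT gateway placed in the public subnet for us-east-1a
resource "aws_nat_gateway" "nat-1a" {
    allocation_id = "${aws_eip.nat-1a.id}"
    subnet_id     = "${aws_subnet.public-1a.id}"
}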

This was good for what I was working on at the time, since I needed to test across multiple AZs, but it was problematic for most of the day-to-day testing that I need to do, as it would spin up (and charge me for) components that I didn’t really need. Most things I have to test don’t require three of everything. The challenge was that I did not want to maintain different code repositories for different use cases.

Luckily, I came across an article by Yevgeniy Brikman that had some interesting tips and tricks on how to do loops and conditionals. The most interesting bit for the problem I was trying to solve was learning that in Terraform, a boolean true is converted to a 1 and a boolean false is converted to a 0. Yevgeniy then used an example that I proceeded to incorporate into my code. Essentially, what I did was create three new variables in my environments file to define whether or not I wanted to create each of the three public subnets:

# public-subnets module
public-1a_create = true
public-1b_create = false
public-1c_create = false

Then for each resource, I added the count variable:

resource "aws_subnet" "public-1a" {
    count                   = "${var.public-1a_create}"
    vpc_id                  = "${var.vpc_id}"
    cidr_block              = "${var.public-1a_subnet_cidr}"
    map_public_ip_on_launch = true
    availability_zone       = "us-east-1a"
    tags {
        Name                  = "public-1a"
        Description           = "Public Subnet us-east-1a"
        Terraform             = "true"
    }
}

Now I am able to spin up an AWS environment that is in only one availability zone to do some testing, or bring it up in three for production. There are still a few other things that I am hoping to figure out, such as how to avoid setting variables for the second and third subnets in the environments file when they aren’t needed, and how to let Terraform deployments that build on the base use the right NAT gateways when they run across three AZs but only two NATs exist. You can find the code in this repo.

Using Vault to Manage AWS Accounts

I’ve been putting off setting up the AWS backend in our Vault server for the last few months. I knew that I was going to need it eventually, but other priorities kept taking precedence. This past week, one of the application teams came to me with a requirement to write a file to an S3 bucket.

Under normal circumstances, I would probably just create an instance profile that could be applied to the system and the problem would be solved. The problem with this approach was that they did not want other applications to be able to access the bucket. Since we run containers, using instance profiles to control access would give every container on that host access to the bucket.

Preparing the Environment

Setting up the AWS backend is pretty straightforward. To begin, you need to configure your environment to be able to interact with Vault.

export VAULT_ADDR=https://vault.example.com:8200
export VAULT_TOKEN=a38dc275-86d3-48bd-57ae-237a45d6663b

Once set, you can test your configuration by using curl against the health endpoint.

% curl -k -X GET ${VAULT_ADDR}/v1/sys/health
{"initialized":true,"sealed":false,"standby":false,"server_time_utc":1477441389,"version":"0.6.2","cluster_name":"vault-cluster-2fbd0333","cluster_id":"d8056c7f-acbb-ae59-4ed4-3673f2d27d48"}

Initialize the AWS Backend

Once you have verified that the endpoint is working, you can create and configure the AWS Backend. Since we use multiple AWS accounts for each environment, I will mount different backends for each account.

curl -k -X POST -H "x-Vault-Token: ${VAULT_TOKEN}" -d '{"type": "aws",   "description": "AWS Backend", "config": {"default_lease_ttl": "360", "max_lease_ttl": "720"}}'  ${VAULT_ADDR}/v1/sys/mounts/aws-prototype

This command sets up the aws-prototype backend with a default lease TTL of 360 seconds (six minutes) and a max lease TTL of 720 seconds (twelve minutes). Since the POST doesn’t return anything, you can verify it with the mounts endpoint. If you don’t have jq, I highly recommend you download it, as it makes viewing JSON output much easier.

curl -k -X GET -H "x-Vault-Token: ${VAULT_TOKEN}" ${VAULT_ADDR}/v1/sys/mounts|jq .

Configure AWS Backend

Once the mount is created, you will need to add AWS credentials to the backend. You will need to create an AWS user that has full IAM access so that it can create other users. We have automation that controls our IAM, but you can use a couple of IAM commands to set up the user you will need.

aws iam create-user --user-name HashiVault
aws iam attach-user-policy --user-name HashiVault --policy-arn arn:aws:iam::aws:policy/IAMFullAccess
aws iam create-access-key --user-name HashiVault

Once you create the access & secret keys, you can use them to configure the AWS backend.

curl -k -X POST -H "x-Vault-Token: ${VAULT_TOKEN}" -d '{"access_key": "XXXXXXXXXXXXXXXXXXXX", "secret_key": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX", "region": "us-east-1"}' ${VAULT_ADDR}/v1/aws-prototype/config/root

After the backend is configured, you can start adding roles to Vault. For the S3 access that we want, we need a role that generates users with the right policy attached.

curl -k -X POST -H "x-Vault-Token: ${VAULT_TOKEN}" -d '{"arn": "arn:aws:iam::aws:policy/AmazonS3FullAccess"}' ${VAULT_ADDR}/v1/aws-prototype/roles/S3-Access

You can verify that it was created properly by curling the endpoint and getting the credentials back.

curl -k -X GET -H "x-Vault-Token: ${VAULT_TOKEN}" ${VAULT_ADDR}/v1/aws-prototype/creds/S3-Access
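
The response includes a freshly generated key pair under data along with the lease information, roughly along these lines (values truncated and illustrative; exact fields vary by Vault version):

{
  "lease_id": "aws-prototype/creds/S3-Access/...",
  "lease_duration": 360,
  "renewable": true,
  "data": {
    "access_key": "AKIAXXXXXXXXXXXXXXXX",
    "secret_key": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
    "security_token": null
  }
}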

Now that you are getting credentials, you can repeat the process for every account and/or role that you need to have setup.

Testing the S3 Access

I had a little problem testing my S3 access once I had everything configured. I wrote a quick one-liner to get my creds and set them in the proper environment variables.

CREDS=$(curl -k -X GET -H "x-Vault-Token: ${VAULT_TOKEN}" ${VAULT_ADDR}/v1/aws-prototype/creds/S3-Access);export AWS_ACCESS_KEY_ID=$(echo $CREDS |jq -r .data.access_key);export AWS_SECRET_ACCESS_KEY=$(echo $CREDS |jq -r .data.secret_key)

When I tried to download a file, I received an error.

download failed: s3://mybucket/testfile.txt to ./testfile.txt An error occurred (InvalidArgument) when calling the GetObject operation: Requests specifying Server Side Encryption with AWS KMS managed keys require AWS Signature Version 4. You can enable AWS Signature Version 4 by running the command:

aws configure set s3.signature_version s3v4

I ran the command, but each time I tried to run the aws s3 command, I received the same error. What I learned was that running the command updated my .aws/config file and added the following lines to it:

[default]
s3 =
    signature_version = s3v4

Since I already had a ‘[profile default]’ section with nothing in it, I moved the s3 bits up underneath that and removed the ‘[default]’ block, and everything started working as expected.
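
For reference, the relevant part of my .aws/config ended up looking like this after the change:

[profile default]
s3 =
    signature_version = s3v4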

Getting my First Certification

I’ve been in the tech industry for more than twenty years, and during that time, I have never really thought that getting a certification was necessary to move forward in your career. Experience generally shows through, regardless of whether or not you have a piece of paper that says you are certified in a particular technology. As a result, I have never bothered getting certified in the many technologies that I have worked on over the years.

That changed this past Wednesday, when I sat for and passed the AWS Certified SysOps Administrator – Associate certification exam. Over the last two years I have been immersed in Amazon Web Services, learning the ins and outs of the various service offerings. I decided at the beginning of this year that I was going to go ahead and get ALL of the AWS certifications. Currently there are five, with three more specialized certifications on the way (currently in beta).

My plan is to have all of them before I head to re:Invent this year. I’m looking forward to the opportunities that diving deeper into the technology provides me.