Using Vault as a Certificate Authority

For the next few weeks we are doing a POC on Hashicorp’s Vault. While I am still learning about all of the functionality that Vault provides, there are a few key pieces I have already identified to check out in addition to just storing credentials. One of the big ones is the PKI backend. This would make it a lot easier for not just my team, but developers as well to generate SSL certificates. While I found some basic instructions on how to set it up from various sources (mentioned later), I decided to do my own write-up that would consolidate everything I learned.

Create the Root and Intermediate Certificates

Rather than writing up instructions on how to create the root or intermediate CAs, I will just point to the instructions that I followed, written up by Jamie Nguyen and entitled OpenSSL Certificate Authority. For the purposes of this document, I followed the sections called Create the root pair and Create the intermediate pair.

It’s also possible to have Vault generate the root certificate authority (and/or the intermediate certificate) for you, but I prefer to do this outside of Vault so that my root certificate remains secure.

Initialize the PKI Backend

Once you have your root and intermediate certificates generated, the first thing you want to do is prepare them for upload to Vault. You can do that by combining the intermediate certificate chain with its private key.

cat intermediate/certs/ca-chain.cert.pem > /tmp/ca_bundle.pem
openssl rsa -in intermediate/private/intermediateCA.key.pem >> /tmp/ca_bundle.pem

These two commands concatenate everything into one file called ca_bundle.pem. The next step is to initialize and configure the PKI backend. I was able to find some pretty good instructions on configuring it between Cuddletech’s website and a post by Joel Bastos. But since most of what I want to use Vault for will be driven from automation, I decided to focus on using only the API (which made things just a little tougher).

The first step is to set a couple of environment variables that will make our commands easier to run. Set VAULT_ADDR to the URL of your Vault server and VAULT_TOKEN to your login token.

export VAULT_ADDR=https://vault.example.com:8200
export VAULT_TOKEN=a38dc275-86d3-48bd-57ae-237a45d6663b

Once set, you can test your configuration using the curl command to go to the health endpoint.

% curl -k -X GET ${VAULT_ADDR}/v1/sys/health
{"initialized":true,"sealed":false,"standby":false,"server_time_utc":1477441389,"version":"0.6.2","cluster_name":"vault-cluster-2fbd0333","cluster_id":"d8056c7f-acbb-ae59-4ed4-3673f2d27d48"}

Configure the PKI Backend

After you have verified that the endpoint works, you can create and configure your PKI backend.

curl -k -X POST -H "X-Vault-Token: ${VAULT_TOKEN}" -d '{"type": "pki", "description": "Test Root CA", "config": {"max_lease_ttl": "87600h"}}' ${VAULT_ADDR}/v1/sys/mounts/pki-test

This command creates a new PKI backend mount called “pki-test” and sets the max_lease_ttl to 10 years. You may want to adjust these settings to whatever is suitable for your environment.
Since the POST doesn’t return anything, you can verify it with the mounts endpoint. If you don’t have jq, I highly recommend you download it, as it makes viewing JSON output much easier.

curl -k -X GET -H "X-Vault-Token: ${VAULT_TOKEN}" ${VAULT_ADDR}/v1/sys/mounts | jq .

Once you have initialized the backend, you can upload the certificate bundle that you created by following the instructions noted above.

curl -k -X POST -H "X-Vault-Token: ${VAULT_TOKEN}" -d @<(jq -n --arg a "$(</tmp/ca_bundle.pem)" '{ pem_bundle: $a }') ${VAULT_ADDR}/v1/pki-test/config/ca

This command doesn’t return anything either. You can verify that it uploaded properly by trying to download the intermediate certificate.

curl -k -X GET -H "X-Vault-Token: ${VAULT_TOKEN}" ${VAULT_ADDR}/v1/pki-test/ca/pem

Create a Role

The final step is to configure a role to issue the certificates.

curl -k -X POST -H "X-Vault-Token: ${VAULT_TOKEN}" -d '{"allow_any_name": "true", "allow_ip_sans": "true", "max_ttl": "17520h"}' ${VAULT_ADDR}/v1/pki-test/roles/example-dot-com

You can verify that the role exists with a GET to the roles endpoint.

curl -k -X GET -H "X-Vault-Token: ${VAULT_TOKEN}" ${VAULT_ADDR}/v1/pki-test/roles/example-dot-com | jq .

Issue Certificates

Now we are all set to issue certificates from our Vault server. This can be done in one of two ways. The first is to request a certificate and key from Vault directly:

curl -k -X POST -H "X-Vault-Token: ${VAULT_TOKEN}" -d '{ "common_name": "test.example.com" }' ${VAULT_ADDR}/v1/pki-test/issue/example-dot-com | tee >(jq -r .data.certificate > test.example.com.cert) >(jq -r .data.private_key > test.example.com.pem) >(jq -r '.data.ca_chain[]' > test.example.com-chained.pem)

This will create three files in your directory, one that contains the key, one that contains the certificate, and one that contains the certificate chain. You can also send a CSR that you created to have a certificate generated.
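If you don’t already have a CSR, you can generate a key and CSR with openssl first. This is just a sketch; the filenames and subject are examples, and the signing command below expects the CSR at ../server.csr, so adjust the path to match your layout.

# Generate a private key and a CSR for the host (example filenames)
openssl req -new -newkey rsa:2048 -nodes \
    -keyout server.key -out server.csr \
    -subj "/CN=test.example.com"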

curl -k -X POST -H "X-Vault-Token: ${VAULT_TOKEN}" -d @<(jq -n --arg a "test.example.com" --arg b "$(<../server.csr)" '{ common_name: $a, csr: $b }') ${VAULT_ADDR}/v1/pki-test/sign/example-dot-com | tee >(jq -r .data.certificate > test.example.com.cert) >(jq -r '.data.ca_chain[]' > test.example.com-chained.pem)

Since the key was generated separately, it won’t create a new key file, but it does generate the certificate file and the certificate chain.
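Either way, you can sanity-check what Vault handed back with openssl; for example:

openssl verify -CAfile test.example.com-chained.pem test.example.com.cert
openssl x509 -in test.example.com.cert -noout -subject -dates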

That’s all it takes to get a functioning CA in Vault. I’m sure that I still have a whole lot to learn about configuring and securing the PKI backend, but for our POC I think this will work nicely.

Looping Through Ansible Hosts

For the past week I have been working on deploying Hashicorp’s Vault using Terraform and Ansible. As I was installing and configuring the Consul server, I came across an interesting problem with building the server configuration. I’ve been following instructions from DigitalOcean, and while most of the configuration has been pretty straightforward, the config.json file proved to be a bit of a challenge.

According to the instructions, for a three-node cluster you only want to put the IP addresses of the other two servers into each server’s config.json file, but it took me a little while to figure out how to get Ansible to do that. While it may seem straightforward to others, I had a hard time even finding information on the internet about how to do this, so I figured I would share it here.

My first iteration was just to get the template to put all of the IP addresses in.
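That first pass isn’t reproduced here, but it would have looked roughly like the following fragment of the config.json template. The consul_servers group name and the ansible_default_ipv4 fact are assumptions; substitute whatever your inventory and networking actually use.

{# config.json.j2 fragment — group name and fact are assumptions #}
"start_join": [
{% for server in groups['consul_servers'] %}
  "{{ hostvars[server]['ansible_default_ipv4']['address'] }}"{% if not loop.last %},{% endif %}
{% endfor %}
]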

However, this is not what I wanted; I wanted it to exclude the host’s own IP address. After searching high and low, I finally found a Jinja2 tidbit that would get me what I wanted. I didn’t realize that you could put an if right in with the for loop, so I just needed to add “if server != inventory_hostname” to the for loop so that it would exclude the host it was running on, as shown below.
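Applied to the fragment above, the loop becomes:

{# exclude the host the template is being rendered for #}
"start_join": [
{% for server in groups['consul_servers'] if server != inventory_hostname %}
  "{{ hostvars[server]['ansible_default_ipv4']['address'] }}"{% if not loop.last %},{% endif %}
{% endfor %}
]

Conveniently, loop.last is evaluated against the filtered sequence, so the trailing-comma handling still works.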

I can see this little tidbit coming in handy for all sorts of things.

Storing Terraform State Files in S3 When Using assume_role

We use a lot of different AWS accounts, so rather than managing credentials across all of them, we have built a model where one account is used strictly for managing user accounts (both locally and via ADFS). From there, all of our interactive and automated logins use STS to assume roles in the other accounts. As we started to dive more into Terraform, I was excited to find that the AWS provider supports this with its assume_role configuration.
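For reference, a minimal provider block using that feature looks something like the following; the region, account ID, and role name are all placeholders.

# Hypothetical values: region, account ID, and role name
provider "aws" {
  region = "us-east-1"

  assume_role {
    role_arn = "arn:aws:iam::222222222222:role/terraform-admin"
  }
}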

However, as we started to implement this, we quickly ran into a problem: the terraform command itself doesn’t support assuming a role in another account for its state storage, so we would either need to store the state files in one account (in this case, our auth account) or figure out how to allow the auth account to put the files in an S3 bucket of the account we are working with. Since I don’t want to store ANY data in the auth account, I had to figure out how to give users from my auth account access to the account I am working on. In the end, it was relatively straightforward. I just needed to add a bucket policy in the target account and a policy in the auth account that I then attached to my team’s user group.

The first step is to create a bucket policy that allows my user to list the contents of the bucket and to get and put the state files. I could probably lock the policy down more and restrict it to just the terraform-state folder that I have in the bucket, but since I have full access outside of Terraform anyway, I didn’t think it was as important. The policy I used is along the lines of the sketch below.
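Here, 111111111111 stands in for the auth account ID and example-terraform-state for the bucket name; both are placeholders.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PlaceholderAllowListBucket",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111111111111:root" },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::example-terraform-state"
    },
    {
      "Sid": "PlaceholderAllowStateObjects",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111111111111:root" },
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::example-terraform-state/*"
    }
  ]
}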

Once the bucket policy was in place, I added a policy like the one below to my auth account and attached it to my team’s user group. I figure that as I put more accounts under Terraform control, I’ll just add additional resources to it.
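A sketch of that policy, using the same placeholder bucket name:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PlaceholderTerraformStateAccess",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::example-terraform-state",
        "arn:aws:s3:::example-terraform-state/*"
      ]
    }
  ]
}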

Once the two policies were in place, Terraform was able to use the S3 bucket in the account we were building out.
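For completeness, here is a sketch of pointing Terraform’s state at that bucket. The bucket, key, and region are placeholders, and this uses the backend block from newer Terraform releases (older releases configured remote state with the terraform remote config command instead).

# Placeholder bucket, key, and region
terraform {
  backend "s3" {
    bucket = "example-terraform-state"
    key    = "terraform-state/myproject.tfstate"
    region = "us-east-1"
  }
}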

Sharing Ansible Roles

When I started back in December, Blackbaud did not have a single line of code written to help build, deploy or manage our environments. The Platform Engineering team has come a long way in these last six months. As of this morning the team is just shy of 50K lines of code, consisting primarily of Ansible and Python.
I’m really proud of the work that the team has been doing around Ansible. Over the next few weeks I am hoping to share some of those innovations, not just in this blog, but also via GitHub. However, the one place where I feel we have been lacking is in how we have built out our Ansible directory. We have built a monolithic repository that contains everything we use.
Some people (even here at Blackbaud) like the thought of a mono-repo: somewhere everything exists and dependencies are easy to satisfy. In this instance, however, having a mono-repo makes it very difficult for us to share our Ansible code with other teams within the organization. It is an all-or-nothing proposition. If you want to use our linux-hardening role, you have to download everything and either move it to a new directory or figure out how to add your stuff to our repo.
This is antithetical to everything that we are hoping to accomplish. We want to be able to share individual roles across the organization in an organized way that allows our engineering teams (and operational teams) to use what they need without having to sift through tons of roles they will never need.
To make it easier to share roles, we will begin moving our roles into individual repositories and utilizing Ansible Galaxy to share individual roles between teams. By building self-contained units, not only will it help us better share roles between teams, but it will allow us to share some of our roles with the community.
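As a sketch of what that looks like for a consuming team, the role gets listed in a requirements.yml and pulled down with ansible-galaxy; the repository URL and version below are placeholders.

# requirements.yml — repository URL and version are placeholders
- src: https://github.com/example-org/ansible-role-linux-hardening.git
  scm: git
  version: master
  name: linux-hardening

ansible-galaxy install -r requirements.yml -p roles/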
Over the next few weeks, I’ll be sharing some details of how we are doing that.