First Impressions of OneNote

I am always on the lookout for ways to improve my system for keeping on top of all the information I have and staying focused on getting things done. With Evernote’s announcement at the end of June regarding their pricing and plan changes, I have been kicking around whether to give OneNote a shot as my exclusive knowledge capture device. So far, the results have been mixed. It has solved a few problems I had with Evernote, but introduced some new ones that can be a little frustrating.

Getting it set up to work on all my devices was a bit of a pain in the ass. I have a MacBook Pro, an iPhone 6 Plus, and an iPad Pro 9.7 that it needs to work on. For the first two days, it seemed like every time I touched my iPhone or iPad I either had to enter my password or hit some type of sync problem. Things seemed to clear up after that, but it was almost a showstopper before I even got started.

From an interface perspective, I like it a lot better than Evernote. It has more functionality and looks a lot cleaner. One of the big upsides is its support for the Apple Pencil. Unlike with Evernote, I can write anywhere on my notes and still actually see what else is on them. I also like that it is “free” since it uses my 1TB OneDrive storage that I already get as part of my organizational account.

The biggest problem I had was with the email-to-OneNote functionality. Unlike Evernote, which gives you a personalized email address you can send to, OneNote has you send to a single address, which then routes the message to your account based on the from address. They say you can add more addresses if you are using a Microsoft account, but since I am using an organizational account I cannot seem to add other addresses to my accepted list. That means whenever I forward an email to OneNote, I have to remember to switch my email’s from address to the one it allows.

The only other major problem I have had with OneNote is that it won’t allow me to have more than one note open at a time. This is a feature I find extremely useful in Evernote, especially when I want to merge incident notes with general knowledge notes, or when I need to reference something I wrote for one thing while I am writing something else. You can work around it by opening a note in the OneNote app and opening the same folder in the online app via your browser. It works, but it’s not very efficient.

I’m going to continue to use OneNote exclusively through the end of September before I make a decision whether to stay with it or switch back to Evernote. I’m interested to know how well it tracks receipts, bills, and budgets.

DevOps and Security

For the last few days, I have been participating in a series of internal meetings about how the company is approaching the cloud and DevOps. A good number of the sessions were either about security or contained some reference to security as part of the discussion. With these conversations still fresh in my head, I came across an interesting article at devops.com by Joe Franscella titled The DevOps Force Multiplier: Competitive Advantage + Security.

In the article, Franscella talks with OJ Reeves, a Bugcrowd security researcher, who points out that he has seen that companies who have a DevOps mindset are often more security focused. He cites a number of factors that could explain why, including that they do a better job of checking the security boxes, make fewer mistakes, and that they communicate better. I certainly agree that communication is a key component and one that helps improve security. However, as a change leader helping to implement DevOps, I’m not sure that I would necessarily agree with the first two – at least not as they are described.

DevOps Checks Boxes

Saying that DevOps does a better job of checking the security boxes may seem true on the surface, but it is extremely vague, and if you don’t understand why this seems to be the case, you are likely to miss the benefits of it. From my standpoint, one of the key reasons we tend to do a better job checking the boxes than the traditional Ops side is that we have to think about things much more broadly.

When I was a system administrator building production servers, access was restricted to a handful of like-minded teammates. I didn’t have to worry about people needing different levels of access and permissions to do different things. On the DevOps side, I do have to think about these things, and more. One of the biggest side benefits of figuring out how to keep the servers safe from developers is that it also protects them from a lot of external threats.

Making Fewer Mistakes

I would never claim that companies that practice DevOps make fewer mistakes, but I could see how it could look that way to an outsider. I think instead the key point is that when mistakes are made, they are much easier to fix than they are in traditional organizations. Why? Automation. When a mistake in configuration is found, or a change or patch needs to be implemented, all that is generally required is a modification to a configuration management tool or script and within a few minutes any mistakes or problems are solved.

Automation is probably one of the biggest factors in Reeves’ findings regarding DevOps organizations. With automation, it is much easier to weave security into the DNA of what a company is doing, rather than having it as an afterthought.

Testing Ansible Galaxy Roles

With the push to move our roles to Ansible Galaxy as much as possible, we needed to come up with a good way to test the roles as we write them. Up until now, we would build and test them completely within Ansible against the specific system type we planned to run on. While this works OK for the focused roles we were writing, it doesn’t work very well for generalized roles that are expected to run on the many different Linux distributions we run at Blackbaud.

To solve this, we have come up with a Vagrant configuration that allows us to test against multiple OSes, both locally (via VirtualBox or VMware) and in the cloud (AWS). You can check out the code here. To get started, simply clone the project to your local machine.

git clone git@github.com:MarsDominion/vagrant-ansible-testing.git

The Vagrantfile in the master branch provides three test environments: aws-linux, centos7, and ubuntu. The aws-linux environment builds an Amazon Linux host in AWS, while the centos7 and ubuntu environments are vmware_desktop-based nodes pulled from Atlas. This gives me a way to test our roles against both cloud and local instances. If you don’t have VMware Fusion or Workstation, you can change the provider from vmware_desktop to virtualbox and they should work as well.

Before launching the instances, you need to download the Ansible roles you want to run. This is done with the ansible-galaxy command.

% ansible-galaxy install blackbaud.linux-hardening

And then update your playbook to include the roles:

- hosts: all
  become: true
  roles:
    - blackbaud.linux-hardening

Finally, set some environment variables so Vagrant can connect to your AWS environment:

export AWS_ACCESS_KEY_ID=KIAI3XQCPIPKSDJHSVQ
export AWS_SECRET_ACCESS_KEY=onX5HfdsIpasdH6+E+JJCgNxIfzJWY1btZgU4LfQ
export AWS_KEYPAIR_NAME=test_key
export MY_PRIVATE_AWS_SSH_KEY_PATH=$HOME/.ssh/test_key.pem

Now we are ready to test the roles:

vagrant up
# Brings up all three instances and tests

vagrant up <aws-linux|centos7|ubuntu>
# Brings up the specified instance and tests

It will launch each instance, run the Ansible playbook on each node, and show you the results. It jumps right into the next node when it completes the previous one, so keep an eye on the output to see the results. When you are done, you can simply destroy the nodes.

vagrant destroy -f

Ansible Role Style Guide

In my last post, I discussed how to get started with creating an Ansible Galaxy role. This post goes into more detail on what comprises a role and how we build Ansible roles at Blackbaud so that they can be reused and shared throughout the company.

By default, running the ansible-galaxy init command will create the following directory structure:

ansible-role-linux-hardening/
|- defaults/
    |- main.yml
|- handlers/
    |- main.yml
|- meta/
    |- main.yml
|- tasks/
    |- main.yml
|- tests/
    |- inventory
    |- test.yml
|- vars/
    |- main.yml
.travis.yml
README.md

Depending on the role being created, it is possible that some of these directories will not be needed; it is recommended that you remove any directories that are not being used. Also, a templates directory isn’t created by ansible-galaxy init, but you can add it if it is necessary.
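
If you do add a templates directory, tasks can render files from it with the template module. A minimal sketch, assuming a hypothetical sshd_config.j2 template:

```yaml
# Renders templates/sshd_config.j2 to the host.
# sshd_config.j2 is a hypothetical template, not part of the generated skeleton.
- name: Deploy SSH configuration
  template:
    src: sshd_config.j2
    dest: /etc/ssh/sshd_config
    owner: root
    group: root
    mode: "0600"
```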

defaults

The defaults directory contains the default values of any variables that are used throughout the rest of the role (you can learn more about variable precedence here). It should be reserved for variables that don’t often stray from the defined value, but that may in certain specific instances.
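
For example, a defaults/main.yml might look like the following; the variable names here are illustrative, not taken from the actual role:

```yaml
# defaults/main.yml - sensible fallbacks, easily overridden by the caller
ssh_port: 22
ssh_permit_root_login: "no"
```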

handlers

The handlers directory contains the process handlers that can be notified when changes occur. For example, you can have a handler that restarts SSH when changes to the configuration file are made:

- name: restart ssh
  service:
    name: "{{ ssh_service }}"
    state: restarted
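
A task then triggers this handler by notifying it by name; the handler only runs if the task reports a change. A sketch (the task itself is illustrative):

```yaml
# Any task that changes the SSH configuration notifies the handler above.
- name: Disable root login over SSH
  lineinfile:
    dest: /etc/ssh/sshd_config
    regexp: "^#?PermitRootLogin"
    line: "PermitRootLogin no"
  notify: restart ssh
```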

meta

The meta directory serves two purposes. The first is to define the Ansible Galaxy role information, including the minimum Ansible version needed, what platforms are supported, etc. The second is to define any dependencies that the role may have.

galaxy_info:
  author: Mark Honomichl
  description: Ansible for Hardening a Linux Host
  company: Blackbaud
  license: MIT
  min_ansible_version: 1.2
  platforms:
    - name: EL
      versions:
        - all
    - name: Amazon
      versions:
        - all
    - name: Ubuntu
      versions:
        - all
  galaxy_tags:
    - security
    - linux
dependencies: []

tasks

The tasks directory is where all the tasks of the role are defined. With larger roles, it is recommended that separate task files be created for similar tasks and then called from the main.yml file via the include directive.

- name: Update Ubuntu
  apt:
    upgrade: dist
  when: ansible_distribution == "Ubuntu"
  tags: update_os

- include: "hostname.yml"
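
The included hostname.yml would hold its own list of related tasks; for instance, a sketch like this (the task is hypothetical):

```yaml
# tasks/hostname.yml - a hypothetical included task file
- name: Set the hostname
  hostname:
    name: "{{ inventory_hostname }}"
```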

tests

The tests directory is included so that the role can be tested via Travis CI. No changes should need to be made to this directory.
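
The generated test.yml is a minimal playbook that simply applies the role; it looks roughly like this (reconstructed from a skeleton, so details may vary by Ansible version):

```yaml
# tests/test.yml - applies the role against localhost
- hosts: localhost
  remote_user: root
  roles:
    - ansible-role-linux-hardening
```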

vars

The vars directory is primarily used to define variables that differ across platforms or environments. For example, in Ubuntu the SSH service is called “ssh” while in EL it is called “sshd”. Rather than writing two handlers, we can write one as shown in the handlers section above, define ssh_service as a variable in files called Ubuntu.yml and CentOS.yml respectively, and then include the correct file in the tasks section with an include_vars call.

- name: Include OS Specific Variables
  include_vars: "{{ ansible_distribution }}.yml"
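
The matching vars files would then each define the service name, e.g.:

```yaml
# vars/Ubuntu.yml
ssh_service: ssh

# vars/CentOS.yml would instead contain:
# ssh_service: sshd
```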

.travis.yml

The .travis.yml file is generated by default to allow the role to run in Travis CI. By default, it spins up a container, installs ansible, and runs a syntax check against the code. If possible, it is recommended that you actually run the playbook by adding a second ansible-playbook command without the --syntax-check option.
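
With that change, the file might look like this sketch (the install steps and exact flags are assumptions; the generated file may differ):

```yaml
# Sketch of a .travis.yml that syntax-checks, then actually runs, the role
language: python
install:
  - pip install ansible
script:
  - ansible-playbook tests/test.yml -i tests/inventory --syntax-check
  - ansible-playbook tests/test.yml -i tests/inventory --connection=local
```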

README.md

Once the role is completed, update the README.md file with the information requested. Make sure this document clearly defines how to use the role. Update the Author information with the following:

Blackbaud
Created in 2016 by [Blackbaud](http://blackbaud.com/)

Getting Started with Ansible Galaxy

I started using Ansible in December when I joined Blackbaud, and while I do feel like the team is doing some really innovative stuff with our Ansible roles, most of the stuff we do is pretty straightforward. After spending the last six months building up a monolithic repository, we are starting to examine how we can break the roles out into Ansible Galaxy roles that can better be shared across the company. I figured since I was documenting how to create a Galaxy compatible role for the company, I would go ahead and share those instructions here as well.

Create a GitHub Repository

The first step is to create an empty git repository on GitHub. This can be done by logging into your account and clicking the ‘New repository’ button. We prefix all of our roles at Blackbaud with ‘ansible-role-’ so that we can easily distinguish them when looking at the hundreds of repos that we have. For example, our linux-hardening role is called ‘ansible-role-linux-hardening’.

We create the repo without any files (.gitignore, README.md, or LICENSE) so that we can manage all of that properly later. We generally start all of our roles as private so that we have a chance to build them out and test them before we make them public.

Create the Local Repository

Once our empty repo has been created at GitHub, the next step is to create the Ansible Galaxy skeleton directory structure on a local machine. This is done with the ‘ansible-galaxy’ command.

ansible-galaxy init ansible-role-<ROLE-NAME>

Where <ROLE-NAME> is the name of the role (e.g. linux-hardening). Once the files are created, you need to initialize a git repository, add the files, and push them up to GitHub.

cd ansible-role-assume-role
git init
git add .
git commit -m "Commit Skeleton Role"
git remote add origin https://github.com/blackbaud/ansible-role-assume-role.git
git push -u origin master

Now you can begin editing your role.

New Blog Hosting

I’ve been having some trouble with my blog lately and have decided to move it to a new provider.  I haven’t yet decided if I want to move my old content back in or just start over.  Since my posting has been pretty sporadic over the last few years, I’m really leaning towards just chucking it all and starting over.

Sharing Ansible Roles

When I started back in December, Blackbaud did not have a single line of code written to help build, deploy, or manage our environments. The Platform Engineering team has come a long way in these last six months. As of this morning, the team is just shy of 50K lines of code, consisting primarily of Ansible and Python.

I’m really proud of the work the team has been doing around Ansible. Over the next few weeks I am hoping to share some of those innovations, not just in this blog, but also via GitHub. However, the one place where I feel we have been lacking is in how we have built our Ansible directory out. We have built a monolithic repository that contains everything we use.

Some people (even here at Blackbaud) like the thought of a mono-repo: somewhere everything exists and dependencies are easy to satisfy. In this instance, however, having a mono-repo makes it very difficult for us to share our Ansible code with other teams in the organization. It is an all-or-nothing proposition. If you want to use our linux-hardening role, you have to download everything and either move it to a new directory or figure out how to add your stuff to our repo.

This is antithetical to everything we are hoping to accomplish. We want to share individual roles across the organization in an organized way that allows our engineering teams (and operational teams) to use what they need without sifting through tons of roles they will never need.

To make it easier to share roles, we will begin moving our roles into individual repositories and utilizing Ansible Galaxy to share individual roles between teams. By building self-contained units, we will not only share roles between teams more easily, but also be able to share some of our roles with the community.
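
Once roles live in individual repositories, a team can pull in exactly what it needs with a requirements.yml file; the repo URL below is illustrative:

```yaml
# requirements.yml - each team lists only the roles it needs
- src: git+https://github.com/blackbaud/ansible-role-linux-hardening.git
  version: master
  name: blackbaud.linux-hardening
```

Running ansible-galaxy install -r requirements.yml then installs just those roles.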
Over the next few weeks, I’ll be sharing some details of how we are doing that.