HashiCorp - Terraform and Packer

Written on April 23, 2018

Building clouds

I’ve been using cloud-specific tools like CloudFormation and ARM templates for a number of years, so when I found out about Terraform from HashiCorp I was keen to check it out.

I was recently involved in a project where Terraform was already in use and we wanted to stand up some resources on Azure. This was a great opportunity to take a more in-depth look at a larger-scale deployment.

On the face of it, Terraform has plenty of resources to make it easy to get started: there are providers for all the major cloud vendors, and the team is also working with projects like Kubernetes to package deployments. So standing up something in isolation was simple.
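To give a sense of how little is needed, a minimal sketch of standing up a single Azure resource might look like this (the resource and location names are placeholders, and provider credentials are assumed to come from the environment):

```hcl
# Configure the Azure provider; credentials are picked up from
# environment variables such as ARM_CLIENT_ID and ARM_CLIENT_SECRET.
provider "azurerm" {}

# A single resource standing in isolation.
resource "azurerm_resource_group" "example" {
  name     = "example-rg"
  location = "westeurope"
}
```

Running `terraform init` followed by `terraform apply` is enough to create it.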

But what if you want to do this for a larger deployment? Having everything in one single template doesn’t feel like a great idea, but how do you stitch these things together and handle dependencies? Modules are a great way to achieve this: I can define a standard template once and reuse it throughout my environment deployments.
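As a sketch of what such a reusable module could look like (all names and address ranges here are illustrative, not the actual project layout), a network module might live in its own directory and expose an output for callers:

```hcl
# ./modules/network/main.tf — a reusable network template.
variable "environment" {}

resource "azurerm_virtual_network" "vnet" {
  name                = "${var.environment}-vnet"
  address_space       = ["10.0.0.0/16"]
  location            = "westeurope"
  resource_group_name = "${var.environment}-rg"
}

# Expose the VNET name so other configurations can reference it.
output "vnet_name" {
  value = "${azurerm_virtual_network.vnet.name}"
}
```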

So I now had a simple deployment that built the Azure network infrastructure as a module, which I could then reuse for the Test, Staging and Production environments. I definitely think this approach has its merits when compared to the ARM equivalent of nested templates.
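Reusing the template per environment then comes down to invoking the same module with different inputs; a sketch (assuming a local module path and an `environment` variable):

```hcl
# The same network template, instantiated for two environments.
module "network_test" {
  source      = "./modules/network"
  environment = "test"
}

module "network_staging" {
  source      = "./modules/network"
  environment = "staging"
}
```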

I then added my Azure Container Service resource and was able to deploy the K8s cluster onto the custom VNET created by the previous Terraform module; Terraform took care of the provisioning order because it figured out the dependency.
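Terraform infers this ordering from references: any resource that uses a module output implicitly depends on that module. A sketch of the idea (assuming the network module exposes a `vnet_name` output; the subnet details are placeholders):

```hcl
module "network" {
  source      = "./modules/network"
  environment = "test"
}

# Referencing module.network.vnet_name makes Terraform build the
# network module before this subnet — no explicit depends_on needed.
resource "azurerm_subnet" "k8s" {
  name                 = "k8s-subnet"
  resource_group_name  = "test-rg"
  virtual_network_name = "${module.network.vnet_name}"
  address_prefix       = "10.0.1.0/24"
}
```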

State of the nation

So I had a couple of modules and a repeatable deployment, but as I got deeper into this I realised that the state engine was going to be an interesting concept, and something I needed to get right!

I found this out the hard way! I set up the Azure backend state option, which allows you to store the state file in Azure Blob storage. Then I messed around with the blob naming scheme because I wasn’t happy with it, which corrupted my state, and I ended up having to import everything again from the running Azure resources, which was a pain.
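For reference, a minimal sketch of the Azure backend configuration (storage account, container and key names are placeholders; the storage access key is assumed to come from the `ARM_ACCESS_KEY` environment variable):

```hcl
terraform {
  backend "azurerm" {
    storage_account_name = "mytfstate"
    container_name       = "tfstate"
    # The "key" is the blob name — this is the naming scheme that
    # should be settled on up front, as changing it later moves
    # (and can orphan) your state.
    key                  = "vnet.terraform.tfstate"
  }
}
```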

Digging in on state, I found the general recommendation was to use a separate Terraform “repo” for each “service” being created, so I had a “VNET” service and an “ACS” service, and would soon scale out to “App Frontend” and “App Backend” services, all requiring their own state and individual configurations.

This was fine until I found an open GitHub issue about the state file storing sensitive information. It was opened a few years ago, and it is a little concerning that a good solution had not been found yet (as of Apr 18). A few people had suggested using private source control and some workarounds for secrets, but the general guidance for Terraform is not to use source control for state, which is why the state backend feature exists.

This was all good though; learning about the inner workings and discovering these things was what this was all about. I managed to get this working with ACS in a semi-production view of how a good Terraform project should look; my repo is here.


As part of this project, we also needed to deploy some dedicated VMs to run a few different services like Mongo and Elastic. Wouldn’t it be great if we could have pre-baked machines deployed from a master image in the cloud? Enter Packer, another HashiCorp open-source project, which does exactly that. The project already had some images built with Packer for a different cloud, and it was a simple task to change the builder to target Azure. Packer then does the hard work of creating a new VM, running the build commands and creating an Azure VM Image which can then be used in the VM deployment; there is a great walk-through here.
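A sketch of what targeting Azure looks like in a Packer template (image names, resource group and the provisioner command are illustrative; credentials are assumed to come from `ARM_*` environment variables):

```json
{
  "builders": [{
    "type": "azure-arm",
    "client_id": "{{env `ARM_CLIENT_ID`}}",
    "client_secret": "{{env `ARM_CLIENT_SECRET`}}",
    "subscription_id": "{{env `ARM_SUBSCRIPTION_ID`}}",

    "managed_image_name": "mongo-base",
    "managed_image_resource_group_name": "images-rg",

    "os_type": "Linux",
    "image_publisher": "Canonical",
    "image_offer": "UbuntuServer",
    "image_sku": "16.04-LTS",

    "location": "westeurope",
    "vm_size": "Standard_DS2_v2"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": ["sudo apt-get update"]
  }]
}
```

Swapping clouds mostly means swapping the builder block; the provisioners that run the build commands can stay the same.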


It was great to get some practical use of these tools, and I’m keen to explore HashiCorp’s other offerings, Vault and Consul.
