Terraforming workloads with Docker and Digital Ocean

Terraform is a great tool for automating the creation of infrastructure, and it supports IaaS, PaaS, and SaaS products.

Docker is a great tool for creating containers that make apps portable.

Digital Ocean is a great IaaS with a great API and fast network speeds.

Problem

I’m lazy and my internet is slow. A Cloud Foundry release is now 3 GB+, and I build new releases and upload them to S3 so the community has a release they can download and use without building and downloading all the blobs themselves. On my connection, this process takes two hours…

Docker was great for setting up a portable environment that I could use both on my laptop and on Digital Ocean. Digital Ocean also has much faster upload speeds than my home connection. Terraform lets me easily set up the infrastructure, run my workload, and then delete the VM on Digital Ocean.

This approach generalizes to any workload you can package as a Docker image: create the image, run it to do the work, and tear everything down when it finishes.

Let’s get started!

Terraform is very easy to use.

cf-upload.tf

provider "digitalocean" {
    token = "${var.do_token}"
}
resource "digitalocean_droplet" "docker" {
    image = "docker"
    name = "docker"
    region = "nyc3"
    size = "8gb"
    ssh_keys = ["${var.ssh_key_id}"]
    connection {
        user = "root"
        key_file = "${var.key_path}"
    }
    provisioner "remote-exec" {
        inline = [
        "docker run lnguyen/cf-share-release /workspace/create_release.sh ${var.cf_version} ${var.aws_access_key} ${var.aws_secret_key}",
        ]
    }
}

variables.tf

variable "aws_access_key" {}
variable "aws_secret_key" {}
variable "do_token" {}
variable "key_path" {}
variable "cf_version" {}
variable "ssh_key_id" {}

So what’s going on here?

provider "digitalocean" {
    token = "${var.do_token}"
}

This configures the Digital Ocean provider and authenticates with an API token.

resource "digitalocean_droplet" "docker" {
    image = "docker"
    name = "docker"
    region = "nyc2"
    size = "2gb"
    ssh_keys = ["${var.ssh_key_id}"]
    connection {
        user = "root"
        key_file = "${var.key_path}"
    }
    provisioner "remote-exec" {
        inline = [
        "docker run lnguyen/cf-share-release /workspace/create_release.sh ${var.cf_version} ${var.aws_access_key} ${var.aws_secret_key}",
        ]
    }
}

This creates a droplet on Digital Ocean from the Docker image and then runs the Docker container via the remote-exec provisioner.
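
If you want to watch the build while it runs, one optional addition (not part of the config above) is an output for the droplet’s IP so you can SSH in and follow the docker run; droplet_ip here is just a name I picked:

output "droplet_ip" {
    value = "${digitalocean_droplet.docker.ipv4_address}"
}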

And that’s all you need to run a workload!

I’ve even made a simple Makefile that will create the VM, run the workload, and delete the VM:

all: plan apply destroy
plan:
	terraform plan -var-file terraform.tfvars -out terraform.tfplan
apply:
	terraform apply -var-file terraform.tfvars
destroy:
	terraform plan -destroy -var-file terraform.tfvars -out terraform.tfplan
	terraform apply terraform.tfplan
clean:
	rm terraform.tfplan
	rm terraform.tfstate

So the whole workflow can be automated with plain make. This can be added to a CI server to automate it further.

Repo: https://github.com/longnguyen11288/terraform-cf-upload

Conclusion

This is a combination of three great tools that lets us as developers automate workloads that need more CPU power or network bandwidth than we have locally. Hopefully this helps you automate a workload of your own!
