Quake Speedrun Level 1: Kops

Introduction:

A follow-along tutorial to build your own Platform. It was inspired by the gaming speedrun community.

A speedrun is a play-through, or a recording thereof, of a whole video game or a selected part of it (such as a single level), performed with the intention of completing it as fast as possible. While all speedruns aim for quick completion, some speedruns are characterized by additional goals or limitations that players subject themselves to…

So our speedrun will deploy a Platform based on Kubernetes with a CI/CD Solution using these major Components:

[ "Argo", "Kops", "Cloud Foundry": { "Quarks", "Eirini" } ]

To avoid confusion, this is Part/Level I. After getting an Introduction into the utilized Stack, this first part will let you deploy Terraform based Infrastructure and a Kops based Kubernetes Cluster. Watch out for the continuation of this series to learn about Argo in Part II & III, and lastly Cloud Foundry, Quarks & Eirini in Part IV.

Going out to customers in the wild, you learn that just "Kubernetes" is not what people want. What people want is a Kubernetes that is well integrated and provides extra functionality. Being able to create LoadBalancers, DNS, and Data Service Instances is the first half of the story; the second half is automation and management. Functionality and Automation are integral parts of a well-designed developer experience.

So, in reality we are dealing with more than just Kubernetes. The result of these requirements is what is usually called a Platform. This series will take you through all steps of building and automating a platform. It will also give you an introduction into some widely used tools and approaches to automate their usage, as well as how to tie all of it together.

Every Level Post is accompanied by an "In Depth Recap" which will elaborate on what we did. Networks, Firewalls & Certs on a particular IaaS can be created via native tools (UI, CLIs, Markup like CloudFormation/OpenStack Heat) or third-party abstractions (e.g. Terraform).

Throughout this series we will use AWS and Terraform to bootstrap our base Infrastructure and Cluster. Continuing into the series, we automate the deployment and operations of our Platform Components via Argo and Helm. Lastly, we will deploy Cloud Foundry on top of Kubernetes via Quarks & Eirini to investigate the additional abstractions/benefits provided by Cloud Foundry compared to pure Kubernetes.

As the scripts are still undergoing change, I recommend pulling often :).

System Requirements:

We assume git and direnv are present on your system, because why would you not have those already. To start our journey, we need to be aware of a few projects and what they will be used for.

Quake-installer:

Your little helper. It contains Scripts, a Terraform Project, Kops Manifest Templates, and Helm Configs for the required Platform Components. Quake comes pre-wired and with a little bit of GitOps attitude.

Kops:

“kops helps you create, destroy, upgrade and maintain production-grade, highly available, Kubernetes clusters from the command line. AWS (Amazon Web Services) is currently officially supported, with GCE and OpenStack in beta support, and VMware vSphere in alpha, and other platforms planned.”

Kops is a little helper for creating VM based (as opposed to managed) Kubernetes Clusters. It's able to orchestrate Rolling-Updates, it contains config options for various Add-ons (CNIs, External-DNS, …), and it ships a templating engine. It can also output the config files to create Terraform based clusters if you wish.
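Outside of this tutorial's scripts, plain Kops usage looks roughly like this (a hedged sketch; the cluster name, state bucket, and zones are placeholders, not values used later in this post):

#create a cluster spec in the Kops state store
kops create cluster --name=example.k8s.local --state=s3://example-state --zones=us-east-1a
#or render the same cluster as a Terraform project instead of applying it directly
kops create cluster --name=example.k8s.local --state=s3://example-state --zones=us-east-1a --target=terraform --out=./tf-cluster
#roll the nodes after a config change
kops rolling-update cluster --name=example.k8s.local --state=s3://example-state --yes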

AWS Account Access:

You will also need access to an AWS Account with appropriate Roles/Permissions to create Route53, EC2, AutoScaling, S3, IAM, and VPC Resources. For the sake of the tutorial, it's probably easiest to run this with admin access to an AWS Account. Diving into locking down AWS Policies deserves its own blog post.
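A quick way to sanity-check that your credentials work and point at the account you expect (assuming the AWS CLI is installed):

#prints the account ID, user ID, and ARN of the credentials in use
aws sts get-caller-identity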

Installation:

We're going to utilize the quake-installer Repo to bootstrap our AWS Account. This will take care of most things like installing CLIs, running Terraform, etc. You can use it as a toolbox and building base for your own Platform, and as an example library for automation.

It will create the required Terraform resources for Kops to take over. Once that is done, we will create our Cluster Manifest for Kops, and finally deploy the Cluster. All generated manifests/yamls/state files will be available in the state folder of your cloned repo. This way you can experiment with the base tools that we use outside of the tutorial.

Deploy a base Environment:

You're about to edit two config files and run three commands. We will bring up a Kubernetes Cluster with a 3-Master HA and autoscalable Worker (1 Node on start) setup. Additionally, Calico is installed as the CNI. Your API will be exposed behind a LoadBalancer with proper TLS Certs. You'll finish with a working, albeit empty, Kubernetes Cluster as your base for the upcoming Posts.

git clone https://github.com/nouseforaname/quake-installer
cd quake-installer
#edit .awsrc and provide your CREDS/REGION
#edit configs/settings.rc and provide your QUAKE_TLD and QUAKE_CLUSTER_NAME
#run
direnv allow

QUAKE_TLD should be a domain your AWS Account has access to. This is mostly related to DNS Setup and Certificate Validation. I used 'kkiess.starkandwayne.com' and then pointed our CloudFlare DNS at the hosted zone that will get created later on (you do not need to take care of this yet).
QUAKE_CLUSTER_NAME is an arbitrary, hostname-valid string. I called mine "cf-kube."
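For illustration, the two files could end up looking roughly like this. This is a hedged sketch: the variable names in .awsrc are assumed to be the standard AWS CLI environment variables, and only QUAKE_TLD and QUAKE_CLUSTER_NAME are taken from this tutorial; check the comments inside the repo's files for the authoritative names.

#.awsrc -- values are placeholders
export AWS_ACCESS_KEY_ID=AKIA...
export AWS_SECRET_ACCESS_KEY=<your-secret-key>
export AWS_DEFAULT_REGION=eu-central-1

#configs/settings.rc
export QUAKE_TLD=kkiess.starkandwayne.com
export QUAKE_CLUSTER_NAME=cf-kube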

CLI Installation:

The next step is to install all required CLIs/Binaries.

If you do not have trust issues, you can use the quake installer to simplify downloading the required binaries and to avoid version issues.

Under the hood it runs the subscript cli-install. It's as easy as running:

quake --install

You'll need to have these CLIs (a quick sanity check follows below the list):
[ "Kops", "YQ", "JQ", "Terraform", "Git", "ArgoCD", "Argo", "Helm" ]
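A quick loop to confirm everything landed on your PATH (binary names are assumed to be the lowercase defaults):

for cli in kops yq jq terraform git argocd argo helm; do
  command -v "$cli" >/dev/null || echo "missing: $cli"
done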

Base Infrastructure:

Now we need to deploy the TF Stack. Under the hood, this runs the subscript bootstrap:

quake --bootstrap

It will also output a config file under state/kops/ that contains the relevant Terraform outputs in YAML format:

ls state/kops/vars-<CLUSTER-NAME>.<TLD>.yml
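You can peek at it right away; the values named in the comment are hypothetical examples of typical Terraform outputs, not guaranteed keys:

cat state/kops/vars-$QUAKE_CLUSTER_NAME.$QUAKE_TLD.yml
#expect values like the VPC ID, subnet IDs, and the Route53 hosted zone ID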

This is where I unfortunately cannot guide you. You will need to delegate the DNS for your chosen QUAKE_TLD to the AWS Hosted Zone that just got created. The AWS Docs cover this; start from "Updating Your DNS Service with Name Server Records for the Subdomain," the rest was already done by Quake.
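To find the name servers you need to delegate to, you can query the freshly created hosted zone (a sketch assuming the AWS CLI is installed and $QUAKE_TLD is set):

#look up the hosted zone for your TLD
ZONE_ID=$(aws route53 list-hosted-zones-by-name --dns-name "$QUAKE_TLD" --query 'HostedZones[0].Id' --output text)
#print the NS records to configure at your parent DNS provider (CloudFlare in my case)
aws route53 get-hosted-zone --id "$ZONE_ID" --query 'DelegationSet.NameServers'

Each of those name servers then goes into your parent DNS as an NS record for the subdomain.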

Cluster Templating & Deployment:

With our fresh TF Resources on AWS, we're ready to deploy our actual Kubernetes Cluster. The deploy script includes interpolating the base cluster template with the defaults and our vars.yml.

installers/quake --deploy
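If you want to reproduce the interpolation step by hand, Kops ships its own templating command. A sketch (the template path is illustrative; the actual file lives in the repo):

kops toolbox template \
  --template <cluster-template>.yml \
  --values state/kops/vars-$QUAKE_CLUSTER_NAME.$QUAKE_TLD.yml \
  --output cluster.yml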

With that, we should have everything in place for the next blog post, which will be released in a few days. You can already start using your cluster by running:

#Double Check that your $KOPS_STATE_STORE variable was set properly by the Scripts
echo $KOPS_STATE_STORE
#it should output s3://<TLD>-state
#if it is not set, running "direnv allow" again or setting it manually should fix it
kops get clusters
#this will output the deployed cluster, you can copy the name from there or run
kops export kubecfg $QUAKE_CLUSTER_NAME.$QUAKE_TLD

Your kubectl should now be set up to access the cluster, but most probably it's still booting. Run the command below and wait until all nodes are shown and marked as "Ready."

watch kubectl get nodes
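Alternatively, Kops can validate the cluster for you, polling until nodes and system pods are healthy (this relies on the cluster name and state store already being set, which direnv took care of):

#polls until the cluster passes validation or the timeout hits
kops validate cluster --wait 10m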

And that’s it for today. Stay tuned for the next Post about deploying Argo on our Cluster.

You can continue reading in the official Kops docs, or come back next week to continue with the next part, where you'll deploy Argo.

Thanks for reading and let me know your thoughts in the comments.
If you find an issue, please create one on GitHub.
