Ruben Koster, Author at Stark & Wayne
https://www.starkandwayne.com/blog/author/rubenkoster/

PXE boot CoreOS using Digital Rebar Provision on Bare Metal
https://www.starkandwayne.com/blog/rackn-hd-bosh/
Mon, 20 Jan 2020 14:45:00 +0000

In this blog post I will walk you through the steps needed to PXE boot a bare metal machine into a live CoreOS image, using Digital Rebar Provision (DRP), an open source project developed by RackN. The goal of this post is to demonstrate how the immutable infrastructure pattern can be implemented using some of the pre-built tasks available from the DRP community catalog.

The method outlined in this blog post can be used in a home lab context, but should work equally well in an edge computing environment. All you need to get started is a target node (physical or VM) with Intel x86-64 architecture support that is able to PXE or iPXE boot. Additionally, you will need a place to install DRP; for the purposes of this tutorial we will be targeting Docker.

Install Digital Rebar Provision

The one-liner below will install DRP running in a single Docker container (named drp) with a single volume (named drp-data) for storing its state. DRP has a built-in DHCP server which it uses to send PXE boot instructions. Because of this, the Docker container will need to connect to the host network, which should be the same network as that of the target node.

curl -kfsSL https://get.rebar.digital/stable | bash -s -- --container install

Install the drpcli

The drpcli binary can be downloaded from the built-in file server, which is hosted on port 8091. Assuming the commands below are executed on the Docker host, we should be able to reach DRP through localhost.

curl -s -o /usr/local/bin/drpcli \
  http://localhost:8091/files/drpcli.amd64.linux
chmod +x /usr/local/bin/drpcli

Install content packs and ISOs

All configuration in DRP can be done through the UI via the RackN Portal, but also through the CLI. Configuration can also be distributed in YAML format in the form of Content Packs. There are a bunch of handy pre-built tasks and workflows available in the Digital Rebar Community Catalog. Let's use the CLI to install the coreos content pack and its dependencies (like ISOs).

drpcli catalog item install task-library
drpcli catalog item install drp-community-content
drpcli catalog item install coreos
drpcli bootenvs uploadiso discovery
drpcli bootenvs list \
  | jq -r '.[].Name' \
  | grep -e 'coreos-.*-live' \
  | xargs -L1 drpcli bootenvs uploadiso
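To see what the grep filter above selects, here is a quick illustration with some hypothetical bootenv names (the version number is made up; only names matching coreos-*-live are passed on to uploadiso):

```shell
# Hypothetical bootenv names piped through the same filter as above
printf '%s\n' coreos-2247.5.0-live coreos-2247.5.0-install sledgehammer discovery \
  | grep -e 'coreos-.*-live'
# -> coreos-2247.5.0-live
```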

Configure a subnet

To be able to discover new machines on your network, DRP needs to know on which subnet to listen for DHCP requests. Since you probably already have a DHCP server in your network responsible for handing out IPs, we will configure DRP in proxy mode. This means it will only take responsibility for sending PXE boot related information, and will not configure the network itself.

DRP should already have discovered the subnet on interface eth0 via DHCP, so let's use that information to create a subnet with proxy mode enabled. If you want DRP to also be in charge of handing out IP addresses, please refer to the documentation.

drpcli interfaces show eth0 \
  | jq '.ActiveAddress | {
    Name: "eth0",
    Proxy: true,
    Strategy: "MAC",
    Enabled: true,
    subnet: .}' \
  | drpcli subnets create -
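For reference, the jq filter above wraps the discovered ActiveAddress into a subnet create payload shaped roughly like this (the address is an example, not taken from a real interface):

```json
{
  "Name": "eth0",
  "Proxy": true,
  "Strategy": "MAC",
  "Enabled": true,
  "subnet": "192.168.1.10/24"
}
```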

Discover a Machine

Before we can provision a machine, it first needs to be known to DRP. This can be achieved by configuring unknown machines to boot into the sledgehammer image and perform a discovery workflow. The main purpose of this workflow is running gohai (a tool for collecting system information) and updating the machine params with the results. This information can later be used to, for example, search for a machine with a certain type of storage (NVMe vs SSD vs HDD).

drpcli prefs set defaultBootEnv sledgehammer
drpcli prefs set defaultWorkflow discover-base
drpcli prefs set unknownBootEnv discovery

With the above preferences set, go ahead and PXE boot your machine. This can usually be configured in your BIOS. Alternatively, this repo contains some helper scripts to create PXE-booting virtual machines using VirtualBox. The machine should show up within a minute (it could be slower depending on your network and disk performance).

Provision a Machine

Machines can have properties set on an individual level, as was done for storing the gohai results; however, you can also set properties on a group of machines through the use of a profile. All machines inherit the properties set on the global profile. We will use this profile to set a default public SSH key.

jq --arg user "$(whoami)" \
   --arg key "$(curl -s https://github.com/$(whoami).keys)" \
   -n '{"access-keys": {"\($user)": $key}}' \
   | drpcli profiles params global -
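For a hypothetical user alice, the jq invocation above produces a params document along these lines (key material shortened):

```json
{
  "access-keys": {
    "alice": "ssh-ed25519 AAAA... alice"
  }
}
```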

Now we are all set; let's create a workflow in which the machine goes directly into the coreos-live stage. This stage boots the machine using a CoreOS live image (loaded into memory). It will also install the drpcli and run it in agent mode (waiting for further lifecycle events); additionally, it will install the public SSH key. Once the machine is ready, you can SSH into it using the core user.

jq -n '{Name: "coreos-live", Stages: ["coreos-live"]}' \
  | drpcli workflows create -
drpcli machines list \
  | jq -r '.[].Uuid' \
  | xargs -L1 -I{} drpcli machines workflow {} coreos-live
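The xargs fan-out above issues one workflow command per machine UUID; with echo standing in for drpcli and two made-up UUIDs, it expands like this:

```shell
# echo stands in for drpcli to show the generated commands
printf '%s\n' 11111111-aaaa 22222222-bbbb \
  | xargs -L1 -I{} echo drpcli machines workflow {} coreos-live
# -> drpcli machines workflow 11111111-aaaa coreos-live
# -> drpcli machines workflow 22222222-bbbb coreos-live
```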

What's next

Having just a plain CoreOS machine is not that useful in itself; however, CoreOS can be used as a building block for building clusters. DRP exposes some powerful primitives for orchestrating common cluster patterns, through the use of shared profiles which are updated by the machines themselves. This simple primitive can be used to implement master election, rolling updates, and the distribution of bootstrap tokens.

To further explore the power of Digital Rebar Provision, take a look at one of the following projects, both of which use the common cluster-add stage to elect a master node and set up, among other things, etcd.

The post PXE boot CoreOS using Digital Rebar Provision on Bare Metal appeared first on Stark & Wayne.

Guide to deploying Genesis kits to BOSH/CredHub
https://www.starkandwayne.com/blog/deploy-minio-with-genesis-on-your-laptop/
Thu, 12 Dec 2019 09:54:11 +0000

Genesis is an awesome deployment framework for deploying systems with BOSH to any infrastructure cloud. It has a whole catalog of open source, production-ready kits which make it super easy to deploy, scale, and upgrade systems such as Cloud Foundry Application Runtime (PaaS), Vault (secrets), Concourse CI, SHIELD (backup/restore), and Minio (object store).

Genesis is built to support any deployment pipeline dev -> {your stage here} -> prod across any number of infrastructures and their regions.

Genesis Kits require a special-purpose BOSH/Vault environment. This article introduces how to deploy Genesis Kits to a BOSH/Credhub environment, thanks to a Vault <--> Credhub proxy, which adds Vault API compatibility to Credhub.

We can now announce experimental Genesis compatibility in BUCC, a way to quickly deploy a BOSH/UAA/Credhub/Concourse environment. With BUCC v0.8.0 you can now run a production BUCC or --lite BUCC and begin deploying our Genesis Kits to either any BOSH infrastructure or your local laptop.

Let's show how easy it is to deploy Minio using the minio-genesis-kit starting with BUCC. Thanks to the Concourse pipeline provided by Genesis, our Minio system will forever stay upgraded.

Prerequisites

To get started, make sure to have the following tools installed:

On a linux distro with apt support the Stark & Wayne apt repository can be used:

apt-get update && apt install gnupg wget -y
wget -q -O - https://raw.githubusercontent.com/starkandwayne/homebrew-cf/master/public.key | apt-key add -
echo "deb http://apt.starkandwayne.com stable main" | tee /etc/apt/sources.list.d/starkandwayne.list
apt-get update
sudo apt install spruce safe bosh-cli genesis curl hub virtualbox jq

Create a BUCC VM

Let's clone the BUCC repo and use the BUCC CLI to create our VM.

git clone https://github.com/starkandwayne/bucc ~/workspace/bucc
~/workspace/bucc/bin/bucc up --cpi virtualbox --lite
source <(~/workspace/bucc/bin/bucc env)

Create a Genesis Deployment

When generating a deployment file, Genesis will look at your locally configured Safe and BOSH CLI targets to figure out where to store secrets and where to get the BOSH cloud-config from. Let's make sure these targets are configured using the bucc CLI authentication helpers.

bucc bosh # performs a bosh login with the credentials generated by bucc up
bucc safe # installs the safe cli and targets the vault-credhub-proxy

We will also need a proper cloud and runtime config for BOSH. The ones below have been tested with BUCC lite and the minio-genesis-kit. If you are not using the bosh_warden_cpi (this is what makes BUCC lite) or are using a different kit, you might need to make changes.

read -r -d '' cloud_config <<'EOF'
azs: [{ name: z1 }, {name: z2}, {name: z3}]
compilation:
  az: z1
  network: minio
  reuse_compilation_vms: true
  vm_type: default
  workers: 5
disk_types: [{ disk_size: 10240, name: minio }]
networks:
- name: minio
  subnets:
  - azs: [z1, z2, z3]
    dns: [8.8.8.8]
    gateway: 10.244.0.1
    range: 10.244.0.0/24
    reserved: [10.244.0.129 - 10.244.0.254]
    static: []
  type: manual
vm_types:
 - name: default
EOF
bosh -e bucc update-cloud-config <(echo -e "${cloud_config}")
bosh -e bucc update-runtime-config <(echo "{}")

Now it's time to let Genesis do its magic and create our deployments repo and deployment file. We are targeting the BOSH environment named bucc and will store secrets generated by Genesis in CredHub via a vault proxy.

genesis init --kit minio --cwd ~/workspace
genesis new bucc \
  --cwd ~/workspace/minio-deployments \
  --environment bucc

At the prompts, choose the following answers:

  • Select choice > Please have Genesis create a self-signed certificate for Minio
  • External Domain or IP: > 10.244.0.134
  • [y|n] > n

We could now run genesis deploy 'bucc' and be done with it, but let's take it a step further and use Concourse to automate our deploy instead.

Create the Concourse Pipeline

Genesis can generate a pipeline for us, but to do so it needs some help in the form of a ci.yml file, the contents of which are documented here. However, since we are using BUCC, almost all the secrets we need (except GitHub) are already available in CredHub (use bucc credhub && credhub find to see for yourself).

We can consume these via the Concourse CredHub credential manager. Now use the snippets below to generate the config file, use BUCC to set the fly target, and instruct Genesis to generate a pipeline config.

cd ~/workspace/minio-deployments
cat << EOF > ci.yml
pipeline:
  name: minio-deployments
  git:
    owner: ((github.owner))
    repo:  ((github.repo))
    private_key: ((github.private))
  email:
  vault:
    url:    ((vault_url))
    secret: ((vault_secret))
    role: none
    verify: no
  boshes:
    bucc:
      url:      ((bosh_environment))
      ca_cert:  ((bosh_ca_cert))
      username: ((bosh_client))
      password: ((bosh_client_secret))
      stemcells: [ default ]
  layouts:
    default: |+
      auto *bucc
      bucc
EOF
bucc fly
genesis repipe -t bucc
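A note on how those ((…)) vars get resolved: with the CredHub credential manager, Concourse looks each var up under a pipeline-scoped CredHub path first, then a team-scoped one. The mapping below is illustrative (it assumes team main and the standard /concourse prefix); the safe set commands later in this post write to exactly such a path:

```
((github.owner))  ->  /concourse/main/minio-deployments/github   (field: owner)
((vault_url))     ->  /concourse/main/minio-deployments/vault_url
                      falling back to /concourse/main/vault_url
```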

All that's left now is making sure Concourse can access our deployment file. For this we will be using GitHub.

Create Minio-Deployments GitHub Repo

To simplify the process of creating a GitHub repository we are using the hub CLI tool. The created repo can be private or public, since all secrets will be stored inside CredHub. To create a public repo instead of a private one, remove the --private flag from the snippet below.

cd ~/workspace/minio-deployments
git config hub.protocol https
hub create --private
git add . && git commit -m "initial minio bucc deployment"
git push -u origin master

With our repo created, it's time to add a deployment key. This SSH key will be used by Concourse to clone and push changes to the repo. We will use the Safe CLI to generate an SSH key pair in CredHub. Copy the public key and enter it on the GitHub page that's opened when the hub browse command from the snippet below is executed. Make sure to check Allow write access so Genesis can update the repo after the deploy.

safe ssh /concourse/main/minio-deployments/github
safe get /concourse/main/minio-deployments/github:public
hub browse -- settings/keys

The last thing is making sure our Concourse pipeline knows where to find the repo. For this it needs to know the owner and the repo name, which can be extracted from your git config and stored in CredHub with the snippet below.

cd ~/workspace/minio-deployments
owner=$(git remote -v | head -n1 | cut -d/ -f4)
repo=$(git remote -v | head -n1 | cut -d/ -f5 | cut -d. -f1)
echo "Repo: ${repo} Owner: ${owner}"
safe set /concourse/main/minio-deployments/github owner=${owner}
safe set /concourse/main/minio-deployments/github repo=${repo}
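The cut parsing above can be sanity-checked against a sample git remote line (the owner and repo here are made up):

```shell
# A hypothetical "git remote -v" line, parsed the same way as above
line='origin  https://github.com/example-user/minio-deployments.git (fetch)'
echo "$line" | cut -d/ -f4                  # -> example-user
echo "$line" | cut -d/ -f5 | cut -d. -f1    # -> minio-deployments
```

Note that this parsing assumes the HTTPS remote form, which is one reason the repo was created with git config hub.protocol https.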

At this point the Concourse pipeline does not upload or update stemcells, so this is still a manual step. Instruct the BOSH director to download the latest stemcell from bosh.io:

bosh -e bucc upload-stemcell https://bosh.io/d/stemcells/bosh-warden-boshlite-ubuntu-xenial-go_agent

Start Deploying with Concourse

To kick off a build in Concourse using the fly CLI, the following snippet can be used.

source <(~/workspace/bucc/bin/bucc env)
bucc fly
fly -t bucc trigger-job --watch --job minio-deployments/bucc-minio

Alternatively, the details for logging in to the Concourse web UI can be found this way:

bucc info

Uploading a file to our created Minio instance

Since we are using the bosh_warden_cpi, our deployed Minio instance is actually running inside a container. As such, we cannot access it via its IP address. We can, however, use the bosh CLI to set up port forwarding:

bosh -d bucc-minio ssh minio --opts='-NCL 8443:127.0.0.1:443'
# --opts passes options through to SSH
# -N tells SSH that no command will be sent once the tunnel is up
# -C compresses data before sending
# -L forwards the given port on the local host to a port on the remote side

Now open Minio in your browser at https://127.0.0.1:8443, and use genesis to look up the credentials:

genesis info bucc.yml

Use the web interface to create a bucket (bottom right corner) and upload a file.

I want more Genesis!

If you think Genesis is awesome sauce, check out the other kits and see if anything tickles your fancy. Official Genesis Kits live on GitHub, in the Genesis Community organization. Notable kits include:

  • SHIELD - A data protection solution for the cloud. Schedule backups and perform restores on databases, key-value stores, even file systems.
  • Cloud Foundry - The Cloud Foundry PaaS itself. Now deployed via Genesis.
  • Blacksmith - Data services, on-demand, leveraging BOSH. Available for CF marketplaces and Kubernetes!

Additionally, if you are interested in replicating deployments to multiple different environments and building pipelines to keep them all in sync, you should take a look at the Genesis management plane.

The post Guide to deploying Genesis kits to BOSH/CredHub appeared first on Stark & Wayne.

Deploy Cloud Foundry on MoltenCore
https://www.starkandwayne.com/blog/deploy-cloudfoundry-on-moltencore/
Tue, 12 Nov 2019 15:00:00 +0000

In the previous blog posts we introduced MoltenCore and showed you how to deploy MoltenCore on Packet bare-metal cloud. In this blog post I will walk you through the steps needed to deploy Cloud Foundry on your freshly deployed MoltenCore Cluster.

Since MoltenCore uses BOSH with the docker-bosh-cpi for resource isolation, a Cloud Foundry running on top of MoltenCore could technically be considered a containerized Cloud Foundry. It differs from project-quarks, however, by taking a BOSH-native instead of a Kubernetes-native approach.

Get a MoltenCore Cluster

If you haven't already, please go ahead and deploy your cluster now using the instructions which apply to your environment. Once your cluster is up, SSH into node z0. This machine hosts your BUCC.

Depending on the download speed and IOPS of your node, it takes between 5 and 15 minutes for your BUCC to be deployed. The deployment is managed by systemd, so we can use systemctl and journalctl to check the status and progress.

# on node-z0
systemctl status bucc.service
journalctl -f -u bucc.service

Copy the deploy-cf Concourse pipeline

While we wait, go ahead and clone the molten-core repo, since we will be using the deploy-cloudfoundry.yml Concourse pipeline from its examples directory. Also use the copy-pipeline script to create a copy-paste friendly snippet.

git clone https://github.com/starkandwayne/molten-core && cd molten-core
./examples/copy-pipeline cf | xclip -selection clipboard # on linux
./examples/copy-pipeline cf | pbcopy                     # on macOS

Deploy Cloud Foundry

Once your BUCC is up and running, use mc (the MoltenCore binary, which was installed onto all nodes during cluster bootstrap) to start a management shell. In this shell, the fly CLI is already configured to talk to your BUCC, so all you need to do is paste the snippet you generated in the step above.

mc shell
root@someuuid:/# < Paste here >

The Concourse pipeline will be paused initially. We can use the fly CLI to unpause it and to trigger the deployment job. Concourse will use the bosh-deployment-resource to instruct BOSH to deploy Cloud Foundry using the deployment manifest from the official cf-deployment repo.

fly -t mc unpause-pipeline -p deploy-cf
fly -t mc trigger-job -j deploy-cf/deploy-cf --watch

Login to Cloud Foundry

During the deploy, BOSH will instruct CredHub to generate passwords and certificates to protect your installation. The deployment pipeline also includes a task which automates the lookup of the admin password. It outputs a copy-pastable snippet to configure the cf CLI (which can be installed using these instructions) to target your freshly deployed Cloud Foundry.

fly -t mc trigger-job -j deploy-cf/generate-cf-cli-login-snippet --watch

Conclusion

At this point you should have a Cloud Foundry running on a MoltenCore cluster, which is great for trying out new features (for example, the metric-store). Just add additional cf-deployment ops files to your local copy of deploy-cloudfoundry.yml, run copy-pipeline, and kick off another deploy.
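As a hypothetical sketch of that step (the key names follow bosh-deployment-resource conventions, but check the actual deploy-cloudfoundry.yml for the real job layout), wiring in an extra ops file might look like this:

```yaml
# Hypothetical excerpt of a deploy-cloudfoundry.yml put step; the ops-file
# path shown is one real example from cf-deployment's operations directory.
- put: cf-deployment
  params:
    manifest: cf-deployment/cf-deployment.yml
    ops_files:
    - cf-deployment/operations/use-compiled-releases.yml
```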

The post Deploy Cloud Foundry on MoltenCore appeared first on Stark & Wayne.

]]>

Deploy MoltenCore on Packet Bare-Metal Cloud https://www.starkandwayne.com/blog/deploy-moltencore-on-packet-bare-metal-cloud/ Tue, 29 Oct 2019 14:00:00 +0000 https://www.starkandwayne.com//deploy-moltencore-on-packet-bare-metal-cloud/

MoltenCore allows running containerized container platforms on bare-metal in a BOSH native way, using a highly available scale out architecture. The main repo can be found on GitHub. In this blog post, we will walk you through the steps needed to deploy a MoltenCore cluster on Packet.

Clone the project

The packet-molten-core repo contains the Terraform we need for the task at hand. Go ahead and clone the repo; we will also copy the vars template which we need in further steps:

git clone https://github.com/starkandwayne/packet-molten-core
cd packet-molten-core
cp terraform.tfvars.example terraform.tfvars # further steps will refer to this file

Create a Packet Account

You can sign up for a Packet.com account here. If you want some perks, please reach out to Brian Wong or Joshua “JC” Boliek at Packet. Say you saw this blog post and let them know Stark & Wayne sent you!

After signing up, create a project and copy your project ID into the terraform.tfvars file:

copy your project id

You will also need to generate a personal (not project) API key and copy it into the terraform.tfvars file:

go to personal settings - api keys

add and copy your api key

Optionally customize the defaults

With your Project ID and API Key filled in, you should be good to go; however, you might want to change the following defaults in your terraform.tfvars file:

  • packet_facility: the geographic location of the server datacenter (full list here).
  • node_type: the type of server to use (available types here).
  • node_count: the number of nodes you want; best to use an odd number when deploying CF or K8s to keep quorum.
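For illustration, a filled-in terraform.tfvars might look like the sketch below. The packet_facility, node_type, and node_count names come from the list above; the project-ID and API-key variable names are assumptions, so check terraform.tfvars.example for the real ones:

```hcl
# Illustrative values only -- not real credentials.
packet_project_id = "12345678-aaaa-bbbb-cccc-1234567890ab"   # assumed variable name
packet_api_key    = "your-personal-api-key"                  # assumed variable name

packet_facility = "ams1"         # datacenter location
node_type       = "c1.small.x86" # server type
node_count      = 3              # odd number keeps quorum
```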

Deploy

Now we can go ahead and deploy our cluster:

terraform init    # to retrieve all the modules
terraform plan    # to verify all variables
terraform apply   # actually deploy a MoltenCore

Using MoltenCore

All management interactions with your MoltenCore cluster are performed from the first node (zone zero, or z0 for short). This node also hosts the embedded BUCC. Use the helper script (./utils/ssh) to access node-z0. By default it connects to z0; other nodes can be reached by passing an index (e.g. ./utils/ssh 1 to reach the second node).

./utils/ssh                   # ssh to node-z0
journalctl -f -u bucc.service # wait for BUCC to be deployed
mc shell                      # start interactive shell for interacting with BUCC

For more things to do with your cluster, refer to the molten-core repo.

Cleanup

Terraform can be used to delete your MoltenCore cluster. To do so, run the following command:

terraform destroy

The post Deploy MoltenCore on Packet Bare-Metal Cloud appeared first on Stark & Wayne.

]]>

Forging Bare-Metal; Introducing MoltenCore https://www.starkandwayne.com/blog/forging-bare-metal-introducing-molte-core/ Fri, 25 Oct 2019 15:50:00 +0000 https://www.starkandwayne.com//forging-bare-metal-introducing-molte-core/

What would my Cloud look like if I could start fresh and, moreover, if I could pick any technology I wanted? This exact opportunity happened to me during the preparations for the European Cloud Foundry Summit 2019 in The Hague, because Stark & Wayne agreed to sponsor the Hands-On Labs sessions. This meant we would pay for the shared infrastructure, which included a shared Cloud Foundry.

In an effort to save costs, we took the opportunity to re-examine all the different layers which are used to deploy a Cloud Foundry, starting all the way down from the physical infrastructure, up to the way we perform ingress into our system. We took an MVP approach to each component in our stack, and came up with the following table of functions and technical solutions.

Function                  Solution                     Technology
resource isolation        containerization             bosh_docker_cpi
resource allocation       static placement             BOSH multi-cpi
inter-host communication  overlay network              Flannel
ingress traffic           network address translation  Docker HostPortBinding

All we needed now was something to host these technologies. Our ideal solution should require no maintenance, be self-updating, and work in any environment. We found our solution in CoreOS Container Linux, which supports docker and Flannel (plus etcd, a dependency of Flannel) out of the box.

Phase 1: Proof Of Concept

In this phase, we created Terraform automation to create a CoreOS cluster on the Packet.com bare-metal cloud. The Terraform TLS provider was used to generate self-signed certificates for the docker daemons and Ignition was used to configure the daemons with these certificates through Systemd.

We started out using BOSH dynamic networking on top of Flannel; however, we encountered issues with this approach after reboots. It was failing because the container IPs would change, which is not something BOSH templates are built for (they are not re-rendered after a reboot). We found that if we prevent the docker daemon from using the Flannel subnet for the default bridge network, we can create a custom docker network with the same subnet, which works with BOSH manual networks.
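A minimal sketch of that workaround, with a throwaway subnet.env standing in for the real /run/flannel/subnet.env that Flannel writes on each host (the subnet value is purely illustrative):

```shell
# Fake Flannel subnet file for illustration; on a real node Flannel writes
# FLANNEL_SUBNET into /run/flannel/subnet.env with the host's allocation.
subnet_env=$(mktemp)
cat > "$subnet_env" <<'EOF'
FLANNEL_SUBNET=10.244.32.1/24
EOF
. "$subnet_env"

# Keep the default docker bridge off the Flannel range, and instead create a
# user-defined network on that range for BOSH manual networks to address.
# (Printed rather than executed here, since it needs a running docker daemon.)
cmd="docker network create --subnet ${FLANNEL_SUBNET} bosh"
echo "$cmd"
```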

The BOSH cloud and CPI configurations were generated by a shell script, which extracted the docker daemon endpoints, TLS client certificates, and Flannel subnets (statically generated in Terraform) from the Terraform output. A helper script was also injected to start a shell inside a Docker container, from which all interaction with the deployed BUCC could be performed.

We successfully used the proof of concept code to keep a 3 node (32 GB of memory per node) cluster running which hosted the shared Cloud Foundry environment for the European Cloud Foundry Summit 2019 Hands-On Labs sessions. However, some corners were cut in the development of the POC, so we took our initial findings and re-developed the next version from scratch.

Phase 2: Portability

Due to the nature of Ignition configs (they are only applied once, during first boot), we had to recreate our testing environment (using Packet.com) many times. The development feedback cycle was about 10 minutes, and the lack of a local development environment was problematic. The POC codebase was also not architected with support for different IaaS providers in mind; as a result, a lot of logic was embedded in Terraform templates in a non-reusable way.

Since we already had an ETCD cluster (for Flannel), we decided to move the TLS certificates for the docker daemons into ETCD, and generate them using a custom binary on node startup. We also moved the management of Systemd unit file changes into this same binary. This approach allowed us to decouple from Ignition, which created a faster feedback cycle (around 5 seconds).

The mc binary is written in golang and is responsible for converting a vanilla CoreOS cluster (with a configured ETCD cluster + Flannel) into a MoltenCore cluster. It is fully cloud agnostic and, as of now, has been tested successfully with coreos-vagrant and Packet.com. Since all docker daemon endpoints and certificates are now available in ETCD, we can generate BOSH cloud and CPI configs right after our BUCC is running.

Phase 3: Auto Updates?

As it stands, MoltenCore is great for smaller-scale Cloud Foundry or Kubernetes clusters; however, I would not yet recommend it for production use. For that, we will need to support clean node reboots (which happen as part of CoreOS's auto-update cycle). We hope to achieve this, at a later time, by taking inspiration from the container-linux-update-operator, which performs this function for k8s.

We also hope to reduce the spin-up time of a cluster by packaging BUCC in a Docker image; this would hopefully allow us to move away from `bosh create-env`. Most of the time it takes to spin up BUCC (currently around 10 minutes) is spent on moving around big files (untarring and verifying BOSH releases). Faster (re)start times for BUCC help with faster recovery in the case of node updates.

Conclusion

MoltenCore allows running containerized container platforms on bare-metal in a BOSH native way, using a highly available scale-out architecture. The source code of the mc binary is hosted in the molten-core repo together with the scripts to create a local cluster using vagrant. That being said, the best way to create a MoltenCore cluster is by using one of the platform specific Terraform projects (eg. packet-molten-core).

The post Forging Bare-Metal; Introducing MoltenCore appeared first on Stark & Wayne.

]]>

Introducing BUCC on Docker Desktop for macOS https://www.starkandwayne.com/blog/bucc-docker/ Wed, 12 Jun 2019 11:22:44 +0000 https://www.starkandwayne.com//bucc-docker/

Developing against BOSH, UAA, CredHub, and Concourse has never been easier, with the new Docker Desktop for macOS support of BUCC (introduced in version 0.7.1).

If you have not already, get Docker Desktop (tested with 2.0.0.3) here.

Make sure to allocate enough memory (tested with 8GB) to the Docker Desktop VM:

Now let's deploy BUCC:

git clone https://github.com/starkandwayne/bucc; cd bucc
./bin/bucc up --cpi docker-desktop --lite
eval "$(./bin/bucc env)"

That's it! You can now start using, for example, Concourse by running:

bucc info

When you are done, the environment can be removed with bucc down --clean.

The post Introducing BUCC on Docker Desktop for macOS appeared first on Stark & Wayne.

]]>

Some Google Cloud Shell love for the Cloud Foundry Ecosystem https://www.starkandwayne.com/blog/google-cloud-shell-loves-cloud-foundry-ecosystem/ Fri, 22 Mar 2019 14:07:33 +0000 https://www.starkandwayne.com//google-cloud-shell-loves-cloud-foundry-ecosystem/

Google Cloud Shell is great; we use it a lot during training. And recently it has gotten a whole lot better, since you can now bring your own docker image as the basis for your environment.

As part of our daily operations, Stark & Wayne publishes several docker images that include all the valuable tools we use when performing Cloud Foundry related tasks. Since Google Cloud Shell has very specific requirements for an environment image (e.g. it must be hosted on Google Container Registry), we have set up a public registry so you don't have to.

Configuring Google Cloud Shell

To open the Cloud Shell, open your Google Console and click on the Cloud Shell icon in the application bar:

Configure using cloudshell cli

Run the following command in your Cloud Shell session:

cloudshell env update-default-image --image gcr.io/starkandwayne-registry/gcp-cloudshell:latest

Now skip to Restart Cloud Shell.

Configure using the user interface

Restarting the Cloud Shell will open a bottom drawer with your shell environment. Now let's open up the Cloud Shell Environment settings page by clicking on the laptop button:

From the settings page, click Edit:

Now, for Image, select 'Select image from project', enter gcr.io/starkandwayne-registry/gcp-cloudshell:latest for the image location, and click Save:

Restart Cloud Shell

To use our custom image, the Cloud Shell needs to be restarted, which can be done by clicking Restart in the "More" menu:

more button when Cloud Shell is opened in bottom drawer

more button when Cloud Shell is opened in new window

After restarting your Cloud Shell, the image URL should appear in the first line of your session:

Missing Something?

If we have missed a tool which you expected to be there, please create an issue, or send us a Pull Request to add it to the Cloud Shell Dockerfile.

The post Some Google Cloud Shell love for the Cloud Foundry Ecosystem appeared first on Stark & Wayne.

]]>

Visualize BOSH deployments with UML https://www.starkandwayne.com/blog/visualize-bosh-deployments-with-uml/ Mon, 28 Jan 2019 16:13:34 +0000 https://www.starkandwayne.com//visualize-bosh-deployments-with-uml/

BOSH is great, and yaml is even greater, but sometimes you have to explain its magic in a more universal language. That's why we created a small project to generate UML-like diagrams from BOSH deployment manifests.
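The core idea can be illustrated with a toy sketch. This is not the real visualize.sh (which uses spruce and jq to parse the YAML properly); it just pulls top-level instance-group names out of a tiny made-up manifest and emits a PlantUML object for each:

```shell
# Write a minimal, made-up BOSH manifest to a temp file.
manifest=$(mktemp)
cat > "$manifest" <<'EOF'
instance_groups:
- name: api
  jobs:
  - name: cloud_controller_ng
- name: router
  jobs:
  - name: gorouter
EOF

# Naive line-based parsing: only unindented "- name:" lines are instance
# groups (job names are indented, so they don't match).
uml="@startuml
$(grep -E '^- name:' "$manifest" | sed 's/- name: /object /')
@enduml"
echo "$uml"
```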

Let's use it to generate a diagram for cf-deployment:

Prerequisites

Please make sure the following tools are installed: spruce, jq, and plantuml.

When using macOS, these can be installed via brew:

brew tap starkandwayne/cf
brew install starkandwayne/cf/spruce
brew install jq plantuml

Now just clone the repo:

git clone https://github.com/cloudfoundry-community/bosh-deployment-visualizer ~/workspace/bosh-deployment-visualizer

Let's get cf-deployment:

git clone https://github.com/cloudfoundry/cf-deployment ~/workspace/cf-deployment

Lastly, generate your image:

~/workspace/bosh-deployment-visualizer/visualize.sh ~/workspace/cf-deployment/cf-deployment.yml

To view the result:

open cf.png

The post Visualize BOSH deployments with UML appeared first on Stark & Wayne.

]]>

BUCC supports Backup & Restore finally! https://www.starkandwayne.com/blog/bucc-bbr-finally/ Thu, 01 Mar 2018 13:08:00 +0000 https://www.starkandwayne.com//bucc-bbr-finally/

Many people have asked how to backup and restore BUCC (an introduction to BUCC can be found here). The wait is over, because as of v0.4.0, there is full support for BBR (BOSH Backup & Restore). The technical details are given at the end of this blogpost, but first, here is the short version:

Backup

The BUCC CLI has a new wrapper command (bucc bbr) which wraps bbr director and configures the SSH key, user, and IP. So, all that's left to do is run a backup:

bucc bbr backup

This will result in a timestamped directory in your current path containing the backup artifacts. To store backups securely, we recommend using SHIELD (out of scope for this blogpost), which has support for BBR.
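For reference, here is a rough sketch of the invocation the wrapper builds. The bbr director flags are real, but the host, user, and key path below are placeholders; bucc derives the actual values from your state/creds.yml:

```shell
# Hedged sketch only: placeholder values for what `bucc bbr backup` wires up.
director_ip="10.245.0.2"     # assumption: your BOSH director's IP
ssh_username="jumpbox"       # assumption: the SSH user created by create-env
key_path="/tmp/jumpbox.key"  # assumption: private key extracted from creds.yml

# Built as a string (not executed), since it needs a reachable director.
cmd="bbr director --host ${director_ip} --username ${ssh_username} --private-key-path ${key_path} backup"
echo "$cmd"
```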

Restore

Doing a full restore in the case of a disaster takes two steps:

# 1. Create a fresh BUCC with the state/creds.yml from the backup
cd bucc
tar -xf ${backup_dir}/bosh-0-bucc-creds.tar -C state
bucc up
# 2. Restore databases and BOSH blobstore using bbr
bucc bbr restore --artifact-path=${backup_dir}

That's it! Your BUCC should be the same as it was when the backup was taken.

Technical Details

Why did it take a bit longer than expected to add BBR support? First, BBR support has only recently been worked on by the upstream BOSH releases used by BUCC, and not all of them work well with bosh create-env, or with a recent Postgres version.

That's why we had to create bucc-bbr-boshrelease to bundle all of the workarounds and to keep track of the upstream issues. Our hope is to get rid of these workarounds when the upstream issues are resolved. To ensure our workarounds work, the above backup and restore steps are actually tested as part of our pipeline.

In addition to the workarounds, we also had to make sure to back up the credentials which are generated by the BOSH CLI (--vars-store). For example, losing the CredHub encryption key or the NATS CA certificate would result in undecryptable credentials and an inability to communicate with BOSH agents.

The solution came in the form of a bucc-creds BOSH job and an accompanying ops-file. This ops-file is automatically updated by our pipeline to ensure it stays in sync with state/creds.yml.

The post BUCC supports Backup & Restore finally! appeared first on Stark & Wayne.
