Photo by Alex Gorzen on Flickr
More Limes, More Coconuts
In previous blog posts I reviewed how to deploy KubeCF on EKS, which gives you a nice, stable deployment of KubeCF. The downside is that it costs you money for every hour it runs on AWS.
I'm used to giving Amazon money, but I typically get a small cardboard box in exchange every few days.
So, how do you run KubeCF on your Mac for free(ish)? Tune in below.
You need at least 16GB of memory installed on your Apple MacOS device; the install will use around 11GB of it once fully spun up.
The install is fragile and frustrating at times, so this is geared more towards operators who are trying out skunkworks on the platform, such as testing custom buildpacks, hacking db queries, and other potentially destructive activities. The install does NOT survive reboots and becomes extra brittle after 24+ hours of running. This is not KubeCF's fault; when run on EKS it will happily continue to run without issues. You've been warned!
The steps covered below are:
- Install `tuntap` and start the shim
- Install `kind` and deploy a cluster
- Install `metallb`
- Install `cf-operator` and deploy `kubecf`
The next few sections are borrowed heavily from https://www.thehumblelab.com/kind-and-metallb-on-mac/; I encourage you to skim that post to understand why the `tuntap` shim is needed and how to verify the configuration for `metallb`.
I won't go into great detail as these tools are likely already installed:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
brew install --cask docker
# Then launch docker from Applications to complete the install and start docker
Install tuntap and start the shim
Running docker on MacOS has some "deficiencies" that can be overcome by installing a networking shim. To perform this install:
brew install git
brew install --cask tuntap
git clone https://github.com/AlmirKadric-Published/docker-tuntap-osx.git
cd docker-tuntap-osx
./sbin/docker_tap_install.sh
./sbin/docker_tap_up.sh
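If you want a quick sanity check that the shim came up before moving on, you can look for the host-side tap interface (a rough check; the interface name here assumes the docker-tuntap-osx defaults and may differ on your setup):
# should show an active tap interface created by the shim
ifconfig tap1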
Install kind and deploy a cluster
Sweet'n'Simple:
brew install kind
kind create cluster
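Before wiring up the load balancer, it doesn't hurt to confirm the cluster registered (this assumes the default cluster name of "kind"):
# should list a cluster named "kind" and reach its API server
kind get clusters
kubectl cluster-info --context kind-kind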
Install metallb
We'll be using `metallb` as a LoadBalancer resource and open up a local route so that MacOS can route traffic locally to the cluster.
sudo route -v add -net 172.18.0.1 -netmask 255.255.0.0 10.0.75.2
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/namespace.yaml
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml
cat << EOF > metallb-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.18.0.150-172.18.0.200
EOF
kubectl create -f metallb-config.yaml
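Before moving on, a quick check that the metallb pods themselves are healthy (your pod names will differ):
# the controller and speaker pods should be Running
kubectl get pods -n metallb-system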
Install cf-operator and deploy kubecf
brew install wget
brew install helm
brew install watch
wget https://github.com/cloudfoundry-incubator/kubecf/releases/download/v2.7.13/kubecf-bundle-v2.7.13.tgz
tar -xvzf kubecf-bundle-v2.7.13.tgz
kubectl create namespace cf-operator
helm install cf-operator \
--namespace cf-operator \
--set "global.singleNamespace.name=kubecf" \
cf-operator.tgz \
--wait
helm install kubecf \
--namespace kubecf \
--set system_domain=172.18.0.150.nip.io \
--set features.eirini.enabled=false \
--set features.ingress.enabled=false \
--set services.router.externalIPs={172.18.0.150} \
https://github.com/cloudfoundry-incubator/kubecf/releases/download/v2.7.13/kubecf-v2.7.13.tgz
watch kubectl get pods -A
Now, go take a walk. It will take 30-60 minutes for the `kubecf` helm chart to be fully picked up by the `cf-operator` CRDs and for all the pods to be scheduled and running. When complete, you should see output similar to:
Every 2.0s: kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
cf-operator quarks-cd9d4b96f-rtkbt 1/1 Running 0 40m
cf-operator quarks-job-6d8d744bc6-pmfnd 1/1 Running 0 40m
cf-operator quarks-secret-7d76f854dc-9wp2f 1/1 Running 0 40m
cf-operator quarks-statefulset-f6dc85fb8-x6jfb 1/1 Running 0 40m
kube-system coredns-558bd4d5db-ncmh5 1/1 Running 0 41m
kube-system coredns-558bd4d5db-zlpgg 1/1 Running 0 41m
kube-system etcd-kind-control-plane 1/1 Running 0 41m
kube-system kindnet-w4m9n 1/1 Running 0 41m
kube-system kube-apiserver-kind-control-plane 1/1 Running 0 41m
kube-system kube-controller-manager-kind-control-plane 1/1 Running 0 41m
kube-system kube-proxy-ln6hb 1/1 Running 0 41m
kube-system kube-scheduler-kind-control-plane 1/1 Running 0 41m
kubecf api-0 17/17 Running 1 19m
kubecf auctioneer-0 6/6 Running 2 20m
kubecf cc-worker-0 6/6 Running 0 20m
kubecf cf-apps-dns-76947f98b5-tbfql 1/1 Running 0 39m
kubecf coredns-quarks-7cf8f9f58d-msq9m 1/1 Running 0 38m
kubecf coredns-quarks-7cf8f9f58d-pbkt9 1/1 Running 0 38m
kubecf credhub-0 8/8 Running 0 20m
kubecf database-0 2/2 Running 0 38m
kubecf database-seeder-0bc49e7bcb1f9453-vnvjm 0/2 Completed 0 38m
kubecf diego-api-0 9/9 Running 2 20m
kubecf diego-cell-0 12/12 Running 2 20m
kubecf doppler-0 6/6 Running 0 20m
kubecf log-api-0 9/9 Running 0 20m
kubecf log-cache-0 10/10 Running 0 19m
kubecf nats-0 7/7 Running 0 20m
kubecf router-0 7/7 Running 0 20m
kubecf routing-api-0 6/6 Running 1 20m
kubecf scheduler-0 12/12 Running 2 19m
kubecf singleton-blobstore-0 8/8 Running 0 20m
kubecf tcp-router-0 7/7 Running 0 20m
kubecf uaa-0 9/9 Running 0 20m
local-path-storage local-path-provisioner-547f784dff-r8trf 1/1 Running 1 41m
metallb-system controller-fb659dc8-dhpnb 1/1 Running 0 41m
metallb-system speaker-h9lh9 1/1 Running 0 41m
To log in with the `admin` UAA user account:
cf api --skip-ssl-validation "https://api.172.18.0.150.nip.io"
acp=$(kubectl get secret \
--namespace kubecf var-cf-admin-password \
-o jsonpath='{.data.password}' \
| base64 --decode)
cf auth admin "${acp}"
cf create-space test -o system
cf target -o system -s test
Or, to use the `smoke_tests` UAA client account (because you're a rebel or something):
cf api --skip-ssl-validation "https://api.172.18.0.150.nip.io"
myclient=$(kubectl get secret \
--namespace kubecf var-uaa-clients-cf-smoke-tests-secret \
-o jsonpath='{.data.password}' \
| base64 --decode)
cf auth cf_smoke_tests "${myclient}" --client-credentials
cf create-space test -o system
cf target -o system -s test
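Either way, a couple of read-only commands are a cheap way to confirm the login and targeting actually worked:
# both should succeed and show the system org and the test space
cf orgs
cf spaces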
If you are done with the deployment of KubeCF you have two options:
- Stop the cluster: find the kind-control-plane container in Docker Desktop and click "Stop". Go have fun; when you come back, click "Start". After a few minutes the pods will recreate and become healthy.
- Delete the cluster:
kind delete cluster
sudo route delete 172.18.0.0
./sbin/docker_tap_uninstall.sh
Get used to seeing:
Request error: Get "https://api.172.18.0.150.nip.io": dial tcp 172.18.0.150:443: i/o timeout
The API server is flaky; try whatever you were doing again after verifying all pods are running, as shown in the "Install cf-operator and deploy kubecf" section.
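If you get tired of re-running commands by hand, a small retry loop works too (just a sketch; swap in whatever command was timing out):
# retry every 10 seconds until the API answers
until cf api --skip-ssl-validation "https://api.172.18.0.150.nip.io"; do sleep 10; done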
Have questions? There is an excellent community for KubeCF which can be found in the kubecf-dev channel on the Cloud Foundry Slack: https://cloudfoundry.slack.com/archives/CQ2U3L6DC. You can ping me there via @cweibel.
I also have Terraform code which will spin up a VPC + EKS + KubeCF for a more permanent solution to running KubeCF somewhere other than your Mac; check out https://github.com/cweibel/example-terraform-eks/tree/main/eks_for_kubecf_v2 for more details.
Enjoy!
The post Running KubeCF using KIND on MacOS appeared first on Stark & Wayne.
In the fall of 2020, VMware sent a curt email to all subscribers on its PWS — Pivotal Web Services — platform informing them that come January 15th, 2021, they would need to move all of their applications to ... elsewhere.
Understandably, this caused a great deal of confusion among the PWS customer base. It's not that easy to find a place where you can just `cf push` your application, let alone one with the breadth of marketplace services that PWS has. Well, had.
Perhaps surprisingly, it also engendered an unease amongst customers running their own Cloud Foundry installations. It didn't seem to matter if you were running a home-rolled Cloud Foundry, based on open source components, Stark & Wayne's own Genesis-deployed Cloud Foundry, or the official commercial Pivotal Cloud Foundry (later rebranded as VMware Tanzu Application Service). CF owners and operators the world over were left asking:
“Is this the beginning of the end of Cloud Foundry?”
Before answering that question, let's define some terms and parse some syntax. What is Cloud Foundry?
Fundamentally, Cloud Foundry is a promise; a studiously adhered-to contract between platform and developer. The platform agrees that if the developer abides by a few rules, then their application can be packaged, scheduled, and run without the developer's involvement. These rules are called "the 12 factors." Applications that abide by these rules are dubbed “12-factor Apps.” Examples include:
“A 12-factor application has no state.”
“A 12-factor application binds a single port, which it gets from the environment.”
“A 12-factor application crashes (or exits gracefully) if it cannot do its job.”
Cloud Foundry then is about agreement. Anything that can take on and uphold that agreement can rightfully call itself a Cloud Foundry. As of today, there's a couple of systems out there, in the wild, trying to do just that.
The first of those is also the oldest and most venerable: “OG Cloud Foundry.” The original one. Deployed on VMs, by BOSH. The components are BOSH releases, written in a pidgin of Go and Ruby, seasoned heavily with Java. In much the same way that penny-farthing bicycles came to be known as such, we now refer to this flavor of CF as "VM-based Cloud Foundry."
Next up is KubeCF. The KubeCF team made an early call that Kubernetes was the way (Mandalorian reference anyone?) and that the best future prospects for Cloud Foundry involved replacing the BOSH-and-VMs with Kubernetes-and-Containers. However, they made a crucial (and technologically inspiring) decision to simply directly consume and re-purpose the BOSH release artifacts. This allowed for consumption of the same BOSH releases other Cloud Foundry teams were already releasing. KubeCF is now considered one of the most stable and faithful containerized Cloud Foundry implementations.
Another project called “cf-for-k8s” was created for running Cloud Foundry on Kubernetes. Its core premise is to embrace Kubernetes ecosystem components and package internal Cloud Foundry components directly as containerized workloads to run natively on Kubernetes. VMware had also arrived at the realization that Kubernetes was completely dominating the platform and container orchestration world, and decided to start pivoting slowly off of a BOSH-and-VMs approach. However, since VMware is the main driver of the CF roadmap (and SuSE / KubeCF folks are not), they were able to set the priorities and ensure a re-targeting to a Kubernetes native approach. The Cloud Foundry core teams were now looking to start shipping Open Container Images as their released asset rather than BOSH releases. This was a massive change to not just the implementation but also the architecture. Diego (the VM-based Cloud Foundry container orchestration engine) was removed as Cloud Foundry didn't need to lug around its own container runtime as this was provided out of the box by Kubernetes itself. The routing tier also got an overhaul, albeit one that was probably inevitable whether CF stayed VM-bound or not, using the Istio and Envoy technologies from the Kubernetes ecosystem. This further reduced the amount of code the core Cloud Foundry team themselves needed to maintain as projects.
As cf-for-k8s gains more beta adopters, and teams stop releasing the BOSH release artifacts that KubeCF relies on, KubeCF will adopt components of cf-for-k8s as they become production viable. E.g., the composition of KubeCF will change to adopt these new container-first components. KubeCF serves as a vital bridge between VM-based and future containerized Cloud Foundry architectures.
And then there's `kf` from the folks at Google.
`kf` is a client-side reimplementation of the Cloud Foundry `cf` tool, the primary means of interacting with the platform to deploy and manage applications. The goal is simple: with a simple shell alias you can go from deploying your source code to a Cloud Foundry (VM-based, cf-for-k8s, or KubeCF), and instead deploy your applications to a Google managed Kubernetes cluster, atop GKE.
Google's looking at it primarily as a migration tool, but they've come surprisingly far in their support of the most commonly used commands and flags. It may seem weird and unconventional, but if you're already a Google shop, either in the cloud on GCP, or on-premises via Anthos, it's definitely a viable way forward. Our friends over at Engineer Better have had a good experience with it.
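For the curious, the "simple shell alias" approach mentioned above looks roughly like this (a sketch; kf intentionally mirrors the most common cf commands, but not every flag is supported, and "my-app" here is just a placeholder):
# point existing cf muscle memory and scripts at kf instead
alias cf=kf
cf push my-app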
Because of the operational cost of VM infrastructure, the solid and scalable CF on VMs has traditionally been viable only for massive organizations. We remain highly hopeful that a CF on Kubernetes option becomes viable so that the massive mid-market can also benefit from the advantages and experience of CF.
Ultimately, getting back to the main premise that Cloud Foundry is a promise between the platform and its developer-customers: We would not be surprised if someone came along and simply re-implemented that contract as a Kubernetes operator / controller, e.g. directly using Kubernetes primitives. Then, with a single `helm install`, without bringing any additional tooling into the mix, or fundamentally changing how you manage deployments, you get a CF-compatible Kubernetes cluster. Developers can keep pushing their apps, either by hand or via CI/CD, and you can normalize your tool chain and defragment your execution runtime!
Now that’s a true win-win-win! (Owner-Operator-Consumer) ❤️
The post The Future of Cloud Foundry, BOSH, KubeCF, and cf-for-k8s appeared first on Stark & Wayne.
Photo by Natalie Su on Unsplash
Why, hello there!
In a previous blog post I wrote about deploying EKS via the `eksctl` CLI and then deploying v0.2.0 of KubeCF.
The post, like myself, has not aged gracefully.
This is a good news / bad news situation.
The good news is the KubeCF folks have continued to make the tool simpler, easier, and better to use, so only a few tweaks are required to consume these changes. The bad news is home schooling my kids is leading to overeating and more gray hair taking over. One of these is easily fixed, so let's dig in!
In the previous blog I used `eksctl` to spin up an EKS cluster. It's a nice tool; however, I've tried to standardize on a single tool to configure AWS resources, so I've switched to Terraform. In another blog post I covered how to deploy a simple EKS cluster with a Managed Node Group and Fargate Profile. It's only a few lines of Terraform that rely on a few community repos.
To deploy KubeCF a few changes needed to be made:
Photo by Bill Jelen on Unsplash
I've created a GitHub repo to deploy an EKS cluster specifically for KubeCF at https://github.com/cweibel/example-terraform-eks/tree/main/eks_for_kubecf. Clone the repo and, if you want a different version of Kubernetes or a different cluster name, modify `cluster.tf` and change the values accordingly:
locals {
  cluster_name    = "my-eks-cluster"
  cluster_version = "1.18"
}
Deploy the cluster with the following, swap in your own values for the AWS keys:
export AWS_ACCESS_KEY_ID=AKIAGETYOUROWNCREDS2
export AWS_SECRET_ACCESS_KEY=Nqo8XDD0cz8kffU234eCP0tKy9xHWBwg1JghXvM4
export AWS_DEFAULT_REGION=us-west-2
terraform init
terraform apply
Sit back and let it simmer for 20 minutes; no one said this would be fast! When done you'll see a line similar to:
...
Apply complete! Resources: 60 added, 0 changed, 0 destroyed.
Now go ahead and configure your kubeconfig by running:
$ aws eks --region us-west-2 update-kubeconfig --name my-eks-cluster
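A quick check that kubectl is now pointed at the new cluster and the nodes have joined (node names and count will differ):
# all nodes should report Ready
kubectl get nodes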
You know, you love it, let's use it! In the subsequent steps you'll be deploying KubeCF v2.6.1 which is the newest version as of this writing (10/27/2020).
Start by launching the `cf-operator` Helm chart, which takes in BOSH-syntax YAML and turns it into Kubernetes YAML on the fly:
kubectl create namespace cf-operator
helm install cf-operator \
--namespace cf-operator \
--set "global.singleNamespace.name=kubecf" \
https://github.com/cloudfoundry-incubator/quarks-operator/releases/download/v6.1.17/cf-operator-6.1.17+0.gec409fd7.tgz \
--wait
Before skipping ahead, give the cf-operator pods time to spin up. The `--wait` flag "should" wait for all 3 pods to come up as `Running` before the CLI returns. To verify the state, run the following:
$ kubectl -n cf-operator get pods
NAME READY STATUS RESTARTS AGE
cf-operator-666f64849c-6ncwb 1/1 Running 0 28s
cf-operator-quarks-job-65f9f7b584-sggs9 1/1 Running 0 28s
cf-operator-quarks-secret-867fcf579f-ff9cg 1/1 Running 0 28s
Now you can run the Helm command to install KubeCF. The only variable you'll need to set is the `system_domain`. For this example I'm using `system.kubecf.lab.starkandwayne.com` since I have DNS control via CloudFlare:
helm install kubecf \
--namespace kubecf \
--set system_domain=system.kubecf.lab.starkandwayne.com \
https://github.com/cloudfoundry-incubator/kubecf/releases/download/v2.6.1/kubecf-v2.6.1.tgz
If you get an error similar to the following, you did not wait for the cf-operator pods to finish spinning up:
Error: unable to build Kubernetes objects from release manifest: [unable to recognize "": no matches for kind "BOSHDeployment" in version "quarks.cloudfoundry.org/v1alpha1", unable to recognize "": no matches for kind "QuarksSecret" in version "quarks.cloudfoundry.org/v1alpha1", unable to recognize "": no matches for kind "QuarksStatefulSet" in version "quarks.cloudfoundry.org/v1alpha1"]
This step will take 20 or so minutes to coalesce; when done it should look like:
$ kubectl get pods --namespace kubecf
NAME READY STATUS RESTARTS AGE
api-0 17/17 Running 1 10m
auctioneer-0 6/6 Running 1 10m
bosh-dns-86c4557c69-8kmmw 1/1 Running 0 19m
bosh-dns-86c4557c69-trvb5 1/1 Running 0 19m
cc-worker-0 6/6 Running 0 10m
cf-apps-dns-58bd59c444-49fgl 1/1 Running 0 20m
credhub-0 8/8 Running 1 10m
database-0 2/2 Running 0 19m
database-seeder-8dda20ebe6fa756f-jvzx2 0/2 Completed 0 19m
diego-api-0 9/9 Running 2 10m
diego-cell-0 12/12 Running 9 10m
doppler-0 6/6 Running 0 10m
log-api-0 9/9 Running 0 10m
log-cache-0 10/10 Running 0 10m
nats-0 7/7 Running 0 10m
router-0 7/7 Running 1 10m
routing-api-0 6/6 Running 0 10m
scheduler-0 13/13 Running 1 10m
singleton-blobstore-0 8/8 Running 0 10m
tcp-router-0 7/7 Running 0 10m
uaa-0 9/9 Running 1 10m
Note that the versions of the `cf-operator` and `kubecf` Helm charts need to be kept in sync. Refer to the release notes of each KubeCF version to find the corresponding version of the `cf-operator`.
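A quick way to see which chart versions are currently installed (assuming the namespaces used in this post):
# shows chart name, version and status per namespace
helm list -n cf-operator
helm list -n kubecf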
To make this more of an "I would actually use this in production" deployment, see:
Configuring advanced features like Eirini and External Databases (for the 7 control plane databases) can be found at https://kubecf.io/docs/deployment/advanced-topics/. I encourage you to use an RDS instance for anything even close to a production environment.
See https://kubecf.io/docs/tutorials/run-smoke-tests/ for instructions on how to run the cf smoke-tests.
There are 3 load balancers created during the deployment; the one I need can be viewed by:
kubectl get service router-public -n kubecf
kubecf kubecf-router-public LoadBalancer 172.20.50.146 a34d2e33633c511eaa0df0efe1a642cf-1224111110.us-west-2.elb.amazonaws.com 80:30027/TCP,443:30400/TCP 43m
You will need to associate the `system_domain` used in the helm install command with the URL of the LoadBalancer named `kubecf-router-public`.
In CloudFlare, I added a CNAME record pointing the `system_domain` for my deployment at the ELB:
If you are using Amazon Route53, you can follow the instructions here.
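Once the record is in place, a quick way to confirm the wildcard resolves before trying to log in (swap in your own system domain):
# should return the ELB hostname or its IP addresses
dig +short api.system.kubecf.lab.starkandwayne.com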
Assuming you have the CF CLI already installed (see this if not), you can target and authenticate to the Cloud Foundry deployment as seen below, remembering to update the system domain URL to the one registered in the previous step:
cf api --skip-ssl-validation "https://api.system.kubecf.lab.starkandwayne.com"
admin_pass=$(kubectl get secret \
--namespace kubecf var-cf-admin-password \
-o jsonpath='{.data.password}' \
| base64 --decode)
cf auth admin "${admin_pass}"
That's it! You can now create all the `drnic` orgs and spaces you desire with the CF CLI and deploy the 465 copies of Spring Music to the platform you know and love!
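For example, a minimal org and space setup might look like this (the names are just placeholders):
# create an org, a space inside it, and target both
cf create-org drnic
cf create-space dev -o drnic
cf target -o drnic -s dev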
Photo by Mohamed Nohassi on Unsplash
This should be done in two steps; marshmallows and sticks are optional:
- `kubectl delete namespace kubecf` will clean up the pods and other resources, including the Load Balancer resources that would prevent the VPC from being removed.
- `terraform destroy` will tear down the Managed Node Group, EKS Cluster, subnets and finally the VPC. If the first step was skipped this step may fail. Either rerun both or go into the AWS Console to clean up the VPC manually.
The folks on the KubeCF project already have an excellent write up on deploying KubeCF on Kind. Check out https://kubecf.io/docs/deployment/kubernetes-deploy/. A fair warning: you'll need a computer with a bit of horsepower to run this locally, but otherwise it's great for getting a local copy of CF of your very own.
Here are a few interesting blogs around KubeCF; check them out and come back to our blog for more!
Enjoy!
The post Deploying KubeCF to EKS, Revisited appeared first on Stark & Wayne.
Photo by Kent Weitkamp on Unsplash
Great question.
The following is a cloud agnostic guide to installing a 3-node RKE cluster, installing the Rancher UI, and using them to run KubeCF on top for a quick, cheap development Cloud Foundry environment. Depending on the IaaS you are deploying on top of, you may need to modify some of the configurations where applicable, i.e. `cloud_provider`. Examples of these modifications for vSphere are included.
The first step in creating our 3-node RKE cluster is prepping the machines themselves. These machines can be bare-metal, on-prem virtual, or cloud instances; it doesn't really matter as long as they are capable of running a distribution of Linux with a supporting container runtime (i.e. Docker). For the sake of this blog, we will be creating 3 Ubuntu virtual machines on vSphere, each with 2 CPU, 4GB RAM, and 100GB Disk.
Once you have the VMs up and running with Ubuntu Server installed, it's time to install Docker and the Rancher toolset.
The following commands can be run to add the relevant apt repo and GPG keys and to install Docker:
$ sudo apt update
## Install GPG
$ sudo apt install apt-transport-https ca-certificates curl software-properties-common
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
## Add docker repo and install
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
$ sudo apt update && sudo apt install docker-ce
Presuming all went smoothly, you should be able to check the status and see that Docker is now running:
$ sudo service docker status
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2020-02-13 11:14:33 EST; 1 months 16 days ago
Docs: https://docs.docker.com
Main PID: 1166 (dockerd)
Tasks: 42
Memory: 315.6M
CPU: 4d 8h 32min 36.342s
CGroup: /system.slice/docker.service
└─1166 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
To create and orchestrate the cluster, RKE uses SSH for access to each of the machines. In this case, we are going to create a new ssh key with `ssh-keygen` and add it to all of the machines with `ssh-copy-id`. For ease of deployment, avoid adding a passphrase.
$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/rke/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/rke/.ssh/id_rsa.
Your public key has been saved in /home/rke/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:2qtjgSJg4kj/VCT2x9lbLytYhFLJTHbz4bX8bVVIy1A rke@rancher
The key's randomart image is:
+---[RSA 2048]----+
| +o.o.+Eo.|
| o ..=. +o=.o|
| . + o + ooo.|
|oo + = o . +|
|B . .. S . o . +|
|o......o o . o |
| . .o ... o o |
| .o o . . |
| ..o. . |
+----[SHA256]-----+
The following can then be performed for each of the new machines. The command will copy the ssh keys you generated to the other 2 nodes.
$ ssh-copy-id -i ~/.ssh/id_rsa.pub rke@<ip-addr>
Now that we have Docker installed and ready and ssh configured, we need to install the tools used to create and manage the cluster. For this, all we need are `rke`, `helm`, and `kubectl`.
Each of `rke`, `helm`, and `kubectl` needs to be downloaded, made executable, and added to a place in your PATH:
## Install rke cli
$ wget https://github.com/rancher/rke/releases/download/v1.0.6/rke_linux-amd64
$ chmod +x rke_linux-amd64
$ sudo mv rke_linux-amd64 /usr/local/bin/rke
## Install helm cli
$ wget https://get.helm.sh/helm-v3.1.2-linux-amd64.tar.gz
$ tar -xvf helm-v3.1.2-linux-amd64.tar.gz linux-amd64/helm --strip 1
$ sudo mv helm /usr/local/bin/helm
## Install kubectl cli
$ wget https://storage.googleapis.com/kubernetes-release/release/v1.18.0/bin/linux/amd64/kubectl
$ chmod +x kubectl
$ sudo mv kubectl /usr/local/bin/kubectl
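A quick way to confirm all three binaries landed on the PATH before continuing:
## Verify the CLI tools are installed and executable
$ rke --version
$ helm version
$ kubectl version --client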
A quick note here about versions:
At this point, we are ready to configure and provision the new K8s cluster. While there are a lot of potential options to fiddle with, `rke` will walk you through them and set up sane defaults to get going quickly. For our use case, we will be enabling all three roles (Control Plane, Worker, etcd) on each of our nodes.
The `rke config` command will start a wizard bringing you through a series of questions with the goal of generating a `cluster.yml` file. If you answer one of the questions incorrectly, you can still manually edit the `cluster.yml` file before deploying the cluster. An example of the wizard is below:
$ rke config --name cluster.yml
[+] Cluster Level SSH Private Key Path [~/.ssh/id_rsa]:
[+] Number of Hosts [1]: 3
[+] SSH Address of host (1) [none]: 10.128.54.1
[+] SSH Port of host (1) [22]:
[+] SSH Private Key Path of host (10.128.54.1) [none]:
[-] You have entered empty SSH key path, trying fetch from SSH key parameter
[+] SSH Private Key of host (10.128.54.1) [none]:
[-] You have entered empty SSH key, defaulting to cluster level SSH key: ~/.ssh/id_rsa
[+] SSH User of host (10.128.54.1) [ubuntu]: rke
[+] Is host (10.128.54.1) a Control Plane host (y/n)? [y]: y
[+] Is host (10.128.54.1) a Worker host (y/n)? [n]: y
[+] Is host (10.128.54.1) an etcd host (y/n)? [n]: y
[+] Override Hostname of host (10.128.54.1) [none]: rke1
[+] Internal IP of host (10.128.54.1) [none]:
[+] Docker socket path on host (10.128.54.1) [/var/run/docker.sock]:
[+] SSH Address of host (2) [none]:
...
[+] Network Plugin Type (flannel, calico, weave, canal) [canal]:
[+] Authentication Strategy [x509]:
[+] Authorization Mode (rbac, none) [rbac]:
[+] Kubernetes Docker image [rancher/hyperkube:v1.17.2-rancher1]:
[+] Cluster domain [cluster.local]:
[+] Service Cluster IP Range [10.43.0.0/16]:
[+] Enable PodSecurityPolicy [n]:
[+] Cluster Network CIDR [10.42.0.0/16]:
[+] Cluster DNS Service IP [10.43.0.10]:
[+] Add addon manifest URLs or YAML files [no]:
In running the interactive command above and answering the questions regarding our machines, network config, and K8s options, `rke` has generated a lengthy `cluster.yml` file that is the main source of truth for the deployment.
You may want to consider modifying the cluster.yml file to add in `cloud_provider` options based on your underlying IaaS, or to change any answers you gave in the previous step, before deployment. An example cloud_provider config is shown below for vSphere; we will be doing a deeper dive in another post regarding the vSphere cloud provider specifically if you run into issues or have questions regarding that. For other IaaSs, please refer to the Rancher documentation here.
cloud_provider:
  name: vsphere
  vsphereCloudProvider:
    global:
      insecure-flag: false
    virtual_center:
      vsphere.lab.example.com:
        user: "vsphere-user"
        password: "vsphere-password"
        port: 443
        datacenters: /Lab-Datacenter
    workspace:
      server: vsphere.lab.example.com
      folder: /Lab-Datacenter/vm/k8s-demo-lab/vms
      default-datastore: /Lab-Datacenter/datastore/Datastore-1
      datacenter: /Lab-Datacenter
In addition to adding the cloud_provider section above for your specific IaaS, you should also add the below section under the `services` key so that it looks like the following. This allows the cluster to sign certificate requests, which is required by the KubeCF deployment for our dev environment.
services:
  kube-controller:
    extra_args:
      cluster-signing-cert-file: /etc/kubernetes/ssl/kube-ca.pem
      cluster-signing-key-file: /etc/kubernetes/ssl/kube-ca-key.pem
Now that we have our `cluster.yml` prepared and ready to deploy, it can be rolled out using `rke up`:
$ rke up --config cluster.yml
INFO[0000] Running RKE version: v1.0.4
INFO[0000] Initiating Kubernetes cluster
INFO[0000] [certificates] Generating admin certificates and kubeconfig
INFO[0000] Successfully Deployed state file at [./cluster.rkestate]
INFO[0000] Building Kubernetes cluster
INFO[0000] [dialer] Setup tunnel for host [10.128.54.0]
INFO[0002] [network] No hosts added existing cluster, skipping port check
INFO[0002] [certificates] Deploying kubernetes certificates to Cluster nodes
INFO[0002] Checking if container [cert-deployer] is running on host [10.128.54.0], try #1
INFO[0003] Image [rancher/rke-tools:v0.1.52] exists on host [10.128.54.0]
INFO[0010] Starting container [cert-deployer] on host [10.128.54.0], try #1
INFO[0025] Checking if container [cert-deployer] is running on host [10.128.54.0], try #1
INFO[0031] Checking if container [cert-deployer] is running on host [10.128.54.0], try #1
INFO[0031] Removing container [cert-deployer] on host [10.128.54.0], try #1
INFO[0031] [reconcile] Rebuilding and updating local kube config
INFO[0031] Successfully Deployed local admin kubeconfig at [./kube_config_cluster.yml]
INFO[0031] [reconcile] host [10.128.54.0] is active master on the cluster
INFO[0031] [certificates] Successfully deployed kubernetes certificates to Cluster nodes
INFO[0031] [reconcile] Reconciling cluster state
INFO[0031] [reconcile] Check etcd hosts to be deleted
INFO[0031] [reconcile] Check etcd hosts to be added
INFO[0031] [reconcile] Rebuilding and updating local kube config
INFO[0031] Successfully Deployed local admin kubeconfig at [./kube_config_cluster.yml]
INFO[0031] [reconcile] host [10.128.54.0] is active master on the cluster
INFO[0031] [reconcile] Reconciled cluster state successfully
INFO[0031] Pre-pulling kubernetes images
...
INFO[0038] [addons] Saving ConfigMap for addon rke-ingress-controller to Kubernetes
INFO[0038] [addons] Successfully saved ConfigMap for addon rke-ingress-controller to Kubernetes
INFO[0038] [addons] Executing deploy job rke-ingress-controller
INFO[0038] [ingress] ingress controller nginx deployed successfully
INFO[0038] [addons] Setting up user addons
INFO[0038] [addons] no user addons defined
INFO[0038] Finished building Kubernetes cluster successfully
At this point we should have a cluster up and running, and a few new files will have been generated: `cluster.rkestate` and `kube_config_cluster.yml`. In order to perform future updates against the cluster you need to preserve the `cluster.rkestate` file, otherwise rke won't be able to properly interact with the cluster.
We can run some basic commands to ensure that the new cluster is up and running and then move on to installing the Rancher UI:
$ export KUBECONFIG=$(pwd)/kube_config_cluster.yml
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
rancher Ready controlplane,etcd,worker 21d v1.17.2
$ kubectl get po -n kube-system
NAME READY STATUS RESTARTS AGE
canal-p7jgr 2/2 Running 2 21d
coredns-7c5566588d-hrhtr 1/1 Running 1 21d
coredns-autoscaler-65bfc8d47d-fz285 1/1 Running 1 21d
metrics-server-6b55c64f86-mq99l 1/1 Running 1 21d
rke-coredns-addon-deploy-job-7vgcd 0/1 Completed 0 21d
rke-ingress-controller-deploy-job-97tln 0/1 Completed 0 21d
rke-metrics-addon-deploy-job-lk4qk 0/1 Completed 0 21d
rke-network-plugin-deploy-job-vlhvq 0/1 Completed 0 21d
Assuming everything looks similar you should be ready to proceed.
One prerequisite to installing the Rancher UI is `cert-manager`, presuming you are not bringing your own certs or using Let's Encrypt. Thankfully, the installation process is just one command:
$ kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.14.1/cert-manager.yaml
And to check that it is working, make sure all the pods come up ok:
$ kubectl get pods --namespace cert-manager
NAME READY STATUS RESTARTS AGE
cert-manager-64b6c865d9-kss6c 1/1 Running 0 21d
cert-manager-cainjector-bfcf448b8-q98q6 1/1 Running 0 21d
cert-manager-webhook-7f5bf9cbdf-d66k8 1/1 Running 0 21d
Now we can install Rancher via `helm`:
$ helm install rancher rancher-latest/rancher \
--namespace cattle-system \
--set hostname=rancher.lab.example.com
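Note that the command above assumes the `cattle-system` namespace already exists and the `rancher-latest` chart repository has been added; if you are starting from scratch, something along these lines (per Rancher's standard install steps) is needed first:
## One-time setup for the Rancher chart
$ kubectl create namespace cattle-system
$ helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
$ helm repo update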
And wait for the deployment to roll out:
$ kubectl -n cattle-system rollout status deploy/rancher
Waiting for deployment "rancher" rollout to finish: 0 of 3 updated replicas are available...
deployment "rancher" successfully rolled out
Presuming all went according to plan (and you configured DNS accordingly to point at your nodes) - the Rancher UI should now be available at the domain you configured.
After setting up the admin account, you should be able to sign in and view your new cluster:
KubeCF is currently deployed in two parts: the `cf-operator` and the `kubecf` deployment, which leverages the cf-operator to turn a traditional manifest into K8s-native spec.
Another quick note about versions: the versions of `cf-operator` and KubeCF are very important. As of this writing there is not a matrix of operator compatibility, but the examples provided below have been tested to work. The release notes for KubeCF reference the version of the cf-operator that particular version was tested with. For example, the release of KubeCF we are deploying (which can be found here) has `cf-operator` v3.3.0 listed under dependencies.
:
$ kubectl create namespace cf-operator
$ helm install cf-operator \
--namespace cf-operator \
--set "global.operator.watchNamespace=kubecf" \
https://s3.amazonaws.com/cf-operators/release/helm-charts/cf-operator-3.3.0%2B0.gf32b521e.tgz
After deploying, there should be two pods created in the `cf-operator` namespace. We should check to make sure they are both up and ready (STATUS=Running) before deploying KubeCF:
$ kubectl -n cf-operator get pods
NAME READY STATUS RESTARTS AGE
cf-operator-69848766f6-lw82r 1/1 Running 0 29s
cf-operator-quarks-job-5bb6fc7bd6-qlg8l 1/1 Running 0 29s
Now it's time to deploy KubeCF, for this environment we are going to deploy with the defaults with the exception of using Eirini for application workloads. For more information regarding the different deployment options and features of KubeCF, check out our previous blog here.
$ helm install kubecf \
--namespace kubecf \
--set system_domain=system.kubecf.example.com \
--set features.eirini.enabled=true \
https://github.com/cloudfoundry-incubator/kubecf/releases/download/v1.0.1/kubecf-v1.0.1.tgz
After running the helm deploy, it'll take a few minutes to start spinning up CF pods in the `kubecf` namespace. We can then watch the pods come up and wait for them all to have a ready status - on average it should take between 20-45 minutes depending on the options you selected and the specs of the cluster you are deploying to. You may see some of the pods failing and restarting a few times as the cluster comes up, as they are waiting for different dependencies to become available.
$ watch kubectl get po -n kubecf
NAME READY STATUS RESTARTS AGE
ig-kubecf-51d0cf09745042ad-l7xnb 0/20 Init:4/37 0 3m11s
kubecf-database-0 2/2 Running 0 3m24s
While this is deploying, check out the IP address associated with the kubecf-router-public
loadbalancer and add a wildcard DNS record for the system_domain
you specified above as well as any additional application domains:
$ kubectl get svc -n kubecf | grep -i load
kubecf-cc-uploader ClusterIP 10.43.196.54 <none> 9090/TCP,9091/TCP
kubecf-router-public LoadBalancer 10.43.212.247 10.128.54.241 80:32019/TCP,443:32255/TCP
kubecf-ssh-proxy-public LoadBalancer 10.43.174.207 10.128.54.240 2222:31768/TCP
kubecf-tcp-router-public LoadBalancer 10.43.167.176 10.128.54.242 80:30897/TCP,20000:30896/TCP,20001
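For example, with the router's external IP above and the system_domain used in this deploy, the wildcard record would map *.system.kubecf.example.com to 10.128.54.241. Once DNS has propagated, a quick check (hostnames here match the example values; adjust for your own domain) might look like:
$ dig +short api.system.kubecf.example.com
10.128.54.241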
The final deployed state should look like the following:
$ kubectl get po -n kubecf
NAME READY STATUS RESTARTS AGE
kubecf-adapter-0 4/4 Running 0 24m
kubecf-api-0 15/15 Running 1 24m
kubecf-bits-0 6/6 Running 0 23m
kubecf-bosh-dns-59cd464989-bh2dp 1/1 Running 0 24m
kubecf-bosh-dns-59cd464989-mgw7z 1/1 Running 0 24m
kubecf-cc-worker-0 4/4 Running 0 23m
kubecf-credhub-0 5/6 Running 0 24m
kubecf-database-0 2/2 Running 0 36m
kubecf-diego-api-0 6/6 Running 2 24m
kubecf-doppler-0 9/9 Running 0 24m
kubecf-eirini-0 9/9 Running 0 23m
kubecf-log-api-0 7/7 Running 0 23m
kubecf-nats-0 4/4 Running 0 24m
kubecf-router-0 5/5 Running 0 23m
kubecf-routing-api-0 4/4 Running 0 23m
kubecf-scheduler-0 8/8 Running 0 23m
kubecf-singleton-blobstore-0 6/6 Running 0 24m
kubecf-tcp-router-0 5/5 Running 0 24m
kubecf-uaa-0 7/7 Running 6 24m
Assuming you have the CF CLI already installed (see this if not), you can target and authenticate to the Cloud Foundry deployment as seen below, remembering to update the system_domain to the one you deployed with:
$ cf api --skip-ssl-validation "https://api.<system_domain>"
$ admin_pass=$(kubectl get secret \
--namespace kubecf kubecf.var-cf-admin-password \
-o jsonpath='{.data.password}' \
| base64 --decode)
$ cf auth admin "${admin_pass}"
Now that our new foundation is up and running, it's time to test it by adding a space and pushing an application. Let's start by creating the system
space within the system
org.
$ cf target -o system
$ cf create-space system
$ cf target -s system
The app we will be deploying is called cf-env; it is a simple application used for debugging/testing that displays its running environment and HTTP request headers.
To deploy it, clone the repo and push it to the new foundation:
$ git clone git@github.com:cloudfoundry-community/cf-env.git
$ cd cf-env
$ cf push -n test
The first deployment usually takes a couple minutes to stage and start running, but after the app comes up you should be able to visit http://test.<system_domain>
and see output similar to the following.
KubeCF uses cf-deployment under the hood as the blueprint for deploying Cloud Foundry. Inside of cf-deployment you can run "smoke-tests", a non-destructive validation that your Cloud Foundry deployment is in a happy state.
To invoke the smoke tests at any time, run a simple kubectl patch command:
$ kubectl patch qjob kubecf-smoke-tests --namespace kubecf --type merge --patch '{ "spec": { "trigger": { "strategy": "now" } } }'
In v4 of the cf-operator, replace kubecf-smoke-tests
with smoke-tests
.
This will create a new job and pod, each prefixed with kubecf-smoke-tests-*.
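If you need to look up the generated pod name for the log command below, a simple filter should surface it:
$ kubectl get pods -n kubecf | grep smoke-tests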
There are a few containers which will spin up in the pod; if you tail the logs of the smoke-tests-smoke-tests container, you will see the test output:
$ k logs kubecf-smoke-tests-4078f266ae3dff68-rdhz4 -c smoke-tests-smoke-tests -n kubecf -f
Running smoke tests...
Running binaries smoke/isolation_segments/isolation_segments.test
smoke/logging/logging.test
smoke/runtime/runtime.test
[1585940920] CF-Isolation-Segment-Smoke-Tests - 4 specs - 4 nodes SSSS SUCCESS! 29.974196268s
[1585940920] CF-Logging-Smoke-Tests - 2 specs - 4 nodes S• SUCCESS! 1m56.090729823s
[1585940920] CF-Runtime-Smoke-Tests - 2 specs - 4 nodes S• SUCCESS! 2m37.907767486s
Ginkgo ran 3 suites in 5m4.100902481s
Test Suite Passed
Now that the foundation is happily running, it's time to add it to a Rancher project for ease of visibility and management. Rancher projects allow you to group a collection of namespaces together within the Rancher UI and also allows for setting of quotas and sharing of secrets across all the underlying namespaces.
From the cluster dashboard, click on Projects/Namespaces
.
As you can see from the following image, the three KubeCF namespaces (kubecf, kubecf-eirini, and cf-operator) do not currently belong to a project. Let's fix that, starting by selecting Add Project.
For this deployment we are just going to fill in a name, leave all of the other options as default, and click Create
.
Then from the Projects/Namespaces
screen, we are going to select the three KubeCF namespaces and then click Move
.
Select the new project you just created and confirm by selecting Move
. At this point, the namespaces are added to your new project and their resources can now be easily accessed from the UI.
At this point, your new foundation on top of RKE is ready to roll.
The post Cloud Foundry on Rancher (RKE): Where to Begin appeared first on Stark & Wayne.
Photo by Herry Sutanto on Unsplash
In a previous blog post we discovered how to deploy a single KubeCF with a single cf-operator. Exciting stuff! What if you wanted to deploy a second KubeCF? A third?
With a couple of minor changes to subsequent installs you can deploy as many instances of KubeCF as you like, each in its own namespace.
Creating the first KubeCF is done by Helm installing the cf-operator, configuring a values.yaml file for KubeCF, and finally Helm installing KubeCF. Each of the two Helm releases will exist in its own namespace. With the features.eirini.enabled: true option set in the values.yaml of the KubeCF Helm chart, a third namespace named <kubecf-name>-eirini will be created for all the Cloud Foundry apps to live in.
For each additional instance of KubeCF, you'll need an install of the cf-operator
referencing an install of kubecf
. Each pair of cf-operator
+kubecf
will get its own namespaces, so if you intend to deploy a great number of KubeCF installs, consistent naming becomes important.
A quick overview of installing a single instance of KubeCF is below. For complete instructions, visit the blog post.
- Helm install the cf-operator, configured to watch a particular namespace
- Configure a values.yaml file for KubeCF, leaving the default NodePort of 32123 for the Eirini service
- Helm install KubeCF
For each additional deployment of KubeCF there is a 1:1 install of the cf-operator required as well.
cf-operator
The cf-operator
configuration needs a few additional pieces:
In the example below, a second cf-operator is deployed; extra points if you can guess the naming for the third one:
kubectl create namespace cf-operator2
helm install cf-operator2 \
--namespace cf-operator2 \
--set "global.operator.watchNamespace=kubecf2" \
--set "fullnameOverride=cf-operator2" \
--set "applyCRD=false" \
https://s3.amazonaws.com/cf-operators/release/helm-charts/cf-operator-v2.0.0-0.g0142d1e9.tgz
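As with the first install, it's worth confirming the second operator's pods are up before deploying KubeCF into the new namespace. Depending on the cf-operator chart version, the operator pods may land in the release namespace or in the namespace it watches, so it doesn't hurt to check both:
$ kubectl get pods -n cf-operator2
$ kubectl get pods -n kubecf2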
values.yaml for KubeCF
There are two important pieces of information which must be unique between each of the installs:
- system_domain - Don't reuse the same system_domain for any existing Cloud Foundry deployment which is visible by your DNS, regardless of whether it is KubeCF, Tanzu Pivotal Platform (PCF), or cf-deployment-based. Debugging is hard enough without having to figure out which coconut we are talking to.
- features.eirini.registry.service.nodePort - must be a unique number across the entire cluster. Verify the port you hard code is not in use before deploying.
system_domain: kubecf2.10.10.10.10.netip.cc # Must be unique
...
features:
eirini:
enabled: true
registry:
service:
nodePort: 32124 # Must be unique cluster-wide
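One rough way to verify the port is free before deploying is to list any service already claiming that NodePort cluster-wide (no output means nothing is using it):
$ kubectl get svc --all-namespaces | grep 32124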
An example of a modified values.yaml:
There are a couple of minor changes from the install of the first KubeCF:
- The namespace must match the global.operator.watchNamespace of the corresponding cf-operator
- The --values flag must point to the values.yaml file for this install of KubeCF
helm install kubecf2 \
--namespace kubecf2 \
--values /Users/chris/projects/kubecf/kubecf2/values.yaml \
https://github.com/SUSE/kubecf/releases/download/v0.2.0/kubecf-0.2.0.tgz
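From here the install behaves just like the first one; you can watch the new namespace until all of the pods settle (the namespace name matches the kubecf2 release above):
$ watch kubectl get pods -n kubecf2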
At some point you will run out of Kubernetes resources if you keep spinning up additional KubeCF installs. Two concurrent installs run happily on 3 worker nodes with 2 vCPU and 16GB of memory each.
We'll have instructions soon for installing KubeCF on Rancher. Are there other infrastructures that you'd like to see KubeCF deployed to? Respond in the comments section below!
The post More Limes: Running Multiple KubeCF Deployments on One Kubernetes Cluster appeared first on Stark & Wayne.
One of the brilliant aspects of BOSH that has been brought across to Kubernetes by the Quarks & KubeCF teams has been the generation of internal secrets. Internal client needs a secret to talk to internal Redis? No one cares what it is; just generate a good one and share it with the two entities. Need x509 certificates? Great, generate and share those too using a self-signed root CA, which was also internally generated. Brilliant stuff.
But after a Secret is generated initially, how do I ask Quarks to regenerate it? I know how to create a daily cronjob in Kubernetes – a fabulous feature of Kubernetes is a builtin cronjob facility – but I don't know what to do in my cronjob to rotate the Quarks-managed Secrets.
Quarks is a project name within the Cloud Foundry Foundation for a collection of projects that are helping to phase in the migration of non-Kubernetes Cloud Foundry into Cloud Foundry as-a-first-class-citizen of Kubernetes over time. During 2019, the Cloud Foundry code bases were only packaged for BOSH. The Quarks initiative was to consume these BOSH releases – with their scripts for package compilation, configuration files, and runtime bindings – and produce artifacts that could be run atop any Kubernetes.
The resulting deployable project is called KubeCF, previously named SCF v3. It is a Helm chart. It installs resources into Kubernetes, and eventually you have a running Cloud Foundry that uses Kubernetes itself to run your applications (thanks to the Eirini project).
If you are looking for a commercial distribution of KubeCF, then please get in contact with the great folks at SUSE and ask for SUSE Cloud Application Platform. Also, myself and several others from Stark & Wayne will be at SUSECON 2020 in Dublin in March.
Does the KubeCF Helm chart install Kubernetes Deployments and Services? No. The Quarks initiative adds an invisible (to you the human operator) step, thanks to cf-operator.
The cf-operator project is the hardcore power behind the Quarks initiative. It converts BOSH releases, BOSH manifests, and BOSH operator files, such as those for the upstream Cloud Foundry BOSH releases, into running Kubernetes pods.
The KubeCF Helm chart actually installs a Kubernetes Custom Resource called BOSHDeployment, and the cf-operator watches for it, and reconciles it eventually into a set of pods that are Cloud Foundry.
In addition to pods, cf-operator converts the BOSHDeployment resource into dozens of secrets, such as internal passwords, and certificates.
To generate a password with cf-operator, create a QuarksSecret. The cf-operator will reconcile it into a shiny new Kubernetes Secret that you can use in any project – Cloud Foundry, or vanilla Kubernetes. It's very cool.
# generate-secret.yaml
apiVersion: quarks.cloudfoundry.org/v1alpha1
kind: QuarksSecret
metadata:
name: my-internal-secret
spec:
type: password
secretName: my-internal-secret
Install this resource into the same namespace as cf-operator (probably kubecf
if you've already got KubeCF installed).
$ kubectl apply -f generate-secret.yaml -n kubecf
$ kubectl get quarkssecret my-internal-secret
NAME AGE
my-internal-secret 24s
cf-operator will reconcile this QuarksSecret into a new Secret with the name given by secretName (my-internal-secret):
$ kubectl get secret my-internal-secret -n kubecf
NAME TYPE DATA AGE
my-internal-secret Opaque 1 66s
$ kubectl get secret my-internal-secret -n kubecf -ojsonpath='{.data.password}' | base64 --decode
Q3ba8rLvBdyzcqFmArAUTTxzuerGqfVGBBMl6cyWhH7AFgjS3Ys3wl4eIkimEljA
Awesome. Now any pod could attach this secret and the password
will be available in a volume or environment variable.
There are two requirements for rotating secrets: the Secret's value must be regenerated, and everything consuming the Secret must pick up the new value.
The latter requirement means we cannot use environment variables to access secrets. If the Secret changes, the Pods will not be restarted to get the new environment variables.
Instead, to support secret rotation we must expose Secrets into Pods using volumes. When the Secret changes, the new values are automatically populated into the files of that volume.
The applications/processes running within the Pod's containers must also be written to observe changes to Secrets in volume files.
Alternatively, if your applications/processes cannot watch for changes to Secrets in volume files, then the Pods must fail hard and be recreated (with the new secrets passed to the new processes).
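As a minimal sketch of what that volume approach looks like (the pod name, image, mount path, and re-read loop below are hypothetical conveniences, not anything KubeCF creates), a Secret can be exposed to a container like this:
cat << EOF | kubectl apply -n kubecf -f -
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo   # hypothetical demo pod
spec:
  containers:
  - name: app
    image: busybox
    # re-read the mounted file periodically so updates to the Secret become visible
    command: ["sh", "-c", "while true; do cat /secrets/password; sleep 60; done"]
    volumeMounts:
    - name: my-internal-secret
      mountPath: /secrets
      readOnly: true
  volumes:
  - name: my-internal-secret
    secret:
      secretName: my-internal-secret
EOF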
Quarks and cf-operator allowed us to generate a Secret using a QuarksSecret custom resource. Fortunately, it also allows us to rotate or regenerate the Secret's contents.
# rotate-my-internal-secret.yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
name: rotate-my-internal-secret
labels:
quarks.cloudfoundry.org/secret-rotation: "true"
data:
secrets: '["my-internal-secret"]'
To install our request to rotate my-internal-secret
Secret:
kubectl apply -f rotate-my-internal-secret.yaml
This creates a (short-lived) ConfigMap:
$ kubectl get cm rotate-my-internal-secret
NAME DATA AGE
rotate-my-internal-secret 1 40s
The ConfigMap acts as a trigger to cf-operator to regenerate one or more Secrets from their QuarksSecret definition.
We can see that cf-operator has changed the .data.password
value of our Secret:
$ kubectl get secret my-internal-secret -n kubecf -ojsonpath='{.data.password}' | base64 --decode
X1aidUZ2MBdtkYJccNdR4xWyJr6JcUtvh4LBafGsL38qpkPnX1kSBhR3sHCQRmiJ
We can now delete the ConfigMap trigger resource:
kubectl delete -f rotate-my-internal-secret.yaml
Want to generate the Secret again? Re-install the trigger ConfigMap:
$ kubectl apply -f rotate-my-internal-secret.yaml
$ kubectl get secret my-internal-secret -n kubecf -ojsonpath='{.data.password}' | base64 --decode
79XERALdMRQi0LxonEoTJuV8o8ZUGmgUjjHqRp5yAVY8bnCciBoJFEdFDBoI7Du1
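Tying this back to the cronjob question at the top of the post: a rotation is just "apply the trigger ConfigMap, give the operator a moment, then clean up", so a scheduled job only needs to script something like the following (the sleep length is a guess, and a real in-cluster job would also need a ServiceAccount allowed to manage ConfigMaps in the namespace):
$ kubectl apply -f rotate-my-internal-secret.yaml -n kubecf
$ sleep 60   # give cf-operator time to reconcile the rotation
$ kubectl delete -f rotate-my-internal-secret.yaml -n kubecf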
At the time of writing this article I've not yet tested all the different Secrets in KubeCF to confirm that all clients and servers support secret rotation. Stay tuned.
The post How to rotate Kubernetes secrets with Quarks and KubeCF? appeared first on Stark & Wayne.
Photo by Alex Gorzen on Flickr
At Stark & Wayne, we've spent a ton of time figuring out the best solutions to problems using the open source tools we have available. We've pondered problem spaces such as:
What if we could...
This last one is the least tasty but potentially the most satisfying, taking the large virtual machine footprint of Cloud Foundry with its developer facing toolset and stuffing it into Kubernetes.
For those who've installed Cloud Foundry in the past, you know that BOSH is the only way to install and manage Cloud Foundry. Well, that is until the cf-operator, Cloud Foundry Quarks, Eirini, and kubecf
came along.
The cf-operator is a Kubernetes Operator deployed via a Helm Chart which installs a series of custom resource definitions that convert BOSH Releases into Kubernetes resources such as pods, deployments, and stateful sets. It alone does not result in a deployment of Cloud Foundry.
KubeCF is a version of Cloud Foundry deployed as a Helm Chart, mainly developed by SUSE, that leverages the cf-operator
.
Eirini swaps the Diego backend for Kubernetes meaning when you cf push
, your applications run as Kubernetes pods inside of a statefulset.
Kubernetes is the new kid in town for deploying platforms.
Using these tools together we can deploy Cloud Foundry's Control Plane (cloud controller, doppler, routers and the rest) and have the apps run as pods within Kubernetes.
Below, we'll cover all the moving pieces associated with sprinkling a bit of Cloud Foundry over a nice hot fresh batch of Kubernetes. Remember to add salt to taste!
Photo by Emmy Smith on Unsplash
There are a few layers to this process, which include:
- Deploying a Kubernetes cluster (Amazon EKS in this example)
- Installing the Helm CLI
- Installing the cf-operator Helm Chart
- Configuring and installing the kubecf Helm Chart
- Pointing DNS at the router's load balancer and logging in with the CF CLI
In our previous blog, Getting Started with Amazon EKS, we created a Kubernetes cluster using the eksctl
tool. This gives you time to read a short story from Tolstoy and, in the end, a Kubernetes cluster is born. This allows you to deploy pods, deployments, and other exciting Kubernetes resources without having to manage the master
nodes yourself.
Before continuing be sure you are targeting your kubectl
CLI with the kubeconfig for this cluster. Run a kubectl cluster-info dump | grep "cluster-name"
to verify that the name of the cluster in EKS matches what kubectl
has targeted. This is important to check if you've been experimenting with other tools like minikube
in the meantime since deploying the EKS cluster.
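A quick way to see what kubectl is currently pointed at (both are standard kubectl commands, shown here only as a convenience):
$ kubectl config current-context
$ kubectl config get-contexts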
Helm is a CLI tool for templating Kubernetes resources. Helm Charts bundle up a group of Kubernetes YAML files to deploy a particular piece of software. The Bitnami PostgreSQL Helm Chart installs an instance of the database with persistent storage and exposes it via a service. The cf-operator
and kubecf
projects we use below are also Helm Charts.
To install helm
on MacOS with Homebrew:
brew install helm
If you are using a different operating system, other means of installation are documented at https://github.com/helm/helm#install.
This installs Helm v3. All of the subsequent commands will assume you are using this newer version of Helm. Note that the instructions in the cf-operator
and kubecf
GitHub repos use Helm v2 style commands. A short guide to converting Helm commands is here on the Stark & Wayne blog site.
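You can confirm which major version you ended up with before proceeding; any v3.x output is fine for the commands in this post:
$ helm version --short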
cf-operator Helm Chart
Since we are using Helm v3, we'll need to create a namespace for the cf-operator in order to use it (v2 would have done this for you automatically):
➜ kubectl create namespace cf-operator
Now you can install the cf-operator
:
➜ helm install cf-operator \
--namespace cf-operator \
--set "global.operator.watchNamespace=kubecf" \
https://s3.amazonaws.com/cf-operators/release/helm-charts/cf-operator-v2.0.0-0.g0142d1e9.tgz
When completed, you will have two pods in the kubecf
namespace which look similar to:
➜ kubectl get pods -n kubecf
NAME READY STATUS RESTARTS AGE
cf-operator-5ff5684bb9-tsw2f 1/1 Running 2 8m4s
cf-operator-quarks-job-5dcc69584f-c2vnw 1/1 Running 2 8m3s
The pods may fail once or twice while initializing. This is ok as long as both report as "running" after a few minutes.
Before installing the kubecf
Helm Chart, you'll need to create a configuration file.
The complete configuration file with all options is available at https://github.com/SUSE/kubecf/blob/master/deploy/helm/kubecf/values.yaml. The examples below populate portions of this YAML file.
The absolute minimum configuration of values.yaml
file for KubeCF on EKS is:
- system_domain - The URL Cloud Foundry will be accessed from
- kube.service_cluster_ip_range - The /24 network block for services
- kube.pod_cluster_ip_range - The /16 network block for the pods
Project Eirini swaps out using Diego Cells for the Container Runtime and instead uses Kubernetes pods/statefulsets for each application instance. This feature is enabled by adding one more setting to the minimal values.yaml configuration: features.eirini.enabled: true.
system_domain: system.kubecf.lab.starkandwayne.com
kube:
service_cluster_ip_range: 10.100.0.0/16
pod_cluster_ip_range: 192.168.0.0/16
features:
eirini:
enabled: true
This configuration will get you:
- A Cloud Foundry API available at https://api.<system_domain>
- Applications deployed as pods in the kubecf-eirini namespace
- 1 pod per instance_group
There are many more configuration options available in the default values.yaml file. A more "Production Worthy" deployment would include the following:
Having CF Control Plane with 1 pod per instance_group
results in the deployment not being HA. If any of the pods stop, that part of the Cloud Foundry Control Plane stops functioning since there was only 1 instance.
There are a few ways of enabling multiple pods per instance group:
- Enable Multi AZ and HA Settings. Simply set the corresponding values to true in values.yaml:
multi_az: true
high_availability: true
- Manually set the instance group sizing in values.yaml:
sizing:
adapter:
instances: 3
api:
instances: 4
...
tcp_router:
instances: 4
Note that setting instances: sizes overrides the default values of high_availability.
An external database needs to be created beforehand and the values populated in values.yaml:
features:
external_database:
enabled: true
type: postgres
host: postgresql-instance1.cg034hpkmmjt.us-east-1.rds.amazonaws.com
port: 5432
databases:
uaa:
name: uaa
password: uaa-admin
username: 698embi40dlb98403pbh
cc:
name: cloud_controller
password: cloud-controller-admin
username: 659ejkg84lf8uh8943kb
...
credhub:
name: credhub
password: credhub-admin
username: ffhl38d9ghs93jg023u7g
App-Autoscaler is an add-on to Cloud Foundry to automatically scale the number of application instances based on CPU, memory, throughput, response time, and several other metrics. You can also add your own custom metrics as of v3.0.0. You decide which metrics you want to scale your app up and down by in a policy and then apply the policy to your application. Examples of usage can be found here.
Add the lines below to your values.yaml
file to enable App Autoscaler:
features:
autoscaler:
enabled: true
As of this writing, there is not a documented way to scale the singleton-blobstore
or have it leverage S3. Let us know in the comments if you know of a way to do this!
Once you have assembled your values.yaml
file with the configurations you want, the kubecf
Helm Chart can be installed.
In the example below, an absolute path is used to the values.yaml
file, you'll need to update the path to point to your file.
➜ helm install kubecf \
--namespace kubecf \
--values /Users/chris/projects/kubecf/values.yaml \
https://github.com/SUSE/kubecf/releases/download/v0.2.0/kubecf-0.2.0.tgz
The install takes about 20 minutes to run and if you run a watch
command, eventually you will see output similar to:
➜ watch -c "kubectl -n kubecf get pods"
NAME READY STATUS RESTARTS AGE
cf-operator-5ff5684bb9-tsw2f 1/1 Running 0 4h
cf-operator-quarks-job-5dcc69584f-c2vnw 1/1 Running 0 4h
kubecf-adapter-0 4/4 Running 0 3h
kubecf-api-0 15/15 Running 1 3h
kubecf-bits-0 6/6 Running 0 3h
kubecf-bosh-dns-7787b4bb88-44fjf 1/1 Running 0 3h
kubecf-bosh-dns-7787b4bb88-rkjsr 1/1 Running 0 3h
kubecf-cc-worker-0 4/4 Running 0 3h
kubecf-credhub-0 5/5 Running 0 3h
kubecf-database-0 2/2 Running 0 3h
kubecf-diego-api-0 6/6 Running 2 3h
kubecf-doppler-0 9/9 Running 0 3h
kubecf-eirini-0 9/9 Running 9 3h
kubecf-log-api-0 7/7 Running 0 3h
kubecf-nats-0 4/4 Running 0 3h
kubecf-router-0 5/5 Running 0 3h
kubecf-routing-api-0 4/4 Running 0 3h
kubecf-scheduler-0 8/8 Running 6 3h
kubecf-singleton-blobstore-0 6/6 Running 0 3h
kubecf-tcp-router-0 5/5 Running 0 3h
kubecf-uaa-0 6/6 Running 0 3h
At this point, Cloud Foundry is alive; now we just need a way to access it.
There are 3 load balancers which are created during the deployment and can be viewed by:
➜ cf-env git:(master) kubectl get services -n kubecf
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
kubecf-router-public LoadBalancer 10.100.193.124 a34d2e33633c511eaa0df0efe1a642cf-1224111110.us-west-2.elb.amazonaws.com 80:31870/TCP,443:31453/TCP 3d
kubecf-ssh-proxy-public LoadBalancer 10.100.221.190 a34cc18bf33c511eaa0df0efe1a642cf-1786911110.us-west-2.elb.amazonaws.com 2222:32293/TCP 3d
kubecf-tcp-router-public LoadBalancer 10.100.203.79 a34d5ca4633c511eaa0df0efe1a642cf-1261111914.us-west-2.elb.amazonaws.com 20000:32715/TCP,20001:30059/TCP,20002:31403/TCP,20003:32130/TCP,20004:30255/TCP,20005:32727/TCP,20006:30913/TCP,20007:30725/TCP,20008:31713/TCP 3d
We need to associate the system_domain in values.yaml with the URL of the LoadBalancer named kubecf-router-public.
In CloudFlare, we add a cname record pointing the system domain at the ELB:
If you are using Amazon Route53, you can follow the instructions here.
Assuming you have the CF CLI already installed (see this if not), you can target and authenticate to the Cloud Foundry deployment as seen below, remembering to update the system domain URL to the one registered in the previous step:
cf api --skip-ssl-validation "https://api.system.kubecf.lab.starkandwayne.com"
admin_pass=$(kubectl get secret \
--namespace kubecf kubecf.var-cf-admin-password \
-o jsonpath='{.data.password}' \
| base64 --decode)
cf auth admin "${admin_pass}"
The full Kubecf
documentation for logging in is here. The "admin" password is stored as a Kubernetes Secret.
That's it! You can now create all the drnic
orgs and spaces you desire with the CF CLI and deploy the 465 copies of Spring Music to the platform you know and love!
I did not successfully deploy KubeCF on my first attempt, or my tenth. But most of these failed attempts were self-inflicted and easily avoided. Below are a few snags encountered and their workarounds.
If your pods start to get evicted as you scale the components, you are likely running out of resources. If you run a kubectl describe node <nameofnode>
, you'll see:
Type Reason Age From
---- ------ ---- ----
Warning EvictionThresholdMet 13s (x6 over 22h) kubelet, Attempting to reclaim ephemeral-storage
Normal NodeHasDiskPressure 8s (x6 over 22h) kubelet, Node status is now: NodeHasDiskPressure
Normal NodeHasNoDiskPressure 8s (x17 over 15d) kubelet, Node status is now: NodeHasNoDiskPressure
Either scale out the Kubernetes cluster with the eksctl
tool or go easy on the sizing.diego_cell.instances: 50bazillion
setting in the kubecf values.yaml
file!
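If you do need more capacity, scaling the node group with eksctl is one option; the cluster and nodegroup names below are placeholders for whatever you created in the EKS post:
$ eksctl get nodegroup --cluster=<cluster-name>
$ eksctl scale nodegroup --cluster=<cluster-name> --name=<nodegroup-name> --nodes=5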
There is more than one version of the KubeCF Release v0.1.0, and some of them seemed to not work for me; your experience may vary. When I used the tarball kubecf-0.1.0-002b49a.tgz
documented in the v0.1.0 release here, my router ELB had 0 of 2 members, which I'm assuming was related to a bad port configuration. There is a KubeCF Helm Chart S3 bucket which has additional tarballs; cf-operator-v1.0.0-1.g424dd0b3.tgz
is the one used in the examples here.
A note: the original blog post had instructions for v0.1.0, the above is still true but you should not have the same issues with the v0.2.0 instructions that are now in this blog.
If you perform a helm uninstall kubecf
and attempt to reinstall at a later time, there are a few PVC's in the kubecf
namespace which you will need to delete after the first uninstall. If you don't, you'll wind up with an error similar to this on the kubecf-api*
pod:
➜ kubectl logs kubecf-api-0 -c bosh-pre-start-cloud-controller-ng -f
...
redacted for brevity...
...
VCAP::CloudController::ValidateDatabaseKeys::EncryptionKeySentinelDecryptionMismatchError
To fix, list the PVC's and delete them:
➜ kubectl get pvc -n kubecf
NAME STATUS
kubecf-database-pvc-kubecf-database-0 Bound
kubecf-singleton-blobstore-pvc-kubecf-singleton-blobstore-0 Bound
➜ kubectl delete pvc kubecf-database-pvc-kubecf-database-0
➜ kubectl delete pvc kubecf-singleton-blobstore-pvc-kubecf-singleton-blobstore-0
➜ helm install kubecf #again
Since publishing this article, we're seeing more people succeed at deploying Cloud Foundry to Kubernetes.
João Pinto has written up a tutorial to deploy Cloud Foundry on kind (Kubernetes IN Docker).
At Stark & Wayne, we maintain a bootstrap script to provision a Kubernetes cluster and deploy Cloud Foundry to it, unambiguously called bootstrap-kubernetes-demos.
bootstrap-kubernetes-demos up --google --kubecf
But wait, there's more!
Want to run multiple KubeCF installs on a single Kubernetes Cluster? Each in their own namespace? With a couple minor helm configurations you can have as many as you'd like!
See our new Limes & Coconuts: Running Multiple KubeCF Deployments on One Kubernetes Cluster blog post to discover how you can have an entire bucket of limes.
The post Running Cloud Foundry on Kubernetes using KubeCF appeared first on Stark & Wayne.
Photo by Alex Gorzen on Flickr
At Stark & Wayne, we've spent a ton of time figuring out the best solutions to problems using the open source tools we have available. We've pondered problem spaces such as:
What if we could...
This last one is the least tasty but potentially the most satisfying: taking the large virtual machine footprint of Cloud Foundry, with its developer-facing toolset, and stuffing it into Kubernetes.
For those who've installed Cloud Foundry in the past, you know that BOSH is the only way to install and manage Cloud Foundry. Well, that is until the cf-operator, Cloud Foundry Quarks, Eirini, and kubecf
came along.
The cf-operator is a Kubernetes Operator deployed via a Helm Chart which installs a series of custom resource definitions that convert BOSH Releases into Kubernetes resources such as pods, deployments, and stateful sets. It alone does not result in a deployment of Cloud Foundry.
KubeCF is a version of Cloud Foundry deployed as a Helm Chart, mainly developed by SUSE, that leverages the cf-operator
.
Eirini swaps the Diego backend for Kubernetes, meaning that when you cf push, your applications run as Kubernetes pods inside of a statefulset.
Kubernetes is the new kid in town for deploying platforms.
Using these tools together we can deploy Cloud Foundry's Control Plane (cloud controller, doppler, routers and the rest) and have the apps run as pods within Kubernetes.
Below, we'll cover all the moving pieces associated with sprinkling a bit of Cloud Foundry over a nice hot fresh batch of Kubernetes. Remember to add salt to taste!
Photo by Emmy Smith on Unsplash
There are a few layers to this process which include:
In our previous blog, Getting Started with Amazon EKS, we created a Kubernetes cluster using the eksctl tool. This gives you time to read a short story by Tolstoy and, in the end, a Kubernetes cluster is born. This allows you to deploy pods, deployments, and other exciting Kubernetes resources without having to manage the master nodes yourself.
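If you don't have that cluster handy, a minimal sketch of spinning up a comparable one with eksctl is below; the cluster name, region, node count, and instance type are placeholder values, not ones taken from that post:
# Placeholder name/region/sizing - adjust to your account and budget
eksctl create cluster \
  --name kubecf-demo \
  --region us-west-2 \
  --nodes 3 \
  --node-type m5.xlarge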
Before continuing, be sure you are targeting your kubectl
CLI with the kubeconfig for this cluster. Run a kubectl cluster-info dump | grep "cluster-name"
to verify that the name of the cluster in EKS matches what kubectl
has targeted. This is important to check if you've been experimenting with other tools like minikube
in the meantime since deploying the EKS cluster.
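If kubectl is pointed at the wrong cluster, one way to retarget it at EKS is via the AWS CLI; the cluster name and region below are placeholders:
# Refresh the kubeconfig entry for the EKS cluster, then confirm the active context
aws eks update-kubeconfig --name kubecf-demo --region us-west-2
kubectl config current-context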
Helm is a CLI tool for templating Kubernetes resources. Helm Charts bundle up a group of Kubernetes YAML files to deploy a particular piece of software. The Bitnami PostgreSQL Helm Chart installs an instance of the database with persistent storage and exposes it via a service. The cf-operator
and kubecf
projects we use below are also Helm Charts.
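As a purely optional illustration of that Helm workflow (not something this deployment needs), installing the Bitnami PostgreSQL chart looks roughly like this; the release name is made up:
# Add the Bitnami repo and install a PostgreSQL release named "my-postgres"
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install my-postgres bitnami/postgresql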
To install helm
on MacOS with Homebrew:
brew install helm
If you are using a different operating system, other means of installation are documented at https://github.com/helm/helm#install.
This installs Helm v3. All of the subsequent commands will assume you are using this newer version of Helm. Note that the instructions in the cf-operator
and kubecf
GitHub repos use Helm v2 style commands. A short guide to converting Helm commands is here on the Stark & Wayne blog site.
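The difference you'll hit most often is that Helm v2 took the release name as a flag while v3 takes it as the first argument; a quick before/after using a placeholder release and chart:
# Helm v2 style, as shown in the upstream READMEs
helm install --name my-release --namespace my-namespace ./some-chart.tgz
# Helm v3 equivalent, used throughout this post
helm install my-release --namespace my-namespace ./some-chart.tgz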
cf-operator Helm Chart
Since we are using Helm v3, we'll need to create a namespace for the cf-operator to use (v2 would have done this for you automatically):
➜ kubectl create namespace cf-operator
Now you can install the cf-operator
:
➜ helm install cf-operator \
--namespace cf-operator \
--set "global.operator.watchNamespace=kubecf" \
https://s3.amazonaws.com/cf-operators/release/helm-charts/cf-operator-v2.0.0-0.g0142d1e9.tgz
When completed, you will have two pods in the kubecf
namespace which look similar to:
➜ kubectl get pods -n kubecf
NAME READY STATUS RESTARTS AGE
cf-operator-5ff5684bb9-tsw2f 1/1 Running 2 8m4s
cf-operator-quarks-job-5dcc69584f-c2vnw 1/1 Running 2 8m3s
The pods may fail once or twice while initializing. This is ok as long as both report as "running" after a few minutes.
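If they keep crash-looping instead of settling, the operator logs usually explain why. A quick check, assuming the Deployment names match the pod names shown above:
# List the operator pods and tail the logs of each Deployment
kubectl -n kubecf get pods
kubectl -n kubecf logs deployment/cf-operator --tail=50
kubectl -n kubecf logs deployment/cf-operator-quarks-job --tail=50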
Before installing the kubecf
Helm Chart, you'll need to create a configuration file.
The complete configuration file with all options is available at https://github.com/SUSE/kubecf/blob/master/deploy/helm/kubecf/values.yaml. The examples below populate portions of this YAML file.
The absolute minimum configuration of values.yaml
file for KubeCF on EKS is:
system_domain - The URL Cloud Foundry will be accessed from
kube.service_cluster_ip_range - The /24 network block for services
kube.pod_cluster_ip_range - The /16 network block for the pods
Project Eirini swaps out using Diego Cells for Container Runtime and instead uses Kubernetes pods/statefulsets for each application instance. This feature is enabled by adding one more setting, features.eirini.enabled: true, to the minimal configuration in values.yaml.
system_domain: system.kubecf.lab.starkandwayne.com
kube:
  service_cluster_ip_range: 10.100.0.0/16
  pod_cluster_ip_range: 192.168.0.0/16
features:
  eirini:
    enabled: true
This configuration will get you:
- Cloud Foundry reachable at https://api.<system_domain>
- Applications running as pods in the kubecf-eirini namespace
- 1 pod per instance_group
There are many more configuration options available in the default values.yaml file; a more "Production Worthy" deployment would include:
Running the CF Control Plane with 1 pod per instance_group means the deployment is not HA. If any of those pods stops, that part of the Cloud Foundry Control Plane stops functioning since there is only 1 instance.
There are a few ways of enabling multiple pods per instance group:
- Enable Multi AZ and HA Settings. Simply set the corresponding values to true in values.yaml:
multi_az: true
high_availability: true
- Manually set the instance group sizing in values.yaml:
sizing:
  adapter:
    instances: 3
  api:
    instances: 4
  ...
  tcp_router:
    instances: 4
Note that setting explicit instances: sizes overrides the defaults that high_availability: would otherwise apply.
An external database needs to be created beforehand and its connection values populated in values.yaml:
features:
  external_database:
    enabled: true
    type: postgres
    host: postgresql-instance1.cg034hpkmmjt.us-east-1.rds.amazonaws.com
    port: 5432
    databases:
      uaa:
        name: uaa
        password: uaa-admin
        username: 698embi40dlb98403pbh
      cc:
        name: cloud_controller
        password: cloud-controller-admin
        username: 659ejkg84lf8uh8943kb
      ...
      credhub:
        name: credhub
        password: credhub-admin
        username: ffhl38d9ghs93jg023u7g
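Since the database must exist before KubeCF connects to it, the databases referenced above also need to be created up front. A rough sketch with psql, reusing the host from the YAML; the master username is a placeholder:
# Create the databases listed in values.yaml (master credentials are placeholders)
PGHOST=postgresql-instance1.cg034hpkmmjt.us-east-1.rds.amazonaws.com
psql -h "$PGHOST" -U masteruser -d postgres -c "CREATE DATABASE uaa;"
psql -h "$PGHOST" -U masteruser -d postgres -c "CREATE DATABASE cloud_controller;"
psql -h "$PGHOST" -U masteruser -d postgres -c "CREATE DATABASE credhub;"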
App-Autoscaler is an add-on to Cloud Foundry to automatically scale the number of application instances based on CPU, memory, throughput, response time, and several other metrics. You can also add your own custom metrics as of v3.0.0. You decide which metrics you want to scale your app up and down by in a policy and then apply the policy to your application. Examples of usage can be found here.
Add the lines below to your values.yaml
file to enable App Autoscaler:
features:
  autoscaler:
    enabled: true
As of this writing, there is not a documented way to scale the singleton-blobstore
or have it leverage S3. Let us know in the comments if you know of a way to do this!
Once you have assembled your values.yaml
file with the configurations you want, the kubecf
Helm Chart can be installed.
In the example below, an absolute path to the values.yaml file is used; you'll need to update the path to point to your own file.
➜ helm install kubecf \
--namespace kubecf \
--values /Users/chris/projects/kubecf/values.yaml \
https://github.com/SUSE/kubecf/releases/download/v0.2.0/kubecf-0.2.0.tgz
The install takes about 20 minutes to run. If you run a watch command, eventually you will see output similar to:
➜ watch -c "kubectl -n kubecf get pods"
NAME READY STATUS RESTARTS AGE
cf-operator-5ff5684bb9-tsw2f 1/1 Running 0 4h
cf-operator-quarks-job-5dcc69584f-c2vnw 1/1 Running 0 4h
kubecf-adapter-0 4/4 Running 0 3h
kubecf-api-0 15/15 Running 1 3h
kubecf-bits-0 6/6 Running 0 3h
kubecf-bosh-dns-7787b4bb88-44fjf 1/1 Running 0 3h
kubecf-bosh-dns-7787b4bb88-rkjsr 1/1 Running 0 3h
kubecf-cc-worker-0 4/4 Running 0 3h
kubecf-credhub-0 5/5 Running 0 3h
kubecf-database-0 2/2 Running 0 3h
kubecf-diego-api-0 6/6 Running 2 3h
kubecf-doppler-0 9/9 Running 0 3h
kubecf-eirini-0 9/9 Running 9 3h
kubecf-log-api-0 7/7 Running 0 3h
kubecf-nats-0 4/4 Running 0 3h
kubecf-router-0 5/5 Running 0 3h
kubecf-routing-api-0 4/4 Running 0 3h
kubecf-scheduler-0 8/8 Running 6 3h
kubecf-singleton-blobstore-0 6/6 Running 0 3h
kubecf-tcp-router-0 5/5 Running 0 3h
kubecf-uaa-0 6/6 Running 0 3h
At this point Cloud Foundry is alive; now we just need a way to access it.
There are 3 load balancers which are created during the deployment and can be viewed by:
➜ kubectl get services -n kubecf
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
kubecf-router-public LoadBalancer 10.100.193.124 a34d2e33633c511eaa0df0efe1a642cf-1224111110.us-west-2.elb.amazonaws.com 80:31870/TCP,443:31453/TCP 3d
kubecf-ssh-proxy-public LoadBalancer 10.100.221.190 a34cc18bf33c511eaa0df0efe1a642cf-1786911110.us-west-2.elb.amazonaws.com 2222:32293/TCP 3d
kubecf-tcp-router-public LoadBalancer 10.100.203.79 a34d5ca4633c511eaa0df0efe1a642cf-1261111914.us-west-2.elb.amazonaws.com 20000:32715/TCP,20001:30059/TCP,20002:31403/TCP,20003:32130/TCP,20004:30255/TCP,20005:32727/TCP,20006:30913/TCP,20007:30725/TCP,20008:31713/TCP 3d
We need to associate the system_domain in values.yaml with the URL of the LoadBalancer named kubecf-router-public.
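One way to grab that hostname directly from the cluster, instead of copying it out of the table above, is a jsonpath query (this assumes an AWS-style load balancer that exposes a hostname rather than an IP):
# Print the ELB hostname for the router service
kubectl get service kubecf-router-public -n kubecf \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'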
In CloudFlare, we add a cname record pointing the system domain at the ELB:
If you are using Amazon Route53, you can follow the instructions here.
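Whichever DNS provider you use, it's worth confirming the record resolves to the ELB before moving on; api. and login. are just two example hostnames under the system domain:
# Both should return the ELB's addresses once DNS has propagated
dig +short api.system.kubecf.lab.starkandwayne.com
dig +short login.system.kubecf.lab.starkandwayne.com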
Assuming you have the CF CLI already installed, (see this if not), you can target and authenticate to the Cloud Foundry deployment as seen below, remembering to update the system domain URL to the one registered in the previous step:
cf api --skip-ssl-validation "https://api.system.kubecf.lab.starkandwayne.com"
admin_pass=$(kubectl get secret \
--namespace kubecf kubecf.var-cf-admin-password \
-o jsonpath='{.data.password}' \
| base64 --decode)
cf auth admin "${admin_pass}"
The full KubeCF documentation for logging in is here. The "admin" password is stored as a Kubernetes Secret.
That's it! You can now create all the drnic
orgs and spaces you desire with the CF CLI and deploy the 465 copies of Spring Music to the platform you know and love!
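For example, a minimal org/space setup and push might look like the following; the org, space, app name, and path are all placeholders:
# Create an org and space, target them, then push an app from a local directory
cf create-org drnic
cf create-space dev -o drnic
cf target -o drnic -s dev
cf push spring-music -p ./path/to/app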
I did not successfully deploy KubeCF on my first attempt, or my tenth. But most of these failed attempts were self-inflicted and easily avoided. Below are a few snags encountered and their workarounds.
If your pods start to get evicted as you scale the components, you are likely running out of resources. If you run a kubectl describe node <nameofnode>
, you'll see:
Type     Reason                 Age                From      Message
----     ------                 ----               ----      -------
Warning EvictionThresholdMet 13s (x6 over 22h) kubelet, Attempting to reclaim ephemeral-storage
Normal NodeHasDiskPressure 8s (x6 over 22h) kubelet, Node status is now: NodeHasDiskPressure
Normal NodeHasNoDiskPressure 8s (x17 over 15d) kubelet, Node status is now: NodeHasNoDiskPressure
Either scale out the Kubernetes cluster with the eksctl
tool or go easy on the sizing.diego_cell.instances: 50bazillion
setting in the kubecf values.yaml
file!
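Scaling out is a short eksctl exercise; the cluster and nodegroup names below are placeholders, so list the real nodegroup first:
# Find the nodegroup, then grow it
eksctl get nodegroup --cluster kubecf-demo
eksctl scale nodegroup --cluster kubecf-demo --name standard-workers --nodes 5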
There is more than one version of KubeCF Release v0.1.0, and some of them did not seem to work for me; your experience may vary. When I used the tarball kubecf-0.1.0-002b49a.tgz documented in the v0.1.0 release here, my router ELB had 0 of 2 members, which I'm assuming was related to a bad port configuration. There is a KubeCF Helm Chart S3 bucket with additional tarballs; cf-operator-v1.0.0-1.g424dd0b3.tgz is the one used in the examples here.
A note: the original blog post had instructions for v0.1.0. The above is still true, but you should not hit the same issues with the v0.2.0 instructions that are now in this blog.
If you perform a helm uninstall kubecf
and attempt to reinstall at a later time, there are a few PVCs in the kubecf namespace which you will need to delete after the first uninstall. If you don't, you'll wind up with an error similar to this on the kubecf-api*
pod:
➜ kubectl logs kubecf-api-0 -c bosh-pre-start-cloud-controller-ng -f
...
redacted for brevity...
...
VCAP::CloudController::ValidateDatabaseKeys::EncryptionKeySentinelDecryptionMismatchError
To fix, list the PVCs and delete them:
➜ kubectl get pvc -n kubecf
NAME STATUS
kubecf-database-pvc-kubecf-database-0 Bound
kubecf-singleton-blobstore-pvc-kubecf-singleton-blobstore-0 Bound
➜ kubectl delete pvc kubecf-database-pvc-kubecf-database-0
➜ kubectl delete pvc kubecf-singleton-blobstore-pvc-kubecf-singleton-blobstore-0
➜ helm install kubecf #again
Since publishing this article, we're seeing more people succeed in deploying Cloud Foundry to Kubernetes.
João Pinto has written up a tutorial to deploy Cloud Foundry on kind (Kubernetes IN Docker).
At Stark & Wayne, we maintain a bootstrap script to provision a Kubernetes cluster and deploy Cloud Foundry to it called, unambiguously, bootstrap-kubernetes-demos.
bootstrap-kubernetes-demos up --google --kubecf
But wait, there's more!
Want to run multiple KubeCF installs on a single Kubernetes Cluster? Each in their own namespace? With a couple minor helm configurations you can have as many as you'd like!
See our new Limes & Coconuts: Running Multiple KubeCF Deployments on One Kubernetes Cluster blog post to discover how you can have an entire bucket of limes.
The post Running Cloud Foundry on Kubernetes using KubeCF appeared first on Stark & Wayne.