Any CI/CD is just like a serverless platform or PaaS: you run other people’s code. The difference? With a platform you expect the code to work.
With CI/CD you’re waiting for things to fail. So you can fix them. Until they work. Hopefully.
And to fix them you need logs.
kpack is a new open source build system from Pivotal that specializes in applying Cloud Native Buildpacks to my apps and publishing easy-to-use OCI/Docker images. kpack runs entirely within Kubernetes, and allows me to build OCI images from a git repo branch. A new commit results in a new OCI image.
That is unless the buildpacks fail upon my heathen code. Then I must debug thy code, reduce my mistakes to distant memories, and request forgiveness from my build system overlords.
So kpack, where are my logs?
In this article we will look at both kpack's own logs CLI, and how you can find the raw logs from the Kubernetes init containers used to run Cloud Native Buildpacks. I learned about init containers and you can too.
First, let’s set up kpack and build something
To install kpack v0.0.4 into a clean kpack namespace (there is a Cleanup section at the end):
kubectl apply -f https://github.com/pivotal/kpack/releases/download/v0.0.4/release-0.0.4.yaml
We will build my sample NodeJS application using a collection of new buildpacks from the Cloud Foundry Buildpacks team, which includes buildpacks for NodeJS applications:
ns=demo
kubectl create ns $ns
kubectl apply -n $ns -f https://raw.githubusercontent.com/starkandwayne/bootstrap-gke/ecbdfc0900ecb58d02be302d968d9d074c59803e/resources/kpack/builder-cflinuxfs3.yaml
Now we need a service account that includes permissions to publish our OCI/docker images to a registry.
Find a sample serviceaccount YAML at https://gist.github.com/drnic/d35eddbef009b2eb8495218a29d4e263. Make your own YAML file, and install it:
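If you'd rather not fetch the gist, here is a minimal sketch of what such a file typically contains for kpack v0.0.4: a basic-auth Secret annotated with the target registry URL, plus a ServiceAccount that references it. The secret name and Docker Hub URL below are placeholder assumptions; check the gist above for the authoritative layout for your registry.

```yaml
# Sketch only: substitute your own registry URL and credentials.
apiVersion: v1
kind: Secret
metadata:
  name: registry-credentials
  annotations:
    build.pivotal.io/docker: https://index.docker.io/v1/
type: kubernetes.io/basic-auth
stringData:
  username: <registry username>
  password: <registry password>
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: service-account
secrets:
- name: registry-credentials
```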
kubectl apply -n $ns -f my-serviceaccount.yaml
Finally, to ask kpack to continuously watch and build my sample NodeJS application, create a kpack Image file kpack-image.yaml with the name under which you wish to publish the Docker image:
apiVersion: build.pivotal.io/v1alpha1
kind: Image
metadata:
  name: sample-app-nodejs
spec:
  builder:
    name: cflinuxfs3-builder
    kind: Builder
  serviceAccount: service-account
  cacheSize: "1.5Gi"
  source:
    git:
      url: https://github.com/starkandwayne/sample-app-nodejs.git
      revision: master
  tag: <my organization>/<my image name>:latest
Apply this file to your namespace and you’re done:
kubectl apply -n $ns -f kpack-image.yaml
Your kpack Image will automatically detect the latest Git commit on the repository, create a kpack Build, and start doing its Cloud Native Buildpacks magic.
Unless it doesn’t. You have no idea. kpack is “native to Kubernetes” which I think means “no UI” and “figure out for yourself if it works”.
Logs, damn it
The latest kpack releases include a logs CLI to allow you to watch or replay the logs for a build (git repo + builder/buildpacks -> docker image). Download the one for your OS, put it in your $PATH, make it executable, and we can see the logs from our first build:
logs -image sample-app-nodejs -namespace $ns -build 1
The output will include the magic of Cloud Native Buildpacks applied to our sample NodeJS app:
...
-----> Node Engine Buildpack 0.0.49
Node Engine 10.16.3: Contributing to layer
Downloading from https://buildpacks.cloudfoundry.org/dependencies/node/node-10.16.3-linux-x64-cflinuxfs3-33294d36.tgz
...
-----> Yarn Buildpack 0.0.28
Yarn 1.17.3: Contributing to layer
...
*** Images:
starkandwayne/sample-app-nodejs:latest - succeeded
...
From where doth logs cometh?
So we have a kpack logs CLI, but what does it do? Where are these logs?
Take a moment to brush up on init containers. You are now qualified to understand how kpack implements each Build: it creates a pod with a long, ordered sequence of init containers. Each step of the Cloud Native Buildpack lifecycle (detect, build, export, etc.) is implemented as an independent init container.
Init containers for a pod run one at a time, each to completion, and the pod's main containers only start once all init containers have finished. A kpack Build is implemented as a pod whose main container does nothing; it's all implemented with init containers.
The STDOUT/STDERR of each init container are the logs we are looking for.
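To make the sequencing concrete, here is a toy pod (not kpack's actual Build pod spec, just an illustration of the pattern) whose init containers run in order before a do-nothing main container starts:

```yaml
# Toy example of the init-container pattern kpack relies on.
apiVersion: v1
kind: Pod
metadata:
  name: init-order-demo
spec:
  initContainers:
  - name: step-1          # runs first, to completion
    image: busybox
    command: ["sh", "-c", "echo step-1"]
  - name: step-2          # runs only after step-1 succeeds
    image: busybox
    command: ["sh", "-c", "echo step-2"]
  containers:
  - name: nop             # the "real" container does nothing useful
    image: busybox
    command: ["sh", "-c", "sleep 1"]
```

Each init container's output is retrievable separately with `kubectl logs init-order-demo -c step-1`, and so on.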
To see the logs for an init container we use the kubectl logs -c <container> flag.
For example, to see the build stage logs (most likely where you will find bugs in how buildpacks run against your application source code) we'd run:
kubectl logs <build-pod> -c build
The kpack logs CLI is simply discovering the build pod and displaying the logs for each init container in the correct order. Neat.
The init containers map to the buildpack lifecycle steps:
$ kubectl get pods -n $ns
NAME READY STATUS RESTARTS AGE
sample-app-nodejs-build-1-wnlxs-build-pod 0/1 Completed 0 2m38s
$ pod=sample-app-nodejs-build-1-wnlxs-build-pod
$ kubectl get pod $pod -n $ns -o json | jq -r ".spec.initContainers[].name"
creds-init
source-init
prepare
detect
restore
analyze
build
export
cache
So to get the logs for a complete kpack Build, we just look up the logs for each init container in order. Enter xargs, which lets us invoke kubectl logs -c <init-container> on each named container above:
kubectl get pod $pod -n $ns -o json | \
jq -r ".spec.initContainers[].name" | \
xargs -L1 kubectl logs $pod -n $ns -c
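To see what the jq half of that pipeline emits without a live cluster, here is a self-contained sketch. The pod JSON is canned (a hand-written stand-in for what `kubectl get pod -o json` returns, with only a subset of the lifecycle step names):

```shell
# Canned stand-in for the real Build pod JSON from `kubectl get pod -o json`.
cat > /tmp/pod.json <<'EOF'
{"spec":{"initContainers":[{"name":"prepare"},{"name":"detect"},{"name":"build"},{"name":"export"}]}}
EOF

# jq emits the init container names in spec order, one per line;
# xargs -L1 then runs `kubectl logs $pod -c <name>` once per name.
jq -r '.spec.initContainers[].name' /tmp/pod.json
# prepare
# detect
# build
# export
```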
stern shows all the logs
Another way to view the logs is the stern CLI, which is a very handy way to view logs of pods with multiple containers:
stern $pod -n $ns --container-state terminated
One current downside of stern for this task is that it does not show init container logs first, in the correct order, which can make debugging them confusing.
Cleanup
Delete our demo namespace to remove the kpack Image, builds, and pods:
kubectl delete ns demo
To remove the kpack namespace and custom resource definitions:
kubectl delete -f https://github.com/pivotal/kpack/releases/download/v0.0.4/release-0.0.4.yaml