Technical Requirements

Motivation

While learning Go and testing some projects in a local Kubernetes cluster, I stumbled on an issue: the build cache.

By default, when you build a Go project it reuses its build cache, located at ~/.cache/go-build (or wherever `go env GOCACHE` points). If the project is small and you have a basic Dockerfile, you might not notice the performance difference when building. But as your project grows, you will run into a very slow build-and-deploy process (sometimes slower than deploying to production).

Let’s Go

I assume you have basic experience with Go, Kubernetes and Docker, and have Minikube and Skaffold preinstalled. If you don't, you can follow the steps in this article: TODO: Link to article with install steps

Set up Minikube

First, let's create a minimal k8s cluster with Minikube:

❯ minikube start --cpus=4
😄  minikube v1.24.0 on Debian bullseye/sid
✨  Automatically selected the docker driver. Other choices: kvm2, ssh
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=4, Memory=3700MB) ...
🐳  Preparing Kubernetes v1.22.3 on Docker 20.10.8 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Once complete, confirm that all services are running (I use `k` as an alias for `kubectl`):

❯ k get po -A
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-78fcd69978-5xphq           1/1     Running   0          14s
kube-system   etcd-minikube                      1/1     Running   0          26s
kube-system   kube-apiserver-minikube            1/1     Running   0          26s
kube-system   kube-controller-manager-minikube   1/1     Running   0          29s
kube-system   kube-proxy-95vtq                   1/1     Running   0          14s
kube-system   kube-scheduler-minikube            1/1     Running   0          26s
kube-system   storage-provisioner                1/1     Running   0          25s

Set up a basic Go project

Initially we will just print some text to the console. Later we will fetch some pod data from k8s using a service account binding, but let's keep it simple at the beginning:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Keep the app in an endless loop so the pod doesn't end up in a CrashLoopBackOff
	for {
		fmt.Println("Hello k8s and skaffold!!")
		time.Sleep(time.Second)
	}
}

Set up the Dockerfile

We would like to use a multi-stage build in order to get some caching. This specific Dockerfile and the reasoning behind it are covered in the Docker Docs

# syntax=docker/dockerfile:1
##
## Build
##
FROM golang:1.16-buster AS build
WORKDIR /app
COPY go.mod ./
# COPY go.sum ./ # We don't need go.sum at the moment as there aren't any other packages
RUN go mod download
COPY *.go ./
RUN go build -o /hello
##
## Deploy
##
FROM gcr.io/distroless/base-debian10
WORKDIR /
COPY --from=build /hello /hello
EXPOSE 8080
USER nonroot:nonroot

ENTRYPOINT ["/hello"]

Let's test the build and the build time:

❯ time docker build -t debug-local-build:0.0.1 .
Successfully built f21033a33261
Successfully tagged debug-local-build:0.0.1
docker build -t debug-local-build:0.0.1 .
0.03s user 0.03s system 2% cpu 2.759 total

That was fast: 2.759 seconds total. When we run it again it drops to 0.123s total, and when we make a change in the file, 1.845s. We can live with that, right?

Set up Deployment

We will be using a very basic deployment that will be directly bound to Skaffold - we won’t be doing production builds at the moment:

Input:

k create deployment \
--image=hello-skaffold-buildkit hello-skaffold-buildkit \
-o yaml \
--dry-run=client > deployment.yml

Output:

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: hello-skaffold-buildkit
  name: hello-skaffold-buildkit
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-skaffold-buildkit
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: hello-skaffold-buildkit
    spec:
      containers:
      - image: hello-skaffold-buildkit
        name: hello-skaffold-buildkit
        resources: {}
status: {}

Set up Skaffold

Once we have the Dockerfile and the Deployment, we can proceed with the default Skaffold setup:

skaffold init => Docker => Buildpack go.mod => Yes

Skaffold init demo
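For reference, skaffold init writes a skaffold.yaml to the project root. A minimal sketch of what it may look like is below; the apiVersion depends on your Skaffold version, and the image name assumes the hello-skaffold-buildkit name we use throughout:

apiVersion: skaffold/v2beta26
kind: Config
metadata:
  name: hello-skaffold-buildkit
build:
  artifacts:
    - image: hello-skaffold-buildkit
      docker:
        dockerfile: Dockerfile
deploy:
  kubectl:
    manifests:
      - deployment.yml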

Run the default Skaffold setup

We run skaffold dev --tail in order to see the results, and then change the output text from skaffold to skaffolds:

Skaffold init demo

Here are the results:

Deployments stabilized in 2.137 seconds
Deployments stabilized in 2.107 seconds
Deployments stabilized in 2.114 seconds

Whoa, 2 seconds per change, this is fast (then again, the project is tiny)!

Let's make the project a bit more complex

We are pretty happy with a ~2 second build-and-deploy time for a Hello World app, but what if we want to use some other packages or connect directly to the k8s API with a service account? How slow can that be, maybe 3 seconds max?

Include the Kubernetes client-go library

I will directly use this example from the client-go repo and test the basic build time.
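For reference, the in-cluster configuration example from the client-go repo boils down to roughly this sketch (trimmed down; the exact example in the repo also looks up a specific pod, which is where the "Pod example-xxxxx not found" log lines later on come from):

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Build the in-cluster config from the service account token
	// mounted into the pod.
	config, err := rest.InClusterConfig()
	if err != nil {
		panic(err.Error())
	}
	// Create the clientset used to talk to the API server.
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err.Error())
	}
	for {
		// List the pods across all namespaces.
		pods, err := clientset.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err.Error())
		}
		fmt.Printf("There are %d pods in the cluster\n", len(pods.Items))
		time.Sleep(10 * time.Second)
	}
}
```

Note that this only works inside the cluster, since rest.InClusterConfig reads the mounted service account credentials.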

Note: Uncomment the # COPY go.sum ./ line in the main Dockerfile, as it will be required for this test.

Note 2: Run go mod tidy in order to get the additional packages required for the snippet.

Note 3: To measure a clean build, consider disabling the container cache with noCache: true in skaffold.yaml:

build:
  local:
    useBuildkit: true
  artifacts:
    - image: hello-skaffold-buildkit
      docker:
        noCache: true
        dockerfile: Dockerfile

Note 4: If you get panic: pods is forbidden: User "system:serviceaccount:default:default" cannot list resource "pods" in API group "" at the cluster scope, you will have to give the default service account access to list pods:

Create the ClusterRole that can read pods:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: pod-reader
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources: ["pods"]
    verbs: ["get", "watch", "list"]

Bind it to the default serviceaccount:

kubectl create clusterrolebinding pod-reader \
  --clusterrole=pod-reader  \
  --serviceaccount=default:default

And you should be ready to rock!

The initial skaffold dev --tail might require some time to download the modules but after that we would get this result:

- deployment.apps/hello-skaffold-buildkit created
Waiting for deployments to stabilize...
 - deployment/hello-skaffold-buildkit is ready.
Deployments stabilized in 2.153 seconds
Press Ctrl+C to exit
Watching for changes...
[hello-skaffold-buildkit] There are 8 pods in the cluster
[hello-skaffold-buildkit] Pod example-xxxxx not found in default namespace
[hello-skaffold-buildkit] There are 8 pods in the cluster
[hello-skaffold-buildkit] Pod example-xxxxx not found in default namespace
[hello-skaffold-buildkit] There are 8 pods in the cluster
[hello-skaffold-buildkit] Pod example-xxxxx not found in default namespace
[hello-skaffold-buildkit] There are 8 pods in the cluster

Okay okay, it seems to work with the initial build.

Let’s just change a string and see if the re-build will be fast enough:

Successfully built 36805bf93ac6
Successfully tagged hello-skaffold-buildkit:8ab2695-dirty
Starting test...
Tags used in deployment:
 - hello-skaffold-buildkit -> hello-skaffold-buildkit:36805bf93ac66da7cbf07a5e0ad2a9b6dae4132743572bbc7d117c608ccb20f3
Starting deploy...
 - deployment.apps/hello-skaffold-buildkit configured
Waiting for deployments to stabilize...
 - deployment/hello-skaffold-buildkit is ready.
Deployments stabilized in 2.142 seconds
You can also run [skaffold run --tail] to get the logs

skaffold run  1.51s user 0.67s system 2% cpu 1:20.21 total

Alright, the deployment stabilized in 2 seconds, but it took a total of 1 minute and 20 seconds to build for only one string change?

This can’t be right; let's run it again:

Deployments stabilized in 2.142 seconds
You can also run [skaffold run --tail] to get the logs

skaffold run  1.45s user 0.74s system 4% cpu 52.537 total

Wait a minute... phew, I waited less than a minute: just 52 seconds to re-build the app and deploy it to a local k8s cluster with 4 cores, all for one string change.

Not good enough; let's optimize it with BuildKit

Configure BuildKit

BuildKit provides advanced caching and build mechanisms; you can think of it as the successor to the legacy docker build backend.
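Outside of Skaffold, you can enable BuildKit per build with the DOCKER_BUILDKIT=1 environment variable, or globally via the Docker daemon config (the path below assumes a standard Linux install of /etc/docker/daemon.json):

{
  "features": {
    "buildkit": true
  }
}

Restart the Docker daemon after changing this file for it to take effect.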

Let's make some changes to the default Dockerfile and try it out:

# syntax=docker/dockerfile:1

FROM golang:1.17-alpine AS base
WORKDIR /src
ENV CGO_ENABLED=0
COPY *.go .
COPY go.mod ./
COPY go.sum ./
RUN --mount=type=cache,target=/go/pkg/mod go mod download

FROM base AS build
ARG TARGETOS
ARG TARGETARCH
RUN --mount=target=. \
    --mount=type=cache,target=/go/pkg/mod/ \
    --mount=type=cache,target=/root/.cache/go-build/ \
    GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build -o /hello .

EXPOSE 8080

ENTRYPOINT ["/hello"]

And instruct Skaffold to use BuildKit:

build:
  local:
    useBuildkit: true

Run skaffold dev --tail; the output will be quite different and should look like this on your end:

[+] Building 86.5s (16/16) FINISHED                                                                                                                                                                            
 => [internal] load build definition from Dockerfile                                                                                                                                                      0.1s
 => => transferring dockerfile: 507B                                                                                                                                                                      0.0s
 => [internal] load .dockerignore                                                                                                                                                                         0.1s
 => => transferring context: 2B                                                                                                                                                                           0.0s
 => resolve image config for docker.io/docker/dockerfile:1                                                                                                                                                2.4s
 => docker-image://docker.io/docker/dockerfile:1@sha256:42399d4635eddd7a9b8a24be879d2f9a930d0ed040a61324cfdf59ef1357b3b2                                                                                  1.8s
 => => resolve docker.io/docker/dockerfile:1@sha256:42399d4635eddd7a9b8a24be879d2f9a930d0ed040a61324cfdf59ef1357b3b2                                                                                      0.0s
 => => sha256:42399d4635eddd7a9b8a24be879d2f9a930d0ed040a61324cfdf59ef1357b3b2 2.00kB / 2.00kB                                                                                                            0.0s
 => => sha256:93f32bd6dd9004897fed4703191f48924975081860667932a4df35ba567d7426 528B / 528B                                                                                                                0.0s
 => => sha256:e532695ddd93ca7c85a816c67afdb352e91052fab7ac19a675088f80915779a7 1.21kB / 1.21kB                                                                                                            0.0s
 => => sha256:24a639a53085eb680e1d11618ac62f3977a3926fedf5b8471ace519b8c778030 9.67MB / 9.67MB                                                                                                            1.4s
 => => extracting sha256:24a639a53085eb680e1d11618ac62f3977a3926fedf5b8471ace519b8c778030                                                                                                                 0.2s
 => [internal] load build definition from Dockerfile                                                                                                                                                      0.0s
 => [internal] load .dockerignore                                                                                                                                                                         0.0s
 => [internal] load metadata for docker.io/library/golang:1.17-alpine                                                                                                                                     1.7s
 => [base 1/6] FROM docker.io/library/golang:1.17-alpine@sha256:4918412049183afe42f1ecaf8f5c2a88917c2eab153ce5ecf4bf2d55c1507b74                                                                         18.3s
 => => resolve docker.io/library/golang:1.17-alpine@sha256:4918412049183afe42f1ecaf8f5c2a88917c2eab153ce5ecf4bf2d55c1507b74                                                                               0.0s
 => => sha256:666ba61612fd7c93393f9a5bc1751d8a9929e32d51501dba691da9e8232bc87b 282.16kB / 282.16kB                                                                                                        0.9s
 => => sha256:8ed8ca4862056a130f714accb3538decfa0663fec84e635d8b5a0a3305353dee 155B / 155B                                                                                                                0.2s 
 => => sha256:4918412049183afe42f1ecaf8f5c2a88917c2eab153ce5ecf4bf2d55c1507b74 1.65kB / 1.65kB                                                                                                            0.0s 
 => => sha256:8474650232fca6807a8567151ee0a6bd2a54ea28cfc93f7824b42267ef4af693 1.36kB / 1.36kB                                                                                                            0.0s 
 => => sha256:d8bf44a3f6b435c736d8b355ea32eb014508656504c77a3ecbc9378c84665762 5.20kB / 5.20kB                                                                                                            0.0s 
 => => sha256:59bf1c3509f33515622619af21ed55bbe26d24913cedbca106468a5fb37a50c3 2.82MB / 2.82MB                                                                                                            0.6s 
 => => sha256:1ff5b6d8b8c6b6093e19083f398755431fee6120fae681379ad828e84f387ec0 110.13MB / 110.13MB                                                                                                       13.5s 
 => => extracting sha256:59bf1c3509f33515622619af21ed55bbe26d24913cedbca106468a5fb37a50c3                                                                                                                 0.1s 
 => => sha256:40fcfd711f8db74e87407ed47c1d306a77eefe885d79679595d94f13c905f395 155B / 155B                                                                                                                0.8s 
 => => extracting sha256:666ba61612fd7c93393f9a5bc1751d8a9929e32d51501dba691da9e8232bc87b                                                                                                                 0.1s 
 => => extracting sha256:8ed8ca4862056a130f714accb3538decfa0663fec84e635d8b5a0a3305353dee                                                                                                                 0.0s 
 => => extracting sha256:1ff5b6d8b8c6b6093e19083f398755431fee6120fae681379ad828e84f387ec0                                                                                                                 4.1s
 => => extracting sha256:40fcfd711f8db74e87407ed47c1d306a77eefe885d79679595d94f13c905f395                                                                                                                 0.0s
 => [internal] load build context                                                                                                                                                                         0.1s
 => => transferring context: 66.58kB                                                                                                                                                                      0.0s
 => [base 2/6] WORKDIR /src                                                                                                                                                                               0.1s
 => [base 3/6] COPY *.go .                                                                                                                                                                                0.1s
 => [base 4/6] COPY go.mod ./                                                                                                                                                                             0.1s
 => [base 5/6] COPY go.sum ./                                                                                                                                                                             0.1s
 => [base 6/6] RUN --mount=type=cache,target=/go/pkg/mod go mod download                                                                                                                                 14.8s
 => [build 1/1] RUN --mount=target=.     --mount=type=cache,target=/go/pkg/mod/     --mount=type=cache,target=/root/.cache/go-build/     GOOS=linux GOARCH=amd64 go build -o /hello .                    46.0s
 => exporting to image                                                                                                                                                                                    0.4s
 => => exporting layers                                                                                                                                                                                   0.3s
 => => writing image sha256:d5a69442f414079a3f8a588b88c8d4c627c918b028935b92df41bb483cbecdc8                                                                                                              0.0s
 => => naming to docker.io/library/hello-skaffold-buildkit:8ab2695-dirty                                                                                                                                  0.0s
Tags used in deployment:
 - hello-skaffold-buildkit -> hello-skaffold-buildkit:d5a69442f414079a3f8a588b88c8d4c627c918b028935b92df41bb483cbecdc8
Starting deploy...
 - deployment.apps/hello-skaffold-buildkit configured
Waiting for deployments to stabilize...
 - deployment/hello-skaffold-buildkit is ready.
Deployments stabilized in 2.132 seconds
Press Ctrl+C to exit
Watching for changes...
[hello-skaffold-buildkit] There are 9 pods in the cluster

Okay, okay... wait, this is even slower? Or is it? Let's make a basic update to the code and see if there is any difference:

[+] Building 6.2s (16/16) FINISHED                                                                                                                                                                             
 => [internal] load build definition from Dockerfile                                                                                                                                                      0.0s
 => => transferring dockerfile: 38B                                                                                                                                                                       0.0s
 => [internal] load .dockerignore                                                                                                                                                                         0.0s
 => => transferring context: 2B                                                                                                                                                                           0.0s
 => resolve image config for docker.io/docker/dockerfile:1                                                                                                                                                0.7s
 => CACHED docker-image://docker.io/docker/dockerfile:1@sha256:42399d4635eddd7a9b8a24be879d2f9a930d0ed040a61324cfdf59ef1357b3b2                                                                           0.0s
 => [internal] load .dockerignore                                                                                                                                                                         0.0s
 => [internal] load build definition from Dockerfile                                                                                                                                                      0.0s
 => [internal] load metadata for docker.io/library/golang:1.17-alpine                                                                                                                                     0.8s
 => [base 1/6] FROM docker.io/library/golang:1.17-alpine@sha256:4918412049183afe42f1ecaf8f5c2a88917c2eab153ce5ecf4bf2d55c1507b74                                                                          0.0s
 => [internal] load build context                                                                                                                                                                         0.0s
 => => transferring context: 2.04kB                                                                                                                                                                       0.0s
 => CACHED [base 2/6] WORKDIR /src                                                                                                                                                                        0.0s
 => [base 3/6] COPY *.go .                                                                                                                                                                                0.1s
 => [base 4/6] COPY go.mod ./                                                                                                                                                                             0.1s
 => [base 5/6] COPY go.sum ./                                                                                                                                                                             0.1s
 => [base 6/6] RUN --mount=type=cache,target=/go/pkg/mod go mod download                                                                                                                                  0.6s
 => [build 1/1] RUN --mount=target=.     --mount=type=cache,target=/go/pkg/mod/     --mount=type=cache,target=/root/.cache/go-build/     GOOS=linux GOARCH=amd64 go build -o /hello .                     3.0s
 => exporting to image                                                                                                                                                                                    0.3s
 => => exporting layers                                                                                                                                                                                   0.3s
 => => writing image sha256:74283e78a2257e033adda1ed2081f9337074c802bd662db099b9a558ec8480b5                                                                                                              0.0s
 => => naming to docker.io/library/hello-skaffold-buildkit:8ab2695-dirty                                                                                                                                  0.0s
Tags used in deployment:
 - hello-skaffold-buildkit -> hello-skaffold-buildkit:74283e78a2257e033adda1ed2081f9337074c802bd662db099b9a558ec8480b5
Starting deploy...
 - deployment.apps/hello-skaffold-buildkit configured
Waiting for deployments to stabilize...
 - deployment/hello-skaffold-buildkit is ready.
Deployments stabilized in 3.128 seconds
Watching for changes...
[hello-skaffold-buildkit] There are 8 podss in the cluster

Alright, alright: rebuilds now take around 6 seconds, much faster than the previous change that took around 52 seconds.

Honestly, I would have expected to be able to re-build and re-run the app directly inside the cluster, but at the moment there doesn't seem to be such a solution. Still, this one seems sustainable and stable for the time being.

I will keep looking for more optimal build/deploy methods, and if anyone knows a good, fast alternative that builds the app faster than Skaffold and BuildKit, please let me know; I am open to all solutions!

Thanks for reading!