Deploying on Kubernetes #2: Scaffolding
This is the second in a series of blog posts that hopes to detail the journey of deploying a service on Kubernetes. Its purpose is not to serve as a tutorial (there are many out there already), but rather to discuss some of the approaches we take.
Assumptions
To read this it’s expected that you’re familiar with Docker, and have perhaps played with building Docker containers. Additionally, some experience with docker-compose is useful, though not directly related to what follows.
Necessary Background
So far we’ve been able to:
Next, we need to define a Helm chart to manage our Kubernetes resources!
Minimum viable environment
Kubernetes can be managed in either a declarative or a procedural way. It stores its state behind a well-defined API, and the tooling allows creating a resource in a procedural way (kubectl run, for example) but editing it in a declarative one (kubectl get pod -l name=foo --output=yaml > specification.yml).
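To make the distinction concrete, here is a minimal sketch of the two styles; the name foo and the nginx image are stand-ins rather than anything from this project:
# Procedural: create a resource imperatively
$ kubectl run foo --image=nginx:1.12
# Declarative: export the resulting specification, edit it on disk, and apply it
$ kubectl get deployment foo --output=yaml > specification.yml
$ vim specification.yml
$ kubectl apply -f specification.yml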
The problem is that, as services become more complex, there is an astounding number of these files to manage. From the Kubernetes command line alone there are 36 different types of object that can be managed. This can quickly turn into configuration hell, in which a single value might need to be updated across 4 different files before anything works.
There are various ways to manage them, but for this project we will be using Helm. From the project’s GitHub page:
Helm is a tool that streamlines installing and managing Kubernetes applications. Think of it like apt/yum/homebrew for Kubernetes.
Creating the chart
Helm provides an opinionated set of defaults for creating a chart with the command:
helm create kolide-fleet
It creates a whole series of files required to get a super basic service up and running:
Chart.yaml — the chart declaration, noting the name, version and so on
values.yaml — a file that allows specific parameters in the chart to be overridden, based on user requirements
.helmignore — like .gitignore but for helm
templates/NOTES.txt — A file of instructions printed out after a release has been successfully installed
templates/_helpers.tpl — A file that contains specific golang template functions to help make certain tasks easier
templates/(deployment|service|ingress).yaml — Kubernetes resources. We’ll get to those in a future post.
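As a quick illustration of how values.yaml feeds the templates: assuming the stock scaffold, it exposes (among other things) a replicaCount value, which can be overridden at install time without touching the templates themselves:
# Render the templates locally to see the resources the chart would create
$ helm template kolide-fleet
# Override a default from values.yaml, without actually installing anything
$ helm install kolide-fleet --set replicaCount=2 --dry-run --debug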
This allows us to deploy a fully functioning app … of some kind:
$ helm install kolide-fleet
NAME: tan-snake
LAST DEPLOYED: Sun Mar 25 20:06:00 2018
NAMESPACE: kube-system
STATUS: DEPLOYED
RESOURCES:
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
tan-snake-kolide-fleet ClusterIP 10.100.177.52 <none> 80/TCP 0s
==> v1beta2/Deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
tan-snake-kolide-fleet 1 0 0 0 0s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
tan-snake-kolide-fleet-749df88c7d-2bwnb 0/1 ContainerCreating 0 0s
NOTES:
1. Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace kube-system -l "app=kolide-fleet,release=tan-snake" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:80
So! We’ve deployed a fully functional application onto our local cluster (the default chart ships an nginx container). Following the instructions in the NOTES section, we do indeed get the right response:
$ curl localhost:8080 -I
HTTP/1.1 200 OK
Server: nginx/1.12.2
Date: Sun, 25 Mar 2018 18:11:07 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 11 Jul 2017 13:29:18 GMT
Connection: keep-alive
ETag: "5964d2ae-264"
Accept-Ranges: bytes
We can clean this up with:
$ helm delete tan-snake --purge
release "tan-snake" deleted
Creating a better chart
After using Helm for a year or so, I was unhappy with the defaults set out by the normal chart. Luckily, helm has the ability to use my own starter chart — a chart with defaults as I have specified them.
The starter chart I use is at sitewards/helm-chart, and the instructions there detail how to install it. However, given that I have already installed it I will recreate the chart as I did before but using my own chart as the base:
$ helm create --starter=sitewards/chart kolide-fleet
Creating kolide-fleet
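For reference, installing a starter is (in Helm 2) roughly a matter of placing a chart under $(helm home)/starters; the steps below are an approximation rather than a copy of the repository’s own instructions:
# Starters live under $(helm home)/starters/<name>; "sitewards/chart" above
# refers to the chart/ directory of the sitewards/helm-chart repository
$ git clone https://github.com/sitewards/helm-chart.git /tmp/helm-chart
$ mkdir -p "$(helm home)/starters/sitewards"
$ cp -r /tmp/helm-chart/chart "$(helm home)/starters/sitewards/chart"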
The necessary fine tuning
Unfortunately, I am not as well organised as the helm upstream chart, and mine does not work “out of the box”. Instead, I need to make a few adjustments:
# This application has no requirement for persistence, and thus
# we do not need to document it
$ rm PERSISTENCE.md
# There are a tonne of opinionated templates, but they are
# not filled with values that make them work initially. Therefore,
# remove them for now
$ mv templates .to-do
# I use `__` as a placeholder everywhere. So, I need to know what placeholders to fill out.
# These placeholders are explained in the repository itself -- check there for more info
$ grep -riE '__[A-Z_]+__' . --only-matching --no-filename | sort | uniq
__AUTHOR_GITHUB_USERNAME__
__AUTHOR_NAME__
__CHART__
__CONFIGMAP_NAME__
__CONTAINER_IMAGE__
__CONTAINER_NAME__
__CONTAINER_PORT__
__CONTAINER_PORT_NAME__
__CONTAINER_PORT_PROTOCOL__
__ITEM_KEY__
__KEY_USED_IN_SECRET_NAME__
__LIVENESS_PROBE__
__NAME__
__RESOURCE_NAME__
__RESTART_POLICY__
__ROLE_NAME__
__SECRET_NAME__
__SERVICE_ACCOUNT_NAME__
__VOLUME_NAME__
# Better start filling out those placeholders!
$ find . -type f -exec sed --in-place 's/__AUTHOR_GITHUB_USERNAME__/@andrewhowdencom/g' {} \;
# etc.
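Rather than running sed once per placeholder, a small loop over placeholder/value pairs does the same job; the values below are illustrative guesses for this chart rather than gospel:
# Fill each placeholder in turn; adjust the pairs to suit your own chart
$ for pair in '__AUTHOR_GITHUB_USERNAME__=@andrewhowdencom' '__CHART__=kolide-fleet' '__NAME__=fleet'; do
    find . -type f -exec sed --in-place "s/${pair%%=*}/${pair#*=}/g" {} \;
  done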
Aaalmost working
Unfortunately, this is not enough to launch the chart. Helm requires that we deploy at least something, or it will fail the release. So, the following three files are moved back into place:
$ mkdir templates
$ mv .to-do/NOTES.txt .to-do/configmap.yml .to-do/_helpers.tpl templates
The configmap.yml is a file that we will revisit later, but for now it’s safe to think of it as "something that’s deployed so that helm considers the release worthwhile, but is otherwise harmless".
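Before installing, it is worth sanity-checking that the trimmed-down chart still renders; helm provides two commands for exactly this:
# Catch structural mistakes in the chart
$ helm lint path/to/chart
# Render the templates locally, without talking to the cluster
$ helm template path/to/chart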
This lets us deploy the chart:
$ helm upgrade --install kolide-fleet path/to/chart
Release "kolide-fleet" does not exist. Installing it now.
NAME: kolide-fleet
LAST DEPLOYED: Sun Mar 25 20:32:10 2018
NAMESPACE: kube-system
STATUS: DEPLOYED
RESOURCES:
==> v1/ConfigMap
NAME DATA AGE
kolide-fleet-fleet 0 0s
NOTES:
fleet
## Accessing fleet
----------------------
1. Get the fleet URL to visit by running these commands in the same shell:
NOTE: It may take a few minutes for the loadBalancer IP to be available.
You can watch the status of by running 'kubectl get svc --namespace kube-system -w kolide-fleet-fleet'
export SERVICE_IP=$(kubectl get svc kolide-fleet-fleet --namespace kube-system --template "{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}")
echo http://$SERVICE_IP:/login
For more information, check the readme!
The instructions are a bit off at the minute, but the default configuration is to expose the application via a LoadBalancer. Right now there’s nothing to expose, but the instructions will be corrected later.
Finishing Touches
Lastly, I do not like the default Chart.yaml that helm insists on creating. Unfortunately, it will overwrite whatever is in the starter pack, so we must get the desired one directly from GitHub:
$ curl https://raw.githubusercontent.com/sitewards/helm-chart/master/chart/Chart.yaml -O
$ vim Chart.yaml # Correct the placeholders
That’s all for now, folks
This gives us a chart that is deployable, but essentially useless. For now, this is fine. In future posts we will cover adding the dependencies for MySQL and Redis, and the necessary Kubernetes primitives to create a production-worthy deployment of kolide/fleet.
For now, you can see the changes here*:
AD-HOC feat (kolide/fleet): Initial specification of the kolide/fleet chart (github.com)
This is a super early commit of the kolide/fleet chart. The chart scaffold itself is only partially implemented…
* I will try to keep this link up to date
Thanks
Behrous Abbasi, whose encouragement is spurring me to write this blog, and
Zach Wasserman, who apparently noticed this blog post — also super encouraging!
See the next in the series here: