Deploying on Kubernetes #9: Exposition via service
This is the ninth in a series of blog posts that hope to detail the journey of deploying a service on Kubernetes. Its purpose is not to serve as a tutorial (there are many out there already), but rather to discuss some of the approaches we take.
Assumptions
To read this it’s expected that you’re familiar with Docker, and have perhaps played with building docker containers. Additionally, some experience with docker-compose is perhaps useful, though not immediately related.
Necessary Background
So far we’ve been able to:
Service Discovery
In addition to providing primitives for running containers and injecting configuration into these containers, Kubernetes also provides a service discovery abstraction.
From Wikipedia:
Service discovery is the automatic detection of devices and services offered by these devices on a computer network. A service discovery protocol (SDP) is a network protocol that helps accomplish service discovery. Service discovery aims to reduce the configuration efforts from users.
Kubernetes implements this in two parts:
The service abstraction
Kubernetes provides a service abstraction. We can think of a service as a super simple proxy that sits in front of pods. It gets assigned an IP, and passes traffic sent to that IP on to one of a set of pods.
We have unknowingly been using the service abstractions provided by the Redis and MySQL charts. We can take a look at one of those to evaluate what a service looks like:
$ kubectl get svc kolide-fleet-mysql --output=yaml
---
# Abridged
apiVersion: v1
kind: Service
metadata:
  labels:
    app: kolide-fleet-mysql
    chart: mysql-0.3.6
    heritage: Tiller
    release: kolide-fleet
  name: kolide-fleet-mysql
  namespace: default
spec:
  clusterIP: 10.101.74.82
  ports:
  - name: mysql
    port: 3306
    protocol: TCP
    targetPort: mysql
  selector:
    app: kolide-fleet-mysql
  sessionAffinity: None
  type: ClusterIP
We see the familiar metadata which describes where and how the service was created. But let’s take a look at the spec block, and go over each component:
spec:
  clusterIP: 10.101.74.82
Earlier it was mentioned that all services get assigned an IP. That’s the IP!
ports:
- name: mysql
  port: 3306
  protocol: TCP
  targetPort: mysql
Services proxy traffic based on the configuration defined in ports. They can proxy one or many ports, and can map one port to another; we will take advantage of this behaviour later.
In this case, the configuration says that a TCP proxy listening on port 3306 should forward to the container port named mysql.
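As a purely illustrative sketch (these port names and numbers are hypothetical, not part of the MySQL chart), a single service could expose several ports and remap each one to a named port on the target pods:
# Hypothetical example: one service proxying two ports,
# each remapped to a named container port on the pods it selects.
ports:
- name: http
  port: 80           # the port the service listens on
  targetPort: http   # the named port on the pod (for example, 8080)
  protocol: TCP
- name: metrics
  port: 9102
  targetPort: metrics
  protocol: TCP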
selector:
  app: kolide-fleet-mysql
Earlier it was mentioned that Kubernetes selects resources by label. This applies here — the service will proxy traffic on the defined port to any pod that matches the above selector. We can check which pods will be chosen:
$ kubectl get pods --selector app=kolide-fleet-mysql
NAME                                  READY     STATUS    RESTARTS   AGE
kolide-fleet-mysql-58d8f6c496-75v5w   1/1       Running   0          2h
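Under the hood, Kubernetes records the matching pod IPs in an Endpoints object with the same name as the service, and we can inspect that too (the output below is illustrative; the pod IP will differ per cluster):
$ kubectl get endpoints kolide-fleet-mysql
NAME                 ENDPOINTS         AGE
kolide-fleet-mysql   172.17.0.5:3306   2h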
On to:
sessionAffinity: None
Whether to enable sticky sessions.
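If sticky sessions were desired, the spec would instead look roughly like the following (an illustrative sketch, not part of the MySQL chart):
sessionAffinity: ClientIP
sessionAffinityConfig:
  clientIP:
    timeoutSeconds: 10800  # how long a given client sticks to the same pod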
type: ClusterIP
There are several types of services:
ClusterIP
NodePort
LoadBalancer
ExternalName
A full description is available in the Kubernetes documentation. We’ll be using the ClusterIP (internal) and LoadBalancer (ehrm, load balancer) types.
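As a sketch of what switching types looks like (illustrative only; the selector label here is an assumption based on the fleet service we create later), an externally reachable variant of the same idea might be:
# Illustrative only: the same service shape, exposed via a cloud load balancer.
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 443
    targetPort: http
  selector:
    app: kolide-fleet-fleet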
The service implementation
As mentioned earlier, each service gets assigned an IP. However, it’s not yet clear how an application should find this IP. There are a few different ways:
Environment variables are injected into the container environment
Querying the API directly
Querying a DNS server running on Kubernetes
We’ll only be focusing on DNS here, but check the docs for more detail if this does not suit you.
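For completeness, the environment variable approach derives variable names from the service name (dashes become underscores, upper-cased). It looks something like the following, with the output abridged and illustrative:
$ kubectl exec kolide-fleet-mysql-58d8f6c496-75v5w env | grep KOLIDE_FLEET_MYSQL_SERVICE
KOLIDE_FLEET_MYSQL_SERVICE_HOST=10.101.74.82
KOLIDE_FLEET_MYSQL_SERVICE_PORT=3306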
Kubernetes runs a DNS server and makes it the default resolver for every pod launched into the cluster. We can see it by running the following command:
$ kubectl exec kolide-fleet-mysql-58d8f6c496-75v5w cat /etc/resolv.conf
nameserver 10.96.0.10 # <-- The important bit
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
The resolv.conf file is used by the system resolver to determine which upstream DNS server to query for DNS enquiries.
Kubernetes makes services available at the “FQDN” (fully qualified domain name) as follows:
${SERVICE_NAME}.${NAMESPACE}.svc.${CLUSTER_DOMAIN}
For example, given the service name kolide-fleet-mysql and the namespace default, the DNS record will be available at:
kolide-fleet-mysql.default.svc.cluster.local
The cluster domain is configurable by the Kubernetes administrators, but it’s an extremely common pattern for it to be cluster.local.
However, this doesn’t quite match what we created earlier. We used simply:
# templates/configmap:25-26
redis:
  address: kolide-fleet-redis:6379
This is possible thanks to the other configuration in the resolv.conf file:
# /etc/resolv.conf:2-3
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
In this file, ndots:5 means that any name containing fewer than five dots will be tried against the search domains before being treated as an absolute name. Additionally, the search domains are default.svc.cluster.local, svc.cluster.local and cluster.local.
This will express itself as several DNS queries in the following order:
kolide-fleet-redis.default.svc.cluster.local
kolide-fleet-redis.svc.cluster.local
kolide-fleet-redis.cluster.local
Luckily, our DNS server has a record for the first of those: kolide-fleet-redis.default.svc.cluster.local. So, we can simply use kolide-fleet-redis and it will work! Additionally, because the search path is derived from the namespace, the same short name will keep working no matter which namespace or cluster the chart is deployed into.
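A quick way to see this resolution in action, assuming an image that ships nslookup (busybox does), is to run a throwaway pod; the answer it returns should be the Redis service’s cluster IP:
$ kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup kolide-fleet-redis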
Our own personal service
The work required to implement a service given our starter chart is extremely minimal; indeed, there would likely be none at all, but it’s worth taking the opportunity to bounce the port around.
There are a lot of things that the default starter template takes care of:
Deferring the choice of service type to the user, but defaulting to LoadBalancer (see Values.yml); a sketch of this pattern follows the list.
Surfacing the service to Prometheus for discovery and analysis.
If it’s a load balancer, using the OnlyLocal annotation to direct traffic straight to a node on which a replica runs, rather than to any node only to have it bounced around through NAT.
Having NOTES.txt show the appropriate access information depending on the service type.
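As promised above, a minimal sketch of how such a template typically defers the service type to the user (the exact values key is an assumption; the starter’s may differ):
# Hypothetical excerpt of templates/service.yaml
spec:
  type: {{ .Values.service.type | default "LoadBalancer" }}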
The template explains its purpose fairly well. However, let’s stick to implementing things. First, we move the service from .to-do to templates:
$ mv .to-do/service.yaml templates/
Then, a simple bit of editing. Given the section:
# templates/service.yaml:27-30
ports:
- protocol: "TCP"
  name: "http"
  port: 8080
We swap it for:
# templates/service.yaml:27-31
ports:
- protocol: "TCP"
  name: "http"
  port: 443
  targetPort: "http"
Changing the ingress port to 443 means that, given an appropriate address, browsers will automatically connect via HTTPS. targetPort maps the service’s http port to the port named http in the deployment declaration, which in this case is 8080.
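For reference, the named port on the deployment side looks roughly like this (an illustrative excerpt; the exact layout of templates/deployment.yaml may differ):
# Hypothetical excerpt of the container spec in templates/deployment.yaml
ports:
- name: http
  containerPort: 8080
  protocol: TCP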
That’s it! Upon release, we can see the service:
$ kubectl get svc
NAME                 TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
kolide-fleet-fleet   LoadBalancer   10.107.47.72   <pending>     443:31525/TCP   7m
Unfortunately I am running in Minikube, so a load balancer is not automatically created. However, we should be able to test on the exposed NodePort. First, we need the IP of a node:
$ minikube ip
192.168.99.100
Then we can simply combine that IP with the port shown in the kubectl get svc output above to connect to the service. It becomes:
https://192.168.99.100:31525/
We stick it in the browser and: it works!
TLS validation fails, as the self-signed certificate issued earlier is … well, self-signed, and not valid for the IP we provided. However, resolving these issues should be trivial when deploying this in an actual environment.
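For a command line smoke test that sidesteps the certificate warning (acceptable only for local testing), curl can be told to skip verification:
$ curl --insecure https://192.168.99.100:31525/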
As always, the commit for this work is here:
AD-HOC feat (Service, Deployment, Secret): Add service, fix Redis · andrewhowdencom/charts@01bc87c
This commit adds a service such that the deployment is discoverable. It uses the service defined by the starter…
Astute readers will notice that there are additional changes there that aren’t discussed. Since the application now works, I was testing it and discovered that I had missed the Redis secret in the previous round of secret work. Oops.
In Summary
This brings us to a “deployable” version of the application. That’s good news too, as I wanted to get this up into a work environment where I can demonstrate it to colleagues.
However, there is still work to do to make it production ready. In future work we will add further hardening to ensure the application stays continually up, as well as start trimming back some of the unnecessary parts of the starter template. Lastly, we will use the learnings from this chart to improve the starter template for future charts.
Hooray deployable!
The next version in this series is here:
https://medium.com/@andrewhowdencom/deploying-on-kubernetes-10-health-checking-a4986e807afe