Run Your Own Kubernetes Instance with Microk8s
This post covers how to get a personal Kubernetes (K8s) cluster running on an instance with a public IP address, with HTTPS support, on the cheap. I’m paying about €9/month to run mine; without VAT, in the US you will pay less. This post is not about how to build or run a large production system, but some of the things you learn here could apply.
We’ll get a cloud instance, install Kubernetes, set up ingress, get some HTTPS certs, and set the site up to serve traffic from a basic nginx installation.
You need to know some basic stuff to make this work, things like the Linux CLI, getting a domain name, and setting up a DNS record. Those are not covered.
Choices
Whenever you dive into something technical of any complexity, you need to make a series of choices. There are an almost infinite set of combinations you could choose for this setup. If you have strong feelings about using something else, you should do that! But in this post, I have made the following choices:
- Ubuntu Linux as the OS distro
- microk8s as the Kubernetes distro
- Contour as the ingress controller
- Let's Encrypt certificates
These are opinionated choices that will impact the rest of this narrative. However, there are two other choices you need to make that are up to you.
Selecting Hosting
To run Kubernetes where your services are available on the public Internet, you need to host it somewhere. If you want to run that at home on your own hardware, you could do that. Having had a long career maintaining things, I try to keep that to a minimum in my spare time. So I have chosen a cloud provider to host mine.
You will need to make sure you get a fixed public IP address if you want to serve traffic from your instance. Most clouds provide this, but some require add-ons (and more money).
From my own experience, I can strongly recommend Hetzner for this. They have a really good price plan for instances with a good bit of CPU, memory, and disk. I’ve chosen the CPX21 offering at the time of this writing.
This offering has the following characteristics:
- 3 vCPUs
- 4GB RAM
- 80GB disk
- 20TB traffic
- IPv4 support (IPv6-only is cheaper)
- Available in US, Germany, and Finland
- €8.98/month with VAT included, less if you are in the US
In any case, you should select a hosting provider and make sure your instance is running the latest Long Term Support (LTS) Ubuntu Linux.
Selecting A Domain And DNS
Your Kubernetes host will need to be addressed by name. You should either use a domain you already own, or register one for this purpose. A sub-domain from an existing domain you own is also no problem. Once you have an instance up, you will want to take the IP address provided and set it up with your registrar. You could also consider fronting your instance with Cloudflare, in which case they will host your DNS. I will leave this as an exercise for the reader; there are lots of tutorials about how to do it.
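For illustration only, with a made-up hostname and an address from the documentation IP range, the A record you create at your registrar looks roughly like this, and you can confirm it has propagated with dig:
; hypothetical A record pointing your chosen hostname at the instance's public IP
k8s.example.com.    300    IN    A    203.0.113.10
$ dig +short k8s.example.com
203.0.113.10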
Installing and Configuring Kubernetes
This is actually quite easy to do with microk8s. It is a very self-contained distro of Kubernetes that is managed by Canonical, the people who make Ubuntu Linux, so it’s pretty native on the platform. It supports pretty much anything you are going to want to do on Kubernetes right now.
On Ubuntu you can install microk8s with the snap package provider. This is normally available on the latest Ubuntu installs, but on Hetzner the distro image is quite minimal, so you will first need to run sudo apt-get install snapd. With that complete, installing microk8s is:
$ sudo snap install microk8s --classic
That will pick the latest version, which is what I recommend doing. You can test that this has worked by running:
$ microk8s kubectl get pods
That should complete successfully and return an empty pod list. Kubernetes is installed!
That being said, we need to add some more stuff to it. microk8s makes some of that quite easy. You should take the following and write it to a file named “microk8s-plugins” on your hosted instance:
#!/bin/sh
# These are the plugins you need to get a basic cluster up and running
DNS_SERVER=1.1.1.1
HOST_PUBLIC_IP="<put your IP here>"
microk8s disable ha-cluster --force
microk8s enable hostpath-storage
microk8s enable dns:"$DNS_SERVER"
microk8s enable cert-manager
microk8s enable rbac
microk8s enable metallb:"${HOST_PUBLIC_IP}-${HOST_PUBLIC_IP}"
You should put the public IP address of your host in the placeholder at the top of the script.
I have selected Cloudflare’s DNS here. This is used by services running in the cluster when they need name resolution. You could alternatively pick another DNS server you prefer, such as Google’s 8.8.8.8 or 8.8.4.4.
Note that disabling ha-cluster is not required, and doing so will limit your cluster to a single machine. However, it frees up memory on the instance, and if you only plan to have one instance, it’s what I would do.
Now that you’ve written that to a file, make it executable and then run it.
$ chmod 755 microk8s-plugins && ./microk8s-plugins
It’s a good idea to keep this script so you have it handy if you re-install or upgrade later. You might also look later to see which other add-ons you might want. For now this is good enough.
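Before moving on, you can confirm the add-ons actually came up:
$ microk8s status --wait-ready
This waits until the cluster services are ready and then prints each add-on under enabled or disabled; hostpath-storage, dns, cert-manager, rbac, and metallb should all show as enabled.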
We’re setting up:
- hostpath storage: allows us to provide persistent volumes to pods, written to the actual disks of the system itself.
- cluster-wide DNS: configures the DNS server to use for lookups.
- cert-manager: will support getting and managing certificates with Let's Encrypt.
- rbac: supports access control in the same way as most production systems.
- metallb: a basic on-host load balancer that allows us to expose services from the Kubernetes cluster on the public IP address. This is installed with a single public IP address.
kubectl Options
microk8s has kubectl already available. You used it earlier with microk8s kubectl. This is a totally valid way to use it. You might want to set up an alias like alias kubectl="microk8s kubectl". Alternatively you can install kubectl with:
$ snap install kubectl --classic
Either is fine. If you like typing a lot, you don’t have to do either one!
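If you go the alias route and want it to stick around, or you want a standalone kubectl to talk to the cluster, something like the following works; the file paths are just the usual defaults:
$ echo 'alias kubectl="microk8s kubectl"' >> ~/.bash_aliases
$ mkdir -p ~/.kube && microk8s config > ~/.kube/config
Note that microk8s config needs to be run as root or as a user in the microk8s group.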
Namespaces
From this point on we’ll just work in the default namespace to keep things simple. You should do this however you like. If you want to put things into their own namespace, you can apply a namespace to all of the kubectl commands. cert-manager and contour have their own namespaces already and are set up to work that way.
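For example, if you would rather keep your own workloads separate, something like this works (myapp is just a placeholder name):
$ kubectl create namespace myapp
$ kubectl --namespace myapp get pods
Every kubectl command that touches your resources would then need that --namespace flag (or -n for short).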
Container Registry
You need somewhere to put containers you are going to run on your cluster. If you already have that, just use what you have. If you are only going to run public projects, then you don’t need one. If you want to run anything from a private repository, you will need to sign up for one. Docker Hub’s free plan is limited to a single private repository, and there are lots of other options. I selected JFrog because they allow up to 2GB of private images, regardless of how many you have.
If you are going to use a private registry, you need to give some credentials to the Kubernetes cluster to tell it how to pull from the registry. You will need to get these credentials from your provider.
A running Kubernetes cluster is entirely configured via its API. Using kubectl we can post YAML files to it to change configuration or, for simpler things, just provide the values on the CLI. You need to give it registry creds by setting up a Kubernetes secret like this:
$ kubectl --namespace default create secret docker-registry image-registry-credentials --docker-username="<your username here>" --docker-password="<your password here>" --docker-server=<your server>
For jfrog.io the server is <yourdomain>.jfrog.io. This should not contain a path.
You now have a secret called image-registry-credentials. You can verify this with kubectl get secrets, which should return a list with one secret.
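If you want to double-check that the credentials inside it look right, you can decode the secret; this assumes you kept the name image-registry-credentials from above:
$ kubectl --namespace default get secret image-registry-credentials -o jsonpath='{.data.\.dockerconfigjson}' | base64 --decode
That should print a small JSON document containing your registry server and username.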
Setting Up Ingress
When stuff is running inside a Kubernetes cluster, you can’t just contact it from outside. There are a lot of ways to do this, as with everything else in Kubernetes. For most, this is managed by an ingress controller. There are again many options here. I’ve chosen Contour because it’s widely used, easily supported, and runs on top of Envoy, which I think is a really good piece of software.
So let’s set up Contour. This is super easy! Do the following:
$ kubectl apply -f https://projectcontour.io/quickstart/contour.yaml
That grabs a manifest from the Internet and runs it on your cluster. As with anything like this, caveat emptor. Check it over yourself if you want to be sure what you are running. Now that it is installed, you can run:
$ kubectl get pods -n projectcontour
Contour installs in its own namespace, hence the -n.
That should look something like this:
NAME                       READY   STATUS    RESTARTS        AGE
contour-6d4545ff84-kptk2   1/1     Running   1 (5d19h ago)   6d3h
contour-6d4545ff84-qbggg   1/1     Running   1 (5d19h ago)   6d3h
envoy-l5sqg                2/2     Running   2 (5d19h ago)   6d3h
Obviously yours will have been running for much less time.
That’s all we need to do here.
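One quick sanity check worth doing: the quickstart creates an envoy service of type LoadBalancer, and metallb should have handed it your public IP. Assuming the quickstart's default names:
$ kubectl get service envoy -n projectcontour
The EXTERNAL-IP column should show the address you put in the plugin script; if it shows <pending>, double-check the metallb range you enabled earlier.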
Storage
We installed the hostpath plugin so we can make persistent volumes available to pods. But we don’t really have control over where it puts the directories that contain each volume. We can control that by creating a storageClass that specifies the location, and then referencing that storage class when creating pods. This is how you might do that. If you don’t care, just skip this step and don’t specify the storage class later.
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ssd-hostpath
provisioner: microk8s.io/hostpath
reclaimPolicy: Delete
parameters:
  pvDir: /opt/k8s-storage
volumeBindingMode: WaitForFirstConsumer
This will create a storageClass called ssd-hostpath, and all created volumes will live in /opt/k8s-storage. You can, of course, specify what you like here instead. You might also look into the reclaimPolicy if you want to do something else there as well.
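To use it, write the YAML above to a file (the name here is just an example) and apply it, then confirm the class exists:
$ kubectl apply -f storage-class.yaml
$ kubectl get storageclass
You should see ssd-hostpath listed alongside the default microk8s-hostpath class.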
Setting Up Certs
If you are going to expose stuff to the Internet you will probably want to use HTTPS, which means you will need to have certificates. We installed cert-manager earlier, but we need to set it up. We’ll use Let's Encrypt certs, so that means we’ll need a way to respond to the ACME challenges used to authenticate our ownership of the website in question. There are a few ways to do this, including DNS records, but we’ll use the HTTP method. cert-manager will automatically start up an nginx instance and map the right ingress for this to work! There are just a few things we need to do.
We’ll first use the staging environment for Let's Encrypt so that you don’t run into the rate limit for new certs while messing around.
You’ll want to install both of these, substituting your own email address for the placeholder:
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    # You must replace this email address with your own.
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: '<your email here>'
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource that will be used to store the account's private key.
      name: letsencrypt-account-key
    # Add a single challenge solver: HTTP01 via the contour ingress class
    solvers:
    - http01:
        ingress:
          class: contour
Write this to a file called letsencrypt.yaml and then run kubectl apply -f letsencrypt.yaml.
Do the same for this file, naming it letsencrypt-staging.yaml.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
  namespace: cert-manager
spec:
  acme:
    email: '<your email here>'
    privateKeySecretRef:
      name: letsencrypt-staging
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    solvers:
    - http01:
        ingress:
          class: contour
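Before moving on, it's worth confirming that cert-manager accepted both issuers:
$ kubectl get clusterissuers
Both letsencrypt and letsencrypt-staging should report True in the READY column shortly after being applied.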
At this point, we are finally ready to deploy something!
Deploying Nginx
To demonstrate how to deploy something and get certs, we’ll deploy an nginx service to serve files from a persistent volume mounted from the host. I won’t walk through this whole thing; it’s outside the scope of this post. But you can read through the comments to understand what is happening.
Write the following into a file called nginx.yaml.
---
# We need a volume to be mounted on the ssd-hostpath we created earlier.
# You can add content here on the disk of the actual instance and it
# will be visible inside the pod.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-static
spec:
  storageClassName: ssd-hostpath
  accessModes: [ReadWriteMany]
  # Configure the amount you want here. This is 1 gigabyte.
  resources: { requests: { storage: 1Gi } }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      # This is not needed if you are using a public image like
      # nginx. It's here for reference for your own apps.
      imagePullSecrets:
      - name: image-registry-credentials
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: nginx-static
      volumes:
      - name: nginx-static
        persistentVolumeClaim:
          claimName: nginx-static
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  labels:
    app: nginx
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-staging
    # You will probably want to re-run this manifest with this
    # set to true later. For now it *must* be false, or the
    # ACME challenge will fail: you don't have a cert yet!
    ingress.kubernetes.io/force-ssl-redirect: "false"
    # This is important because it tells Contour to handle this
    # ingress.
    kubernetes.io/ingress.class: contour
    kubernetes.io/tls-acme: "true"
spec:
  tls:
  - secretName: '<a meaningful name here>'
    hosts:
    - '<your hostname>'
  rules:
  - host: '<your hostname>'
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: nginx
            port:
              number: 80
Read through this before you run it; there are a few things you will need to change in the bottom section (the TLS secret name and your hostname). There are a ton of other ways to deploy this, but this is enough to get it running and show that your setup is working.
Now just run kubectl apply -f nginx.yaml. Following a successful application, you should be able to run kubectl get deployments,services,ingress and get something back like:
NAME                    READY   UP-TO-DATE
deployment.apps/nginx   1/1     1

NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/nginx   ClusterIP   10.152.183.94   <none>        80/TCP    5d1h

NAME          CLASS    HOSTS       ADDRESS   PORTS   AGE
ingress ...   <none>   some-host   1.2.3.4   80      5d1h
This has also created a certificate request and begun the ACME challenge. You can see that a certificate has been requested and check its status by running kubectl get certificates.
$ kubectl get certificates
NAME      READY   SECRET    AGE
my-cert   True    my-cert   4d6h
Once the ACME challenge has succeeded you will see READY: True. If something goes wrong here, you can follow cert-manager's very good troubleshooting guide.
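A few commands are useful while digging in; the certificate name here matches the secretName you chose in the Ingress:
$ kubectl describe certificate my-cert
$ kubectl get certificaterequests,orders,challenges
The describe output shows the certificate's events, and the second command lists the intermediate ACME resources cert-manager creates while solving the challenge.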
You can test against your site with curl --insecure, since the cert from the staging environment is not legit.
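For example, substituting your own hostname:
$ curl --insecure https://<your hostname>/
If the persistent volume is still empty you will likely get a 403 from nginx rather than a page; drop an index.html into the volume's directory under /opt/k8s-storage to see real content.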
Final Steps
Once that all works, I recommend re-running against the production Let's Encrypt issuer by changing nginx.yaml to reference it and re-applying. You can watch the ACME challenge by inspecting the pod that gets created to answer it.
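The change itself is just two annotations in the Ingress section of nginx.yaml, applied again:
    cert-manager.io/cluster-issuer: letsencrypt
    ingress.kubernetes.io/force-ssl-redirect: "true"
$ kubectl apply -f nginx.yaml
If the certificate doesn't get re-issued on its own, deleting the staging-issued secret (the one named in secretName) should prompt cert-manager to request a fresh one.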
That’s pretty much it. Kubernetes is huge and infinitely configurable, so there are a million other things you could do. But it’s up and running now.