Configuring DNS for Kubernetes Development

2019-11-28


In the three years between my last blog post and this one, I've been on quite the adventure, partly involving setting up multiple Kubernetes clusters.


While I'm absolutely sold on the ability of Kubernetes to slash server budgets, to drastically improve deployment compared with many of the home-grown CI systems out there (I am also guilty of this sin), and to provide a coherent configuration system free of vendor lock-in poison, in my new occupation[1] I'm focussed on improving the developer onboarding experience. Allow me to explain below. (If you don't care and just want the juicy stuff, feel free to skip down to the section entitled "How to configure DNS for Kubernetes Development".)


1: new occupation


Reducing Costs


At a startup I previously worked at, moving to Kubernetes was an incredibly thrifty experience. Our previous deployment system had been pieced together while iterating very quickly on a startup idea, and after 16 months of intensive development it was a complete mess, so there was plenty of room for improvement.


Moving to Kubernetes allowed us to optimise our resource usage. Before, our infrastructure was very wasteful: EC2 instances were under-utilised, and far too much overhead was reserved to deal with traffic spikes, for example. After, with Kubernetes deciding what size of cluster our product required, our infrastructure shrank drastically, and we needed less overhead to tie everything together. The cumulative impact of all this really adds up in production!


Our previous setup made use of AWS' EC2 autoscaler functionality. This worked _ok_, but it certainly had issues: the metrics used to decide whether to expand or contract the worker pool were woefully basic and often not very useful. Spinning instances up and down itself generates cost, so when the autoscaler ping-pongs between instance counts, things get expensive pretty quickly.


After the first few months of running completely on k8s, it became clear that we had cut our AWS bill by a third. For a startup, obviously, this was a massive deal.


Likewise, at Financial Cloud, we expect to make significant reductions in our server bills as we optimise and roll out our Kubernetes infrastructure.


We need to talk about vendor lock-in


It's so easy to become totally dependent on a cloud's offering. I've worked at companies where there was nothing short of an addiction to cloud products, with direct integrations strewn throughout the codebase. This might be acceptable for a large company that can absorb the cumulative cost of, say, a cloud provider's proprietary databases, but it ignores how painful the developer experience outside of the cloud can be (remember who costs more: developers or infrastructure!). The solution I've seen at most businesses in this situation is to simply rely on the cloud for development as well. This ties developers to their desks and makes it very difficult to run isolated, fully-functional tests.


Developers don't want vendor lock-in! Our codebases should be free to run sans-internet!


Kubernetes allows us to drive a wedge between our codebase and the cloud's greedy paws.


How to configure DNS for Kubernetes Development


Given that one of our major goals with this Kubernetes deployment is to improve the developer experience (see here[1]), it's important to provide a good user journey[2] for bootstrapping a k8s cluster on a developer's computer and for installing all of our applications the same way they'd be installed in production (on a smaller scale, of course). A quick sketch of that bootstrap follows below.


1: see here

2: user journey
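

For concreteness, here's roughly what that bootstrap looks like with KIND. This is a minimal sketch: it assumes `kind` and `kubectl` are installed, and the `kind get kubeconfig-path` command plus the `~/.kube/kind-config-kind` path match the KIND releases current at the time of writing (newer releases merge everything into `~/.kube/config`).


# Boot a single-node cluster inside Docker
$ kind create cluster

# Point kubectl at the new cluster
$ export KUBECONFIG="$(kind get kubeconfig-path)"

# Sanity check: the API server should respond
$ kubectl cluster-info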


Part of this includes being able to easily access services running on this k8s cluster.


Internally, Kubernetes clusters run their own DNS server, typically CoreDNS[1] or Kube-DNS[2]; a quick lookup example follows the links below.


1: CoreDNS

2: Kube-DNS
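

This internal server gives every service a predictable name of the form <service>.<namespace>.svc.cluster.local. As a quick illustration (a sketch, assuming the `tutum/dnsutils` image is pullable), you can resolve one of these names from inside the cluster:


$ kubectl run -it --rm dnsutils --image=tutum/dnsutils --restart=Never -- nslookup kubernetes.default.svc.cluster.local

> Server:    10.96.0.10
> Name:      kubernetes.default.svc.cluster.local
> Address:   10.96.0.1

# (10.96.0.10 is the usual cluster DNS ClusterIP on KIND; yours may differ)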


To be able to access services via Kubernetes' DNS, we need to join Kubernetes' DNS with our own. Luckily, there's a great little app for this called kwt[1]. `kwt` allows us to set up a DNS proxy between the host and the internal DNS server inside of k8s:


1: kwt


sudo -E ${HOME}/.local/bin/kwt net start --dns-mdns=true --kubeconfig ${HOME}/.kube/kind-config-kind

The above command starts the DNS proxy, enables mDNS so that Avahi can talk to it, and explicitly uses KIND's kubeconfig file. On Ubuntu, I had to make one extra change in `/etc/nsswitch.conf`:


hosts: files dns mdns4_minimal [NOTFOUND=return] myhostname

This change allows the host's resolver to try `kwt`'s mDNS responses before marking a request as failed. This means that we can now hit service endpoints using the familiar hostnames that k8s' DNS server provides for us, like so (all from the host machine):


$ kubectl run echo --image=inanimate/echo-server --replicas=3 --port=8080
$ kubectl expose deployment echo --type=LoadBalancer

$ kubectl get deployment echo -w

# ... Wait for the deployment to come up ...

# The service lands in the current namespace ("default" here), so use that in the hostname
$ curl echo.default.svc.cluster.local:8080

> Welcome to echo-server!  Here's what I know.
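

One caveat when verifying this: `dig` and `nslookup` query DNS servers directly and bypass `/etc/nsswitch.conf`, so to exercise the mDNS path specifically it's better to use `getent`, which resolves through NSS exactly like `curl` does. A quick sketch (the address shown is illustrative; cluster IPs will vary):


$ getent hosts echo.default.svc.cluster.local

> 10.96.123.45    echo.default.svc.cluster.local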
