After following an internal CKA training, I needed teaching material and flying hours with Kubernetes to pass the CKA exam (which I did). For that reason, I bought an Intel NUC, installed Ubuntu, and got started with K3s. In this blog, I will share how to set up external DNS at home.
During my journey into the world of Kubernetes, I got a bit lost on the DNS side. For applications running on your K3s cluster to be reachable from the outside, DNS is essential. Whatever happens, my clients (my wife and kids) should always have 100% uptime ;-)
So resolving domain names for clients should keep working, no matter whether my cluster or my Synology NAS is down. A fallback must be in place that points to a DNS provider on the internet.
DNS requests are a bit strange: if one of your DNS servers replies, but with an incorrect answer, your client will never try the second DNS server in your list. A well-working DNS server is key.
One of the most important components within Kubernetes is DNS. While the internal DNS service is mission-critical, the external one isn't. However, running DNS on a local network is not as easy as it seems. I could, for instance, run a DNS server on Kubernetes for my local network, but if the K3s cluster is down, I'll have an angry wife and children complaining about the internet being down (which it isn't, of course). So the solution should be able to handle three scenarios: the cluster being down, the NAS being down, and subdomains that are hosted somewhere on the internet rather than in the cluster.
I wasn't sure whether I should run the DNS server alongside my K3s cluster or on a separate virtual machine (VM). After a while, the penny dropped: I still had a Synology NAS running. Since the Netflix era, I don't use it much anymore, so it could come in handy.
And yes, I was lucky: by default, Synology offers a DNS server within DSM (DiskStation Manager, the Synology OS). I won't bore you with setting it up, as it is a "next, next, finish" installation.
In this example, I'm using example.org. The idea was simple: a master zone for example.org within DSM, with records pointing to the new applications running on my K3s cluster, and a wildcard record pointing to the internet for subdomains that aren't running within the cluster, like my externally hosted website.
Luckily, there are several ad-filtering DNS applications to be found on the internet. I chose AdGuard, since I previously had good experiences with it.
To manage AdGuard's settings (WebUI), I added an A-record on my DSM pointing to my K3s cluster: adguard.example.org. The setup looked like this.
Note: AdGuard forwards requests to the internet resolvers and filters out the ads before returning the responses to the client.
Everything seemed fine, and after going from one kubectl apply -f to another, I quickly had some applications running.
But every time I got a new application running, I had to manually add an A-record to the DSM DNS server to reach that application's WebUI. It is 2022; let's automate it!
After some googling, I found External-DNS. External-DNS is a service that orchestrates DNS records living on another service, inside or outside your network, allowing Kubernetes to sync IP addresses (from Services and Ingresses) with DNS records. It all started with cloud-based DNS provider APIs, but there is also an RFC 2136 provider. The latter is controlled via TSIG secrets.
TSIG (transaction signature) is a computer-networking protocol defined in RFC 2845. Primarily it enables the Domain Name System (DNS) to authenticate updates to a DNS database. It is most commonly used to update Dynamic DNS or a secondary/slave DNS server. TSIG uses shared secret keys and one-way hashing to provide a cryptographically secure means of authenticating each endpoint of a connection as being allowed to make or respond to a DNS update.
Source: Wikipedia
So, the first step is to generate a new TSIG secret within the DSM DNS server. Download this key, as it is needed later in your deployment.
All files needed to add External-DNS to your cluster will be linked below.
The deployment example will look like this.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
  namespace: external-dns
spec:
  selector:
    matchLabels:
      app: external-dns
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
        - name: external-dns
          image: k8s.gcr.io/external-dns/external-dns:v0.7.6
          args:
            - --registry=txt
            - --txt-prefix=external-dns-
            - --txt-owner-id=k8s
            - --provider=rfc2136
            - --rfc2136-host=10.10.10.100 ## your DNS server
            - --rfc2136-port=53
            - --rfc2136-zone=example.org ## your domain name
            - --rfc2136-tsig-secret=2cOPVWlG1eAg3BfVtIfsH4jKH8UOcte ## your TSIG secret
            - --rfc2136-tsig-secret-alg=hmac-sha512
            - --rfc2136-tsig-keyname=externaldns-key ## your TSIG key name
            - --rfc2136-tsig-axfr
            - --source=ingress
            - --source=service
            - --domain-filter=example.org ## your domain name
            - --rfc2136-min-ttl=400s ## optional TTL in seconds
For your own purposes, please change the items marked with ## comments in the deployment above: the DNS server IP, the zone/domain name, the TSIG secret, and the TSIG key name.
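Keeping the TSIG secret in plain text inside the Deployment works, but you can also store it in a Kubernetes Secret and pass it in via an environment variable; Kubernetes expands $(VAR) references in container args. This is a sketch, and the names tsig-secret and TSIG_SECRET below are my own choice, not part of the External-DNS setup:

```yaml
# Sketch: keep the TSIG key out of the Deployment spec.
apiVersion: v1
kind: Secret
metadata:
  name: tsig-secret          # name is my own choice
  namespace: external-dns
type: Opaque
stringData:
  tsig-secret: 2cOPVWlG1eAg3BfVtIfsH4jKH8UOcte
---
# Then, in the external-dns container spec, reference it like this:
# env:
#   - name: TSIG_SECRET
#     valueFrom:
#       secretKeyRef:
#         name: tsig-secret
#         key: tsig-secret
# args:
#   - --rfc2136-tsig-secret=$(TSIG_SECRET)
```

This way the secret value never ends up in your version-controlled deployment file.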
Applying the above deployment file, together with the ServiceAccount, ClusterRole, and ClusterRoleBinding, adds External-DNS to your Kubernetes cluster. You can find these manifests on the External-DNS GitHub; links are provided below.
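For reference, the RBAC objects look roughly like this. This is a sketch based on the External-DNS documentation; take the versions from the GitHub repo, as the required permissions can change between releases:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
  namespace: external-dns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-dns
rules:
  # External-DNS watches Services and Ingresses to discover hostnames.
  - apiGroups: [""]
    resources: ["services", "endpoints", "pods"]
    verbs: ["get", "watch", "list"]
  - apiGroups: ["extensions", "networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["get", "watch", "list"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
  - kind: ServiceAccount
    name: external-dns
    namespace: external-dns
```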
Make sure that you add the generated key to the "Create zone update rule" within the DSM DNS Server.
Whenever you add a Service or Ingress with the right annotation to your cluster, an A-record is added, updated, or deleted on your DNS server.
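For a Service, that annotation looks like this; my-service and app: my-app are placeholder names for illustration:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service               # placeholder name
  annotations:
    # External-DNS syncs an A-record for this hostname,
    # pointing at the Service's external IP.
    external-dns.alpha.kubernetes.io/hostname: dnstest.example.org
spec:
  type: LoadBalancer             # needs an external IP (e.g. from the K3s ServiceLB)
  selector:
    app: my-app                  # placeholder selector
  ports:
    - port: 8000
      targetPort: 8000
```

For an Ingress, no annotation is needed: External-DNS picks up the hostname from the Ingress rules themselves, as in the Ingress example at the end of this post.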
I learned a lot about DNS and Kubernetes through this journey, I now have an ad-free network, and most importantly, my internet wasn't down at any moment, so I still have a WAF (wife acceptance factor) of 100%, wink wink ;-)
In my next blog post, I'll explain how to use Let's Encrypt certificates on a private network with my own personal subdomain. Yes, you read that right: a private network and Let's Encrypt certificates. I'll also tell you about the downside of this construction, and why you may want to use your own privately signed certificates instead.
That's it, see you next time. I hope this helped you as much as it helped me.
Would love to hear from you in the comments!
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
    - host: dnstest.example.org
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 8000