Knot DNS: One Tame and Sane Authoritative DNS Server
How to install and minimally configure Knot to act as your home lab's local domain master and slave servers.
If you were a regular viewer of the original Saturday Night Live era, you will remember the Festrunks, two lewd but naïve Czech brothers who were self-described "wild and crazy guys!" For me, Georg and Yortuk (plus having my binomial handed to me by tests designed by a brilliant Czech professor at the local university's high-school mathematics contests) were the extent of my knowledge of the Czech Republic.
I recently discovered something else Czech, and it's not wild and crazy at all, but quite tame and sane, open-source and easy to configure. Knot DNS is an authoritative DNS server written in 2011 by the Czech CZ.NIC organization. They wrote and continue to maintain it to serve their national top-level domain (TLD) as well as to prevent further extension of a worldwide BIND9 software monoculture across all TLDs. Knot provides a separate fast caching server and resolver library alongside its authoritative server.
Authoritative nameserver and caching/recursive nameserver functions are separated for good reason. A nameserver's query result cache can be "poisoned" by queries that forward to malicious external servers, so if you don't allow the authoritative nameserver to answer queries for other domains, it cannot be poisoned and its answers for its own domain can be trusted.
A software monoculture means running identical software like BIND9 everywhere rather than different software providing identical functionality and interoperability. This is bad for the same reasons we eventually will lose our current popular species of banana—being genetically identical, all bananas everywhere can be wiped out by a single infectious agent. As with fruit, a bit of genetic diversity in critical infrastructure is a good thing.
In this article, I describe how to install and minimally configure Knot to act as your home lab's local domain master and slave servers. I will secure zone transfer using Transaction Signatures (TSIG). Although Knot supports DNSSEC, I don't discuss it here, because I like you and want you to finish reading before we both die of old age. I assume you already know what a DNS zone file is and what it looks like.
You may download the latest version (2.8.x as I write) source tarball via
https://www.knot-dns.cz/download and build it yourself using the ubiquitous
make install incantation. I recommend running the latest
Knot version if you intend to have your resolver face the public internet. I
found that, to build Knot on Ubuntu 18.04, I needed to install the package
libedit-dev in addition to the
standard essential build toolkit. Knot's website has further
requirements, which CentOS 7 and Ubuntu 18.04 appear to meet. The tar extract
and build process also oddly demanded to make some hard file links. Building
as an unprivileged user was successful despite the hard link failure errors.
For my home lab, the 2.6.x package in Ubuntu "universe" and EPEL repos is adequate. To install it on Ubuntu 18.04, I enabled the "bionic universe" repo in my /etc/apt/sources.list and did this:
$ sudo apt update
$ sudo apt install knot
On CentOS 7, I would have run the commands:
$ sudo yum install epel-release
$ sudo yum install knot
I can now proceed to configure the Knot master instance. I hate reading BIND9 config files, and although I know the major distros are trying to be helpful, I particularly hate the sliced and diced versions they provide. knot.conf is a breeze by comparison. Knot's mercifully terse single-file configuration uses YAML format. The Knot package deploys a sample configuration in /etc/knot/knot.conf. I moved that file aside and created a fresh one using my preferred editor, vi:
$ sudo mv /etc/knot/knot.conf /etc/knot/knot.conf-save
$ sudo vi /etc/knot/knot.conf
My home network is cpu-chow.local; name yours as you see fit. Since nobody but you will recognize this server as authoritative, you can choose your TLD as well. You feed this domain to caching nameservers in your home lab by configuring them to forward queries explicitly for your domain to your authoritative resolver.
The first section defines the server's presence on the host and network. Section names start at column 1; entries are indented four spaces:
server:
    identity: dns1.cpu-chow.local
    listen: [ 127.0.0.9@53, 172.28.1.22@53 ]
    user: knot:knot
    rundir: /run/knot
In the server section, I'm telling Knot its hostname and to run on port 53 of the local interface but at a different address, 127.0.0.9. Since Knot can resolve queries only for cpu-chow.local, it has little use serving from the default 127.0.0.1; besides, my host's caching nameserver is there already. I'm also directing it to listen on the host's lab network address, 172.28.1.22. My lab network is 172.28.1.0/24, and I want to offer authoritative nameservice for cpu-chow.local to the other hosts in my lab. I could instead offer the authoritative server to my host's caching server only and have that caching server listen on the lab net IP and serve all the other hosts' query needs if I desired.
Installing Knot created a user id knot, and I'm directing Knot to run as
that UID/GID for least-privilege best practice. The
rundir will be the home
of Knot's management socket. If you run multiple Knot instances, use a
unique /run directory for each.
After rundir, I insert a blank line to signal the section's end and start the key section to define my TSIG key. Note that id has a dash in column 3. That is YAML, and it means that what follows is specifically associated with this id:

key:
  - id: tsigkey1.
    algorithm: hmac-sha256
    secret: base64-keyvalue
To define another key, add another - id: entry immediately after the first secret line.
This is the secret that authorized slave servers will provide when they request a full zone transfer (AXFR), a request to sync up with the master. I've given my arbitrarily named key a fully lowercase name followed by a period to ensure interoperability with other DNS server software like PowerDNS, which seemed always to lowercase and append a period to whatever key name I gave it when I ran it as a slave to a Knot master.
The secret is a randomly generated Base64 value. Knot includes the keymgr utility to generate the key section for you, and you can paste the output into your config file:

$ keymgr -t tsigkey1.
key:
  - id: tsigkey1.
    algorithm: hmac-sha256
    secret: oxRKAUfGN3R6fGjWX/V+i4rjCl1zRuYslX0c4se+GWs=
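If you'd rather mint the secret yourself, any 32 bytes of randomness, Base64-encoded, will do for hmac-sha256. Here's a sketch using openssl (the variable name is mine, not Knot's); paste the result into the secret field:

```shell
# Generate 32 random bytes, Base64-encoded, suitable as an hmac-sha256 TSIG secret
secret=$(openssl rand -base64 32)
echo "secret: ${secret}"
```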
Another blank line, and I move on to the template section, which defines common presets that can be applied in other sections. You add multiple templates just like multiple keys. The template section and the template id default must be present in knot.conf, or else Knot will emit a cryptic error message when you start it:
template:
  - id: default
    storage: "/var/lib/knot"
default is a special template that is applied automatically; other templates must be referenced in a section to apply there. I am declaring here that, by default, all files to be read are in /var/lib/knot. I intend to use nsupdate to manage my zones, so keeping them in /etc/knot will not be appropriate. (nsupdate is beyond the scope of this article.)
And now I define my authorized slaves with the remote section. This section associates a host and various attributes with a name you can use later in the config:
remote:
  - id: slave1
    address: 127.0.0.19@53
    key: tsigkey1.
  - id: slave2
    address: 172.28.1.20@53
    key: tsigkey1.
id is an arbitrary name. In addition to a remote slave on lab host 172.28.1.20, I will run a slave on the master host at 127.0.0.19 to illuminate an arcane Linux networking issue.
The acl section defines the server's access control lists (ACLs)—to whom Knot will listen when they send administrative requests:
acl:
  - id: acl_axfr
    address: [ 127.0.0.0/24, 172.28.1.20 ]
    action: transfer
    key: tsigkey1.
  - id: acl_upme
    address: 127.0.0.0/24
    action: update
    key: tsigkey1.
I've configured two ACLs, one to receive
AXFR requests from slaves
and the other to facilitate
nsupdate requests originating on the local host.
Each will use the same key, but they don't have to.
So why did I employ a /24 mask on acl_axfr? Shouldn't it be 127.0.0.19?
Indeed, it should, but Linux networking has a small quirk. Sometimes when my
localhost slave sends a request, the data will appear to be coming from the
main IP associated with the lo interface: 127.0.0.1. Before a web
search revealed this oddity, I stared at my logs in disbelief and pored over
tcpdump output for some time when the master kept denying
requests from 127.0.0.1. Several DNS server software packages have a config
option to specify the source address for slave transmissions. I have yet to
find such an option in Knot. I'm causing the issue here because I'm
being cheeky in my use of 127.0.0.* addresses; this would not happen if I
used a lab or public network address. So I'm comfortable specifying a
range for localhost on my home lab.
How, during the above incident, did I get Knot to tell me why it wasn't working? I ratcheted up the logging level. The log section controls what gets logged and where it goes:
log:
  - target: syslog
    any: info
This config emits all log entries at
info level and worse, and sends them to
syslog. To get debug-level entries as well and dump it out to your screen
(running Knot in the foreground), I used this config instead:
log:
  - target: stdout
    any: debug
target also can be stderr or a filename.
I'm now ready to declare my authoritative zones with the zone section. For this article, I'm just setting up one zone:
zone:
  - domain: cpu-chow.local
    file: "cpu-chow-local.zone"
    notify: slave1
    acl: [ acl_axfr, acl_upme ]
    semantic-checks: on
    disable-any: on
    serial-policy: increment
Note that a period is not required at the end of the domain name. The file attribute does not allow a path; the default template supplies the storage attribute, /var/lib/knot, the location of cpu-chow-local.zone.
notify designates to which remote id my master will send a "notify" message advising them that they should promptly respond with an AXFR request. The first acl designates the IP addresses from which the master will accept AXFR requests and the second, nsupdate requests. I could map a non-default template here by adding a template: template-id line under this zone.
I have included some other attributes in the zone to illustrate them. semantic-checks does extra syntax checking on the zone file. disable-any alters Knot's response to an ANY query to prevent DNS zone-reflection attacks. serial-policy ensures that a dynamic update will trigger a serial-number increment in the zone file.
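Although I assume you already know what a zone file looks like, for reference, a minimal cpu-chow-local.zone in /var/lib/knot might resemble the sketch below. The serial, admin mailbox, and host records are invented for this lab, and dns2's address assumes the lab slave sits at 172.28.1.20:

```
$ORIGIN cpu-chow.local.
$TTL 3600
@     IN SOA dns1.cpu-chow.local. admin.cpu-chow.local. (
              2019010100 ; serial
              3600       ; refresh
              900        ; retry
              604800     ; expire
              86400 )    ; negative-answer TTL
      IN NS  dns1.cpu-chow.local.
      IN NS  dns2.cpu-chow.local.
dns1  IN A   172.28.1.22
dns2  IN A   172.28.1.20
host1 IN A   172.28.1.31
```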
I saved the file, set its owner to
root:knot and permissions to 640. I can
now start Knot, either via systemd:
$ sudo systemctl restart knot
$ sudo systemctl status knot
or by running it in the foreground with great verbosity:
$ sudo /usr/sbin/knotd -vvv -c /etc/knot/knot.conf
Note that systemd may not immediately report that Knot startup has failed, so following up with a status check is a good practice.
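With the daemon up, a direct query is a quick sanity check. dig (from the dnsutils or bind-utils package) should return an answer with the aa (authoritative answer) flag set; the addresses are the ones configured above:

```shell
# Ask the master directly for the zone's SOA record; expect the aa flag in the reply
dig @127.0.0.9 cpu-chow.local SOA +norecurse
```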
On my local caching server at 127.0.0.1, I update the config to tell it to consider my master at 127.0.0.9 as authoritative for my zone. Since I'm running PowerDNS Recursor, I edit the file /etc/powerdns/recursor.conf:
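The change amounts to a single forwarding entry. Assuming PowerDNS Recursor's forward-zones syntax, it would look something like this:

```
forward-zones=cpu-chow.local=127.0.0.9
```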
I'll do the same elsewhere in my lab, using the lab net address 172.28.1.22 instead.
Let's look at a config for a slave server on 127.0.0.19. I'll call it /etc/knot/slave.conf, same owner and permissions as the master config.
Since it is a second Knot instance, I'll need to create a second rundir and storage location. Since it's a slave Knot, let's call it snot:

$ sudo mkdir -p /run/snot /var/lib/snot
$ sudo chown knot:knot /run/snot /var/lib/snot
$ sudo chmod 755 /run/snot /var/lib/snot
And here is /etc/knot/slave.conf:
server:
    identity: dns2.cpu-chow.local
    listen: 127.0.0.19@53
    user: knot:knot
    rundir: /run/snot

key:
  - id: tsigkey1.
    algorithm: hmac-sha256
    secret: oxRKAUfGN3R6fGjWX/V+i4rjCl1zRuYslX0c4se+GWs=

template:
  - id: default
    storage: /var/lib/snot

remote:
  - id: master1
    address: 127.0.0.9@53
    key: tsigkey1.

acl:
  - id: acl_axfrnfy
    address: 127.0.0.0/24
    action: notify
    key: tsigkey1.

log:
  - target: stdout
    any: debug

zone:
  - domain: cpu-chow.local
    master: master1
    acl: acl_axfrnfy
It looks quite similar to the master. Note the acl allowing the slave to accept the notify message from the master. I have logging set to debug and standard output because I intend to run it in the foreground.
You may create a systemd unit file to start the slave if you wish (beyond this article's scope). I'm running it manually here:
$ sudo /usr/sbin/knotd -vvv -c /etc/knot/slave.conf
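Once the slave logs a successful AXFR, master and slave should report the same SOA serial. A quick check with dig (addresses as configured in this article):

```shell
# Compare SOA serials on master and slave; after a successful transfer they match
dig @127.0.0.9  cpu-chow.local SOA +short
dig @127.0.0.19 cpu-chow.local SOA +short
```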
I can control a running instance of knotd by using the control program, knotc:

$ sudo /usr/sbin/knotc -c config-file [ command ]
Specifying the config file tells knotc which rundir contains the control socket to which to connect. Use the zone-notify command on the master to have it notify slaves that they should request an AXFR, or zone-flush to flush the zone journal to the zone file. Many other commands are available; use knotc --help to see them.
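As a sketch of day-to-day use against the master configured above (paths and zone name are this article's; check knotc --help for your version's full command list):

```shell
# Reload the master's configuration without restarting the daemon
sudo /usr/sbin/knotc -c /etc/knot/knot.conf reload

# Push a notify to the zone's slaves so they re-sync promptly
sudo /usr/sbin/knotc -c /etc/knot/knot.conf zone-notify cpu-chow.local

# Flush the zone journal out to the zone file
sudo /usr/sbin/knotc -c /etc/knot/knot.conf zone-flush cpu-chow.local

# Report the zone's current state (serial, next events)
sudo /usr/sbin/knotc -c /etc/knot/knot.conf zone-status cpu-chow.local
```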
I have barely scratched the surface of Knot's capabilities; it really is designed to handle a full TLD with ease, so it can handle your load. Consider using it alongside your existing DNS software to avoid a DNS software monoculture in your organization.
Now that you've been introduced to Knot DNS and walked through an example master and slave configuration, you can install, configure and evaluate it for yourself. And you may like it enough that it will make your chest hairs all crispy, just like the Festrunk brothers.