IPv6 and DNS

IPv6 DNS – It works for me….. but it shouldn’t.

When in my IPv6 environment I perform a test ping to, say, Google, it seems to work great:

ping6 ipv6.google.com
PING ipv6.google.com(2a00:1450:8006::6a) 56 data bytes
64 bytes from 2a00:1450:8006::6a: icmp_seq=1 ttl=55 time=49.3 ms
64 bytes from 2a00:1450:8006::6a: icmp_seq=2 ttl=55 time=44.6 ms
.
.
.

Which is lovely. But then I ask myself how the ping6 command actually learns that the name ipv6.google.com lives at the IPv6 global address 2a00:1450:8006::6a. How is the domain name being resolved? I find that I don’t actually know. I’m perfectly familiar with IPv4 DNS, so what’s going on here?

I’m cheating

I discover, upon investigation, that in fact I’m “cheating”. By that I mean that my attempt to set up a “pure” IPv6 environment (albeit in parallel with IPv4) that does not rely upon or touch IPv4 in any way has not succeeded – it turns out that my DNS is currently entirely dependent upon the existing IPv4 infrastructure! Before going ahead and trying to rectify that, it’s rather educational to understand how it is working at all.

So I run tcpdump on the IPv6 interface and take a look at what’s going on when I kick off the ping6:

13:28:36.671682 IP (tos 0x0, ttl 64, id 45341, offset 0, flags [DF], proto UDP (17), length 61)
11.11.11.11.48231 > 212.XXX.XXX.XXX: [udp sum ok] 8831+ AAAA? ipv6.google.com. (33)
13:28:36.765503 IP (tos 0x0, ttl 60, id 0, offset 0, flags [DF], proto UDP (17), length 250)
212.XXX.XXX.XXX > 11.11.11.11.48231: 8831 q: AAAA? ipv6.google.com. 7/0/0 ipv6.google.com. [1h49m54s] CNAME[|domain]
13:28:36.767123 IP (tos 0x0, ttl 64, id 45365, offset 0, flags [DF], proto UDP (17), length 118)
11.11.11.11.56346 > 212.XXX.XXX.XXX: 37833+[|domain]
13:28:37.042646 IP (tos 0x0, ttl 60, id 0, offset 0, flags [DF], proto UDP (17), length 178)
212.XXX.XXX.XXX > 11.11.11.11.56346: 37833 NXDomain q:[|domain]
.
.
.
So what’s all that about then?
We appear to have a perfectly standard IPv4 exchange taking place, but with a few odd bits mixed in! Taking it a step at a time…..
  • A DNS query (UDP, port 53, IPv4 – standard stuff) goes out for ipv6.google.com. The odd-looking bit is the DNS query type: “AAAA”. That is the record type for an IPv6 address – informally a “quad-A” record, so named because a 128-bit IPv6 address is four times the size of the 32-bit IPv4 address held in an “A” record.
  • And sure enough we get a response from the IPv4 DNS server. tcpdump truncates the decode here (that’s what [|domain] means), but the response data contains a CNAME plus the full IPv6 address we need.
  • In fact it returns, as DNS responses often do, more than one address – six of them in this case – of which one gets selected for use.
  • And we then get a rather odd repeating pattern of subsequent PTR (reverse-lookup) resolution attempts – that’s ping6 trying to turn the responding address back into a name. We’ll ignore that bit for now.
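You can reproduce that first exchange by hand. A quick sketch – both dig and nslookup let you ask for the AAAA record explicitly (neither command nor flags are anything exotic, but output will of course depend on your resolver):

```shell
# Ask explicitly for the AAAA (IPv6 address) record:
dig AAAA ipv6.google.com +short

# The nslookup equivalent:
nslookup -query=AAAA ipv6.google.com
```

Note that both queries will still travel over whatever transport your configured resolver uses – which, as we’ve just seen, may well be IPv4.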

So: our IPv6 DNS is in fact running (with great success!) but… over an entirely IPv4 infrastructure.

It’s fabulous that it all works so easily and seamlessly. 🙂 However, for the purposes of my voyage into IPv6, I’d rather not use the IPv4 side of things at all. What if IPv4 weren’t available to me? So what to do?

Pure IPv6 Name Resolution – IPv6 DNS

So we want to shift the DNS function off IPv4 and on to the IPv6 infrastructure. Where to begin? Well, since setting up all the IPv6 I had noticed some new bits ’n’ bobs appearing in my logs, as you do with new things. And I’d mostly ignored them for now. Again, as you do. But this one was appearing rather regularly, and now seemed rather interesting…

Mar 30 09:45:30 xxxxx radvd[2351]: RDNSS address 2a01:e00::1 received on eth0 from fe80::207:cbff:fea5:XXX is not advertised by us

What’s that all about? Looking at the elements:

  • eth0 is my external (Internet-facing) interface
  • the fe80: address is the IPv6 link address of my adjacent router (i.e. the ISP’s IPv6 router)
  • 2a01:e00::1 is a normalish looking IPv6 global address. And sometimes I see the same log with 2a01:e00::2 in it instead.
  • The RDNSS rather gives it away: “DNS”. This must be something to do with DNS, surely…?
  • “…is not advertised by us” – What’s that all about?


Grabbing the incoming packet that seems to generate these logs I see more when fully decoded. The packet concerned is an expected Router Advertisement (RA) but it has some options on it:

  • Prefix Information: this we expect. It tells me the 64-bit prefix that is “mine” to use for global IPv6 addresses.
  • Recursive DNS Server: woah! That acronymises as RDNSS. And it hands me two “Recursive DNS Servers”: 2a01:e00::1 and ::2. So that’s where they come from.
  • MTU: an MTU of 1480 is specified in there too. My interface MTU is in fact 1500. Should I worry? Perhaps. But I’ll leave that for now and come back to it.
  • Source link-layer address: we can ignore that. 🙂
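That MTU question is at least easy to inspect from the shell. A small sketch, using the interface name from this article:

```shell
# Compare the interface MTU with the 1480 advertised in the RA:
ip link show dev eth0 | grep -o 'mtu [0-9]*'

# Per-destination IPv6 path-MTU values the kernel has learned:
ip -6 route show cache
```

If the interface stays at 1500, path MTU discovery should cope, but it is worth knowing the RA is asking for 1480.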


Manual test of IPv6 DNS

Before jumping into new software subsystems, let’s try a manual test and see what happens. Just as one can use nslookup on the command line to check name resolution for IPv4, so one can use it for IPv6 too. Specify the name to be resolved and the server to use (otherwise it defaults to whatever is in /etc/resolv.conf) and see what happens, checking with tcpdump:

 nslookup ipv6.google.com 2a01:e00::1

And I see the following packet sequence result:

10:23:43.522926 IP6 (hlim 64, next-header UDP (17) payload length: 41) 2a01:XXX:8b25:7ea0:XXX:63ff:fef5:f93c.52838 > dns2.proxad.net.domain: [udp sum ok] 9362+ A? ipv6.google.com. (33)

10:23:43.577684 IP6 (hlim 255, next-header ICMPv6 (58) payload length: 32) fe80::207:cbff:fea5:XXX > ff02::1:fff5:XXX: [icmp6 sum ok] ICMP6, neighbor solicitation, length 32, who has 2a01:XXX:8b25:7ea0:XXX:63ff:fef5:f93c
source link-address option (1), length 8 (1): 00:07:cb:a5:1a:68
0x0000:  0007 cba5 1a68

10:23:43.577937 IP6 (hlim 255, next-header ICMPv6 (58) payload length: 32) 2a01:XXX:8b25:7ea0:XXX:63ff:fef5:f93c > fe80::207:cbff:fea5:XXX: [icmp6 sum ok] ICMP6, neighbor advertisement, length 32, tgt is 2a01:XXX:8b25:7ea0:240:63ff:fef5:f93c, Flags [router, solicited, override]
destination link-address option (2), length 8 (1): 00:40:63:f5:f9:3c
0x0000:  0040 63f5 f93c

10:23:43.578294 IP6 (hlim 60, next-header UDP (17) payload length: 112) dns2.proxad.net.domain > 2a01:XXX:8b25:7ea0:XXX:63ff:fef5:f93c.52838: [udp sum ok] 9362 q: A? ipv6.google.com. 1/1/0 ipv6.google.com. [2h44m2s] CNAME ipv6.l.google.com. ns: l.google.com. [10m] SOA ns4.google.com. dns-admin.google.com. 1411041 900 900 1800 60 (104)

Key here are the first and fourth packets: DNS request out, and DNS response back. All in IPv6; no IPv4 there at all. (The second and third packets are just Neighbor Discovery – the IPv6 counterpart of ARP – resolving our link-layer address.) That’s good. We can reach our IPv6 DNS servers and, in principle, they work.

A secret – ndisc6

Time to let you in on a little secret to make life much easier… While tcpdumps and so on are instructive up to a point, and force one to think a little about what is being seen, they are also pretty tedious. A lot of what we need to achieve here can be done using much more accessible tools! Do yourself a big favour and install the ndisc6 package on your Linux system. The creator’s web page gives you a little more information, but just as an example, look at this command and its output:

rdisc6  eth0
Soliciting ff02::2 (ff02::2) on eth0…
Hop limit                 :           64 (      0x40)
Stateful address conf.    :           No
Stateful other conf.      :           No
Router preference         :       medium
Router lifetime           :         1800 (0x00000708) seconds
Reachable time            :  unspecified (0x00000000)
Retransmit time           :  unspecified (0x00000000)
Source link-layer address: 00:40:63:F5:F9:3C
from fe80::XXX:63ff:fef5:f93c
Hop limit                 :           64 (      0x40)
Stateful address conf.    :           No
Stateful other conf.      :           No
Router preference         :       medium
Router lifetime           :         1800 (0x00000708) seconds
Reachable time            :  unspecified (0x00000000)
Retransmit time           :  unspecified (0x00000000)
Prefix                   : 2a01:XXX:8b25:7ea0::/64
Valid time              :        86400 (0x00015180) seconds
Pref. time              :        86400 (0x00015180) seconds
Recursive DNS server     : 2a01:e00::2
Recursive DNS server     : 2a01:e00::1
DNS servers lifetime    :          600 (0x00000258) seconds
MTU                      :         1480 bytes (valid)
Source link-layer address: 00:07:CB:A5:1A:68
from fe80::207:cbff:fea5:1a68

Wow. Look at all that! Useful.

Now to move on to integrating this into the system so that all IPv6 names get resolved this way.

rdnssd – Recursive DNS Server daemon

I think I should start out with a warning here: this next step is the entirely logical and sensible thing to do. But read through to the end: it’s not going to work – the Linux IPv6 userspace tools are simply not quite where they should be yet. It’s instructive to look at it all the same, if only for the learning it provides.

Let’s set up the required sub-system to handle IPv6 DNS requests from this system and the users who will later traverse it. The first step is simply to install the required package:

apt-get install rdnssd

This package may have other dependencies (e.g. resolvconf), which should get fulfilled automatically.

Just what is rdnssd? The best summary of it I can see is the first paragraph of the associated man page:
rdnssd is a daemon program providing client-side support for DNS configuration using the Recursive
DNS Server (RDNSS) option, as described in RFC 5006. Its purpose is to supply IPv6 DNS resolvers
through stateless autoconfiguration, carried by Router Advertisements.

That pretty much sums it up. It’s just what we need here!

The second paragraph is also quite illuminating:
rdnssd parses RDNSS options and keeps track of resolvers to write nameservers entries to a
resolv.conf(5) configuration file. By default, it writes its own separate file, and may call an
external hook to merge it with the main /etc/resolv.conf. This is aimed at easing coexistence with
concurrent daemons, especially IPv4 ones, updating /etc/resolv.conf too.

So, we’ve installed it. What’s it doing? Straight after installing the package, ps -ef shows me that the process is running. I rerun my ping6 and nslookup (without specifying the IPv6 DNS server this time) and tcpdump shows me no change: the DNS is still taking place over IPv4.

As mentioned at the start of this sub-section, we have a problem here. A big, fat problem. We want this daemon to pick up our DNS servers from the RA and use them. Which is fine. But check out the last paragraph of the rdnssd man page:

When rdnssd uses a raw socket instead of the netlink kernel interface, it does not validate received
Neighbor Discovery traffic in any way. For example, it will always consider Router Advertisement
packets, whereas it should not if the host is configured as a router. When the netlink interface is
used, such validation is done by the kernel.

What that boils down to is that if we’re running as a router (and we are, since in /etc/sysctl.conf we have net.ipv6.conf.all.forwarding=1) the kernel will simply not pass the RA up to user-space at all. So rdnssd never gets the chance to see it, and thus never acts upon it. Which is a bummer.

Just to experiment, you can dynamically drop to /proc/sys/net/ipv6/conf/all/forwarding and set it to ‘0’. (radvd will bitch and moan, but ignore that.) Force an RA refresh if required, using rdisc6 eth0, and you will see rdnssd do its stuff and change /etc/resolv.conf to point at the IPv6 servers. But we can’t leave it that way, alas. We’ve hit a bit of a blocker here – picking up the IPv6 DNS servers automatically from the ISP does not seem to be achievable at the moment using rdnssd; the kernel’s policies for what is allowed, and when, prevent it.
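For the record, the experiment just described boils down to this sequence of (root) shell commands – interface name as per this article, and don’t forget to put forwarding back afterwards:

```shell
# Temporarily stop being a router, so the kernel passes RAs up to userspace:
echo 0 > /proc/sys/net/ipv6/conf/all/forwarding

# Force a fresh Router Advertisement (rdisc6 is from the ndisc6 package):
rdisc6 eth0

# rdnssd should now have rewritten resolv.conf with the IPv6 servers:
grep nameserver /etc/resolv.conf

# Put things back as they were:
echo 1 > /proc/sys/net/ipv6/conf/all/forwarding
```
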

So what do we do?

We now understand what we’re trying to do. We also understand how it should be doable. But currently there’s a blocker. What to do?

As always in such matters, there’s an easy, pragmatic way forward and a tricky, hacky way forward! The sensible path is very simple indeed. We know the IPv6 addresses of our DNS servers (and if we forget, we can just run rdisc6 eth0 to find them again). The obvious thing to do is to statically configure them into the existing /etc/resolv.conf and then, when we later use IPv6 from a device on the internal network, configure them there too.

That’s what I would recommend. That’s what you should do. You can specify a mixture of IPv6 and IPv4 nameservers in the /etc/resolv.conf file. If you specify the IPv6 servers first, then all DNS on the system will (where available) use the IPv6 nameservers. I suppose this does still slightly violate our desire to keep IPv4 and IPv6 separate (since IPv4 name resolution will now also travel over IPv6), but since it tilts the bias in favour of IPv6, with a seamless fallback to IPv4, I think I can live with that.
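For illustration, here is the sort of ordering I mean, built in a scratch file. The IPv6 addresses are the ones handed out by my ISP above; the IPv4 address is just a placeholder for whatever your existing IPv4 resolver is:

```shell
# Build a resolv.conf with the IPv6 resolvers first and IPv4 as fallback.
# Writing to /tmp for demonstration; the real file is /etc/resolv.conf.
cat > /tmp/resolv.conf.demo <<'EOF'
nameserver 2a01:e00::1
nameserver 2a01:e00::2
nameserver 192.0.2.53
EOF

# The resolver library tries nameservers in order, so IPv6 wins when it works:
grep '^nameserver' /tmp/resolv.conf.demo
```
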

OK – but what about more wacky solutions?

[Remember, stop here unless you’re wanting to have some fun and you are comfortable with building your own software.]

One approach that would give us a partial solution would be to admit we were wrong originally, and instead of using radvd to propagate simple prefix information into our internal networks, we should use a fully-fledged DHCPv6 server that can propagate addressing and DNS information. This would solve the problem of devices inside the network needing to have the IPv6 DNS servers statically configured on them. However, even this would not solve the root issue here: our inability to automatically pick up the IPv6 DNS information from received RAs when we’re configured to operate as a router.
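There is also a middle way worth noting: radvd itself can advertise RDNSS into the internal network – the very option its log message at the top of this article was grumbling about – assuming a radvd version new enough to support it. A sketch of the relevant radvd.conf fragment; the internal interface name and prefix here are placeholders, and the DNS addresses are the ISP’s from above:

```
interface eth1
{
    AdvSendAdvert on;

    prefix 2001:db8:1234:5678::/64
    {
    };

    # Hand the ISP's IPv6 resolvers to internal hosts via the RDNSS option:
    RDNSS 2a01:e00::1 2a01:e00::2
    {
        AdvRDNSSLifetime 600;
    };
};
```

Internal hosts still need an RDNSS-aware client (rdnssd or similar) to act on it, and it does nothing for the router’s own inability to consume RAs, so it only gets us part of the way.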

The problem there is not, directly, rdnssd itself. The problem is the kernel. An architectural decision has been taken to stop the kernel sending RA DNS data up to userspace if the system is functioning as a router. Good or bad decision? Bad in my view. I can understand why it is sensible default behaviour, yes. But I do not understand why it’s been made unchangeable. But that’s how it is, for now anyway.

The solution I’m going to go for is to bypass the policing mechanism itself. The kernel only manages to stop the DNS data reaching userspace when userspace uses the Netlink mechanism to talk to it. So why not just bypass that and read the raw data we want? This should be easy enough to do, as rdnssd itself used to work this way, before the kernel started using Netlink to talk to userspace. So we should be able to build rdnssd to behave as it used to and get the data, right?

Hack and build rdnssd

The existing rdnssd code makes provision both for kernels with the Netlink capability that causes us problems and for older kernels without it. So all the code we need is already in place. All we really need to do is change the default behaviour of rdnssd to the old way and we’ll be all set. One could, of course, do this properly and cleanly using command-line options. Here’s what you might do in terms of changes to the rdnssd code-base:

Edit rdnssd.c

In usage(), add a line such as:

"  -n, --no-netlink  use old method to pick up kernel notifications\n"

In main() drop in appropriate parameter parsing:

static const struct option opts[] =
{
	{ "foreground",  no_argument,  NULL, 'f' },
	{ "no-netlink",  no_argument,  NULL, 'n' },
.
.
.

and an appropriate global:

bool nonetlink = false;

and then in the main parsing switch add:

case 'n':
	nonetlink = true;
	break;

Then finally up in worker() we act upon it.

Before we had:

static int worker (int pipe, const char *resolvpath, const char *username)
{
	sigset_t emptyset;
	int rval = 0, sock = -1;
	const rdnss_src_t *src;

#ifdef __linux__
	src = &rdnss_netlink;
	sock = src->setup ();
#endif
	if (sock == -1)
	{
		src = &rdnss_icmp;
		sock = src->setup ();
	}
.
.
.

Change it to something like:

static int worker (int pipe, const char *resolvpath, const char *username)
{
	sigset_t emptyset;
	int rval = 0, sock = -1;
	const rdnss_src_t *src;

#ifdef __linux__
	if (!nonetlink)
	{
		src = &rdnss_netlink;
		sock = src->setup ();
	}
#endif
	if (sock == -1)
	{
		src = &rdnss_icmp;
		sock = src->setup ();
	}
.
.
.

and we’re good. Build, install and use as before, passing the new command-line parameter -n when you need to force the old behaviour.
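For completeness, the build itself is the standard autotools dance, assuming you are working from an unpacked ndisc6/rdnssd source tree:

```shell
# Hypothetical build-and-install sequence for the patched source tree:
./configure
make
sudo make install

# Run the daemon with the new flag to force the old raw-ICMPv6 socket path:
sudo rdnssd -n
```
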

Attached is a version of rdnssd.c based off version 0.9.9 for reference.

For finishing touches, set up an rdnssd hook-file which puts the IPv6 nameservers first in /etc/resolv.conf, and then have the original IPv4 servers appended after them.
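A minimal sketch of such a hook, using scratch paths for demonstration – rdnssd’s own file normally lives under /var/run/rdnssd/, the hook location depends on your distribution, and the IPv4 server here is a placeholder:

```shell
#!/bin/sh
# Merge rdnssd's IPv6 nameservers with the pre-existing IPv4 ones,
# IPv6 first. All paths and the IPv4 address are illustrative.
RDNSSD_FILE=/tmp/rdnssd-resolv.conf   # the file rdnssd writes
V4_FILE=/tmp/resolv.conf.ipv4         # saved copy of the IPv4-era file
OUT=/tmp/resolv.conf.merged

# Demo inputs (on a real system these files already exist):
printf 'nameserver 2a01:e00::1\nnameserver 2a01:e00::2\n' > "$RDNSSD_FILE"
printf 'nameserver 192.0.2.53\n' > "$V4_FILE"

# IPv6 servers first, the IPv4 ones appended after:
cat "$RDNSSD_FILE" "$V4_FILE" > "$OUT"
cat "$OUT"
```
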

3 comments to IPv6 and DNS

  • business daily

    .Routers and hosts are strictly separated meaning that router cannot be host and host cannot be router at the same time… .Neighbor Discovery ND is a set of messages and processes that determine relationships between neighboring nodes. The default value that should be placed in the Hop Count field of the IP header for outgoing unicast IP packets…..

    • I *think* this is spam, so I’ve zapped the URL. It’s good to see that it makes an effort though!

      Just in case:
      – Router and hosts: functionally distinct, but absolutely often one and the same thing in practice.
      – ND determines reachability between nodes.
      – The hop count field, outgoing IP unicast,… mmmkay. Where are we going with this?! IPv6 cares about *max* hop count, yes, and so… and…?

      Nah. Must be spam. But a good effort!!

  • peter

    Hi there,

    Thanks for this article – can you possibly update it, as we’re coming up to a year now, and there are new releases of tools etc.?

    Also would recompiling the kernel, while not for everyone, solve the problem with the rdnssd sub system as is without hacking it? If so what options would I compile in?