Development of the Domain Name System
This paper discusses the issues and motivations behind the design of DNS. Its two main ideas are hierarchy and caching. The old approach was a centrally managed text file (HOSTS.TXT), which violated one of the Internet's core principles, distributed management; it was also slow to update and did not scale. DNS instead partitions the name-space database, and it made sense to partition it along the same organizational boundaries that make up the Internet.
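That partitioning is what makes resolution hierarchical: a lookup walks from the root down through successively more specific zones. A minimal sketch of that walk, using a toy delegation table (the zone and server names here are my own illustration, not from the paper):

```python
# Toy delegation table: zone -> the server responsible for it.
# These names are hypothetical, purely for illustration.
DELEGATIONS = {
    "": "root-server",        # root zone
    "edu": "edu-tld-server",  # delegated to the .edu registry
    "mit.edu": "mit-ns",      # delegated to MIT's own servers
}

def resolve_path(name):
    """Return the chain of servers consulted for `name`, most general first."""
    labels = name.split(".")
    path = []
    # Walk from the root down: "", "edu", "mit.edu", ...
    for i in range(len(labels) + 1):
        zone = ".".join(labels[len(labels) - i:])
        if zone in DELEGATIONS:
            path.append(DELEGATIONS[zone])
    return path

print(resolve_path("www.mit.edu"))
# ['root-server', 'edu-tld-server', 'mit-ns']
```

Each organization manages only its own slice of the table, which is exactly the distributed management that HOSTS.TXT lacked.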
Caching relies on a TTL field attached to each DNS record. Caches can be inconsistent while a record is changing, but they also decrease server load and mask periods of name-server outage. Caching known results is not sufficient, either; negative results must be cached as well.
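The TTL and negative-caching ideas fit in a few lines. A minimal sketch, assuming positive and negative answers each carry their own TTL (the class and record names are mine, not the paper's):

```python
import time

class DnsCache:
    """Toy DNS-style cache: entries expire after their TTL, and
    'no such name' answers are cached just like positive ones."""

    def __init__(self):
        self._entries = {}  # name -> (answer_or_None, expiry_time)

    def put(self, name, answer, ttl):
        # answer=None records a negative result ("no such name").
        self._entries[name] = (answer, time.time() + ttl)

    def get(self, name):
        entry = self._entries.get(name)
        if entry is None:
            return "miss"
        answer, expiry = entry
        if time.time() > expiry:  # TTL expired: treat as a miss
            del self._entries[name]
            return "miss"
        return answer if answer is not None else "negative"

cache = DnsCache()
cache.put("a.example", "10.0.0.1", ttl=60)
cache.put("nope.example", None, ttl=30)  # cached negative result
print(cache.get("a.example"))     # 10.0.0.1
print(cache.get("nope.example"))  # negative
```

Without the negative entry, every repeated query for a nonexistent name would go back upstream, which is exactly the load the paper says negative caching avoids.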
All in all, it is an interesting read. Still, while the paper is about the development of DNS, it would have been nice to have a better explanation and an example of how DNS actually works. If you don't already know about DNS, A records, NS records, and so on, then you do not come away from this paper knowing much more. Fear not: the next paper covered that well enough in its Related Work section.
Trivia question: What is a fuzzball? (bottom of page 4).
DNS Performance and the Effectiveness of Caching
This paper analyzed DNS performance from the client's perspective, along with the effectiveness of DNS caching. The authors logged all DNS traffic and TCP connection-establishment traffic on the external links of two networks. Their general finding is that caching NS records is very important, while caching A records for non-name-server machines matters much less.
The conventional wisdom is that caching is essential to DNS performance. That holds for NS records: caching them reduces load on the root servers, and the records themselves change infrequently. One motivation for relaxing caching elsewhere is that some systems depend on short-lived DNS records; specifically, short TTLs help mobile IP (dynamic DNS), load balancing, and content distribution networks. None of these requires that NS records have short TTLs.
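The asymmetry is easy to see with a toy simulation (my own illustration, not from the paper) of how many queries escape a cache for one record, given a stream of lookups and a TTL:

```python
def count_upstream_queries(lookup_times, ttl):
    """Queries sent upstream for one cached record, given lookup
    timestamps (seconds) and the record's TTL."""
    queries = 0
    expiry = -1.0
    for t in lookup_times:
        if t >= expiry:  # cache entry expired: must re-query upstream
            queries += 1
            expiry = t + ttl
    return queries

times = list(range(0, 3600, 60))  # one lookup per minute for an hour
print(count_upstream_queries(times, ttl=120))    # short A-record TTL -> 30
print(count_upstream_queries(times, ttl=86400))  # long NS-record TTL -> 1
```

A short TTL on an A record multiplies queries to that record's own server, but as long as the NS records stay long-lived, the root and TLD servers see almost none of that traffic.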
The main contribution of their work is showing that not all records have equal caching needs, which justifies using shorter A-record TTLs without hurting the Internet.
My main complaint is with their methods. They sniffed traffic at the external links and inferred a great deal about what was happening at internal DNS servers. For instance, if they saw a TCP flow start without a corresponding DNS request, they assumed the answer was cached at an internal server. The logic may be sound, but they really should have sniffed the traffic going into the local DNS servers as well; then they would have known conclusively which DNS requests were cache hits, and how many times each hit was reused locally.
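A sketch of the kind of inference they made (the event format and time window here are my own assumptions): a TCP connection with no recent matching DNS query on the external link gets attributed to an internal cache hit.

```python
def classify_connections(dns_queries, tcp_connections, window=5.0):
    """Label each TCP connection (time, name) as an 'external lookup' if a
    DNS query for the same name was seen shortly before it on the external
    link, else as an 'inferred hit' at some internal cache."""
    labels = []
    for conn_time, name in tcp_connections:
        seen = any(q_name == name and 0 <= conn_time - q_time <= window
                   for q_time, q_name in dns_queries)
        labels.append((name, "external lookup" if seen else "inferred hit"))
    return labels

dns = [(10.0, "a.example")]
tcp = [(12.0, "a.example"), (30.0, "b.example")]
print(classify_connections(dns, tcp))
# [('a.example', 'external lookup'), ('b.example', 'inferred hit')]
```

The weakness is visible in the code itself: an "inferred hit" says nothing about *which* internal server answered, or how many times that entry was reused; only a tap in front of the local DNS servers could tell you that.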
Overall, both papers were a good read, and they ought to remain on the syllabus.