This page's shorter URL is http://tinyurl.com./2aek.
You've come to this page because you've made an erroneous claim about Dan Bernstein's djbdns, similar to one (or more) of the following précis (which are expanded in full further on):
- djbdns violates standards because it doesn't support TCP.
- djbdns doesn't support the SRV record type.
- djbdns doesn't support multiple IP addresses.
- djbdns doesn't support DNSSEC and TSIG and never will because Dan Bernstein thinks they are evil.
- dnscache has a problem due to how it handles the RD bit.
- djbdns doesn't fully support "zone transfer" database replication.
- djbdns is bad because it uses the Unix fork()/exec() paradigm, even though BIND has stopped doing so.
- dnscache and tinydns must listen on different IP addresses. Some networked machines are incapable of this and so cannot use djbdns.
- tinydns doesn't hand out referrals for domains that it does not control.
These are the Frequently Given Answers to these claims.
These claims are highly inaccurate. Many of their falsehoods stem from
applying BIND Think to djbdns - thinking that it works
like BIND, thinking that it is configured and operated like BIND, and
thinking that it shares the same fundamental design flaws as BIND.
The myth about lack of TCP support
By default, tinydns does not support the use of TCP at all. This most definitely violates the spirit of the RFCs, as well as the letter (if a DNS query via UDP results in truncation, you're supposed to re-do the query using TCP instead).
The idea that djbdns does not support DNS service over TCP is a falsehood, and the deduction that is then made from that premise is thus wholly unsupported.
A passing glance at the djbdns documentation, even if one has never actually used the programs themselves, reveals the error of this claim. The name of the package is in fact djbdns, and tinydns is one of the (several) server programs in the package. In particular, tinydns is the program that supplies general purpose content DNS service over UDP. djbdns comprises different programs for supplying different types of DNS service. The program that supplies general purpose content DNS service over TCP is axfrdns. (It is one of Dan Bernstein's mistakes that the djbdns advertising blurb doesn't mention axfrdns.)
In response to antibodies, mutations occur. When people started pointing out that the chain letters that promote pyramid schemes are illegal under U.S. law, the confidence tricksters at the apices of the pyramids modified their letters to falsely claim legality under the very sections of the United States Code that people were pointing out declared them to be illegal. The same thing happened with this claim. In response to people pointing out the existence of axfrdns and how the djbdns package is structured, the original source of the claim added a second paragraph:
Indeed, if you want to support TCP under tinydns, you have to configure an optional program called "axfrdns", which was intended to handle zone transfers, but also happens to share the same database as tinydns, and can handle generic TCP queries.
This merely compounds the original error. It still does not grasp what the djbdns package is and what it contains. axfrdns is only "optional" inasmuch as any of the server programs in the package can be termed optional: if one doesn't need or want to provide that sort of service, one doesn't run the program. But if this were the (somewhat useless) definition of "optional", then mv would be an "optional" part of the GNU File Utilities, too. After all, if one doesn't want to move files, one doesn't run mv.
The true situation about support in djbdns for
content DNS service over TCP
is very simple: If you publish resource record sets that exceed the size
limits of a DNS UDP datagram (and, remember, this is for database content
that you are publishing, so you know how large everything will
be), or you have peers who replicate your servers' DNS databases using the
"zone transfer" mechanism of database replication, then you need to
provide content DNS service over TCP; and so you run axfrdns
to provide it. Otherwise, you don't.
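For concreteness, here is a minimal sketch of setting up axfrdns alongside an existing tinydns installation, using the package's own axfrdns-conf tool and a daemontools /service directory. The account names, directory names, domain, and IP addresses are merely examples, and the generated files differ slightly between versions.

```
# Create an axfrdns service directory that shares /etc/tinydns's database,
# answering TCP queries on 192.0.2.1 port 53.
axfrdns-conf axfrdns dnslog /etc/axfrdns /etc/tinydns 192.0.2.1

# Decide who may connect, and who may use "zone transfer" (here: only
# 192.0.2.2, and only for example.com), then compile the rules.
cd /etc/axfrdns
echo '192.0.2.2:allow,AXFR="example.com"' > tcp
echo ':allow,AXFR=""' >> tcp
make

# Hand the service over to daemontools.
ln -s /etc/axfrdns /service/axfrdns
```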
The myth about lack of support for new record types
Without a third party patch, tinydns does not support standard SRV records (which are intended to ultimately replace MX records, as well as perform similar functions for services other than mail).
This is simply a falsehood. The
tinydns-data
database source file format allows one to construct resource records of
every possible type using ':' lines. The third party add-on to
djbdns simply provides syntactic sugar that allows one to create
"SRV" resource records using a new 'S' line instead of having to enter the
data in raw form, and works around a boneheaded mistake (that gratuitously
inhibits interoperability by breaking binary compatibility in the way that
resource records are compressed) in the SRV specification.
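To illustrate, here is an SRV resource record entered in raw form with a ':' line. The numeric type code for SRV is 33; the record data is spelled out byte by byte, with the non-printable bytes as octal escapes. The owner name, priority 10, weight 5, port 80, and target www.example.com are example values only.

```
# SRV for _http._tcp.example.com: priority 10, weight 5, port 80,
# target www.example.com (the trailing \000 ends the encoded name).
:_http._tcp.example.com:33:\000\012\000\005\000\120\003www\007example\003com\000:3600
```

The third party 'S' line does nothing more than generate a record of exactly this form from a friendlier syntax.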
The myth about tinydns not supporting multiple IP addresses
Without a patch from a third party, tinydns does not listen to more than one IP address. If you have a multi-homed server, you have to apply a patch from someone other than they[sic] author, before you can get it to listen on more than one address/interface.
This is a falsehood. One doesn't require a "patch from a third party" to provide content DNS service on multiple IP addresses with djbdns. One simply runs multiple instances of tinydns (and, if necessary, axfrdns), each listening on a different IP address, but all sharing the same database (which can be achieved with a single symbolic link or by modifying the $ROOT environment variable as described in Life With djbdns).
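As a sketch of the multiple-instance approach (the account names, directory names, and addresses below are invented for the example), two tinydns services sharing one compiled database can be set up with tinydns-conf and daemontools like this:

```
# One service directory per IP address.
tinydns-conf tinydns dnslog /etc/tinydns-192.0.2.1 192.0.2.1
tinydns-conf tinydns dnslog /etc/tinydns-192.0.2.2 192.0.2.2

# Point the second instance's ROOT at the first instance's database
# directory, so that both publish the same data.cdb.
echo /etc/tinydns-192.0.2.1/root > /etc/tinydns-192.0.2.2/env/ROOT

# Start both under daemontools.
ln -s /etc/tinydns-192.0.2.1 /etc/tinydns-192.0.2.2 /service
```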
This is a good example of how these myths erroneously apply BIND Think to djbdns. Running multiple instances of the server would be a problem with softwares that employ large "everything under one roof" designs, such as BIND. BIND loads its entire database from the zone files into memory. Multiple instances of BIND attached to different interfaces would thus cause multiple in-memory copies of the database to exist, one per process, gobbling swap space at a ferocious rate if the database were of significant size. Moreover, each BIND process would contain the data structures dealing with caching proxy DNS service, even though they would be unused. The erroneous thinking here is to think that because multiple instances of BIND is such a massive problem, having multiple instances of a server must automatically be a problem for other softwares too.
However, tinydns and axfrdns do not load a copy of the database into memory. They read it directly from the database file (which they can do because the database file does not have to be parsed, as "zone" files have to be with BIND, and can be used directly). Thus they can all share a single database, and only one copy of that database will exist (in the operating system's file cache). Nor do their process images contain extraneous code and data unrelated to content DNS service. (On Solaris 2.7, for example, the tinydns executable built using GCC 2.95.2 is 1/8th the size of the supplied in.named - and that's just comparing it with BIND version four.)
On the gripping hand, the ironic fact is that in practice one very
rarely actually needs a single content DNS server to listen on multiple
interfaces. This is, after all, Unix. Unices are able to route IP
datagrams from one network interface to another. More often than not,
the underlying problem here is not that the content DNS server doesn't
listen on more than one IP address, but that IP routing isn't routing
DNS traffic appropriately.
The myth about DNSSEC and TSIG
There aren't even any patches that can get djbdns to implement TSIG, Dynamic DNS, or DNSSEC, nor are they ever likely to be created (my understanding is that the author is strongly opposed to them).
Dan Bernstein's
stated position
on the implementation of DNSSEC is reasonably clear, and contradicts the
above "understanding" and preceding claim.
The myth about dnscache and the RD bit
DNSCACHE (the caching server) does not respond to queries with the RD bit clear in the query. (Instead of simply answering from cache without recursing the dns-tree).
Answering from the cache without performing back-end queries is, in environments such as (say) an ISP where a single proxy DNS server is shared by multiple customers, a security hole, since it allows DNS clients to snoop on the activities of others.
The assertion of what a caching proxy DNS server should do is - somewhat ironically - utterly wrong. The assertion is that a caching proxy DNS server should behave the way that BIND does. But in fact BIND does the wrong thing here, too. A caching proxy DNS server should in fact respond to all queries in exactly the same way, irrespective of the value of their RD bits. The simple truth of the matter is that the RD bit is a useless piece of frippery, a mistake in the design of the DNS protocol, and DNS softwares should simply ignore it, whatever it is set to. The type of service desired is implicit in what server one sends one's query to. DNS client libraries always talk to proxy DNS servers, always expect complete answers to be returned, and so always require recursion to be performed.
In practice, DNS client libraries always set the RD bit to 1 in all
queries that they send. So the facts that dnscache doesn't
answer, and BIND foolishly provides a different kind of service to,
queries with the RD bit set to 0, do not affect the normal operation of
DNS clients, since that scenario never occurs in normal operation.
However, whilst both BIND and dnscache do the wrong things,
at least dnscache does the wrong thing that protects
inter-client privacy.
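One can see the distinction with the package's own diagnosis tools: dnsq sends a query with the RD bit clear, as one does when talking directly to a content DNS server, and dnsqr sends a query with the RD bit set, as DNS client libraries do when talking to a proxy DNS server. The names and address below are examples only.

```
# Ask a content DNS server directly; recursion is not requested.
dnsq a www.example.com 192.0.2.1

# Ask one's proxy DNS server (e.g. a local dnscache) for a complete
# answer; recursion is requested, as every normal DNS client does.
dnsqr a www.example.com
```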
The myth about database replication mechanisms
The suggested method for copying contents of DNS zones is rsync, scp, or other remote copy tools. The DNS standard method of zone transfers (query type "axfr") is only supported as an additional, disrecommended method.
The problem is that if you make a mistake and munge the database and then rsync or rcp that to the backup servers, you're totally hosed. Contrariwise, if you use the standard zone transfer mechanism, then the zone transfer should fail if the master is munged, and the slaves should keep a good working copy for a while and give you time to notice that the master is munged and needs to be fixed.
This myth starts with a few nuggets of truth but then proceeds to falsehood and gross misrepresentation.
First the few nuggets of truth: Database replication in djbdns involves copying a database file from one place to another. How one does this is left up to the administrator. rsync and scp are suggested. One can use the "zone transfer" mechanism if one wants to. Dan Bernstein's FAQ document does, however, very briefly mention the deficiencies of the "zone transfer" mechanism (in particular that it is locked into one single database schema) that make it a poorer choice than the alternatives.
Now the misrepresentations and falsehoods:
The implication of the use of the singular in "The suggested method … is" is that rsync and scp have special status. When one reads the documentation, one sees that this is clearly not the case. Any method that will copy a file from one location to another is satisfactory. djbdns is neutral with respect to how one copies files. One could even use the cp command, for example, or the uucp command. rsync and scp are no more than suggestions.
A second implication in this myth is that there is a lesser degree of support for the "zone transfer" mechanism of database replication than there is for any other mechanism. It is ironic that in fact the converse is the case. djbdns has no need to include anything at all to support rsync or scp database replication, because they are general-purpose tools for file copying, and require no special modifications or augmentations for DNS database files than they do for any other files. Whereas, because "zone transfer" is a mechanism that is locked in to one particular network protocol, and is not a general-purpose file copying mechanism by any stretch of the imagination, djbdns has to include, in the form of the axfrdns and axfr-get programs, special tools to support it.
The major falsehood in this myth is in the second paragraph, where it talks about mistakes that "munge" the database. Let us presume that the word "munged" here means undesired modifications that are still syntactically correct. (Syntax errors in the database source file will be "caught" by "zone transfer" inasmuch as BIND will report errors when parsing the file to load the database into its memory in the first place. But they will also be caught by djbdns. tinydns-data will spot the syntax errors, refuse to compile the source file into the binary database form, and return an error; this will in turn cause the processing of the makefile to be aborted before reaching the stage where the database file is copied. Moreover, because tinydns-data only replaces the binary database file atomically after it has successfully compiled it, tinydns/axfrdns will continue to serve the previous data from the (untouched) previous binary database file without interruption.)
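The makefile in question is the one conventionally kept in the tinydns data directory, along the lines of the following sketch (the secondary's host name and the paths are examples; Life With djbdns shows several variations). Because tinydns-data must succeed before the copy rule runs, a source file that fails to compile is never propagated.

```
# Illustrative /etc/tinydns/root/Makefile (recipe lines begin with a tab).
remote: data.cdb
	rsync -az -e ssh data.cdb secondary.example.net:/etc/tinydns/root/data.cdb

data.cdb: data
	/usr/local/bin/tinydns-data
```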
The second paragraph implies that there are database modifications that one can make that the "zone transfer" mechanism will magically notice and reject, that other replication mechanisms will not. This is false. If, say, one has 1000 type "A" resource records in one's database, and - with a miskeyed vi command - one accidentally deletes them all, then "zone transfer" will still happily replicate this new database to other content DNS servers. One will end up with all of one's servers missing these 1000 resource records.
The simple truth is that "zone transfer" has no magic means of reading
the administrator's mind and deciding what database modifications were
legitimate and what were errors that have "munged" the database. All
modifications to the database look alike to "zone transfer". A "munged"
database will be quite happily propagated by "zone transfer" to
secondary servers, just as with other database replication mechanisms.
The myth about fork()/exec()
One of the legitimate big complaints about older versions of BIND is that they implemented zone transfers in a separate program. If the database was large, then the fork()/exec() overhead was large, and the system could seriously thrash itself to death as it copied all those pages (for systems without copy-on-write), only to immediately throw them away again when it fired up the named-xfer program. With BIND 9, this problem is solved by having a separate thread inside the server handling zone transfers, and no fork()/exec() is done. However, tinydns/axfrdns goes back to the fork()/exec() model that was so greatly despised.
This is a falsehood, and an obvious one at that. No-one who has actually
used djbdns would make such basic factual mistakes when talking
about how the servers in the djbdns package operate.
tinydns has nothing whatever to do with "zone transfer" database replication. axfrdns does not fork(). (Indeed, none of the programs in the djbdns package fork().)
Moreover, this myth tries to propound the false notion that fork()/exec() is absolutely wrong in all circumstances, and that it is a bad design that modern softwares eschew. The truth is that it is a bad design for a program like BIND to fork(). The overhead of fork() is large where one has a lot of in-memory data structures that have to be copied. This is the case with "everything under one roof" programs like BIND, where the address space of the process that fork()s contains massive data structures, including the cache and the in-memory copy of the DNS database.
But it's not bad design for programs in general to fork()/exec(). Indeed, this is a basic Unix paradigm, which is used well in many programs, from shells to init. Neither is it a bad design for TCP servers, or DNS servers in particular, to fork()/exec(). It is flawed reasoning to think that just because a program is a DNS server it must work like BIND does, and so suffer from the same fundamental design flaws as BIND does.
The fundamental design flaws in BIND that make it inefficient for it to fork() are peculiar to BIND. The servers in djbdns do not operate like BIND, and don't suffer from its problems. In a djbdns system, the server process that fork()s is usually either tcpserver (part of a separate general-purpose suite of TCP/IP tools that can support any TCP service) or inetd. Neither of these has a cache of DNS resource records or a giant in-memory DNS database that needs to be copied on fork(), like the BIND process does.
tcpserver, in particular, doesn't allocate vast swathes of memory. It is a small program that does a simple job. It listens for incoming TCP connections, performs access control, and forks child processes to handle them. The overhead of copying the tcpserver address space is nowhere near the amount that it is for BIND. (Indeed, on a Solaris 2.7 system, for example, more pages in fact have to be duplicated when the init process fork()s a new child ttymon process for the console than when tcpserver fork()s a new child process to run axfrdns. Yet - strangely - we don't see the people who repeat the above myth railing against init for using the "greatly despised fork()/exec() model".)
The myth about separate IP addresses
Like tinydns, dnscache will not bind to more than one IP address without a third party patch. Because they are separate programs, you can't have both tinydns and dnscache listening to the same IP address(es) on the same server. While this is not the recommended mode of configuration, some sites don't have the luxury of having separate authoritative-only and caching/recursive-only server(s), and need to mix them both on one machine (or set of machines). […] With djbdns, this is impossible.
This myth makes the classic mistake of confusing "on a single machine" with "on a single IP address". By the very nature of UDP, one cannot run two UDP servers that provide different types of service, such as tinydns and dnscache, listening on the same IP address and port. However, this is Unix. Unix allows one machine to handle many IP addresses. Being unable to use a single IP address is not the same as being unable to use a single machine. People repeating this myth really should have read the Unix documentation.
The ability to have multiple network interfaces, and hence the ability to run multiple types of DNS server on a single machine, each listening on a separate IP address, is far from being a "luxury"; it is in fact overwhelmingly the most usual case with Unix. Any Unix machine with a network card in it has at least two IP addresses, one for the network card and one for the loopback interface. In almost all cases where one needs a content DNS server as well as a proxy DNS server, this much actually suffices, and one has one's content DNS server (tinydns/axfrdns) listening on the one interface and one's proxy DNS server (dnscache) listening on the other. (Which DNS server listens on which interface varies according to one's specific requirements, of course. One might want a public content DNS server and a private proxy DNS server; or one might want a private content DNS server and a public proxy DNS server.)
Far from it being "impossible" to have multiple types of DNS server on a
single machine with djbdns, it is actually easy. Configuring the
servers with
tinydns-conf
and
dnscache-conf
is relatively trivial. The IP address is one of the command arguments.
The difficult part is deciding what type of service to offer on what
network interface - a task that one has to perform anyway in such a
situation, and is unaffected by whatever DNS server software one uses.
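As a sketch (the account names, directories, and the choice of which service faces which way are examples), one common arrangement puts the content DNS server on the network card's address and the proxy DNS server on the loopback interface:

```
# Content DNS service on the public interface...
tinydns-conf tinydns dnslog /etc/tinydns 192.0.2.1

# ...and proxy DNS service for local clients on the loopback interface.
dnscache-conf dnscache dnslog /etc/dnscache 127.0.0.1

ln -s /etc/tinydns /etc/dnscache /service
```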
The myth about referrals
By default, tinydns does not hand out referrals to questions it is asked about zones it does not control. I believe that this violates the spirit of the RFCs, if not the letter.
This claim is, on the face of it, nonsense. tinydns is not told what "zones" it controls, any more than any other content DNS server is told. Bailiwick is in the eye of the beholder. i.e. It is proxy DNS servers that determine what "zones" a content DNS server "controls". tinydns is simply given a database of resource records to publish, which it publishes.
This claim cannot mean what it actually says, because what it actually says makes no sense. Therefore, if one presumes that the people making this claim are referring to some actual observed or documented behaviour of tinydns, then one has to guess as to what they are talking about by looking for some aspect of tinydns' behaviour that bears a passing resemblance to the claim.
Unfortunately, there isn't anything that resembles the claim. When asked about a name for which it has in its database a "&" (delegation) record for a superdomain, tinydns happily hands out referrals. If this is what is being talked about, the claim is a falsehood. Of course, if there are no delegation records in the database, tinydns cannot publish a referral. But this is simple common sense. A content DNS server cannot publish resource records that haven't been entered into its database by the administrator in the first place. If this is what is being talked about, then the claim is blaming tinydns for the fact that it doesn't correct pilot error by making answers up out of thin air when it hasn't been told them - a charge that (a) could be levelled at almost all softwares, and (b) is completely unreasonable.
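For reference, a delegation is entered in the tinydns-data source file with a '&' line such as the following (the domain names and TTL are examples). With such a record in its database, tinydns answers queries for names at or below child.example.com with a referral to the listed name server.

```
# Delegate child.example.com to the name server ns1.example.net.
&child.example.com::ns1.example.net:259200
```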