[Dnsmasq-discuss] [PATCH] Treat records signed using unknown algorithms as unsigned instead of bogus

Simon Kelley simon at thekelleys.org.uk
Mon Nov 23 23:00:47 GMT 2015


On 23/11/15 13:21, Michał Kępień wrote:
>> OK, I've done some more thinking about this. We have to be careful to
>> distinguish between validating a DS RRset and using that DS RRset to
>> prove that the DNSKEY RRset it refers to is valid. If we can't validate
>> a DS RRset, either because its signature is wrong, or we don't speak
>> the correct algorithm, then we can't use it to prove anything about the
>> zone it refers to. I'll return to that case later.
>>
>> There's another case where the DS RRset is validated - we know it's good
>> data, but we don't speak the hash algorithm it gives. In that case, we
>> can treat the zone as insecure, in exactly the same way as if we have an
>> NSEC record proving that the DS doesn't exist. 4035 says as much.
>>
>>
>>    If the validator does not support any of the algorithms listed in an
>>    authenticated DS RRset, then the resolver has no supported
>>    authentication path leading from the parent to the child.  The
>>    resolver should treat this case as it would the case of an
>>    authenticated NSEC RRset proving that no DS RRset exists, as
>>    described above.
>>
>> (An interesting bit of RFC-ology this, my RFC quote is two paragraphs
>> ahead of yours: they both refer to the same state, and say subtly
>> different things.)
>>
>> This isn't what dnsmasq does at present, and I'll fix that, but I doubt
>> it's the problem you've been seeing, since it's about hash algorithms.
> 
> Could you please explain why you think that the above excerpt from
> RFC 4035 only applies to hash algorithms and not signing algorithms as
> well?  As far as I can tell, if we're discussing a zone signed using
> exclusively algorithms the resolver doesn't understand, with a properly
> validated DS in its parent zone, in the end there is no difference
> between "I don't know how to calculate a digest of any of these DNSKEYs"
> (unknown hashing algorithm) vs. "I don't know how to verify any of this
> DNSKEY RRset's signatures" (unknown signing algorithm), because both of
> these cases boil down to "I'm unable to prove this DNSKEY RRset is
> secure, but I'm unable to prove it is bogus either".

Caveat. I'm not sure what the answer is. I'm certainly not arguing for a
fixed interpretation, not even the current behaviour of dnsmasq, and I'm
trying to understand what the correct behaviour should be. As always,
I'm terrified of breaking people's DNS by rejecting something that's OK,
and I'm also terrified of creating a security hole by accepting an answer
that's actually an attack which should be rejected.

The difference between unknown DS hash algorithms and unknown signature
algorithms is as follows, as best I understand it. In the DS case, we have a DS RRset
which is signed, and validated. We know it has been signed, and it says
that a key in the child zone has some hash value, for some algorithm we
don't support. The DS RRset is signed, so we know it's not a forgery. We
can therefore be sure that there really is no way for us to validate the
DNSKEY RRset in the child zone, because of the actions of whoever has
the private key which signed the DS RRset.
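To make that concrete, here's a rough sketch in Python of the DS-hash case
(the names and numbers are made up for illustration, not dnsmasq internals):
an authenticated DS RRset whose digest types are all unsupported is treated
like an authenticated proof that no DS exists, i.e. insecure.

```python
# Illustrative sketch only, not dnsmasq's actual C code.
SUPPORTED_DIGESTS = {1, 2}  # e.g. SHA-1, SHA-256 (hypothetical set)

def classify_ds(ds_rrset_validated, digest_types):
    """Decide what a DS RRset tells us about the child zone."""
    if not ds_rrset_validated:
        return "BOGUS"            # can't trust the DS RRset itself
    if not any(d in SUPPORTED_DIGESTS for d in digest_types):
        return "INSECURE"         # validated DS, but no supported
                                  # authentication path (RFC 4035)
    return "CONTINUE_VALIDATION"  # at least one usable digest type

print(classify_ds(True, {250}))   # unknown digest type -> INSECURE
```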

Next stage: assume that the hash value for one of the DNSKEYs in the
child zone is correctly given by the DS record, but the signature
algorithm for the DNSKEY RRset is unknown to us. We have no way of
validating the DNSKEYs. We know that the DNSKEY which matches the hash
in the DS is good, but the others may be impostors. Indeed an attacker
may have given us an answer with one good DNSKEY (matching the DS hash)
and another DNSKEY for a non-implemented algorithm. If we use the rule
that non-implemented algorithm -> insecure, then that's enough for an
attacker to ensure that any zone becomes insecure, and records are
returned by the validator.
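Here's a rough sketch of that downgrade, again with made-up names and
algorithm numbers. Suppose the validator applied the (in my view wrong)
rule "RRSIG algorithm we don't implement -> treat the zone as insecure":

```python
# Illustrative sketch only. Algorithm numbers are hypothetical.
IMPLEMENTED_ALGS = {8, 13}  # e.g. RSASHA256, ECDSAP256SHA256

def validate_dnskey_rrset(rrsig_algorithms, unknown_means_insecure):
    usable = rrsig_algorithms & IMPLEMENTED_ALGS
    if usable:
        return "SECURE"      # assume a signature actually verifies
    if unknown_means_insecure:
        return "INSECURE"    # downgrade: attacker wins, records returned
    return "BOGUS"           # signed zone we failed to validate

# An attacker strips the real RRSIGs on a signed zone and substitutes
# ones claiming a made-up algorithm number, e.g. 200:
print(validate_dnskey_rrset({200}, unknown_means_insecure=True))
print(validate_dnskey_rrset({200}, unknown_means_insecure=False))
```

Under the first rule the forged answer comes back INSECURE and is served;
under the second it is BOGUS and rejected.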

I think the rule should be: for unvalidated data, the zone must be
proved to be unsigned by NSEC/NSEC3 records covering the DS. If a zone
isn't proved to be unsigned, and the data can't be validated, then the
data should be treated as BOGUS. Otherwise any attacker can turn any
signed zone into an unsigned zone by the trick above, and there's no
point in a BOGUS return at all.
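The whole rule fits in a few lines; a sketch (hypothetical helper names,
not dnsmasq internals):

```python
# Illustrative decision logic for the rule proposed above.
def validation_outcome(data_validates, no_ds_proven_by_nsec):
    if data_validates:
        return "SECURE"
    if no_ds_proven_by_nsec:
        return "INSECURE"  # delegation provably unsigned, serve as-is
    return "BOGUS"         # signed zone we couldn't validate: reject

print(validation_outcome(False, True))   # unsigned zone -> INSECURE
print(validation_outcome(False, False))  # attack or breakage -> BOGUS
```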

> 
>> AFAIK there are no new hash algorithms out there. More likely the
>> problem is that a zone is _signed_ with a new public-key algorithm
>> (probably ECDSA). The following bit of 4035 might indicate that treating
>> such data as unsigned is not correct.
>>
>>
>>    If for whatever reason none of the RRSIGs can be validated, the
>>    response SHOULD be considered BAD.  If the validation was being done
>>    to service a recursive query, the name server MUST return RCODE 2 to
>>    the originating client.  However, it MUST return the full response if
>>    and only if the original query had the CD bit set.  Also see Section
>>    4.7 on caching responses that do not validate.
> 
> Yes, the section you quoted is in there indeed, but its header reads
> "Resolver Behavior When Signatures Do Not Validate".  I believe there
> may be a subtle difference between "signatures do not validate" and
> "signatures cannot be validated".
> 
> I am not claiming my interpretation is the only valid one.  While
> researching this topic, I came across RFC 4955, which discusses the
> methodology for setting up DNSSEC experiments which uses "strictly
> unknown algorithm identifiers when signing the experimental zone, and
> more importantly, having only unknown algorithm identifiers in the DS
> records for the delegation to the zone at the parent".  Section 4
> contains the following quote discussing treating the zone as unsigned
> when a DS with only unknown algorithm identifiers is encountered:
> 
>     Although this behavior isn't strictly mandatory (as marked by MUST),
>     it is unlikely for a validator to implement a substantially
>     different behavior.  Essentially, if the validator does not have a
>     usable chain of trust to a child zone, then it can only do one of
>     two things: treat responses from the zone as insecure (the
>     recommended behavior), or treat the responses as bogus.  If the
>     validator chooses the latter, this will both violate the expectation
>     of the zone owner and defeat the purpose of the above rule.
>     However, with local policy, it is within the right of a validator to
>     refuse to trust certain zones based on any criteria, including the
>     use of unknown signing algorithms.


I think that this is suggesting using unknown hash algorithms in the DS.
That makes perfect sense. Under the rule for such a DS, "normal"
validators will treat it as proof that the experimental zone is unsigned
(and so will dnsmasq, after the patch I posted), but test validators
will continue the validation into the zone.
> 
> I haven't yet found another validating resolver in the wild that would
> return SERVFAILs for zones signed using unknown algorithms, which is why
> I'm so curious about your reasons for implementing it this way.


See above. I think there's a difference between "I know that zone isn't
signed" and "I can't prove that this zone has been correctly signed."

Cheers,

Simon.

> 
>> PS. Change for DS unknown hash is:
>>
>> http://thekelleys.org.uk/gitweb/?p=dnsmasq.git;a=commit;h=67ab3285b5d9a1b1e20e034cf272867fdab8a0f9
> 
> Thanks!
> 