Posted on 2013-09-14 by Nick Thomas
Communications on the Internet overwhelmingly rely on SSL/TLS for protection.
There are two forms of protection this is meant to provide: from snooping of
traffic, and from impersonation. The first of those gets a lot of attention
but, unless we have the latter as well, an attacker can snoop on your traffic
by performing a man-in-the-middle attack on you with a dodgy certificate.
Unfortunately, the current method of providing protection from impersonation
is terrible. Traditionally, OS and browser vendors pick a range of root
certificates to bundle with their software - a list that's generally hundreds
of entries long - and everyone trusts that the list is good. Anyone who can
get a certificate into the list can then sell certificates signed by it to
people who can't get in themselves (like me, for a start).
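To get a feel for just how long that bundled list is on a given machine, here's a minimal sketch in Python (standard library only; the count depends entirely on the root store your OS ships):

```python
import ssl

# Build the default client context; this loads the system's bundled
# root certificates - the same set an HTTPS client would trust.
ctx = ssl.create_default_context()
roots = ctx.get_ca_certs()

# Any one of these roots can vouch for any domain on the Internet.
print(f"Trusted root certificates: {len(roots)}")
```

Every entry in that list is equally trusted for every domain, which is the heart of the problem.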
They can sell certificates for any domain, for any reason, with any degree of
publicity, transparency or validation; the only recourse vendors have is to
threaten to stop trusting them if they're shown to be issuing certificates that
don't meet some standard or another. If they're compromised and the key for the
root certificate is stolen - as happened to DigiNotar in 2011 - then it's a
mad scramble to revoke or blacklist new certificates based on that
stolen information before too much harm is done.
Recently, some vendors - Chrome, for instance - have started introducing
certificate pinning to restrict the range of CAs that are valid for a
particular domain.
This helps a bit against some attacks on large sites, but isn't much use as a
general solution.
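Pinning works by remembering a fingerprint of the key itself, rather than trusting whatever the chain says. As a rough sketch (this assumes the base64-SHA-256-of-SPKI form that HPKP later standardised; the input bytes below are a dummy stand-in, not a real key):

```python
import base64
import hashlib

def spki_pin(spki_der: bytes) -> str:
    # A pin is the base64-encoded SHA-256 digest of the certificate's
    # DER-encoded SubjectPublicKeyInfo.
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode("ascii")

# Dummy bytes standing in for a real DER-encoded public key:
pin = spki_pin(b"not-a-real-subject-public-key-info")
print(pin)
```

A client holding pins for a domain rejects any certificate chain whose keys don't match, even if the chain terminates at a trusted root.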
As for the first part - the encryption itself - there's a lot of discussion
right now over which parameters are safe, and which aren't. There are probably
*some* setups that are safe from cryptanalysis - or if not, then we can
probably come up with some. In this area, one more problem we have with the current CA
model is that deploying new types of certificates is a slow process - you have
to wait for a trusted CA to start offering them, before you can use them.
The current system, then, can be summarised as trust silos. The main contender
to replace it is an RFC known as DANE (DNS-Based Authentication of Named
Entities, RFC 6698).
This leverages DNSSEC-signed DNS to publish records that say which certificates
(rather than certificate authorities) are valid for a particular service running
on a domain. As it utilises the DNS, we move from trust silos to hierarchical
trust.
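As a concrete sketch (example.com and the digest here are placeholders, not a real deployment), a TLSA record for an HTTPS service looks like this:

```
; usage=3    (DANE-EE: trust this end-entity certificate directly)
; selector=1 (match on the SubjectPublicKeyInfo only)
; mtype=1    (the association data is a SHA-256 digest)
_443._tcp.example.com. IN TLSA 3 1 1 <hex-encoded SHA-256 of the server's public key>
```

A validating client looks this record up over DNSSEC, then checks the certificate the server presents against it, instead of (or as well as) walking the chain back to a bundled root.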
Hierarchical trust is narrower, and so better, but still vulnerable to
compromises of keys not under your control. However, the only other schemes
I'm really aware of at the moment are based on web-of-trust relationships with
offline identity verification. This boils down to everyone manually curating
bookmarks that tell them how much to trust things, and there are still keys
out of your control that, if compromised, break you - you just get to choose
between trust anchors more flexibly than with a hierarchical system. I'm not
convinced the extra effort is worth it, so I've deployed DNSSEC + DANE instead,
and in the next article, I'll go over how I did it.
Questions? Comments? Criticisms? Contact the author by email: gemini@ur.gs