[Note: I’m using the X^Y notation to note “X to the power of Y” because this system plays strange with superscripts.]
The Register reports today that the Chinese researchers who originally posted an attack on the SHA-1 hashing algorithm have further refined the attack. This follows the revelation in August 2004 that the MD5 and SHA-0 hashes were broken.
The previous attack, revealed in February 2005, allowed the researchers to find a hash collision in 2^69 operations instead of the 2^80 expected from a brute-force search. The new attack pares that down to 2^63 operations. Now, that's still a heck of a lot, but a 2^17 speedup over brute force (a factor of 64 beyond February's result) is quite a big deal, especially since that number is a preliminary result and there seems to be every reason to believe that they'll be able to push it down even further as they refine the new technique.
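For the arithmetic-inclined, the relative speedups fall straight out of the exponents quoted above:

```python
# Work factors as operation counts, taken from the figures above.
brute_force = 2**80   # generic birthday attack on a 160-bit hash
feb_2005 = 2**69      # the February 2005 result
aug_2005 = 2**63      # the refined result

# The new attack is 2^17 (roughly 131,000x) cheaper than brute force...
assert brute_force // aug_2005 == 2**17
# ...and 64x cheaper than the February attack.
assert feb_2005 // aug_2005 == 2**6
```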
Any refinement, though, is icing on the cake. As Bruce Schneier notes:
But an attack that’s faster than 2^64 is a significant milestone. We’ve already done massive computations with complexity 2^64. Now that the SHA-1 collision search is squarely in the realm of feasibility, some research group will try to implement it. Writing working software will both uncover hidden problems with the attack, and illuminate hidden improvements. And while a paper describing an attack against SHA-1 is damaging, software that produces actual collisions is even more so.
What I find really interesting is that people have already been thinking about ways to work around this flaw. Steven Bellovin and Eric Rescorla have written a paper that describes the effect of the MD5 and SHA-1 breaches on actual security protocols such as S/MIME, TLS, and IPSec. They don’t mince words:
The current generation of attacks address collision resistance. MD5 is effectively dead from that perspective; SHA-1 is much weaker than it should be, though finding collisions is still impractical.
A couple of notes I pulled from the paper through a quick browse:
- HMACs — using hashes to authenticate messages between two parties with a shared secret — are probably safe, because they use multiple applications of the hashing function.
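That "multiple applications" structure is visible if you write HMAC out by hand. Here's a minimal sketch built on Python's `hashlib`, following the nested inner/outer hashing laid out in RFC 2104, and checked against the standard library's own `hmac` module:

```python
import hashlib
import hmac

def hmac_sha1(key: bytes, msg: bytes) -> bytes:
    """HMAC-SHA1 per RFC 2104: two nested applications of the hash."""
    block = 64  # SHA-1 block size in bytes
    if len(key) > block:
        key = hashlib.sha1(key).digest()
    key = key.ljust(block, b"\x00")
    ipad = bytes(b ^ 0x36 for b in key)
    opad = bytes(b ^ 0x5C for b in key)
    inner = hashlib.sha1(ipad + msg).digest()   # first hash application
    return hashlib.sha1(opad + inner).digest()  # second hash application

# Matches the standard library's implementation.
assert hmac_sha1(b"secret", b"hello") == \
    hmac.new(b"secret", b"hello", hashlib.sha1).digest()
```

Because the attacker doesn't know the key, finding a collision on the bare hash doesn't directly let them forge a tag.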
- Digital signatures and fingerprinting are in “grave danger”, since an attacker could prepare two versions of the data being signed and switch them without detection.
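The danger here comes from the fact that a signature covers only the digest, not the document itself. As a toy demonstration, here's the swap attack run against a deliberately crippled hash (SHA-1 truncated to a single byte, so a collision turns up after at most a few hundred tries; the real attack does the same thing against the full function, just with vastly more work):

```python
import hashlib
from itertools import count

def weak_hash(data: bytes) -> bytes:
    # Deliberately weak: only the first byte of SHA-1, so
    # collisions are trivial to find by brute force.
    return hashlib.sha1(data).digest()[:1]

benign = b"I owe Alice $10."

# The attacker searches for a second document with the same digest...
for i in count():
    malicious = b"I owe Mallory $1,000,000. (v%d)" % i
    if weak_hash(malicious) == weak_hash(benign):
        break

# ...so any signature computed over the benign document's digest
# verifies equally well against the malicious one.
assert weak_hash(benign) == weak_hash(malicious)
assert benign != malicious
```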
Of more interest are their conclusions, quoted here:
Implementors of S/MIME should ensure that their product handles multiple signatures properly. In particular, programs should report success with one signature while warning about unverifiable signatures. MD5 should never be used for digests, since all conforming implementations already support SHA-1. The IETF should develop a method for indicating digest function capabilities in certificates, CA vendors should implement it, and new certificates should contain explicit statements about hash functions supported.
The IETF should define a TLS extension by which clients can signal support for newer certificates.
The IETF should pick one of the two suggested alternatives for supporting client side certificates properly.
The IETF should consider allowing PRF negotiation in a future version of TLS.
The definition of the digitally-signed element should be amended to support new hash functions.
The definition of the Finished message should be amended to support new hash functions.
The IETF should amend the IKE and IKEv2 specifications to describe signaling via the SA hash proposal.
DSA presents a special problem, since it may only be employed with SHA-1. NIST needs to clarify this situation, either by defining DSA-2 or by describing how DSA can be used with randomized hashes or truncated longer hashes.
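The multiple-signature recommendation quoted above is simple to sketch in code. Nothing here is S/MIME-specific; the `Signature` type and `check_signatures` helper are hypothetical stand-ins for a real verifier:

```python
from dataclasses import dataclass

@dataclass
class Signature:
    hash_name: str   # e.g. "md5", "sha1", "sha256"
    verified: bool   # result of the cryptographic check, assumed done elsewhere

def check_signatures(sigs):
    """Report success if any signature verifies, but warn about the rest."""
    warnings = [f"unverifiable {s.hash_name} signature"
                for s in sigs if not s.verified]
    return any(s.verified for s in sigs), warnings

# A dual-signed message during a transition period: this verifier only
# understands SHA-1, so the newer signature is unverifiable to it.
ok, warnings = check_signatures([
    Signature("sha1", True),
    Signature("sha256", False),
])
```

The point is that a verifier which rejects the whole message just because it can't check one of the signatures would make it impossible to phase in new hash functions gradually.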
Lots of food for thought.