One of the big stories from the past few days is the return of DNS cache poisoning. The new attack has been dubbed SAD DNS, and the full PDF whitepaper is now available. When you look up a website’s IP address through a poisoned cache, you get the wrong IP address.
This can send you somewhere malicious, or worse. The paper points out that DNS has suffered a sort of feature creep, picking up more and more responsibilities. The most notable example that comes to mind is Let’s Encrypt using DNS as the mechanism to prove domain ownership when issuing HTTPS certificates.
DNS cache poisoning is a relatively old attack, dating from 1993. The first iteration of the attack was simple: an attacker who controlled an authoritative DNS server could include extra DNS results, and those extra results would be cached as if they were authoritative. In 1997 it was realized that the known source port, combined with a non-random transaction ID, made DNS packet spoofing rather trivial. An attacker simply needs to spoof a DNS response with the appropriate txID, at the appropriate time, to trick a requester into accepting it as valid. Without the extra protections of a TCP connection, this was an easy task. The response was to randomize the txID in each transaction.
I have to take a moment to talk about one of my favorite gotchas in statistics: the birthday paradox. The chance that two randomly selected people share a birthday is 1 in 365. How many people have to be in a room together to get a 50% chance of two of them sharing a birthday? If you said 182, then you walked into the paradox. The answer is 23. Why? Because we’re not looking for a specific birthday, we’re just looking for a collision between any two dates. Each non-matching birthday that walks into the room provides another opportunity for the next one to match.
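A quick sanity check in Python, multiplying out the probability that every birthday in the room is distinct:

```python
from math import prod

def collision_chance(people, days=365):
    """Probability that at least two of `people` share a birthday."""
    return 1 - prod((days - i) / days for i in range(people))

for n in (10, 23, 70):
    print(n, round(collision_chance(n), 3))
```

Run it and 23 lands almost exactly on the 50% mark, while 70 people are all but guaranteed a collision.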
This is the essence of the DNS birthday attack. An attacker would send a large number of DNS requests, and then immediately send a large number of spoofed responses, guessing random txIDs. Because only one collision is needed to get a poisoned cache, the chances of success go up rapidly. The mitigation was to also randomize the DNS source port, so that spoof attempts had to have both the correct source port and txID in the same attempt.
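The same math drives the DNS birthday attack. Here’s a rough model, with the usual caveats: the 300-query / 300-spoof packet counts are made-up illustrative numbers, and the exponential is the standard collision approximation, not an exact figure:

```python
import math

def poison_success(queries, spoofs, space=2**16):
    """Approximate chance that at least one spoofed reply matches the
    txID of one outstanding query (standard collision approximation)."""
    return 1 - math.exp(-queries * spoofs / space)

# txID only (16 bits) vs. txID plus a randomized source port (32 bits):
print(poison_success(300, 300))          # roughly 3 in 4
print(poison_success(300, 300, 2**32))   # roughly 2 in 100,000
```

Against a 16-bit txID alone, a few hundred packets each way wins most of the time; adding a randomized 16-bit source port drops the same effort to negligible odds.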
The other major DNS security improvement we have to mention is DNSSEC. The DNS Security Extensions add public key cryptography to the DNS protocol, so that responses are cryptographically signed. Cache poisoning and DNS spoofing essentially go away when DNSSEC is implemented. The problem is that so many resolvers, forwarders, and domains don’t yet support DNSSEC. SAD DNS specifically targets those resolvers and forwarders.
The real linchpin to SAD DNS is a quirk of DNS ports that allows a port scan to identify the sending port of a given DNS query. That quirk is that when a UDP packet is sent from a given source port, that port is considered open to incoming packets by the OS. Any other port will return an ICMP port unreachable message, indicating a closed port. This technique, combined with a denial of service attack, gives an attacker enough time to identify the source port and send a flurry of txID guesses. Because they can figure out the port and the txID in a two-step process, the cache poisoning attack is viable once again.
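Some quick arithmetic shows why splitting the search into two steps is so devastating:

```python
PORT_SPACE = 2**16   # possible UDP source ports
TXID_SPACE = 2**16   # possible DNS transaction IDs

# Guessing both at once means searching the full product space:
joint = PORT_SPACE * TXID_SPACE

# Scanning ports first, then guessing txIDs, is additive instead:
two_step = PORT_SPACE + TXID_SPACE

print(f"{joint:,} vs {two_step:,} ({joint // two_step:,}x easier)")
```

Port randomization was supposed to square the attacker’s work; the side channel collapses it back to a sum.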
There is one quick-and-dirty mitigation against this attack that is easy to implement. Instead of setting your firewall to reject packets by default, set it to drop them by default. Since the port scan depends on ICMP messages coming back from closed ports, silently dropping packets starves the attacker of that signal. It’s possible that this workaround could be overcome, but the mechanism described in this paper does not work if the target DNS resolver/forwarder is set to drop unsolicited packets by default.
How To Document a Breach
Capcom of Japan was recently hit with a ransomware attack. How exactly their network was compromised is still unclear, but Capcom is being very clear about what data was compromised, and what data may have been compromised. In fact, their overall response looks like a textbook case of getting it right. They give the types and numbers of records exposed. Additionally, they intentionally don’t keep things like credit card numbers in their database, so those are known not to be compromised.
They give a detailed timeline in section four. As soon as the results of the attack were noticed, they pulled the plug on their servers and went directly into incident response mode. It’s very likely that the attack remained minor due to this quick action. Within two days of the incident, they had an initial statement out, confirming it was an attack. The more detailed report being available two weeks later is reasonably quick as well. If you’re ever in the unenviable position of having to write the public incident summaries for an attack, this might be a good reference for how to get it right.
Intel has dropped 40 security advisories in November, for a total of 95 vulnerabilities disclosed. As noted in the link above, that’s a lot. What strikes me is that over half of those were discovered by Intel employees doing paid security research. Some of the more interesting vulns discovered were a pair of remotely exploitable problems in Intel’s AMT management platform, and a similar problem in Intel Bluetooth products. While it’s great that these bugs are getting fixed, I would love to see a bit more information about what exactly is going on with each of these vulns.
Remember last week, when we talked about the problems in TCL TVs? One of the biggest complaints security researchers had about that issue was the deafening silence from TCL. That silence has been broken, and quite a few questions have been answered. I’ve seen some additional information where TCL states that TerminalManager is strictly for remote diagnostics, and can only be activated by the end user. The primary vulnerability we discussed, the filesystem being exposed over HTTP, was caused by a separate app, T-Cast. This application enables streaming from mobile devices, and wasn’t installed on North American TVs. The exposed filesystem seems to have been an unintended side effect of the app.
There may be more to come, as there are several of us digging into TCL TVs and firmware. So far, however, my experience has been that the firmware is relatively well put together. On the model I bought, SELinux is enabled, the root filesystem is mounted read-only, and no SU binary is installed. There are a few oddities, but if anything comes of those, we’ll let you all know at the appropriate time.
Remember WarGames? A young Matthew Broderick was just looking for early access to a video game. He decided to dial every phone number in LA, making note of which ones had computer modems at the other end, so he could find the one that belonged to his favorite game developer. I think [Martin Vigo] must have had that in mind when he made a discovery. Many services allow you to trigger a password reset through a cell number. Often, that process gives away a few digits of the associated number. If you could coordinate lookups against multiple services, you could recover multiple digits, maybe all of them, starting from a simple email address.
The second half of that tool is now available: Phonerator. It scrapes Open Source INTelligence (OSINT) sources for information on valid phone numbers, and lets you generate a list of known-good numbers from a partial match. “Open Source” here doesn’t refer to source code, but to publicly available data. Martin suggests a few clever uses of these tools. One would be a reverse lookup on a stalker, assuming you have an email address that’s associated with their number. The read is interesting, particularly if you’re a telephone geek like many of us are.
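As a toy illustration of the idea, and not a claim about how Phonerator actually works internally, here’s a sketch that merges the digit masks hypothetically leaked by two password-reset flows and expands what’s left into a candidate list:

```python
from itertools import product

def merge(a, b):
    """Combine two masks of the same number leaked by different
    services ('*' marks an unknown digit)."""
    return "".join(y if x == "*" else x for x, y in zip(a, b))

def candidates(mask):
    """Expand a partial number into every concrete number it could be."""
    unknowns = mask.count("*")
    for combo in product("0123456789", repeat=unknowns):
        it = iter(combo)
        yield "".join(next(it) if c == "*" else c for c in mask)

# One service leaks the last two digits, another the area code:
mask = merge("***-***-**67", "415-***-****")
nums = list(candidates(mask))
print(mask, len(nums))
```

Five unknown digits still leave 100,000 candidates, but that’s a far cry from the ten billion possible ten-digit numbers, and every additional leaky service narrows it further.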
Worst Idea of the Week?
And finally, we have the idea that strikes me as the worst idea of the week. In this blog post, [Matthew Green] makes a simple request. He asks Google to release their old DKIM secret keys after they expire, and maybe rotate those keys more often. First, DKIM is the “DomainKeys Identified Mail” system. It’s a simple way to confirm that the claimed sender of an email is the actual sender. In the case of Gmail, Google signs each outgoing message with their secret DKIM key, confirming that the message was really sent from the claimed account, and hasn’t been tampered with. The purpose of this signature is to give admins another spam-fighting tool.
So why does Matthew want Google to release those keys? Google DKIM has recently been used to validate some very high-profile email dumps. His argument is that if anyone has access to the signing keys, signatures on dumped emails will no longer be trustworthy. Since they can no longer be proven to be real, email hacks will slow down and cease to be an issue.
Why is this a bad idea? Because it won’t cut down on email hacks, it will just make it impossible to debunk the fake ones. If anything, giving away the signing keys will just encourage fake email dumps. There’s a separate point to be made here about DKIM being useful in the case of law enforcement and prosecution. In this case, it not only confirms validity, but also prevents anyone from tampering with the emails.
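To make the stakes concrete, here’s a toy demonstration of why publishing a signing key turns “verifiable” into “forgeable.” This is textbook RSA with tiny primes, for illustration only; real DKIM uses proper key sizes, padding, and DNS-published key records:

```python
# Toy RSA, *not* real DKIM: tiny primes, no padding, illustration only.
import hashlib

p, q = 61, 53
n = p * q                          # public modulus
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private (signing) exponent

def digest(msg):
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def sign(msg, priv):
    return pow(digest(msg), priv, n)

def verify(msg, sig):
    return pow(sig, e, n) == digest(msg)

mail = b"From: alice@example.com\nTotally real email."
sig = sign(mail, d)
assert verify(mail, sig)    # only the holder of d could have made this

# Once d is published, *anyone* can produce a valid signature:
forged = b"From: alice@example.com\nIncriminating forgery."
assert verify(forged, sign(forged, d))
```

While the key stays secret, a valid signature proves the mail came through Google’s servers; the moment the key is public, that proof evaporates, for fakes and genuine dumps alike.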
[Editor’s note: We’re not all agreed on this. Deniability is often seen as a good thing in crypto algorithms, both because it’s privacy-preserving and because it prevents the system from being used in these kinds of blackmail schemes. If you want to sign your e-mails, there are tons of ways to do so: GPG is the most obvious. But you shouldn’t have to. Users shouldn’t have this fundamental privacy choice made for them, especially with the default mode being the privacy violation. Discuss!]