I wrote this essay in October 2012 but didn’t take the time to publish it back then. As such, the vast majority of links date from 2012 or earlier, except for a few that I inserted as I cleaned up the text. It’s worth noting that it was written pre-Snowden, and interesting how it is still relevant today.
Any accessible device is potentially easy to turn into a remote spying bug. By accessible, I mean it has Bluetooth, an Ethernet port, Wi-Fi, a GSM/3G chip[2], FireWire/DisplayPort/Thunderbolt, an analog modem, a USB port, a memory card slot, a custom wireless protocol, a DVD/CD player, HDMI or DVI[3], NFC, a SmartCard reader, a TPM[4], GPS, a satellite video feed, a magnetic card reader, a credit card chip reader, or is simply physically accessible.
[2] While a GSM tower isn’t a hack in itself, known hacks exist from the GSM tower against 3G modems.
[3] While this isn’t an exploit of the hardware itself, it is interesting because it is impossible to update the protocol, and because Niels Ferguson, a Microsoft employee, claimed to have cracked it 9 years earlier but didn’t release his research for fear of the DMCA.
[4] Not an exploit, but a large increase in attack surface.
Look at medical hardware. If you haven’t yet played FreeCell on a pill distribution machine, you should try it. Having glanced at one, I found it truly frightening. Seriously. These electronic devices are rarely updated and are a reservoir of malware. The military and the U.S. DoE have had issues too.
Why did the military and the U.S. DoE have issues? Because of software-based vulnerabilities. While fixing all vulnerabilities in a program is near impossible, one way to reduce the problem is to reduce the software attack surface. As seen with Google Chrome’s sandbox and the Xbox 360 hypervisor, one way to reduce the attack surface is containment. Containment is hard: for example, much virtualization software introduces bugs instead of securing the software.
Containment’s efficiency is limited by all the I/O ports available on a device. So while efforts can be made to reduce the attack surface, it exists by definition.
My point above is: there will always be security-related bugs in anything made by a human. Maybe the author was careless, maybe it was a plain oversight, maybe he’s stupid, maybe he did it intentionally. It doesn’t matter; once the error is spotted, the only fix is to deploy the correction as fast as possible.
Having one of the best software sandboxes on Windows is useless if there’s an escape. Your only hope is to update all your users as fast as possible.
“We started analyzing the exploit as soon as it was submitted, and in fewer than 10 hours after Pwnium 2 concluded we were updating users with a freshly patched version of Chrome.”
That’s how you keep your users secure.
Software updates are key to security, because before having good crypto, you need good software. And bugs will be found over a long period of time, usually long after “clients” have started using the code. This means someone has to take responsibility for the security issue, which is a problem in itself, and then must take care of updating the clients securely.
How do you safely update something if you know the code fetching the update is broken? You don’t want to be like a popular application that silently downloads its updates over plain HTTP and doesn’t verify the files it downloads[5].
[5] An undisclosed zero-day MITM attack I heard about, resulting in remote-code execution.
So what do you do? Use crypto, of course! Then you get to the point where your update crypto is potentially vulnerable, and you need to update it too.
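To make the point concrete, here is a minimal sketch of “never run unverified bytes” in Python. The digest constant and file contents are illustrative, and a real updater would use asymmetric signatures rather than a pinned hash, but the shape of the check is the same:

```python
import hashlib
import hmac

# Hypothetical trusted digest: assumed to ship with the client or to arrive
# over an already-authenticated channel (a stand-in for a real signature).
TRUSTED_SHA256 = hashlib.sha256(b"update-v2.bin contents").hexdigest()

def verify_update(payload: bytes, expected_sha256: str) -> bool:
    """Refuse the downloaded update unless its digest matches a value
    we already trust; never execute unverified bytes."""
    digest = hashlib.sha256(payload).hexdigest()
    # compare_digest is constant-time, avoiding timing side channels.
    return hmac.compare_digest(digest, expected_sha256)

print(verify_update(b"update-v2.bin contents", TRUSTED_SHA256))  # True
print(verify_update(b"tampered update", TRUSTED_SHA256))         # False
```

The chicken-and-egg problem in the text applies here too: if the verification code itself is the broken part, this check protects nothing, which is why the updater path deserves the most scrutiny.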
Independent of which crypto choices you make, it is important to be able to push your bits to the clients as fast as possible. The client’s software update code is your most important code. Test it. Monitor update failures. Update failures will happen; deal with it. If you can’t update your clients, your clients are no longer secure as soon as a zero-day exploit is known.
An option is to have multiple implementations and have the clients switch over. There are two use cases. The first is supporting multiple crypto algorithms for a protocol. Many protocols support this. It is needed simply because old clients weren’t updated, but it gives newer clients a higher level of protection.
The second reason to support multiple cryptographic algorithms is to support a fallback in case the initial one is severely broken. There is at least one case where it made sense. Overall, the idea is to be agile in your use of crypto so you can fall back to a good-enough algorithm when the main one is broken, and still send bits to update your software. IMHO, this has a few bad properties:
It’s based on the assumption that updating all the clients and the servers is impossible, which is true more often than not. On the other hand, with due discipline it’s possible to test the code paths, so that #2 is not that much of a problem. #1 and #3 still remain to some extent.
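The agility idea can be sketched briefly. This is not any particular protocol’s negotiation; it is a hypothetical manifest format where the server names the algorithm it used and the client keeps a list of acceptable ones, which a later update can shrink (retiring a broken algorithm) or grow:

```python
import hashlib
import hmac

# Hypothetical set of acceptable algorithms; an update to the client can
# remove a broken one from this dict, causing it to be rejected below.
SUPPORTED = {"sha512": hashlib.sha512, "sha256": hashlib.sha256}

def verify_agile(payload: bytes, manifest: dict) -> bool:
    algo = SUPPORTED.get(manifest.get("algo"))
    if algo is None:
        return False  # unknown or retired algorithm: fail closed
    return hmac.compare_digest(algo(payload).hexdigest(), manifest["digest"])

payload = b"update bits"
new = {"algo": "sha512", "digest": hashlib.sha512(payload).hexdigest()}
old = {"algo": "sha256", "digest": hashlib.sha256(payload).hexdigest()}
bad = {"algo": "md5", "digest": "irrelevant"}
print(verify_agile(payload, new))  # True
print(verify_agile(payload, old))  # True
print(verify_agile(payload, bad))  # False
```

Note the design choice to fail closed: an attacker who can choose the algorithm name must not be able to downgrade the client to something outside the supported set.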
Sometimes the crypto is good but humans need help with security. Cryptography is useful for two things: confidentiality and authentication. I talked about confidentiality above. Let’s cover authentication.
Using crypto usually means some form of trust relationship. What happens when the problem is the trust relationship itself? Comodo and DigiNotar come to mind. Often it’s not even a bug or a human error; it’s simply that the key strength used for the algorithm is no longer good enough. Sometimes it’s an unnoticed bug which disables authentication completely. In these cases, it’s not just the algorithm that needs to be changed; it could be the keys! One way is to use a secondary authentication mechanism in addition to the primary one. But in the end, it could be the authentication key itself that needs to be updated securely.
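A secondary authentication mechanism can be sketched as requiring an update to validate under two independent keys. The key values here are purely illustrative, and HMAC stands in for the asymmetric signatures a real system would use; the point is only that a single broken or stolen key is no longer enough:

```python
import hashlib
import hmac

# Illustrative secrets, not real keys: in practice these would be two
# independently stored signing keys, ideally on separate infrastructure.
PRIMARY_KEY = b"illustrative-primary-key"
SECONDARY_KEY = b"illustrative-secondary-key"

def make_tag(key: bytes, payload: bytes) -> str:
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_dual(payload: bytes, tag1: str, tag2: str) -> bool:
    """Accept only if BOTH mechanisms validate the payload."""
    ok1 = hmac.compare_digest(make_tag(PRIMARY_KEY, payload), tag1)
    ok2 = hmac.compare_digest(make_tag(SECONDARY_KEY, payload), tag2)
    return ok1 and ok2

update = b"signed update"
t1 = make_tag(PRIMARY_KEY, update)
t2 = make_tag(SECONDARY_KEY, update)
print(verify_dual(update, t1, t2))              # True
print(verify_dual(update, t1, "forged tag"))    # False
```

This still leaves the rotation problem from the text unsolved: if both keys must eventually be replaced, the replacement itself has to travel through a securely authenticated update.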
Designing crypto primitives with a strong preference for hardware implementations defeats the goal of keeping running software secure; it’s near impossible to update hardware in the field, especially in consumer products, let alone update it fast. What do you do if the whole cipher is broken and you need to switch ciphers? Your fancy hardware implementation becomes useless. Sure, you can work around it with an FPGA, but that’s only useful up to a certain extent. You can use smartcards to offload the crypto, but they are way too slow and cumbersome for most use cases. Also, what if you embed keys into hardware and they become vulnerable to spoofing, like the Estonian electronic ID card?
Let’s just hope the crypto world gravitates towards easy-to-deploy, easy-to-update, secure software-based implementations.
Thanks to Ryan Sleevi for reviewing earlier drafts.