Monday, January 17, 2011 - 17:18

Stuxnet: In security tests we trust...or not. Maybe we even fake them.

Isn't it lovely. You deliver your piece of code or hardware for a security audit and get it back all shiny and clean. Except for the little bug that happens to come in handy when someone needs to sabotage one of your customers' nuclear reactors. A customer like Iran. Even setting aside the political magnitude of this, it leaves a very bad taste. Even more so if Siemens was not a willing party in this but merely a conveniently available source, its equipment studied during inspection to build the exploit.

Now, it wasn't a reactor that was attacked, but it could have been. Actually, it could have been anything. That Stux was a masterpiece with a very specific taste for a certain enrichment facility in Iran was pretty obvious. Both facts were rather unexpected. The exploit was ingenious, and the target was suspiciously selective: only one very specific setup was attacked, that of the enrichment facility at Natanz. A common attack would target a significantly higher number of systems. The source (or sources) of the attack was rather obvious after the first inspection of Stux. But there's no proof. Still no proof, by the way. The explanation the NYT delivered two days ago, however, is pretty smooth. And aside from the obvious issues with it, it raises a very interesting question: how honest are security audits at that level? And how trustworthy can they be if one of those three-letter agencies has a significant interest in a possible exploit?
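
To make that selectivity concrete, here is a minimal sketch of the pattern (in Python, with entirely hypothetical names and values - these are not Stuxnet's actual checks): a targeted payload fingerprints its environment and stays dormant everywhere else, which is exactly what separates it from a common worm.

    from dataclasses import dataclass

    # Hypothetical fingerprint of the one plant configuration the payload
    # is meant to hit. A common worm would skip these checks entirely.
    EXPECTED_PLC_MODEL = "S7-315"          # assumed controller model
    EXPECTED_DRIVE_IDS = {0x7050, 0x9500}  # hypothetical frequency-drive IDs
    EXPECTED_CASCADE_SIZE = 164            # hypothetical devices per cascade

    @dataclass
    class System:
        plc_model: str
        drive_id: int
        cascade_size: int

    def matches_target(system: System) -> bool:
        # Every detail must match; one mismatch and the payload stays dormant.
        return (system.plc_model == EXPECTED_PLC_MODEL
                and system.drive_id in EXPECTED_DRIVE_IDS
                and system.cascade_size == EXPECTED_CASCADE_SIZE)

    def run(system: System) -> None:
        if not matches_target(system):
            return  # on any other machine: do nothing, leave no trace
        print("payload would fire here")   # stand-in for the sabotage logic

    run(System("S7-400", 0x1234, 16))      # unrelated plant: silently ignored
    run(System("S7-315", 0x7050, 164))     # the one matching setup: attacked

The point being: precisely because the trigger conditions are that narrow, the payload looks inert on every system an auditor is likely to run it against.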

If you're producing critical equipment (like Siemens does), these audits apparently are worth...not that much. It actually seems that having no special audit at all would make you and your customers safer. A weird situation in which security by obscurity does indeed help. Not entirely, of course, but it obstructs an attacker's ability to gather such detailed information quickly.

If the situation is even remotely as the NYT outlined it - with or without Siemens' active support - it has some grave implications for how you secure your assets.