Having been in the sausage-factory for a long time, I'd like to share some thoughts on what I think is a problem when it comes to cybersecurity.
Contrary to popular belief, programmers are human; we make stupid mistakes, some big, some small. Some days we are lazy, other days we are energetic and highly motivated. So (exploitable) bugs inevitably creep in; that's just a fact of life.
The first step in writing robust and secure code is the design and architecture. The next step is to have developers with good habits and skills, and the final step is to run a good suite of automated tests on the modules that make up the product (a sketch of what I mean follows below).
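To make that last point a bit more concrete, here is a minimal sketch of the kind of module-level test I have in mind. The module under test (`parse_rtsp_url` and its rules) is a made-up stand-in for this post, not code from any actual product.

```python
import unittest
from urllib.parse import urlparse

# Hypothetical module under test: a tiny validator for camera stream URLs.
# In a real VMS this would live in its own module; it is inlined here so the
# example is self-contained.
def parse_rtsp_url(url: str) -> tuple[str, int]:
    """Return (host, port) for an rtsp:// URL, rejecting anything else."""
    parsed = urlparse(url)
    if parsed.scheme != "rtsp" or not parsed.hostname:
        raise ValueError(f"not a valid RTSP URL: {url!r}")
    return parsed.hostname, parsed.port or 554  # 554 is the RTSP default port


class TestParseRtspUrl(unittest.TestCase):
    def test_accepts_plain_rtsp_url(self):
        self.assertEqual(parse_rtsp_url("rtsp://cam01.local/stream1"),
                         ("cam01.local", 554))

    def test_rejects_other_schemes(self):
        # http:// and file:// URLs must never reach the streaming layer.
        for bad in ("http://cam01.local/", "file:///etc/passwd", "cam01.local"):
            with self.assertRaises(ValueError):
                parse_rtsp_url(bad)


if __name__ == "__main__":
    unittest.main()
```

Nothing fancy, but a battery of tests like this, run on every build, is the baseline before you can even start talking about security.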
But consider that a VMS typically runs on an OS that the vendor has very little control over, or uses a database (SQL Server, MySQL, MaxDB, etc.) that is also outside the manufacturer's reach. Furthermore, the VMS itself uses libraries from third parties, and with good reason too. Open source libraries are often much better tested and under far more scrutiny than a homemade concoction reviewed by just a few peers, but they too have bugs (just fewer than the homebrew stuff, usually).
Inevitably, someone finds a way to break the system, and when it happens, it’s a binary event. The product is now insecure. You can argue all you want that the other windows and doors are super-secure, but if the back door is open – who cares about the lock on the window?
To be fair, if the rest of the building is locked down well, then fixing the broken door may be a smaller event.
Contrast that with a system that is insecure by design, where fixing the security issues requires changes to the architecture. We're no longer talking about replacing a broken lock, but about upheaval of the entire foundation. And an end user doesn't know whether the cracks are due to a fundamental issue or something that just needs a bit of plaster and paint.
And this brings me to the real issue.
Say a developer politely demands that resources be allocated to fixing these issues; what do you imagine will happen? In some companies, I assume, a task force is assembled to estimate the severity of the issue, and resources are then allocated to fix it. A statement is issued so that people know to apply the patch (they're not going to do it, but it's the right thing to do). This is what a healthy company ought to do. A sick company would make the following statement: “no one has complained about this issue, and – actually – we have to make money”.
A good way to make yourself unpopular (as a programmer) is to respond that if the issue IS discovered, you can forget about making any money; your market will be limited to installations that really don't care about security. The local Jiffy Lube that replaced its VHS-based recorder with a DVR that just sits on a dusty shelf may truly not care. The system is not exposed in any way – it is CCTV (Closed being the operative word here). They're fine. And the root password is written on a Post-it note stuck to the monitor. But what about a power plant? What about a bank? An airport?
You might imagine that an honest coder with integrity would resign on the spot, but that doesn't solve the problem. Employees are often gagged by NDAs and non-disparagement clauses, and while disclosure of security flaws is clearly protected by the First Amendment, it is generally a bad idea to talk about these things. The company may suffer heavy losses, and you put (unsuspecting) customers at risk by making these things public. The threat of legal action and the asymmetry (a single person vs. a corporation) ensure that flaws rarely surface.
It's also conceivable that the dumbass programmer is simply wrong about the risk of a bug or design issue. A developer may think that a trivial bypass of privilege checks is “dangerous”, but customers might genuinely not care (see the sketch below for what such a bypass might look like).
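For the sake of illustration, here is a hedged sketch of what a “trivial bypass” could look like: an export endpoint that trusts a role flag supplied by the client instead of a server-side session. The handler and field names are made up for this example and don't refer to any real product.

```python
# Hypothetical request handler, made up for this example. The "role" field
# comes straight from the client, so anyone can claim to be an admin simply
# by sending role=admin -- the privilege check is trivially bypassed.
def handle_export_request(request: dict) -> str:
    if request.get("role") != "admin":   # client-controlled value!
        return "403 Forbidden"
    return f"exporting recordings for camera {request.get('camera', '?')}"


# A regular user "bypasses" the check by lying about their role:
print(handle_export_request({"role": "admin", "camera": "lobby-cam"}))
```

Whether a flaw like that is a five-alarm fire or a non-event depends entirely on who is standing in front of the box, which is exactly the disagreement between the developer and the customer.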
Who knows? At the Black Hat convention in 2013, IP cameras from several manufacturers were shown to be hopelessly insecure. It didn't seem to make any difference.
I referenced this talk in an earlier post as well.
Four years later, cybersecurity is all the rage, and perhaps people do care – but from what I can tell, it's mostly a few SJWs who crave the spotlight and pretend to care. Whether the crazy accusations have merit is irrelevant; all that matters is that viewers tune in, and the show will get increasingly grotesque to keep people entertained. And if the freak show isn't bringing in the crowds, you can always turn it into a sort of “anonymous Facebook” where people can backstab each other – like the bitchiest teenage girls used to treat each other.
What the industry probably needs to do is pay professional penetration testers to go to work on the systems out there. I'm not talking about the kind of shitty automated tests that are being done today; they are far, far from sufficient. You need people like Craig Heffner in the video to go to town and get to the bottom of things.
Happy hacking.