You can say whatever you want in the comments, and I will approve it, but I need to know who you are before I do so.
I think some of the incumbents are going in the wrong direction, while I am a little envious of some that I think got it 100% right.
In the old days, things were simple and cameras were really dumb. Today’s cameras are often quite clever, yet hordes of VMS salespeople are trying to make them dumb again, driving the whole industry backward to the detriment of the end-users. Eventually, though, I think people will wake up and realize what’s going on.
The truth is that you can run a VMS on a $100 hardware platform (excluding storage). Yet, if you are keeping up on the latest news, it seems that you need a $3,000 monster PC with a high-end GPU to drive it. In the grand scheme of things (cost of cameras, cabling and VMS licenses) the $2,900 difference is peanuts, but it bothers me nonetheless. It bothers me because it suggests a piss-poor use of the available resources.
As I have stated quite a few times, the best way to detect motion is to use a PIR sensor, but if you insist on doing any sort of image analysis, the best place to do it is on the camera. The camera has access to the uncompressed frame in its most optimal format, and it brings its own CPU resources to the party. If you move motion detection to the camera, your $100 platform never has to decode a single video frame and can focus on what a VMS should be doing: reading, storing and relaying data.
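To show how little it takes, here is a minimal frame-differencing motion score of the kind an on-camera analytic might compute on the raw sensor data. The threshold and the score definition are my own illustrative assumptions, not any particular camera’s algorithm:

```python
import numpy as np

def motion_score(prev, curr, threshold=25):
    """Fraction of pixels whose brightness changed by more than
    `threshold` between two uncompressed grayscale frames.
    prev/curr: 2-D uint8 arrays, as seen straight off the sensor."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return np.count_nonzero(diff > threshold) / diff.size
```

A real edge analytic would add noise filtering and region masks, but the point stands: this runs on raw pixels the camera already has, with no decode step anywhere.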
In contrast, you can let the camera throw away a whole bunch of information as it compresses the frame, then send the frame across the network (dropping a few packets for good measure) to a PC that is sweating bullets because it must now decompress each and every frame: MPEG formats are all-or-(almost)-nothing, so there is no “decode every 4th frame” option here. The decompressed frame now contains compression artifacts, which make accurate analysis more difficult. Transmission across the network can also mean the frames don’t arrive at a steady pace, which causes further problems for video analytics engines.
VMS vendors now say they have a “solution” to the PC getting crushed under the insane workload required to do any sort of meaningful video analysis. Move everything to a GPU, they say – and it’s kinda true. Bring up the Task Manager in Windows and your CPU utilization will now be lower, but crank up GPU-Z and you (should) see the GPU buckling under the load. One might ask whether it would not have been cheaper to get a $350 octa-core Ryzen CPU instead of a $500 GPU.
Some will say that if the integrator has to spend 2 days setting up the cameras using edge detection, it might be cheaper to just spring for the super PC and do everything on that. This assumes that the setup can actually be done quicker than setting it up on a camera. I’d wager that a lot of motion detection systems are not really necessary, and in other cases the VMS motion detection is simply not as good as the edge-based detection, which in some tragic instances completely invalidates the system and renders it worthless as people and objects magically teleport from one frame to the next.
Here’s a video from 2015 showing the advantages of letting the GPU do some of the heavy lifting.
Sometime in 2014, I received a database dump from a high profile industry site. I received the file from an anonymous file sharing site via a Twitter user that quickly disappeared. The database contained user names, mail addresses, password hashes (SHA1), the salt used, IP address used to access the site and the approximate geographical location (IP geolocation lookup – nothing nefarious).
I had canceled my subscription in January 2014, and the breach happened after that. I don’t believe I received a notification of the breach. Many others did, but I absolutely would remember if I had received one – in part because I discussed the breach with a former employee at the blog, and in part because I was in possession of said DB.
A user reached out to me, seemingly puzzled as to why I would be annoyed by not receiving a notification – seeing as I was no longer a member, why would I care that my credentials were leaked? No one would be able to log into the site using my account anyway.
Here’s the issue I have with that. I happen to use different passwords for different things – but a lot of people do not. A lot of people use the same password for many different services. Case in point: say you find a user with the email address email@example.com, and someone runs a rainbow-table attack and recovers the password. Do you think there’s a likelihood that the same password would work if they try to log into the mail account at Gmail? Sure, it’s bad to reuse passwords, but do people do it? You bet.
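To make the risk concrete, here is a sketch of how a leaked salted-SHA1 entry falls to a simple dictionary run. The exact salt-plus-password construction in that DB is an assumption on my part; note also that per-user salts defeat precomputed rainbow tables but not this kind of per-candidate guessing:

```python
import hashlib

def sha1_salted(password, salt):
    """One common (weak) scheme: SHA1(salt + password), hex encoded.
    The leaked DB's exact construction is assumed, not known."""
    return hashlib.sha1((salt + password).encode()).hexdigest()

def crack(target_hash, salt, wordlist):
    """Hash each candidate with the user's salt and compare.
    SHA1 is fast enough that millions of guesses per second are
    feasible, which is the whole problem."""
    for candidate in wordlist:
        if sha1_salted(candidate, salt) == target_hash:
            return candidate
    return None
```

Any password recovered this way becomes a candidate for the matching Gmail account, which is exactly why ex-members deserve the breach notice too.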
So, when your site is breached, I think you have an obligation to inform everyone affected by the breach – regardless of whether they are current members or not. I would imagine anyone in the security industry would know this.
“Neverending stooooooryyyyyyy… ”
Update: Mirai now has an evil sister
Mirai seems to be the talk of the town, so I can’t resist, I have to chime in.
Let me start by stating that last week’s Dyn attack was predictable.
But I think it is a mistake to single out one manufacturer when others demonstrably have the same issues. Both Axis and NUUO recently had full disclosure documents released that both included the wonderful vulnerability “remote code execution” – which is exactly what you need to create the type of DDOS attack that hit Dyn last week.
There’s not much we can do right now. The damage has been done, the botnet army is out there, probably growing every day, and any script-kiddie worth their salt will be able to direct the wrath of the botnet towards one or more internet servers that they dislike for one reason or another. When they do, it won’t make any headlines; it will just be another site that goes offline for a few days.
The takeaway (if you are naive) is that you just need a decent password and hardware from reputable sources. While I would agree with both in principle, the truth is that in many cases it is useless advice.
For example, if you look at the Axis POC you’ll see that the attacker doesn’t need to know the password at all. (I did some probing, and I am not sure about that now.)
The impact of this vulnerability is that taking into account the busybox that runs behind (and with root privileges everywhere. in all the binaries and scripts) is possible to execute arbitrary commands, create backdoors, performing a reverse connection to the machine attacker, use this devices as botnets and DDoS amplification methods… the limit is the creativity of the attacker.
In other words:
I am not suggesting that Axis is better or worse than anyone else, the point is that even the best manufacturers trip up from time to time. It can’t be avoided, but thinking that it’s just a matter of picking a decent password is not helping things.
My recommendation is to not expose anything to the internet unless you have a very, very good reason to do so. If you have a reason, then you should wrap your entire network in a VPN, so that you are only able to connect to the cameras via the VPN, and not via the public Internet.
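On a Linux gateway, that “VPN only” rule can be as short as a few firewall lines. A sketch, assuming made-up subnets (cameras on 192.168.10.0/24, VPN clients on 10.8.0.0/24) – adjust and test before trusting it:

```shell
# Cameras: 192.168.10.0/24, VPN clients: 10.8.0.0/24 (both assumed).
# Allow camera access from the VPN subnet only...
iptables -A FORWARD -s 10.8.0.0/24 -d 192.168.10.0/24 -j ACCEPT
# ...let the cameras answer established connections...
iptables -A FORWARD -s 192.168.10.0/24 -d 10.8.0.0/24 \
         -m state --state ESTABLISHED,RELATED -j ACCEPT
# ...and drop everything else headed for the cameras.
iptables -A FORWARD -d 192.168.10.0/24 -j DROP
```

The point is not these exact rules but the shape of them: nothing reaches the cameras except traffic that came in through the VPN.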
My expectation is that no-one is going to do that, but instead they will change the password from “admin” to something else, and then go back to sleep.
A while ago I read about a rationalization experiment conducted by Tim Wilson at the University of Virginia. Two groups were asked to pick a poster to take home as a gift. One group was asked to explain why they liked the poster before taking it home; the other could just pick one and go home.
After a while, they were asked for feedback. How did they like their posters now?
The group that was asked for an explanation hated theirs, while the others were still happy with their choice. This is a little surprising (it was to me, at least). Why would explaining your choice affect it so profoundly?
I think this behaviour is important to keep in mind when we talk to our clients.
A potential client may be reviewing 4 different bids, and while they are pretty similar, the client decides that he likes #2. At some point the client may have to rationalize why he picked #2. If there are very obvious and compelling reasons to go with #2, that presents no problems. But what if all 4 are very similar, and the reason the client wants #2 is really that he quite simply likes it better? He can’t explain his choice by just jotting down “I like #2 better”, so he will rationalize.
And so, unless there are very clear, objective advantages of #2 over #4, we are entering dangerous territory. The Poster Test (and other variations on the same theme) shows that people simply make things up. It’s not that people consciously think “oh, let me just make up a story”; they genuinely feel that they are making a rational argument.
In the realm of software development, requirement specifications quite often deal exclusively with what one might call itemizable features, e.g. “open a file”, “delete a user”, “send an email”, and very little effort is put into discussing the “feel” of the application. When bad managers sit in a management meeting, the focus is on “Product X does Y, we need to do Y too, and then Z to be better”, and the assumption is that the user doesn’t care one bit about how the app feels. Almost no feedback complains about how an application “feels” (it does exist, there is just not a lot of it), and it is very rare that managers take that sort of thing to heart.
At Prescienta.com, the “feel” is important. I believe that if the product feels robust and thoughtful, the customer will happily make up a story about why some small unique feature in our product is crucial. “Feel” is about timing, animations and non-blocking activities; it’s about consistency and meaningful boundaries. Some companies seem to understand this and consistently move towards improving the user experience. Others give up, forget, or lose the people who have the ability to create pleasing user experiences, and instead let their product degrade into a jumble of disjointed UX ideas, inconsistent graphical language and bad modal activities (hmmm.. a bit of introspection here?)
A well-designed application is designed with the technical limitations in mind – a good example is Dropcam, where the audio delay is so large that the UI is designed to take it into consideration. The push-to-talk metaphor is a perfect way to handle the latency. If the UI were designed to work like a phone (which has almost no latency), there would be a massive gap between what the UI suggests and what the technology can deliver.
Another example, a bit closer to home, is the way you move cameras around in privilege groups. The goal here was to design the UI to ensure that the user did not get the idea that they could add the same camera to two different groups. If the UI suggests that you can do so, you may be puzzled as to why you can’t. I decided to prevent multi-group membership to avoid contradictory privileges (e.g. in one group you are allowed to use PTZ in another it is forbidden, so which is correct?).
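A sketch of that rule as a data structure – the names here are hypothetical, but the invariant (one privilege group per camera) is the point:

```python
class PrivilegeGroups:
    """Each camera belongs to at most one privilege group, so
    contradictory privileges (PTZ allowed in one group, forbidden
    in another) can never arise for the same camera."""

    def __init__(self):
        self._group_of = {}  # camera id -> group name

    def add(self, camera, group):
        owner = self._group_of.get(camera)
        if owner is not None and owner != group:
            raise ValueError(f"{camera} is already in group {owner!r}")
        self._group_of[camera] = group
```

Because the model refuses multi-group membership outright, the UI can honestly reflect it instead of offering a choice that would fail later.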
OrwellLabs just released a POC on how to hack certain Axis cameras and video servers (h/t Gavin Millard). This comes just a few days after I received a marketing email from Axis (the July 2016 Axis ADP eNewsletter) containing this verbiage:
Service release for critical security vulnerability
Recently, a critical security vulnerability was discovered in some of Axis’ products that are accessible from the Internet. We have now published firmware service releases for the majority of our products; see http://www.axis.com/support/product-security. Axis recommends users to update the affected products’ firmware as soon as possible, especially if the products are accessible from the Internet.
The very same email contains this message
Taking responsibility: CEO Ray Mauritsson comments on privacy, business ethics, the Axis code of conduct, and how we – together with our partners – are working to create a smarter, safer world.
Axis did put out a press release on the 6th of July 2016 about the vulnerability on their security page (about 9 months after Axis was notified by OrwellLabs). But honestly, how many owners of the affected devices will go there? How many will get an email about this sort of thing? Even if they do, they have to visit a link to determine the severity. I don’t know if issuing a press release 9 months later, saying that your company’s cameras may become part of a botnet army, counts as “taking responsibility”. I know it is not uncommon for companies to behave this way, but that doesn’t make it right.
For all intents and purposes, the impact of this bug is not that big. The vast majority of sensible users will not expose their CCTV system directly to the internet. Your camera may access the internet – outbound – to send notifications, etc., but that will not make you vulnerable. To be vulnerable, you need to map a port on your firewall to your camera, so that you can see the camera’s web interface from a public IP address.
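If you want to check whether something is exposed, the test is a plain TCP connect – a minimal sketch (run it from outside your network against your public IP; host and port here are placeholders):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds,
    i.e. something (a mapped camera?) is answering there."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns True for port 80 or 554 on your public address, your camera is one firmware bug away from joining a botnet.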
But there is a minority of users who will, and who do this sort of thing. And even if it is a minority, they can do a lot of damage to others (by participating in botnets), and obviously to themselves too (paying $$$ for a useless/dangerous device).
It’s depressing because Axis does a lot of things to improve security: shipping without default passwords is a pain in the ass, but it does make things a little more secure (only a little, because people will put in “123456”, “pass”, or something similar).