next thing you know…
Sometime in 2014, I received a database dump from a high-profile industry site. I received the file via an anonymous file-sharing site from a Twitter user that quickly disappeared. The database contained user names, email addresses, password hashes (SHA1), the salt used, the IP address used to access the site, and the approximate geographical location (an IP geolocation lookup – nothing nefarious).
I had canceled my subscription in January 2014, and the breach happened later than that. I don’t believe I received a notification of the breach of the database. Many others did, but I absolutely would remember if I had received one – in part because I discussed the breach with a former employee at the blog, and in part because I was in possession of said DB.
A user reached out to me, seemingly puzzled as to why I would be annoyed by not receiving a notification – seeing as I was no longer a member, why would I care that my credentials were leaked? No one would be able to log into the site using my account anyway.
Here’s the issue I have with that. I happen to have different passwords for different things – but a lot of people do not. A lot of people use the same password for many different things. Case in point: say you find a user with the email address firstname.lastname@example.org in the dump, and someone uses a rainbow-table attack to recover the password. Do you think there’s a likelihood that the same password would work if they try to log into the mail account at Gmail? Sure, it’s bad to reuse passwords, but do people do it? You bet.
So, when your site is breached, I think you have an obligation to inform everyone affected by the breach – regardless of whether they are current members or not. I would imagine anyone in the security industry would know this.
There’s a lot of paranoia in the industry right now, some warranted, some not. The primary issue is that when you plug something into your network, you basically have to trust the vendor not to spy on you “by design” and not to provide a trivial attack vector to third parties.
First things first. Perhaps you remember that CCTV means Closed Circuit Television. Pay attention to those first two words. I am pretty sure 50% or more of all “CCTV” installations are not closed at all. If your CCTV system is truly closed, there’s no way for the camera to “call home”, and it is impossible for hackers to exploit any attack vectors because there’s no access from the outside world to the camera. There are plenty of PCs running terrible and vulnerable software out there, but as long as these systems are closed, there’s no problem. Granted, it also limits the flexibility of the system. But that’s the price you pay for security.
At the opposite end of the spectrum are cameras that are directly exposed to the internet. This is a very bad idea, and most professionals probably don’t do that. Well… some clearly do, because a quick scan of the usual sites reveals plenty of seemingly professional installations where cameras are directly accessible from the internet.
To expose a camera directly to the internet you usually have to alter the NAT tables in your router/firewall. This can be a pain in the ass for most people, so another approach, called hole punching, is often used instead. This requires a STUN server sitting between the client outside the LAN (perhaps on an LTE connection via AT&T) and the camera inside the LAN. The camera registers with the STUN server via an outbound connection, and almost all routers/firewalls allow outbound connections. The way STUN servers work probably confuses some people, and they freak out when they see the camera making a connection to a “suspicious” IP, but that’s simply how things work, and not a cause for alarm.
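The STUN exchange itself is tiny, which is part of why it confuses people – the camera just fires off a small UDP packet and gets its public address back. Here is a rough Python sketch of the message format from RFC 5389 (IPv4 only, and only the one attribute that matters here; a real client does retransmission and more):

```python
import os
import struct

MAGIC_COOKIE = 0x2112A442  # fixed value defined by RFC 5389

def build_binding_request() -> bytes:
    """Build a minimal 20-byte STUN Binding Request (no attributes)."""
    msg_type = 0x0001        # Binding Request
    msg_length = 0           # no attributes follow the header
    transaction_id = os.urandom(12)
    return struct.pack("!HHI", msg_type, msg_length, MAGIC_COOKIE) + transaction_id

def parse_xor_mapped_address(response: bytes):
    """Pull the public (ip, port) out of a Binding Response's
    XOR-MAPPED-ADDRESS attribute -- the address the server saw,
    i.e. the outside of your NAT."""
    pos = 20  # skip the 20-byte STUN header
    while pos + 4 <= len(response):
        attr_type, attr_len = struct.unpack("!HH", response[pos:pos + 4])
        if attr_type == 0x0020:  # XOR-MAPPED-ADDRESS
            # Port and address are XOR'ed with the magic cookie to stop
            # NATs from "helpfully" rewriting them in-flight.
            port = struct.unpack("!H", response[pos + 6:pos + 8])[0] ^ (MAGIC_COOKIE >> 16)
            raw = struct.unpack("!I", response[pos + 8:pos + 12])[0] ^ MAGIC_COOKIE
            ip = ".".join(str((raw >> s) & 0xFF) for s in (24, 16, 8, 0))
            return ip, port
        pos += 4 + attr_len + (-attr_len % 4)  # attributes are padded to 4 bytes
    raise ValueError("no XOR-MAPPED-ADDRESS in response")
```

That outbound request is the “suspicious” connection people see in their logs; the reply simply tells the camera what its public-facing IP and port are.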
Now, say you want to record the cameras in your LAN on a machine outside your LAN – perhaps you want an Azure VM to record the video. But how will the recorder on Azure (outside your LAN) get access to the cameras inside the LAN unless you set up NAT rules and thus expose your cameras directly to the internet?
This is where the $10 camera proxy comes in (the actual cost is higher because you’ll need an SD card and a PSU as well).
So, here’s a rough sketch of how you can do things.
EvoStream can receive an incoming RTMP stream and make that stream available via RTSP; it basically changes the protocol but reuses the same video packets (no transcoding). So, if you were to publish a stream at, say, rtmp://evostreamserver/live/mycamera, that stream will be available at rtsp://evostreamserver/mycamera. You can then add a generic RTSP camera that reads from rtsp://evostreamserver/mycamera to your VMS.
The next step is to install the proxy; you can use a very cheap Pi clone or a regular PC.
Say you have a camera that streams via rtsp://192.168.0.100/video/channels/1; the command looks something like this (all on one line):
ffmpeg -i rtsp://username:password@192.168.0.100/video/channels/1 -vcodec copy -f flv rtmp://evostreamserver/live/mycamera
This will make your PC grab the A/V stream from the camera and publish it to the EvoStream server on Azure, but the camera is not directly exposed to the internet. The PC acts as a gateway, and it only creates an outbound connection to another machine that you also control.
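In practice you want that ffmpeg process to survive camera reboots and network hiccups, so wrap it in something that restarts it. A minimal Python sketch (hypothetical – it only assumes ffmpeg is on the PATH, and the URLs are the illustrative ones from above):

```python
import subprocess
import time

def build_relay_cmd(camera_url: str, publish_url: str) -> list:
    """Assemble the ffmpeg argv that pulls RTSP from the camera on the
    LAN and republishes it as RTMP -- no transcoding, just repackaging."""
    return [
        "ffmpeg",
        "-i", camera_url,     # pull from the camera inside the LAN
        "-vcodec", "copy",    # copy the video packets as-is (cheap on CPU)
        "-f", "flv",          # FLV container, as RTMP expects
        publish_url,          # outbound push to the server you control
    ]

def run_relay_forever(camera_url: str, publish_url: str, retry_delay: float = 5.0) -> None:
    """Keep the relay alive: if ffmpeg exits (camera offline, network
    glitch), wait a bit and start it again."""
    while True:
        subprocess.call(build_relay_cmd(camera_url, publish_url))
        time.sleep(retry_delay)

# Example (would run indefinitely):
# run_relay_forever(
#     "rtsp://username:password@192.168.0.100/video/channels/1",
#     "rtmp://evostreamserver/live/mycamera",
# )
```

On a Pi-class box you'd typically let systemd or cron launch this at boot, one instance per camera.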
You can now access the video from the VMS on Azure, and your cameras are not exposed at all, so regardless of how vulnerable they are, they will not expose any attack vectors to the outside world.
Using Azure is just an example, the point is that you want to isolate the cameras from the outside world, and this can be trivially accomplished by using a proxy.
As a side note. If cameras were deliberately spying on their users, by design, this would quickly be discovered and published. That there are bugs and vulnerabilities in firmware is just a fact of life and not proof of anything nefarious, so calm down, but take the necessary precautions.
This is getting ridiculous.
I just received my $10 computer from China. I paid a premium for the (required) SD card as I do not have the patience to wait for one to arrive in the mail. The 5V/2A charger from my old (still functional) PSP works as a power supply. I then downloaded Armbian and booted.
A few commands later, and I have a $20 camera proxy.
I don’t actually plan to use it as my camera proxy, but as a small controller for a number of sensors I plan to add – for example, using a cheap, modified PIR sensor as input to the controller.
As you may know, I also have a Raspberry Pi 2. This little device is incredibly stable, and has only been rebooted once in the last 3 months, and that was by accident.
Hopefully you’ll be able to get a $100 device that you simply plug into your infrastructure, and that little device will work as standalone, or as a node in a much larger VMS, but that’s a bigger project that I might pick up later.
Some of the commands I used :
sudo apt-get update
sudo apt-get install ffmpeg
ffmpeg -i rtsp://...... -vcodec copy -f flv rtmp://....
“Neverending stooooooryyyyyyy… ”
I like Prism Skylabs; they seem to have their shit together. It seems like a very well designed UI, and the idea – at first glance – seems to be great. Wouldn’t it be great to get feedback on your store layout strategy?
That which is measured improves. That which is measured and reported improves exponentially.
Unfortunately, I’ve measured my pet rock for the last 6 months to no avail. That damn thing hasn’t improved one bit.
There’s no question that Prism Skylabs can provide some interesting metrics in a very beautiful package, but I started wondering if Prism Skylabs was able to prove that using the system will increase revenue (and ultimately profit) for stores that actively use the system. I believe that most stores have pretty fine-tuned systems for correlating ads, events, placement and revenue based on years of experience (and most likely recorded video).
Maybe it’s like tracking the time it takes for me to drive through the city every day. When I get to my destination I already know if I had a good day, and if my phone told me “today it took 20% longer than usual” there’s not much I could do with that. It doesn’t make sense for me to go back to the beginning and do the run once again. I can try again tomorrow, but the next day the circumstances may have changed. And the system can’t say, for certain, that if I took another route I’d be 20% faster in the long term. It can just measure what happened on a given day, under those circumstances, and those circumstances will have changed tomorrow.
So, if you get Prism Skylabs, do you keep iterating and shuffling your wares around on the shelf? When do you stop? What if shuffling things around improves things in the very short term, but is counterproductive in the long term (where the hell are the Twizzlers now?)
But it does look nice…
It’s been 2 years since I built my current workstation, and it’s still a very capable machine. It has an i7 3770K, 32GB RAM and an NVIDIA GTX 670. There really isn’t a rational reason to upgrade. While a full recompilation of the source code takes 10 minutes, I rarely need one, and so most of the time the limitation is really how quickly I can type and move the mouse around.
I suppose it’s like getting a new car. How often do you really need a new car? And do you really need a car of that size, with that acceleration? In most cases, the answer is no. Yet people buy new cars they don’t need all the time.
So I might be able to rationalize that getting a trophy workstation is irrational but normal, and therefore it is OK for me to indulge. But then I look at the cost/performance of the high-end gear, and the predicament returns. Do I buy the high-end gear that I want, but that is too expensive compared to the performance it offers, or do I go for the sweet spot? For example, the quad-core i7 6700K offers pretty good performance vs. the more expensive hex-core 6800K, but the 6700K will not deliver a noticeable performance boost compared to the 3770K…
In many ways it is similar to getting a fast car; you have seen people drive these cars fast on winding mountain roads, and you might do so too, once in a while, but nowhere near as often as you make yourself believe when you buy it. Same with the PC: I see people running GTA V and Battlefield 1 at a level of fidelity that is just mind-blowing, but I know, in my heart, that I won’t spend more than an hour per month playing these games. Perhaps I am paying for the privilege of knowing that if I wanted to, I too could play Titanfall 2 at the ultra setting.
Perhaps I will just buy some DIY IoT gear and have fun with that…
About 13 years ago, we had a roundtable discussion about using RAM for the pre-buffering of surveillance video. I was against it. Coding-wise, it would make things more complicated (we’d essentially have 2 databases), and the desire to support everything, everywhere, at any time made this a giant can of worms that I was not too happy to open. At the time, physical RAM was limited, and chances were that the OS would decide to push your RAM buffer to the swap file, causing severe degradation of performance. Worst of all, it would not be deterministic when things got swapped out, so all things considered, I said Nay.
As systems grew from the 25 cameras that were the maximum supported on the flagship platform (called XXV) to 64 and beyond, we started seeing severe bottlenecks in disk IO. Basically, since pre-buffering was enabled by default, video from every single camera would pass through the disk IO subsystem only to be deleted 5 or 10 seconds later. A quick fix was to disable pre-buffering entirely, which would help enormously if the system only recorded on event and the events were not correlated across many cameras.
However, RAM buffering was recently added to the Milestone recorders, which makes sense now that you have 64-bit OSes with massive amounts of RAM.
I always considered “record on event” as a bit of a compromise. It came about because people were annoyed with the way the system would trigger when someone passed through a door. Instead of having the door being closed at the start of the clip, usually the door would be 20% open by the time the motion-detection triggered, thus the beginning of the door opening would be missing.
A pre-buffer was a simple fix, but some caveats came up: systems that were set up to record on motion would often record all through the night due to noise in the images. If the system also triggered notifications, the user would often turn down the motion-detection sensitivity until the false alarms disappeared. This had the unfortunate side effect of making the system too insensitive to properly detect motion in daylight, and thus you’d get missing video, people and cars “teleporting” all over the place, and so on. Quite often the user would not realize the mistake until an incident actually did occur, and by then it’s too late.
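The trade-off is easy to demonstrate with a toy frame-differencing detector (purely illustrative numbers – real VMS motion detection is far more sophisticated, but the failure mode is the same):

```python
import random

def motion_score(prev, curr, pixel_threshold):
    """Fraction of pixels that changed by more than pixel_threshold."""
    changed = sum(1 for a, b in zip(prev, curr) if abs(a - b) > pixel_threshold)
    return changed / len(prev)

random.seed(42)
SIZE = 10_000
base = [100] * SIZE  # reference frame: flat grey

# Night: sensor noise of up to +/-30 grey levels on *every* pixel.
night = [p + random.randint(-30, 30) for p in base]

# Day: a real object changes just 5% of the frame, but by 80 grey levels.
day = list(base)
for i in range(0, SIZE, 20):
    day[i] += 80

night_score = motion_score(base, night, pixel_threshold=20)  # roughly a third of the frame "moves"
day_score = motion_score(base, day, pixel_threshold=20)      # the genuine event touches only 5%

# De-tuning the detector until the nightly false alarms stop
# (here: demanding that 40% of the frame changes) also silences
# the genuine daytime event -- cue the "teleporting" cars.
ALARM_AREA = 0.40
night_alarm = night_score > ALARM_AREA  # quiet nights now...
day_alarm = day_score > ALARM_AREA      # ...but the real event is missed too
```

The noise fools the detector on every pixel at once, while a real event only touches a small part of the frame, so any single global threshold that kills the night noise also kills the daytime detection.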
Another issue is that the video requires a lot more bandwidth when there is a lot of noise in the scene. This meant that at night, all the cameras would trigger motion at the same time, and the video would take up the max bandwidth allocated.
Notice that the graph above reaches the bandwidth limit set in the configuration in the camera and then seemingly drops through the night. This is because the camera switches to black/white which requires less bandwidth. Then, in the morning, you see a spike as the camera switches back to color mode. Then it drops off dramatically during the day.
Sticking this in a RAM based prebuffer won’t help. You’ll be recording noise all through the night, from just about every camera in your system, completely bypassing the RAM buffer. So you’ll see a large number of channels trying to record a high bandwidth video – which is the worst case scenario.
Now you may have the best server side motion detection available in the industry, but what good does it do if the video is so grainy you can’t identify anyone in the video (it’s a human – sure – but which human?).
During the day (or in well-lit areas), the RAM buffer will help: most of the time, the video will be sent over the network, reside in RAM for 5-10-30 seconds, and then be deleted, never to be seen again – ever. This puts zero load on disk IO and is basically the way you should do this kind of thing.
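That flow can be sketched as a simple per-camera ring buffer (a hypothetical class, not Milestone’s actual implementation): frames enter, age out of RAM with no disk IO, and only an event flushes the buffer to the recorder so the clip starts before the trigger.

```python
from collections import deque

class RamPreBuffer:
    """Keep the last `seconds` worth of frames in RAM; nothing touches
    the disk unless an event asks for the buffer to be flushed."""

    def __init__(self, seconds: float):
        self.seconds = seconds
        self.frames = deque()  # (timestamp, frame_bytes), oldest first

    def push(self, timestamp: float, frame: bytes) -> None:
        self.frames.append((timestamp, frame))
        # Drop frames older than the pre-buffer window. This is the
        # "deleted, never to be seen again" path -- zero disk IO.
        while self.frames and self.frames[0][0] < timestamp - self.seconds:
            self.frames.popleft()

    def flush(self):
        """On an event, hand the buffered frames to the recorder, so the
        clip starts *before* the trigger (door closed, not 20% open)."""
        clip, self.frames = list(self.frames), deque()
        return clip
```

When no event fires, frames simply age out of the deque; when one does, `flush()` gives the recorder the seconds leading up to the trigger.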
But this begs the question – do you really want to do that? You are putting a lot of faith in the system’s ability to determine what might be interesting now, and possibly later, and in your own ability to configure the system correctly. It’s very easy to see when the system is creating false alarms; it is something entirely different to determine whether it missed something. The first problem is annoying; the latter makes your system useless.
My preference is to record everything for 1-2-3 days, and rely on external sensors for detection, which then also determines what to keep in long term storage. This way, I have a nice window to go back and review the video if something did happen, and then manually mark the prebuffer for long term storage.
“Motion detection” does provide some meta-information, that can be used later when I manually review the video, but relying 100% on it to determine when/what to record makes me a little uneasy.
But all else being equal, it is an improvement…