Video surveillance databases are special. They are written to constantly, they are rarely read from, and the index is very simple (just a timestamp as the key). There’s really no reason to use anything fancy, and certainly not SQL Server.
I recently saw a marketing blurb for an expensive and cumbersome storage system that integrated with a VMS. It touted that the VMS had a “proprietary database highly optimized for video storage”. I guess “it uses the file system” did not sound fancy enough.
I came across this entertaining puffery while looking into the feasibility of geo-redundancy for a partner. Basically, they were looking for a fully mirrored backup system: if the primary site were to vanish, the backup site would take over, with all recorded data readily available.
Database replication is nothing new, but typical database replication systems assume that the inbound write rate is low compared to the size of the data set. You may have a database with 2 million records, and if you add 1,000 records per day, you need those new records to propagate to the replicas in your cluster – challenging, but a problem that has been solved a thousand times.
Video data is very different: it’s a constant torrent of data streaming into the system, and once in a while someone pulls out a few records to look at an incident. If the database uses the file system for its blocks, replication is almost trivial to provide. Just make sure the directory on the backup site looks identical to the one on the primary. On Linux, a simple rsync will do.
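A minimal sketch, assuming the VMS keeps its video blocks under a hypothetical /var/vms/media directory and that SSH access to the backup site is already configured:

```
# Mirror the primary media directory to the backup site (hypothetical paths).
# Run periodically, e.g. from cron; --delete prunes recordings that the
# primary has already aged out.
rsync -a --partial --delete /var/vms/media/ backup-site:/var/vms/media/
```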
Another option is to use the Distributed Replicated Block Device (DRBD). This (Linux) tool allows you to create a drive that is mirrored 1:1 across a network. In other words, as files are written or changed, the exact same thing will happen on the backup drive. A Windows version appears to exist as well.
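For illustration, here is a rough sketch of what a DRBD resource definition might look like; the host names, devices, and addresses are made up, and the exact syntax varies between DRBD versions:

```
# /etc/drbd.d/video.res -- hypothetical resource definition
resource video {
  protocol A;              # asynchronous replication, tolerant of WAN latency
  device    /dev/drbd0;    # the mirrored block device the VMS records to
  disk      /dev/sdb1;     # backing disk on each node
  meta-disk internal;
  on primary-site {
    address 10.0.0.1:7788;
  }
  on backup-site {
    address 10.0.0.2:7788;
  }
}
```

Protocol A acknowledges a write as soon as it hits the local disk, which is usually the right trade-off when the mirror sits on the far side of a WAN link.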
A better solution still would be for the VMS itself to determine which files are most valuable and push those to the remote site first. It might even choose not to mirror files that provide no value (zero-motion files, for example), or send a pruned version of the files to the backup system.
Depending on the sensitivity of the data, a customer might choose to extend or replicate their storage to the cloud. The problem here is that upstream bandwidth is often limited, so in those cases the data certainly needs to be prioritized, along the lines of the sketch below.
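A toy sketch of such prioritization in Python. Everything here is assumed: the file layout, and in particular the motion_score callable, which stands in for whatever per-file metadata your VMS can actually provide:

```python
import heapq
import os

def queue_for_upload(files, motion_score, byte_budget):
    """Yield the most valuable recordings first, given a limited upstream budget.

    files        -- iterable of recording file paths (hypothetical layout)
    motion_score -- callable returning a 0..1 motion score per file; how you
                    obtain this depends entirely on your VMS
    byte_budget  -- roughly how many bytes we can afford to push this cycle
    """
    heap = []
    for path in files:
        score = motion_score(path)
        if score == 0.0:
            continue  # zero-motion files provide no value; don't mirror them
        # heapq is a min-heap, so negate the score to pop high scores first
        heapq.heappush(heap, (-score, path))

    used = 0
    while heap and used < byte_budget:
        _, path = heapq.heappop(heap)
        used += os.path.getsize(path)
        yield path  # hand off to rsync, scp, an S3 upload, or similar
```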
A while back I got fed up with people parking their cars right in front of my driveway, and I decided to find a solution.
A camera could work, but since I am cheap, I decided to look for something a bit more… economical. A PIR sensor wouldn’t work, because it triggers on any motion and cars and people pass by all the time, so I looked into ultrasonic sensors and eventually radars. If the measured distance drops below a pre-defined threshold and stays there, I know to run into the street, yelling and screaming.
The inspiration came from Adafruit and Andreas Spiess, who has a great YouTube channel where you can get more information about ultrasonic sensors and radars (and about 1000 other things).
Basically, you get an Arduino-capable board; I hope the WiFi-capable ESP8266 will work (since I have one lying around). Then get some cheap sensors from China via Alibaba and you’re ready to experiment (see the sketch below). At the very least, it should give you some idea of the base cost of such a device.
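For a taste of how simple the logic is, here is a minimal MicroPython sketch for an ESP8266 with an HC-SR04-style ultrasonic sensor. The pin assignments, the 250 cm threshold, and the 30-second hold time are all assumptions to adjust for your own street:

```python
# MicroPython on an ESP8266 with an HC-SR04 ultrasonic sensor (assumed wiring).
from machine import Pin, time_pulse_us
import time

TRIG = Pin(12, Pin.OUT)   # D6 on a typical NodeMCU board (assumption)
ECHO = Pin(14, Pin.IN)    # D5 (assumption)
THRESHOLD_CM = 250        # closer than this means something is at the driveway
HOLD_SECONDS = 30         # ...and it has to stay there this long

def distance_cm():
    # 10 microsecond trigger pulse, then time the echo
    TRIG.off(); time.sleep_us(2)
    TRIG.on();  time.sleep_us(10)
    TRIG.off()
    t = time_pulse_us(ECHO, 1, 30000)  # echo high-time in us, 30 ms timeout
    if t < 0:
        return None                    # timeout: nothing in range
    return (t / 2) / 29.1              # round-trip microseconds to centimeters

below_since = None
while True:
    d = distance_cm()
    if d is not None and d < THRESHOLD_CM:
        below_since = below_since or time.time()
        if time.time() - below_since > HOLD_SECONDS:
            print("Car parked in front of the driveway!")  # or fire a webhook
            below_since = None         # re-arm
    else:
        below_since = None
    time.sleep(1)
```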
Both Axis and Avigilon have launched commercial versions of miniature radars that interface with your favorite VMS. Combined with a PTZ camera, this could be a very interesting combination that offers a bit more intelligence than the good old PIR/PTZ combo.
There’s a lot of paranoia in the industry right now, some warranted, some not. The primary issue is that when you plug something into your network, you basically have to trust the vendor not to spy on you “by design” and not to hand third parties a trivial attack vector.
First things first. Perhaps you remember that CCTV means Closed Circuit Television. Pay attention to those first two words. I am pretty sure 50% or more of all “CCTV” installations are not closed at all. If your CCTV system is truly closed, there’s no way for the camera to “call home”, and hackers cannot exploit any attack vectors because there is no access from the outside world to the camera. There are plenty of PCs running terrible and vulnerable software out there, but as long as these systems are closed, there’s no problem. Granted, it also limits the flexibility of the system, but that’s the price you pay for security.
At the opposite end of the spectrum are cameras that are directly exposed to the internet. This is a very bad idea, and most professionals probably don’t do it. Well… some clearly do, because a quick scan of the usual sites reveals plenty of seemingly professional installations where cameras are directly accessible from the internet.
To expose a camera directly to the internet, you usually have to alter the NAT tables in your router/firewall. This is a pain for most people, so another approach, called hole punching, is often used. This requires a STUN server that sits between the client outside the LAN (perhaps on an LTE connection via AT&T) and the camera inside the LAN. The camera registers with the STUN server via an outbound connection, and almost all routers/firewalls allow outbound connections. The way STUN servers work probably confuses some people, and they freak out when they see the camera making a connection to a “suspicious” IP, but that is simply how the mechanism works and not a cause for alarm.
Now, say you want to record the cameras on your LAN with a machine outside your LAN; perhaps you want an Azure VM to record the video. How will the recorder on Azure (outside your LAN) get access to cameras that are inside the LAN, unless you set up NAT and thus expose your cameras directly to the internet?
This is where the $10 camera proxy comes in (the actual cost is higher because you’ll need an SD card and a PSU as well).
So, here’s a rough sketch of how you can do things.
1. On Azure, install your favorite VMS.
2. Install Wowza or EvoStream as well. EvoStream can receive an incoming RTMP stream and make it available via RTSP; it changes the protocol but passes the video packets through untouched (no transcoding). So, if you were to publish a stream at, say, rtmp://evostreamserver/live/mycamera, that stream would be available at rtsp://evostreamserver/mycamera, and you can add a generic RTSP camera that reads from rtsp://evostreamserver/mycamera to your VMS.
3. Install the proxy; you can use a very cheap Pi clone, or a regular PC.
4. Determine the RTSP address of the camera in question.
5. Set up FFmpeg so that it publishes the camera to the EvoStream (or Wowza) server on Azure.
Say you have a camera that streams via rtsp://192.168.0.100/video/channels/1; the command looks something like this (all on one line; the exact flags vary with your camera and FFmpeg version):
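```
# Pull RTSP from the camera and re-publish the packets, untouched, as RTMP.
# A sketch, not gospel: -rtsp_transport tcp helps with flaky cameras, and
# -c copy keeps the AV packets as-is (no transcoding).
ffmpeg -rtsp_transport tcp -i rtsp://192.168.0.100/video/channels/1 -c copy -f flv rtmp://evostreamserver/live/mycamera
```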
This will make your PC grab the AV from the camera and publish it to the EvoStream server on Azure, but the camera is never directly exposed to the internet. The PC acts as a gateway, and it only creates an outbound connection to another machine that you also control.
You can now access the video from the VMS on Azure, and your cameras are not exposed at all; regardless of how vulnerable they are, they will not expose any attack vectors to the outside world.
Using Azure is just an example; the point is that you want to isolate the cameras from the outside world, and this can be trivially accomplished with a proxy.
As a side note: if cameras were deliberately spying on their users, by design, this would quickly be discovered and published. That there are bugs and vulnerabilities in firmware is just a fact of life, not proof of anything nefarious, so calm down, but take the necessary precautions.
I just received my $10 computer from China. I paid a premium for the (required) SD card, as I did not have the patience to wait for one to arrive in the mail. The 5V/2A charger from my old, still functional PSP works as a power supply. I then downloaded Armbian and booted.
A few commands later, and I have a $20 camera proxy.
I don’t actually plan to use it as my camera proxy, but as a small controller for a number of sensors I plan to add – for example, a cheap, modified PIR sensor as input to the controller (a minimal sketch follows below).
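Something in this direction – a MicroPython sketch that treats the PIR output as a plain digital input; the GPIO number is an assumption:

```python
# MicroPython: treat a (modified) PIR sensor as a simple digital input.
from machine import Pin

pir = Pin(13, Pin.IN)  # whichever GPIO the PIR's output is wired to (assumed)

def on_motion(pin):
    # Called on the rising edge when the PIR drives its output high.
    print("PIR triggered")  # replace with an MQTT publish, webhook, etc.

pir.irq(trigger=Pin.IRQ_RISING, handler=on_motion)
```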
As you may know, I also have a Raspberry Pi 2. This little device is incredibly stable, and has only been rebooted once in the last 3 months, and that was by accident.
Hopefully you’ll eventually be able to get a $100 device that you simply plug into your infrastructure, and that little device will work standalone or as a node in a much larger VMS, but that’s a bigger project that I might pick up later.
About 13 years ago, we had a roundtable discussion about using RAM for the pre-buffering of surveillance video. I was against it. Coding-wise, it would make things more complicated (we’d essentially have two databases), and the desire to support everything, everywhere, at any time made this a giant can of worms that I was not too happy to open. At the time, physical RAM was limited, and chances were that the OS would decide to push your RAM buffer to the swap file, causing severe degradation of performance. Worst of all, it was not deterministic when things got swapped out, so all things considered, I said nay.
As systems grew from the 25 cameras that were the maximum on the flagship platform (called XXV) to 64 and above, we started seeing severe bottlenecks in disk IO. Since pre-buffering was enabled by default, video from every single camera would pass through the disk IO subsystem only to be deleted 5 or 10 seconds later. A quick fix was to disable pre-buffering entirely, which helped enormously if the system only recorded on event and the events were not correlated across many cameras.
Recently, however, RAM buffering was added to the Milestone recorders, which makes sense now that you have 64-bit OSes with massive amounts of RAM.
I always considered “record on event” a bit of a compromise. It came about because people were annoyed with the way the system triggered when someone passed through a door: instead of the clip starting with the door closed, the door would usually be 20% open by the time motion detection triggered, so the beginning of the event was missing.
A pre-buffer was a simple fix, but it came with caveats: systems set up to record on motion would often record all through the night due to noise in the images. If the system also triggered notifications, the user would often turn down the motion detection sensitivity until the false alarms disappeared. This had the unfortunate side effect of making the system too insensitive to properly detect motion in daylight, so you’d get missing video, people and cars “teleporting” all over the place, and so on. Quite often the user would not realize the mistake until an incident actually occurred, and by then it’s too late.
Another issue is that video requires a lot more bandwidth when there is a lot of noise in the scene. This meant that at night all the cameras would trigger motion at the same time, and the video would consume the maximum bandwidth allocated.
Notice that the graph above reaches the bandwidth limit set in the camera’s configuration and then seemingly drops through the night; this is because the camera switches to black and white, which requires less bandwidth. In the morning you see a spike as the camera switches back to color mode, and then the bandwidth drops off dramatically during the day.
Sticking this in a RAM-based pre-buffer won’t help. You’ll be recording noise all through the night from just about every camera in your system, and since everything triggers, everything goes to disk anyway, completely bypassing the benefit of the RAM buffer. A large number of channels recording high-bandwidth video at once is exactly the worst-case scenario.
Now you may have the best server-side motion detection available in the industry, but what good does it do if the video is so grainy you can’t identify anyone in it (it’s a human – sure – but which human?).
During the day (or in well-lit areas), the RAM buffer will help: most of the time the video is sent over the network, resides in RAM for 5, 10, or 30 seconds, and is then deleted, never to be seen again – ever. This puts zero load on disk IO and is basically the way you should do this kind of thing; the sketch below shows the idea.
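A toy illustration of the mechanism in Python; a real recorder buffers encoded GOPs rather than arbitrary objects, but the shape of the thing is the same:

```python
import collections
import time

class PreBuffer:
    """Hold the last few seconds of frames in RAM; write to disk only on event."""

    def __init__(self, seconds=10):
        self.seconds = seconds
        self.frames = collections.deque()  # (timestamp, frame) pairs

    def push(self, frame):
        now = time.time()
        self.frames.append((now, frame))
        # Expire frames older than the window; they die in RAM without
        # ever touching the disk IO subsystem.
        while self.frames and self.frames[0][0] < now - self.seconds:
            self.frames.popleft()

    def flush(self, write):
        # On an event, hand the buffered pre-roll to the disk writer,
        # so the recording starts before the trigger (door still closed).
        for ts, frame in self.frames:
            write(ts, frame)
        self.frames.clear()
```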
But this raises the question: do you really want to do that? You are putting a lot of faith in the system’s ability to determine what might be interesting now, and possibly later, and in your own ability to configure the system correctly. It’s very easy to see when the system is creating false alarms; it is something entirely different to determine whether it missed something. The first problem is annoying; the latter makes your system useless.
My preference is to record everything for one to three days and rely on external sensors for detection, which then also determines what to keep in long-term storage. This way, I have a nice window to go back and review the video if something did happen, and I can then manually mark the pre-buffer for long-term storage.
“Motion detection” does provide some meta-information that can be used later when I manually review the video, but relying 100% on it to determine when and what to record makes me a little uneasy.