Looping Canned Video For Demos

Here are a few simple(?) steps to stream pre-recorded video into your VMS.

First, you need to install an RTMP server that can convert RTMP to RTSP. You can use EvoStream, Wowza, or possibly Nimble Streamer. Nginx-rtmp won’t work, as it does not support RTSP output.

Then get FFmpeg (Windows users can find prebuilt binaries online).

Find or create the canned video that you want to use, and store it somewhere accessible.

In this example, I have used a file called R1.mp4, and my RTMP server (EvoStream) is located at 192.168.0.109. The command looks like this (-re paces the input at its native frame rate, -stream_loop -1 loops the file indefinitely, -c copy avoids transcoding, and -fflags +genpts regenerates timestamps so they keep increasing across each loop):

ffmpeg -re -stream_loop -1 -i e:\downloads\r1.mp4 -c copy -fflags +genpts -f flv rtmp://192.168.0.109/live/r1

Once this is streaming (you can verify by opening the RTMP URL in VLC), you can go to your VMS and add a generic RTSP camera.
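If you prefer the command line, ffplay (which ships with FFmpeg) can open the same RTMP URL; this is just a quick sanity check, using the example addresses above:

ffplay rtmp://192.168.0.109/live/r1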

For EvoStream, the RTSP output is on a different port and has a slightly different URL format, so in the recorder I add:

rtsp://192.168.0.109:5544/r1
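As before, you can sanity-check this URL with ffplay (or VLC) before adding it to the recorder; if it plays here, the VMS should be able to pull it too:

ffplay rtsp://192.168.0.109:5544/r1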

Other RTMP servers may transform the URL slightly differently, so check the manual.

I now have video looping into the VMS, and I can run tests and benchmarks on the exact same feed without needing an IP camera.


Open Systems and Integration

Yesterday I took a break from my regular schedule and added a simple, generic HTTP event source to Ocularis. We’ve had the ability to integrate with IFTTT via the Maker Channel for quite some time; this allows you to trigger IFTTT actions whenever an event occurs in Ocularis. Soon, you will be able to trigger alerts in Ocularis via IFTTT triggers.
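For reference, the Maker Channel (now called Webhooks on IFTTT) is just an HTTP endpoint. A request along these lines, with your own event name and key substituted, fires a trigger on the IFTTT side; this is the generic Maker format, not necessarily the exact call Ocularis makes:

curl -X POST -H "Content-Type: application/json" -d '{"value1":"Front door camera","value2":"motion detected"}' https://maker.ifttt.com/trigger/your_event_name/with/key/YOUR_MAKER_KEY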

For example, IFTTT has a geofence trigger, so when someone enters an area, you can pop the appropriate camera via a blank screen. The response time of IFTTT is too slow, and I don’t consider it reliable for serious surveillance applications, but it’s a good illustration of the possibilities of an open system. Because I am lazy, I made a trigger based on Twitter; that way I did not have to leave the house.

Making an HTTP event source did not require any changes to Ocularis itself. It could be trivially added to previous versions if one wanted to do that, but having a completely open system doesn’t mean that people will utilize it.

Camera Proxy

There’s a lot of paranoia in the industry right now, some warranted, some not. The primary issue is that when you plug something into your network, you basically have to trust the vendor not to spy on you “by design” and not to provide a trivial attack vector to third parties.

First things first: perhaps you remember that CCTV means Closed Circuit Television. Pay attention to those first two words. I am pretty sure 50% or more of all “CCTV” installations are not closed at all. If your CCTV system is truly closed, there’s no way for the camera to “call home”, and it is impossible for hackers to exploit any attack vectors, because there is no access from the outside world to the camera. There are plenty of PCs running terrible and vulnerable software out there, but as long as these systems are closed, there’s no problem. Granted, it also limits the flexibility of the system, but that’s the price you pay for security.

At the opposite end of the spectrum are cameras that are directly exposed to the internet. This is a very bad idea, and most professionals probably don’t do that. Well… some clearly do, because a quick scan of the usual sites reveals plenty of seemingly professional installations where cameras are directly accessible from the internet.

To expose a camera directly to the internet, you usually have to alter the NAT tables in your router/firewall. This can be a pain in the ass for most people, so another approach, called hole punching, is often used. This requires a STUN server between the client sitting outside the LAN (perhaps on an LTE connection via AT&T) and the camera inside the LAN. The camera registers with the STUN server via an outbound connection, and almost all routers/firewalls allow outbound connections. The way STUN servers work probably confuses some people, and they freak out when they see the camera making a connection to a “suspicious” IP, but that’s simply how things work, and not a cause for alarm.

Now, say you want to record the cameras in your LAN on a machine outside your LAN; perhaps you want an Azure VM to record the video. How will the recorder on Azure (outside your LAN) get access to the cameras inside the LAN unless you set up NAT, and thus expose your cameras directly to the internet?

This is where the $10 camera proxy comes in (the actual cost is higher because you’ll need an SD card and a PSU as well).

So, here’s a rough sketch of how you can do things.

  • On Azure, install your favorite VMS
  • Install Wowza or EvoStream as well

EvoStream can receive an incoming RTMP stream and make it available via RTSP; it basically changes the protocol but passes the video packets through untouched (no transcoding). So, if you were to publish a stream at, say, rtmp://evostreamserver/live/mycamera, that stream will be available at rtsp://evostreamserver/mycamera. You can then add a generic RTSP camera that reads from rtsp://evostreamserver/mycamera to your VMS.

The next step is to install the proxy; you can use a very cheap Pi clone or a regular PC.

  • Determine the RTSP address of the camera in question (you can sanity-check it with ffprobe, as shown below)
  • Download FFmpeg
  • Set up FFmpeg so that it publishes the camera to EvoStream (or Wowza) on Azure
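If you are unsure whether you have the right RTSP address, ffprobe (bundled with FFmpeg) will print the stream details when the URL is correct. Using the same example camera address as below (the -rtsp_transport tcp flag is optional, but often works better with finicky camera firmware):

ffprobe -rtsp_transport tcp rtsp://username:password@192.168.0.100/video/channels/1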

Say you have a camera that streams via rtsp://192.168.0.100/video/channels/1; the command looks something like this (all on one line):

ffmpeg -i rtsp://username:password@192.168.0.100/video/channels/1 
-vcodec copy -f flv rtmp://evostreamserver/live/mycamera

This will make your PC grab the AV from the camera and publish it to the EvoStream server on Azure, but the camera is not directly exposed to the internet. The PC acts as a gateway, and it only creates an outbound connection to another machine that you also control.
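Since the proxy should keep feeding the VMS even if the camera reboots or the uplink hiccups, you may want to wrap the command in a simple restart loop. Here is a minimal sketch for a Linux-based proxy such as a Pi, using the same example addresses (adjust to your own setup):

#!/bin/sh
# Relay the camera to the RTMP server; if ffmpeg exits (camera reboot,
# network drop, etc.), wait a few seconds and start it again.
while true; do
  ffmpeg -i rtsp://username:password@192.168.0.100/video/channels/1 -vcodec copy -f flv rtmp://evostreamserver/live/mycamera
  sleep 5
done

On Windows, a batch file with a goto loop, or a service wrapper, accomplishes the same thing.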

You can now access the video from the VMS on Azure, and your cameras are not exposed at all, so regardless of how vulnerable they are, they will not expose any attack vectors to the outside world.

Using Azure is just an example; the point is that you want to isolate the cameras from the outside world, and this can be trivially accomplished by using a proxy.

As a side note: if cameras were deliberately spying on their users, by design, this would quickly be discovered and published. That there are bugs and vulnerabilities in firmware is just a fact of life, and not proof of anything nefarious, so calm down, but take the necessary precautions.

Thoughts on Low Light Cameras

For security applications, I don’t rely on a camera’s low light performance to provide great footage. I recently installed a camera in my basement, and naturally I ran a few tests to ensure that I would get good footage if anyone were to break in. After some experimentation, I would get one of the following:

  • Black frames, no motion detected
  • Grainy, noisy footage triggering motion recording all night
  • Ghosts moving around

The grainy images were caused by the AGC. It simply multiplies the pixel values by some factor, which amplifies everything, including noise. Such imagery does not compress well, so the frames take up a lot of space in the database and may falsely trigger motion detection. Cranking up the compression to solve the size problem is a bad solution, as it will simply make the video completely useless. I suppose that some motion detection algorithms can take the noise into consideration and provide more robust detection, but you may not have that option on your equipment.

Black frames are due to the shutter time being too fast and AGC being turned off. On the other hand, if I lower the shutter speed, I get blurry people (ghosts) unless the burglar is standing still, facing the camera, for some duration (which I don’t want to rely on).

A word of caution

Often you’ll see demonstrations of AGC where you have a fairly dark image on one side and a much better image on the other, usually accompanied by some verbiage about how Automatic Gain Control works its “magic”. Notice that in many cases the subject in the scene is stationary. The problem is that you don’t know the settings for the reference frame. It might be that the shutter speed is set to 1/5th of a second, which is way too slow for most security applications and leads to motion blur.
[Image: example of ghosting caused by a slow shutter]

During installation of the cameras, there are two common pitfalls:

  1. Leaving shutter and gain settings on “Auto”
  2. Manually setting the shutter speed with someone standing (still) in front of the camera

#1 will almost certainly cause problems in low light conditions, as the camera turns the shutter waaay down, leading to severe ghosting. #2 is better, but your pal should be moving around, and you should look at single frames from the video when determining the quality. This applies to live demos too: always make sure that the camera is capturing a scene with some motion, and request an exported still frame, to make sure you can make out the features properly.

A low tech solution

What I prefer is for visible light to turn on when something is amiss. This alerts my neighbours, and hopefully it causes the burglar to abort his mission. I also have quite a few PIR (Passive InfraRed) sensors in my house. They detect motion like a champ, in total darkness (they are, in a sense, a 1-pixel FLIR camera), and they don’t even trigger when our cat is on the prowl.

So, if the PIR sensors trigger, I turn on the light. Hopefully, that scares away the burglar. And since the light is on, I get great shots, without worrying about AGC or buying an expensive camera.

The cheapest DIY PIR sensors are around $10; you’ll then need some additional gear to wire it all together, but if you are nerdy enough to read this blog, I am pretty sure you already have some wiring in the house to control the lighting too.

So – it’s useless – right?

Well, it depends on the application. I am trying to protect my belongings and my house from intruders; that’s the primary goal. I’d prefer if the cameras never, ever recorded a single frame. But there are many other types of applications where low light cameras come in handy. If you can’t use visible light, and the camera is good enough to save you from adding an infrared source, then that might be your ROI. All else being equal, I’d certainly choose a camera with great low light capabilities over one that is worse, but rarely are things equal: the better camera is more expensive, perhaps it has lower resolution, and so on.

Finally, a word on 3D noise reduction

Traditionally, noise reduction was done by looking at adjacent pixels. Photoshop has a filter called “despeckle” which removes some of the noise in a frame. It does so by looking at other pixels in the same frame, which gives us two dimensions (vertical and horizontal). By also looking at frames in the past, we add a third dimension, time, to the mix (hence 3D). If the camera is stationary, the algorithm tries to determine whether a change in pixel value between frames is caused by true change or by noise. Depending on a threshold, the change is either discarded as noise or kept as a true change. You might also hear terms such as spatiotemporal, meaning “space and time”, which is basically another way of expressing the same idea as 3D noise reduction.

One Auga Per Ocularis Base*

In the Ocularis ecosystem, Heimdall is the component that takes care of receiving, decoding and displaying video on the screen. The functionality of Heimdall is offered through a class called Auga. So, to render video, you need to create an Auga object.

Ocularis was designed with the intent of making it easy for a developer to get video into their own application. Initially it was pretty simple: instantiate an Auga instance, pass in a URL, and voilà, you had video. But as we added support for a wider range of NVRs, things became a little more complex. Now you need to instantiate an OCAdapter, log into an Ocularis Base server, and pass the cameras to Auga via SetCameraIDispatch before you can get video. The OCAdapter, in turn, depends on a few NVR drivers, so deployment became more complex too.

One of the most common problems I see today is that people instantiate one OCAdapter and one Auga instance per camera. This causes all sorts of problems: each instance counts as one login (which is a problem on login-restricted systems), every instance consumes memory, and memory for fonts and other graphics is not shared between the instances. In many ways I should have anticipated this type of use, but on the other hand, the entire Ocularis Client uses Heimdall/Auga as if it were a third-party component, and that seems to work pretty well (it’s getting a little dated to look at, but hey…).

Heimdall also offers a headless mode. We call it video hooks, and it allows you to instantiate an Auga object and get decoded frames via a callback or a DLL, instead of having Auga draw them on the screen. The uses for this are legion: I’ve used the video hooks to create a web interface, until recently we used them for OMS too, and video analytics can use the hooks to get live video in less than 75 lines of code. Initially the hooks only supported live video, but they now support playback of recorded video as well. But even when using Auga for hooks, you should only create one Auga instance per Ocularis Base. One Auga instance can easily stream from multiple cameras.

However, while Heimdall is named after a god, it does not have magical capabilities. Streaming 16 cameras at 5 MP and 30 fps will tax the system enormously, even on a beefy machine. One might be tempted to say, “Well, the NVR can record it, so why can’t Auga show it?”. The NVR does not have to decode every single frame completely to determine the motion level, but Auga has to decode everything, fully, all the way to the pixel format you specify when you set up the hook. If you specify BGR as your expected pixel format, we will give you every frame as a fully decoded BGR frame at 5 MP; at 3 bytes per pixel that is roughly 15 MB per frame, or about 450 MB/s per camera at 30 fps, so 16 such cameras add up to roughly 7 GB/s of raw pixels. Unfortunately, there is no way to decode only every second or third frame. You could go to I-frame-only decoding (we do not support that right now), but that lowers the frame rate to whatever the I-frame interval would be, typically 1 fps.

If you are running Auga in regular mode, you can show multiple cameras by using the LoadLayoutFromString function. It allows you to create pretty much any layout you can think of, as you define the viewports via a short piece of text. Using LoadLayoutFromString (account reqd.), you do not have to handle maximizing of viewports and so on; all of that is baked into Auga already. Using video hooks, you can set up (almost) any number of feeds via one Auga instance.

[Image: example layout created with LoadLayoutFromString]

Granted, there are scenarios where making multiple Augas makes sense – certainly, you will need one per window (and hence the asterisk in the headline), and clearly if you have multiple processes running, you’d make one instance per process.

I’ll talk about the Direct3D requirement in another post.