One Auga Per Ocularis Base*

In the Ocularis ecosystem, Heimdall is the component that takes care of receiving, decoding and displaying video on the screen. The functionality of Heimdall is offered through a class called Auga. So, to render video, you need to create an Auga object.

Ocularis was designed with the intent of making it easy for a developer to get video into their own application. Initially it was pretty simple: instantiate an Auga instance, pass in a URL, and voilà, you had video. But as we added support for a wider range of NVRs, things became a little more complex. Now you need to instantiate an OCAdapter, log into an Ocularis Base Server, and pass the cameras to Auga via SetCameraIDispatch before you can get video. The OCAdapter, in turn, depends on a few NVR drivers, so deployment became more complex too.
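
To make that flow concrete, here is a minimal sketch of the intended pattern: one OCAdapter login and one Auga instance, no matter how many cameras you stream. Only OCAdapter, Auga and SetCameraIDispatch are named above; every other name and signature below is an assumption for illustration, not the real SDK interface.

```cpp
// Hypothetical sketch only -- the real COM interfaces ship with the Ocularis SDK.
// Everything except OCAdapter, Auga and SetCameraIDispatch is an assumed name.
#include <string>

struct IDispatch;                     // real COM interface (declared in <oaidl.h>)

struct OCAdapter {                    // stand-in declaration
    bool Login(const std::wstring& baseServer,
               const std::wstring& user,
               const std::wstring& password);          // assumed method
    IDispatch* GetCamerasDispatch();                   // assumed helper
};

struct Auga {                         // stand-in declaration
    void SetCameraIDispatch(IDispatch* cameras);       // named in the post
    void StartVideo();                                 // assumed method
};

void SetupVideo(OCAdapter& adapter, Auga& auga)
{
    // One login against the Ocularis Base Server -- not one per camera.
    adapter.Login(L"base-server", L"user", L"password");

    // One Auga instance receives the whole camera list and renders all of them.
    auga.SetCameraIDispatch(adapter.GetCamerasDispatch());
    auga.StartVideo();
}
```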

One of the most common problems I see today is that people instantiate one OCAdapter and one Auga instance per camera. This causes all sorts of problems: each instance counts as one login (a problem on login-restricted systems), every instance consumes memory, and memory for fonts and other graphics is not shared between instances. In many ways I should have anticipated this type of use, but on the other hand, the entire Ocularis Client uses Heimdall/Auga as if it were a 3rd party component, and that seems to work pretty well (it's getting a little dated to look at, but hey…).

Heimdall also offers a headless mode. We call it video-hooks, and it allows you to instantiate an Auga object and get decoded frames via a callback or a DLL, instead of having Auga draw them on the screen. The uses for this are legion: I’ve used the video-hooks to create a web interface, until recently we used them for OMS too, and video analytics can use the hooks to get live video in less than 75 lines of code. Initially the hooks only supported live video, but they now support playback of recorded video too. But even when using Auga for hooks, you should only ever create one Auga instance per Ocularis Base; one Auga instance can easily stream from multiple cameras.
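
The hook setup follows the same rule: one Auga per Base, with each feed delivering decoded frames to your callback. Here is a sketch under assumed names; only Auga and the idea of a frame callback come from the post, the rest is illustrative.

```cpp
// Hypothetical video-hook sketch: several feeds, still only ONE Auga instance.
// The frame layout, RegisterFrameHook and StartHooks are assumptions; the real
// hook API is described in the SDK documentation.
#include <cstdint>
#include <string>
#include <vector>

struct DecodedFrame {                 // assumed shape of a decoded frame
    const uint8_t* pixels;            // e.g. fully decoded BGR data
    int            width;
    int            height;
    int64_t        timestampUtcMs;
};

using FrameCallback = void (*)(const std::string& cameraId, const DecodedFrame& frame);

struct Auga {                         // stand-in declaration
    void RegisterFrameHook(const std::string& cameraId, FrameCallback cb); // assumed
    void StartHooks();                                                     // assumed
};

// Your callback: hand the frame to analytics, a web server, a recorder, etc.
void OnFrame(const std::string& cameraId, const DecodedFrame& frame)
{
    // process frame.pixels (width x height) here
}

void SetupHooks(Auga& auga, const std::vector<std::string>& cameraIds)
{
    for (const auto& id : cameraIds)
        auga.RegisterFrameHook(id, &OnFrame);   // many feeds, one Auga
    auga.StartHooks();
}
```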

However, while Heimdall is named after a god, it does not have magical capabilities. Streaming 16 cameras at 5 MP and 30 fps will tax the system enormously, even on a beefy machine. One might be tempted to say, “Well, the NVR can record it, so why can’t Auga show it?” But the NVR does not have to decode every single frame completely to determine the motion level, whereas Auga has to decode everything, fully, all the way to the pixel format you specify when you set up the hook. If you specify BGR as your expected pixel format, we will give you every frame as a fully decoded BGR frame at 5 MP. Unfortunately, there is no way to decode only every second or third frame. You could go to I-frame-only decoding (we do not support that right now), but that lowers the framerate to whatever the I-frame interval is, typically 1 fps.
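
To put a number on that load: 16 cameras at 5 MP and 30 fps, decoded to 3-byte BGR pixels, works out to roughly 7.2 GB of raw pixel data every second, before your own processing even starts. A quick back-of-the-envelope check:

```cpp
// Back-of-the-envelope: raw output of fully decoding 16 x 5 MP x 30 fps to BGR.
#include <cstdio>

int main()
{
    const double cameras    = 16;
    const double pixels     = 5'000'000;  // 5 MP per frame
    const double fps        = 30;
    const double bytesPerPx = 3;          // BGR, 8 bits per channel

    const double bytesPerSecond = cameras * pixels * fps * bytesPerPx;
    std::printf("Decoded pixel data: %.1f GB/s\n", bytesPerSecond / 1e9); // ~7.2 GB/s
    return 0;
}
```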

If you are running Auga in regular mode, you can show multiple cameras by using the LoadLayoutFromString function. It allows you to create pretty much any layout you can think of, as you define the viewports via a short piece of text. Using LoadLayoutFromString (account reqd.) you do not have to handle maximizing of viewports and so on; all of that is baked into Auga already. Using video hooks, you can set up (almost) any number of feeds via one Auga instance.

[Image: sdk_loadlayout]
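
As a sketch, driving a multi-camera layout looks roughly like this. Only LoadLayoutFromString is named above; the layout text format is documented in the SDK (account reqd.), so the string here is just a placeholder and the return type is assumed.

```cpp
// Hypothetical sketch: one Auga instance showing several cameras in one layout.
// Only LoadLayoutFromString is named in the post; the viewport-definition text
// and everything else here is a placeholder -- see the SDK docs for the syntax.
#include <string>

struct Auga {                                             // stand-in declaration
    bool LoadLayoutFromString(const std::string& layout); // return type assumed
};

void ShowLayout(Auga& auga, const std::string& layoutDefinition)
{
    // layoutDefinition is the short piece of text that defines the viewports,
    // e.g. a 2x2 grid; its exact format comes from the SDK documentation.
    if (!auga.LoadLayoutFromString(layoutDefinition)) {
        // handle a malformed layout definition
    }
    // Viewport maximizing etc. is already baked into Auga -- nothing more to do.
}
```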

Granted, there are scenarios where making multiple Augas makes sense – certainly, you will need one per window (and hence the asterisk in the headline), and clearly if you have multiple processes running, you’d make one instance per process.

I’ll talk about the Direct3D requirement in another post.