World's Shittiest NVR pt. 4.

We now have a circular list of video clips in RAM and a way to be notified when something happens; the next step is to move the clips from RAM to permanent storage when an event occurs.

In part 1 we set up FFmpeg to write to files in a loop; the files were called 0.mp4, 1.mp4 … up to 9.mp4, each file representing 10 seconds of video. We can't move the file that FFmpeg is currently writing to, so we'll do the following instead: we'll copy the previous file that FFmpeg completed, and we'll keep doing that for a minute or so. This means that the file (10 seconds) from before the event occurred gets copied to permanent storage. Then, when the file that was being written while the event happened is closed, we'll copy that file over, then the next, and so on.

We'll use a Node module called "chokidar", so cd to your working directory (where the SMTP server code resides) and type:

npm install chokidar

Chokidar lets you monitor files or directories and gives you an event when a file has been altered (in our case, when FFmpeg has added data to the file). Naturally, if you start popping your own files into the RAM disk and editing those files, you'll screw up this delicate/fragile system (read the title for clarification).

So, for example, if my RAM disk is x:\ we can do this to determine which is the newest complete file:

const chokidar = require('chokidar');

// the file FFmpeg is writing to right now, and the last one it finished
var currentlyModifiedFile = null;
var lastFileCreate = null;

chokidar.watch('x:\\.', {ignored: /(^|[\/\\])\../}).on('all', (event, path) => {
   // we're only interested in files being written to
   if ( event != "change" )
      return;

   // are we writing to a new file?
   if ( currentlyModifiedFile != path )
   {
      // now we have the last file created
      lastFileCreate = currentlyModifiedFile;
      currentlyModifiedFile = path;
   }
});

Now, there's a slight snag that we need to handle: Node.js's built-in file handling can't easily copy files from one device (the RAM disk) to another (the HDD), so to make things easy, we'll grab an extension library called "fs-extra".

Not surprisingly:

npm install fs-extra
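To sanity-check the install, here's a minimal sketch (the paths are just examples matching the setup above):

const fs = require('fs-extra');

// fs-extra's copy() happily copies across devices (RAM disk to HDD),
// which is the part the built-in file functions make awkward
fs.copy('x:/0.mp4', 'e:/nvr/0.mp4', function(err) {
   if ( err ) console.log('ERROR: ' + err);
});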

So, when the camera tries to send an email, we'll set a counter to some value. We'll then periodically check if the value is greater than zero, and if it is, we'll copy over the file that FFmpeg just completed and decrement the counter by one.

If the value reaches 0 we won’t copy any files, and just leave the counter at 0.

Assuming you have a nice large storage drive on e:\, and the directory you're using for permanent storage is called "nvr", we'll set it up so that we copy from the RAM drive (x:\) to the HDD (e:\nvr). If your drive is different (it most likely is), edit the code to reflect that change; it should be obvious what you need to change.

Here’s the complete code:

const smtp = require("simplesmtp");
const chokidar = require('chokidar');
const fs = require('fs-extra');

// some variables that we're going to need
var currentlyModifiedFile = null;
var lastFileCreate = null;
var lastCopiedFile = null;
var flag_counter = 0;
var file_name_counter = 0;

// fake SMTP server
smtp.createSimpleServer({SMTPBanner:"My Server"}, function(req) {
    req.accept();

    // copy files for the next 50 seconds (5 files)
    flag_counter = 10;
}).listen(6789);

// function that will be called every 5 seconds
// tests to see if we should copy files

function copyFiles ( )
{ 
  if ( flag_counter > 0 ) 
  { 
     // don't copy files we have already copied  
     // this will happen because we check the  
     // copy condition 2 x faster than files are being written 
     if ( lastCopiedFile != lastFileCreate ) 
     { 
        // copy the file to HDD 
        fs.copy (lastFileCreate, 'e:/nvr/' + file_name_counter + ".mp4", function(err) {     
           if ( err ) console.log('ERROR: ' + err); 
        });

        // files will be named 0, 1, 2 ... n 
        file_name_counter++;

        // store the name of the file we just copied 
        lastCopiedFile = lastFileCreate; 
     }
     
     // decrement so that we are not copying files  
     // forever 
     flag_counter--; 
  } 
  else 
  { 
     // we reached 0, there is no  
     // file that we copied before. 
     lastCopiedFile = null; 
  }
}

// set up a watch on the RAM drive, ignoring the . and .. files
chokidar.watch('x:\\.', {ignored: /(^|[\/\\])\../}).on('all', (event, path) => {
  // we're only interested in files being written to  
  if ( event != "change")  return;
   
  // are we writing to a new file?  
  if ( currentlyModifiedFile != path )  
  {  
     // now we have the last file created  
     lastFileCreate = currentlyModifiedFile;  
     currentlyModifiedFile = path;  
  }
});

// call the copy file check every 5 seconds from now on
setInterval ( copyFiles, 5 * 1000 );

So far, we've written about 70 lines of code in total, downloaded ImDisk, FFmpeg, Node.js and a few modules (simplesmtp, chokidar and fs-extra), and we now have a pre-buffer fully in RAM and a way to store things permanently. All detection is done by the camera itself, so the amount of CPU used is very, very low.

This is the UI so far:

(screenshot: folders)

In the next part, we'll take a look at how we can get FFmpeg and nginx-rtmp to allow us to view the cameras on our phone, without exposing the camera directly to the internet.

World's Shittiest NVR pt. 3.

Our wonderful NVR is now basically a circular buffer in RAM, but we’d like to do a few things if motion (or other things) occur.

Many cameras support notification by email when things happen; while getting an email is nice enough, it’s not really what we want. Instead, we’ll (ab)use the mechanism as a way for the camera to notify our “NVR”.

First, we need a “fake” SMTP server, so that the camera will think that it is talking to a real one and attempt to send an actual email. When we receive the request to send the email we’ll simply do something else. An idea would be to move the temporary file on the RAM drive to permanent storage, but first, we’ll see if we can do the fake SMTP server in a few lines of code.

Start by downloading and installing Node.js. Node.js allows us to run JavaScript code and to tap into a vast library of modules that we can use via npm (which used to stand for "Node Package Manager").

Assuming you’ve got node installed, we’ll open a command prompt and test that node is properly installed by entering this command:

node -v

You should now see the version number of node in the console window. If this worked, we can move on.

Let's make a folder for our fake SMTP server first; let's pretend you've made a folder called c:\shittynvr. In the command prompt, cd to that directory, and we're ready to enter a few more commands.

We're not going to write an entire fake SMTP server from scratch; instead, we'll be using a library for Node. The library is called simplesmtp. It is deprecated and has been superseded by something better, but it'll work just fine for our purpose.

To get simplesmtp, we’ll enter this command in the prompt:

npm install simplesmtp

You should see the console download some stuff and spew out some warnings and messages; we'll ignore those for now.

We now have node.js and the simplesmtp library, and we’re now ready to create our “event server”.

Create a text file called “smtp.js”, add this code to the file, and save it.

const smtp = require ( "simplesmtp");
smtp.createSimpleServer({SMTPBanner:"My NVR"}, function(req){
  req.pipe(process.stdout);
  req.accept();
  
  // we can do other stuff here!!!

}).listen(6789);
console.log ( "ready" );

We can now start our SMTP server, by typing

node smtp.js

Windows may ask you if you want to allow the server to open a port; if you want your camera to send events to your PC, you'll need to approve. If you are using a different firewall of some sort, you'll need to allow incoming traffic on port 6789.

We should now be ready to receive events via SMTP.

The server will run as long as you keep the console window open, or until you hit CTRL+C to stop it and return to the prompt.

The next step is to set up the camera to send emails when things happen. When you enter the SMTP setup for your camera, you’ll need to enter the IP address of your PC and specify the port 6789. How you set up your camera to send events via email varies with manufacturers, so consult your manual.

Here’s an example of the output I get when I use a Hikvision camera. I’ve set it up so that it sends emails when someone tries to access the camera with the wrong credentials:

(screenshot: console output)

Next time, we'll look at moving files from temporary RAM storage to disk.

Live Layouts vs. Forensic Layouts

In our client, if you hit browse, you get to see the exact same view, only in browse mode. The assumption is that the layout of the live view is probably the same as the one you want when you are browsing through video.

I am now starting to question if that was a wise decision.

When clients ask for a 100 camera view, I used to wonder why. No-one can look at 100 cameras at the same time. But then I realized that they don't actually look at them the way I thought. They use the 100 camera view as a "selection panel": they scan across the video, and if they see something out of place, they maximize that feed.

I am guessing here, but I suspect that in live view, you want to see “a little bit of everything” to get a good sense of the general state of things. When something happens, you need a more focused view – suddenly you go from 100 cameras to 4 or 9 cameras in a certain area.

Finally, when you go back and do forensic work, the use pattern is completely different. You might be looking at one or two cameras at a time, zooming in and navigating to neighboring cameras quite often.

Hmmm… I think we can improve this area.

Preallocation of Storage

What is the principal argument against pre-allocating (formatting) the storage for the video database?

One problem that I am aware of arises if you need to pre-allocate space for each camera. A camera with very little motion might record for 100 days in a 100 GB allocation, while a busy one might have just 1 day. Change the parameters and it gets really hard to figure out what a reasonable size should be.

But say that you pre-format the total storage you need for the entire system, and then let all the cameras share the storage on a FIFO basis. This way, all cameras would have roughly the same amount of time recorded in the database.
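To make the idea concrete, here's a minimal sketch in Node.js; it is illustrative only (the file name and sizes are made up), not code from a real NVR: one big preallocated file is used as a ring buffer that every camera writes through, so the oldest video in the whole system is overwritten first.

const fs = require('fs');

const RING_SIZE = 100 * 1024 * 1024 * 1024; // pretend we preformatted 100 GB
const fd = fs.openSync('e:/nvr/ring.bin', 'r+'); // assumed to exist at full size
var writeOffset = 0;

// every camera writes through this one function, so all cameras
// share one FIFO window instead of fixed per-camera quotas
function writeBlock(buffer)
{
   if ( writeOffset + buffer.length > RING_SIZE )
      writeOffset = 0; // wrap around: the oldest video gets overwritten

   fs.writeSync(fd, buffer, 0, buffer.length, writeOffset);
   writeOffset += buffer.length;
}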

My decidedly unscientific tests show that writing a large block of data to a contiguous area on the disk is much faster than writing to a file that is scattered across the platters. Disk drives now have large caches and command queuing, but these mechanisms were designed for desktop use, not a torrent of video data being written and deleted over and over again.
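If you want to repeat that kind of test yourself, a rough sketch along these lines will do; it is every bit as unscientific as my own test (block size, count and file name are arbitrary):

const fs = require('fs');

const BLOCK = Buffer.alloc(4 * 1024 * 1024); // 4 MB per write
const COUNT = 256;                           // 1 GB in total

function bench(positions, label)
{
   const fd = fs.openSync('testfile.bin', 'w');
   const t0 = Date.now();

   for ( var i = 0; i < COUNT; i++ )
      fs.writeSync(fd, BLOCK, 0, BLOCK.length, positions[i]);

   fs.closeSync(fd);
   console.log(label + ': ' + (Date.now() - t0) + ' ms');
}

// sequential positions vs. the same range of positions picked at random
var sequential = [];
var scattered = [];
for ( var i = 0; i < COUNT; i++ )
{
   sequential.push(i * BLOCK.length);
   scattered.push(Math.floor(Math.random() * COUNT) * BLOCK.length);
}

bench(sequential, 'sequential writes');
bench(scattered, 'scattered writes');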

Some people balk at the idea that you pre-format the disk for reasons I simply do not understand. If you have a 100 TB storage system, I would expect that you’d want to use the full capacity of the disk. There are no points awarded for having 20% of the disk empty, so why do people feel that pre-allocation is bad?

Any takers?

NVR Integration and PSIM

Axis makes cameras, and now they make access control systems and an NVR too. Should the traditional NVR vendors be concerned about it?

Clearly, the Axis NVR team is better positioned to fully exploit all the features of the cameras. Furthermore, the Axis-only NVR does not have to spend (waste?) time trying to fit all sorts of shapes into a round hole. They ONLY have to deal with their own hardware platforms. This is similar to Apple's OS X, which runs so smoothly because the number of hardware permutations is fairly limited.

What if Sony did the same? Come to think of it, Sony already has an NVR. But, it’s no stretch of the imagination to realize that a Sony NVR would support Sony cameras better than the “our NVR does everything”-type.

In fact, when you really look at the problem, an NVR is a proprietary database and some protocol conversion. To move a PTZ camera, you need to issue different commands depending on the camera, but the end result is always the same: the camera moves up. Writing a good database engine is really, really hard, but once you have a good one, it is hard to improve. The client and administration tools continue to evolve and become increasingly complex.
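To illustrate the protocol-conversion part, here's a toy example (both brands and URLs are made up for illustration); the point is that the caller only ever sees moveUp():

// each "driver" maps the same logical command to its own,
// camera-specific request (these URLs are invented)
const drivers = {
   brandA: { moveUp: function(ip) { return 'http://' + ip + '/cgi-bin/ptz?cmd=up'; } },
   brandB: { moveUp: function(ip) { return 'http://' + ip + '/ptz/move?dir=up'; } }
};

function moveUp(brand, ip)
{
   // different commands, same end result: the camera moves up
   return drivers[brand].moveUp(ip);
}

console.log(moveUp('brandA', '192.168.1.10'));
console.log(moveUp('brandB', '192.168.1.11'));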

Once it becomes trivial to do the conversion, then any bedroom developer will be able to create a small NVR. Probably very limited functionality, but very cheap.

The cheap NVR might have a simple interface, but what if you could get an advanced interface on top of that cheap NVR? What if you could mix cheap NVRs with NVRs provided by the camera manufacturers, and then throw in access control to the mix? You get the picture.

If you are an NVR vendor, it is going to be an uphill battle to support “foreign” NVRs. If Milestone decided to support Genetec, it would take 5 minutes for Genetec to break that compatibility and have Milestone scramble to update their driver. Furthermore, the end user would have to pay for two licenses, and the experience would probably be terrible.

The next time an NVR vendor says "we are an open architecture", take a look at their docs. If the docs do not describe interoperability with a foreign client, then they are not open. An ActiveX control does NOT equate to "open". Genetec could easily support the Milestone recorders too, but it would be cheaper and easier for Genetec to simply replace existing Milestone recorders for free (like a cuckoo chick).

In this market, you cannot get a monopoly and charge a premium. The “need to have” threshold is pretty low, and if you charge too much, someone will make a new, sufficient system and underbid you. Ultimately, NVRs might come bundled with the camera for free.

So, what about PSIM? Well, we started with the DVR (which is really a good enough description of what it is), but then we decided to call it an NVR to distance ourselves from the clearly inferior DVR. Then we weren't satisfied with that, so it became VMS. Sure, we added some bells and whistles along the way (we had mapping and videowall in NetSwitcher eons ago), so now we call it PSIM. It does the same as the old DVR. I think this kind of thing is called "marketing".

Is SSD Viable Today?

The biggest problem with HDD-based video is storage. You can't compare it to the lifetime of your workstation's drive: although your workstation seems to be "always hitting the disk", the platters spin maybe 9 hours a day and sleep the rest of the time (or should). If you want to record video for later playback, you will need to hit the disk eventually, and that goes on 24/7. There's no way to cache this stuff, since the data is usually written once and never read again. Caching mechanisms work great if you read the same data over and over, but this is mostly fire-and-forget, so no amount of caching will save you here. As I've discussed before, I don't believe that you can rely on "no recording, no incident" as proof of anything. You might want to record the "no motion" video at a low, low quality, but you probably should record (if possible).

Mechanical drives burn out as the heat in their small cupboard enclosure rises to the level where you could bake a loaf of bread in there. Fans are of no use if fresh, cold air cannot be drawn in; they will just spin faster and faster in a futile attempt to lower the temperature, merely recycling the increasingly warm air inside the box. In a way, hard drives become modern-day video tapes: they work for some amount of time, and then they have to be replaced.

Naturally, a large-scale client will have a dedicated server room with sufficient air conditioning, but it all adds to the cost of running the system. Offsite storage is one way of mitigating the cost of an expensive server room, but then you need to pay for the DSL line instead; one way or the other, you will pay for that server room (or at least part of it).

So, will SSD save the day?

Not sure it will happen just yet. I read that new drives can write about 30 TB before they start showing problems, and someone has even made a wiki about the topic of SSD durability.

But I can't wait to get one (the older ones were just not viable for my kind of work).