Video Database Replication

Video Surveillance Databases are special. They are written to constantly, they are rarely read from, and the index is very simple (just a timestamp as the key). There’s no reason – really – to use anything fancy, certainly not SQL Server.

I recently saw a marketing blurb for an expensive and cumbersome storage system that integrated with a VMS. It touted that the VMS had a “proprietary database highly optimized for video storage”. I guess “it uses the file system” did not sound fancy enough.

The entertaining puffery was uncovered as I was looking into the feasibility of geo-redundancy for a partner. Basically, they were looking for a fully mirrored backup system: if the primary site were to vanish, the backup site would take over, with all recorded data readily available.

Database replication is nothing new, but typical database replication systems assume that you have a much higher outbound throughput than inbound. You may have a database with 2 million records, and if you add 1000 records per day, you’ll need those new records to propagate to the replication sets in your cluster – challenging, but a problem that has been solved a thousand times.

Video data is very different; it’s a constant torrent of data streaming into the system, and once in a while someone pulls out a few records to look at an incident. If the database uses the file system for its blocks, it’s almost trivial to provide replication: just make sure the directory on the backup site looks identical to the one on the primary. This can be done with a simple rsync on Linux.
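For example, a periodic one-liner along these lines would do the trick (the paths and the host name are made up for illustration; -a copies recursively and preserves timestamps, and --partial lets an interrupted transfer resume):

rsync -av --partial /var/video/ backup-host:/var/video/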

Another option is to use the Distributed Replicated Block Device (DRBD). This (Linux) tool allows you to create a drive that is mirrored 1:1 across a network. In other words, as files are written or changed, the exact same thing will happen on the backup drive. A Windows version appears to exist as well.
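If you’re curious, a DRBD resource definition is only a handful of lines. The host names, devices, and addresses below are made-up placeholders, so treat it as a sketch rather than a working config:

resource video0 {
  protocol C;
  device    /dev/drbd0;
  disk      /dev/sdb1;
  meta-disk internal;
  on primary-site {
    address 10.0.0.1:7789;
  }
  on backup-site {
    address 10.0.0.2:7789;
  }
}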

Surely, a better solution is to have the VMS be able to determine which files are most valuable, and push those to the remote site first. It might even choose not to mirror files that provide no value (zero-motion files, for example), or send a pruned version of the files to the backup system.

Depending on the sensitivity of the data, a customer might choose to extend/replicate their storage to the cloud. The problem here is that upstream bandwidth is often limited, so in those cases a prioritization of the data is certainly needed.

Happy replicating…

 

 


My Bitcoin Problem

I didn’t get enough of them….

(image: Tulip Fever movie poster)

Back in the good old days, Hikvision NVRs were part of an exploit that was used to mine Bitcoin. Naturally, that was back when Bitcoin was used primarily to buy heroin and weapons via the darknet. Today, though, everyone and their dog is buying bitcoin like it was pets.com shares circa 2001, and the hardware needed to mine coins today is a million times more powerful than a cheapo NVR.

First things first; why do we need “currency”? I think it’s worth revisiting the purpose before moving on. Basically, “currency” is a promise that someone (anyone) will “return the favor” down the line. In other words, I mow your lawn, and you give me an IOU, which I trade for some eggs with the local farmer. The farmer then trades the IOU for getting a picket fence painted by you (you then tear up the IOU).

Instead of crude IOUs, we convert the work done into units of currency, which we then exchange. Mowing a lawn may be worth 10 units while doing the dishes is worth 5. In the sweet old days, the US had many different currencies, pretty much one per state. They served the same purpose: to allow someone to trade a cow for some pigs and eggs, some labor for food, food for labor, and so on.

But pray tell, what politician, and what banker would not love to be able to issue IOUs in return for favors, without actually ever returning them?

Since politicians and bankers run the show, naturally, the concept got corrupted. Politicians and banks started issuing IOUs left and right, which basically defrauded you of your work. When you mowed the lawn on Monday, you would expect that you could exchange the IOU for a lawn mowing on Friday, but with politicians producing mountains of IOUs, you suddenly find that the sweat off your brow on Monday only paid for half the work on Friday.

This is classic inflation.

By the same token, it would be one hell of an annoyance if you mowed my lawn on Monday, and now, to repay you, I would have to not only mow your damn lawn but also paint your fence on Friday.

This is classic deflation.

What you want is a stable, and fair currency. That work you do on Monday can be exchanged for an equal amount of work on Friday.

You can then wrap layers of complexity around it, but at its core, the idea is that money is a store of work, and that store should be stable.  The idea that we “need 2% inflation” is utter nonsense. In a democracy, the government can introduce a tax on cash equivalent holdings if the voters so desire. This would be more manageable and precise than senile old farts in central banks trying to “manage inflation” by purchasing bonds and stock, with the predictable side effect that it props up sick and useless companies. The idea that you can get work done by just shuffling some papers around is an abomination in my book.

Bitcoin is an attempt at creating a currency that can’t be manipulated by (presumably corrupt or incompetent) politicians and bankers, but I think they’ve gone far, far away from that idea.

The people who are engaging in bitcoin speculation are not doing it because they want a fair and stable store of work (having discarded traditional fiat currency as being unstable and subject to manipulation). Instead, they do it, because, in the speculative frenzy, bitcoin is highly deflationary. You can get a thousand lawns mowed on Friday for the lawn you mowed on Monday. As a “stable currency”, Bitcoin has utterly failed. And we’re not even discussing the transaction issues (200K back-logged transactions, and a max of 2000 transactions every 10 minutes).

This happens because bitcoin is not a currency at all. It’s simply the object underpinning a speculative bubble. And as happens with all bubbles, there are people who will say “you don’t understand why this is brilliant, you see…” and then a stream of illogical half-truths and speculation follows. People share stories about how they paid $100 for a cup of coffee 12 months ago when they used bitcoin to pay for it. But a cup of coffee in dollars costs about the same as it did 12 months ago; so while the dollar is being devalued by very mild inflation, it is a much more stable store of work, whereas bitcoin is promising free lunches for everyone.

People, for the most part, take part in this orgy with the expectation that at some point, they will settle the score for real currency – real dollars. Very few (and I happen to know one) will keep them “forever” on principle alone.

Furthermore, I don’t see any reason why the Bitcoin administrators wouldn’t just increase the self-imposed 21 million coin limit to 210 million or 2.1 billion coins. They already decided to create a new version, called Bitcoin Cash, that essentially doubled the amount of bitcoin. That, and the 1300 other cryptocurrencies out there, makes it hard for me to buy into the idea that there is a “finite number of coins”. Not only that: to increase transaction speed to something useful, they are going to abandon the blockchain security, opening the door to all sorts of manipulation (not unlike naked short selling of stock, etc.)

And let’s not forget that before Nixon, the civilized world agreed to peg currencies to gold (a universal currency that could not be forged). In 1971, Nixon removed the peg from the US dollar, and since then the number of dollars has exploded and the value has dropped dramatically. In other words, what was a sure thing pre-1971 was suddenly not a sure thing.

This is not investing advice. You might buy bitcoin (or other crypto-“currencies”) today, and make 100% over the next few weeks. You might also lose it all. I would not be surprised by either.

 

Net Neutrality

You can’t be against net neutrality, and, at the same time, understand how the Internet works.

There is no additional cost to the ISP to offer access to obscure sites; it’s not like a cable package where the cable provider pays a fee to carry some niche channel that no one watches.

Basically, net neutrality means that the ISP has to keep the queues fair; there are no VIP lanes on the Internet. Everyone gets in the same line and is processed on a first-come, first-served basis. This is fundamentally fair. The business class traveler may be angered by the inability to buy his way to the front of the line (at the expense of everyone else), but that’s just tough titties.

It’s clear that not everyone has the same speed on the Internet; I live in an area where the owners association decided against having fiber installed, so I have a shitty (but sufficient) 20/2Mbit ADSL connection. My friend across the bridge, in Sweden, has a 100/100Mbit at half the cost. But that has nothing to do with net neutrality.

If my friend wants to access my server, my upstream channel is limited to 2 Mbit per second. This is by my choice, I can choose to host my server somewhere else, I could try to get a better link and so on, but basically, I decide for myself who, and how much I want to offer. There are sites that will flat out refuse to serve data to certain visitors, and that’s their prerogative.

However, with net neutrality removed, my site may get throttled or artificially bottlenecked to the point where people just quit visiting. I would have to deal with several ISPs, and possibly pay them a fee to remove the cap. If the site is not commercial*, I may not have the funds to do that. I may not even be aware that an ISP is throttling my site into oblivion, or be offered an option to remove the cap.

Clearly, ending net neutrality is not the end of the world. Guatemala and Morocco are two examples of countries without net neutrality. In Morocco, the ISPs decided to block Skype, since it was competing with their (more profitable) voice service, so that might give you a hint of what’s to come. Moroccans did complain to the King when the ISPs went too far, though.

Naturally, fast access to Facebook, LinkedIn and Snapchat might be cheaper, and that’s probably all you care about if you’re against NN.

With cloud-based IP video surveillance starting to become viable, this might prove to be another, unpredictable cost of the system. Some ISPs already take issue with you hosting a web server via your retail connection. And they go out of their way to make it difficult for you to do so: Changing your IP address every 4 hours and so on. This is to push you into a more expensive “business plan”, where they simply disable the script that changes your IP. I think it is safe to assume that if you’re streaming 30 MBit/s 24/7 to an Amazon data center, the ISP will eventually find a way to make you pay. And pay dearly. Once you’ve hooked your entire IP video surveillance system into the cloud, what are you going to do? Switch to another ISP? #yeahright

I guess the problem is that the ISP business model used to be to sell the same bandwidth 100 times over. Now that people are actually using the bandwidth, that model falls apart, and the ISPs need other means to make sweet sweet moolah. And that’s their nature and duty. But why cheer them on?

*In the early days, commercial activity on the Internet was banned.

 

HomeKit Flaw

https://9to5mac.com/2017/12/07/homekit-vulnerability/

Does this vulnerability shipping mean you shouldn’t trust HomeKit or smart home products going forward? The reality is bugs in software happen. They always have and pending any breakthrough in software development methods, they likely always will. The same is true for physical hardware which can be flawed and need to be recalled. The difference is software can be fixed over-the-air without a full recall.*

*Unless it’s a Chinese IP camera, then all “mistakes” are deliberate backdoors put in place by the government.

Facts and Folklore in the IP Video Industry

A while ago, I argued that just because JPEGs took up more storage space, it did not mean that JPEG offered superior quality (and certainly not if you compare H.264 to MJPEG at the same bitrate).

I now find that some people are assuming that high GPU utilization automatically means better video performance and that all you have to do is fire up GPU-Z and you’ll know if the decoder is using the GPU for decoding.

There are some who will capitalize on the collective ignorance of the layman and the ignorant “professional”. I suppose there’s always a buck to be made doing that. And a large number of people who ought to know better are not going to help educate the masses, as it would effectively remove any (wrong) perception of the superiority of their offering.

Before we start with the wonkishness, let’s consider the following question: What are we trying to achieve? The way I see it, any user of a video surveillance system simply wants to be able to see their cameras, with the best possible utilization of the resources available. They are not really concerned if a system can hypothetically show 16 simultaneous 4K streams because a) they don’t have 4K cameras and b) they don’t have a screen big enough to show 16 x 4K feeds.

So, as an example, let’s assume that 16 cameras are shown on a 1080p screen. Each viewport (or pane) is going to use (1920/4) * (1080/4) pixels at most; that’s around 130.000 pixels per camera.

A 1080p camera delivers 2.000.000 pixels, so 15 out of every 16 pixels are never actually shown. They are captured, compressed, sent across the network, decompressed, and then we throw away roughly 94% of the pixels.

Does that make sense to you?

A better choice is to configure multiple profiles for the cameras and serve the profile that best matches the client. So, if you have a 1080p camera, you might have 3 profiles: 1080p@15fps, 720p@8fps and CIF@4fps. If you’re showing the camera in a tiny 480 by 270 pane, why would you send the 1080p stream, putting undue stress on the network as well as on the client CPU/GPU? Would it not be better to pick the CIF stream and switch to the other streams if the user picks a different layout?
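To make the idea concrete, here is a minimal sketch in JavaScript. The profile table and the “closest pixel count” rule are my own stand-ins for illustration – an actual VMS would have its own selection logic:

// hypothetical profile table for a 1080p camera
const profiles = [
  { name: "1080p", width: 1920, height: 1080, fps: 15 },
  { name: "720p",  width: 1280, height: 720,  fps: 8  },
  { name: "CIF",   width: 352,  height: 288,  fps: 4  }
];

// pick the profile whose pixel count is closest to the pane's
function pickProfile(paneWidth, paneHeight) {
  const target = paneWidth * paneHeight;
  let best = profiles[0];
  for (const p of profiles) {
    if (Math.abs(p.width * p.height - target) <
        Math.abs(best.width * best.height - target))
      best = p;
  }
  return best;
}

// a 480 by 270 pane in a 16-camera layout gets the CIF stream
console.log(pickProfile(480, 270).name); // "CIF"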

In other words; a well-designed system will rarely need to decode more than the number of pixels available on the screen. Surely, there are exceptions, but 90% of all installations would never even need to discuss GPU utilization as a bog standard PC (or tablet) is more than capable of handling the load. We’re past the point where a cheap PC is the bottleneck. More often than not, it is the operator who is being overwhelmed with information.

Furthermore, heavily optimized applications often have odd quirks. I ran a small test pitting Quicksync against Cuvid; the standard Quicksync implementation simply refused to decode the feed, while Cuvid worked just fine. Then there’s the challenge of simply enabling Quicksync on a system with a discrete GPU and dealing with odd scalability issues.

GPU usage metrics

As a small test, I wrote the WPF equivalent of “hello, world”. There’s no video decoding going on, but since WPF uses the GPU to do compositing on the screen, you’d expect the GPU utilization to be visible in GPU-Z, and as you can see below, that is also the case:

The GPU load:

  • No app running (baseline): 3-7%
  • Letting the app sit idle: 7-16%
  • Resizing the app: 20%

This app, which performs no video decoding whatsoever, uses the GPU to draw a white background, some text, and a green box on the screen, so just running a baseline app will show a bit of GPU usage. Does that mean that the app has better video decoding performance than, say, VLC?

If I wrote a terrible H.264 decoder in BASIC and embedded it in the WPF application, an ignorant observer might deduce that my junk WPF app was faster than VLC, because it showed higher GPU utilization while VLC showed none.

As a curious side note, VLC did not show any “Video Engine Load” in GPU-Z, so I don’t think VLC uses Cuvid at all. To provide an example of Cuvid/OpenGL, I wrote a small test app that does use Cuvid. The Video Engine Load is at 3-4% for this 4CIF@30fps stream:

(image: cuvid – GPU-Z showing Video Engine Load)

It reminds me of arguments I had 8 years ago, when people said that application X was better than application Y because X showed 16 cameras using only 12% CPU, while Y was at 100%. The problem with the argument was that Y was decoding and displaying 10x as many frames as X. Basically, X was throwing away 9 out of 10 frames. It did so because it couldn’t keep up: it detected that it was skipping frames and switched to a keyframe-only mode.

Anyway, back to working on the world’s shittiest NVR….

 

World’s Shittiest NVR pt. 4.

We now have a circular list of video clips in RAM, we have a way to be notified when something happens, and we now need to move the clips in RAM to permanent storage when something happens.

In part 1 we set up FFmpeg to write to files in a loop; the files were called 0.mp4, 1.mp4 … up to 9.mp4, each file representing 10 seconds of video. We can’t move the file that FFmpeg is currently writing to, so we’ll do the following instead: we will copy the previous file that FFmpeg completed, and we’ll keep doing that for a minute or so. This means that the file (10 seconds) before the event occurred gets copied to permanent storage. Then, when the file that was being written while the event happened is closed, we’ll copy that file over, then the next, and so on.

We’ll use a node module called “chokidar”, so cd to your working directory (where the SMTP server code resides) and type:

npm install chokidar

Chokidar lets you monitor files or directories and gives you an event when a file has been altered (in our case, when FFmpeg has added data to the file). Naturally, if you start popping your own files onto the RAM disk and editing those files, you’ll screw up this delicate/fragile system (read the title for clarification).

So, for example, if my RAM disk is x:\, we can do this to determine which is the newest complete file:

const chokidar = require('chokidar');

var currentlyModifiedFile = null;
var lastFileCreate = null;

// watch the RAM drive, ignoring the . and .. files
chokidar.watch('x:\\.', {ignored: /(^|[\/\\])\../}).on('all', (event, path) => {

    // we're only interested in files being written to
    if ( event != "change")
      return;

    // are we writing to a new file?
    if ( currentlyModifiedFile != path )
    {
       // now we have the last file created
       lastFileCreate = currentlyModifiedFile;
       currentlyModifiedFile = path;
    }
});

Now, there’s a slight snag that we need to handle: Node.js’s built-in fs module has no convenient way to copy files from one device (the RAM disk) to another (the HDD), so to make things easy, we grab an extension library called “fs-extra”.

Not surprisingly:

npm install fs-extra

So, when the camera tries to send an email, we’ll set a counter to some value. We’ll then periodically check if the value is greater than zero. If it is indeed greater than zero, then we’ll copy over the file that FFmpeg just completed and decrement the counter by one.

If the value reaches 0 we won’t copy any files, and just leave the counter at 0.

Assuming you have a nice large storage drive on e:\, and the directory you’re using for permanent storage is called “nvr”, we’ll set it up so that we copy from the RAM drive (x:\) to the HDD (e:\nvr). If your drive is different (it most likely is), edit the code to reflect that change – it should be obvious what you need to change.

Here’s the complete code:

const smtp = require ( "simplesmtp");
const chokidar = require('chokidar');
const fs = require('fs-extra');

// some variables that we're going to need
var currentlyModifiedFile = null;
var lastFileCreate = null;
var lastCopiedFile = null;
var flag_counter = 0;
var file_name_counter = 0;

// fake SMTP server
smtp.createSimpleServer({SMTPBanner:"My Server"}, function(req) {
    req.accept();

    // copy files for the next 50 seconds (5 files)
    flag_counter = 10;
}).listen(6789);

// function that will be called every 5 seconds
// tests to see if we should copy files

function copyFiles ( )
{ 
  if ( flag_counter > 0 ) 
  { 
     // don't copy files we have already copied  
     // this will happen because we check the  
     // copy condition 2 x faster than files are being written 
     if ( lastCopiedFile != lastFileCreate ) 
     { 
        // copy the file to HDD 
        fs.copy (lastFileCreate, 'e:/nvr/' + file_name_counter + ".mp4", function(err) {     
           if ( err ) console.log('ERROR: ' + err); 
        });

        // files will be named 0, 1, 2 ... n 
        file_name_counter++;

        // store the name of the file we just copied 
        lastCopiedFile = lastFileCreate; 
     }
     
     // decrement so that we are not copying files  
     // forever 
     flag_counter--; 
  } 
  else 
  { 
     // counter reached 0; clear the marker so the
     // next event starts from a clean slate
     lastCopiedFile = null;
  }
}

// set up a watch on the RAM drive, ignoring the . and .. files
chokidar.watch('x:\\.', {ignored: /(^|[\/\\])\../}).on('all', (event, path) => {
  // we're only interested in files being written to  
  if ( event != "change")  return;
   
  // are we writing to a new file?  
  if ( currentlyModifiedFile != path )  
  {  
     // now we have the last file created  
     lastFileCreate = currentlyModifiedFile;  
     currentlyModifiedFile = path;  
  }
});

// call the copy file check every 5 seconds from now on
setInterval ( copyFiles, 5 * 1000 );
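Save the whole thing as, say, nvr.js in your working directory (the file name is my own choice – pick whatever you like), and start it the same way as the SMTP server:

node nvr.js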

So far, we’ve written about 70 lines of code in total, downloaded ImDisk, FFmpeg, node.js and a few modules (simplesmtp, chokidar and fs-extra), and we now have a pre-buffer fully in RAM and a way to store things permanently. All detection is done by the camera itself, so the amount of CPU used is very, very low.

This is the UI so far:

(image: folders)

In the next part, we’ll take a look at how we can get FFmpeg and nginx-rtmp to allow us to view the cameras on our phone, without exposing the camera directly to the internet.

 

 

World’s Shittiest NVR pt. 3.

Our wonderful NVR is now basically a circular buffer in RAM, but we’d like to do a few things if motion (or other things) occur.

Many cameras support notification by email when things happen; while getting an email is nice enough, it’s not really what we want. Instead, we’ll (ab)use the mechanism as a way for the camera to notify our “NVR”.

First, we need a “fake” SMTP server, so that the camera will think that it is talking to a real one and attempt to send an actual email. When we receive the request to send the email we’ll simply do something else. An idea would be to move the temporary file on the RAM drive to permanent storage, but first, we’ll see if we can do the fake SMTP server in a few lines of code.

Start by downloading and installing node.js. Node.js allows us to run JavaScript code and to tap into a vast library of modules that we can use via npm (which used to stand for “Node Package Manager”).

Assuming you’ve got node installed, we’ll open a command prompt and test that node is properly installed by entering this command:

node -v

You should now see the version number of node in the console window. If this worked, we can move on.

Let’s make a folder for our fake SMTP server first; pretend you’ve made a folder called c:\shittynvr. In the command prompt, cd to that directory, and we’re ready to enter a few more commands.

We’re not going to write an entire fake SMTP server from scratch, instead, we’ll be using a library for node. The library is called simplesmtp. It is deprecated and has been superseded by something better, but it’ll work just fine for our purpose.

To get simplesmtp, we’ll enter this command in the prompt:

npm install simplesmtp

You should see the console download some stuff and spew out some warnings and messages; we’ll ignore those for now.

We now have node.js and the simplesmtp library, and we’re ready to create our “event server”.

Create a text file called “smtp.js”, add this code to the file, and save it.

const smtp = require ( "simplesmtp");
smtp.createSimpleServer({SMTPBanner:"My NVR"}, function(req){
  req.pipe(process.stdout);
  req.accept();
  
  // we can do other stuff here!!!

}).listen(6789);
console.log ( "ready" );

We can now start our SMTP server, by typing

node smtp.js

Windows may ask you if you want to allow the server to open a port; if you want your camera to send events to your PC, you’ll need to approve. If you are using a different firewall of some sort, you’ll need to allow incoming traffic on port 6789.
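If you’d rather do it from an (elevated) command prompt, a rule along these lines should work – the rule name is arbitrary:

netsh advfirewall firewall add rule name="shittynvr" dir=in action=allow protocol=TCP localport=6789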

We should now be ready to receive events via SMTP.

The server will run as long as you keep the console window open, or until you hit CTRL+C to stop it and return to the prompt.
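If you don’t have a camera handy, you can poke the fake server from another console with nodemailer (npm install nodemailer). This little script is just a stand-in for the camera, not part of the NVR itself:

const nodemailer = require('nodemailer');

// point the transport at our fake SMTP server; no TLS, no auth
const transporter = nodemailer.createTransport({
  host: '127.0.0.1',
  port: 6789,
  secure: false,
  ignoreTLS: true
});

// send a dummy "event" and print the result
transporter.sendMail({
  from: 'camera@example.com',
  to: 'nvr@example.com',
  subject: 'motion detected',
  text: 'pretend this came from the camera'
}, function (err, info) {
  console.log(err ? 'ERROR: ' + err : 'accepted: ' + info.response);
});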

The next step is to set up the camera to send emails when things happen. When you enter the SMTP setup for your camera, you’ll need to enter the IP address of your PC and specify the port 6789. How you set up your camera to send events via email varies with manufacturers, so consult your manual.

Here’s an example of the output I get when I use a Hikvision camera. I’ve set it up so that it sends emails when someone tries to access the camera with the wrong credentials:

(image: output)

Next time, we’ll look at moving files from temporary RAM storage to disk.