InfluxDB and Grafana


When buzzards are picking at your eyes, maybe it’s time to move a little. Do a little meandering, and you might discover that the world is larger, and more fun, than you imagined. Perhaps you realize that what was once a thriving oasis has now turned into a putrid cesspool riddled with parasites.

InfluxDB is what’s known as a “time-series database”. The idea is that it collects samples over time. Once a sample is collected, it doesn’t change. Eventually the sample gets too old, and is discarded. This is different from traditional databases, where values may change over time, and the deletion of records is not normally based on age.

This sounds familiar doesn’t it?

Now, you probably want to draw some sort of timeline, or graph, that represents the values you popped into InfluxDB. Enter Grafana. It’s a dashboard designer that can interface with InfluxDB (and other databases too) and show pretty graphs and tables in a web page w/o requiring any HTML/JavaScript coding.

If you want to test this wonderful combination of software, you’ll probably want to run Docker, and visit this link.
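If you’d rather skip the link, a couple of Docker one-liners will get a test rig going. This is just a sketch: the image names are the official Docker Hub ones, the ports are the defaults, and nothing here is persisted (add volume mounts if you want to keep data between runs).

```shell
# InfluxDB 1.x listens on 8086 (HTTP API), Grafana's web UI on 3000.
docker run -d --name influxdb -p 8086:8086 influxdb:1.8
docker run -d --name grafana -p 3000:3000 grafana/grafana
```

Once they’re up, point your browser at http://localhost:3000 and add InfluxDB as a data source in Grafana.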

Now, I’ve already abandoned the idea of using InfluxDB/Grafana for the kind of stuff I mess around with. InfluxDB’s strength is that it can return a condensed dataset over a potentially large time-range. And it can make fast and semi-complex computations over the samples it returns (usually of the statistical kind). But the kind of timeline information I usually record is not complex at all, and there aren’t really any additional calculations I can do over the data. E.g. what’s the average of “failed to connect” and “retention policy set to 10 days”.

InfluxDB is also schema-less. You don’t need to do any pre-configuration (other than creating your database), so if you suddenly feel the urge to create a measurement (InfluxDB’s equivalent of a table) called “dunning”, you just insert some data into “dunning”. You don’t need to define columns or their types; you just insert data.

And you can do this via a standard HTTP call, so you can use curl on the command line, or libcurl in your C++ app (which is what I did).
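For illustration, here’s roughly what such an insert looks like with InfluxDB 1.x’s line protocol (the database name, measurement, and tag here are made up; the /write endpoint is InfluxDB 1.x’s standard HTTP API):

```shell
# A point in line protocol is: measurement[,tag=value...] field=value
point='events,recorder=rec01 message="failed to connect"'
echo "$point"

# With a local InfluxDB 1.x listening on port 8086, a single POST writes
# the point; the "events" measurement springs into existence on first
# insert -- no schema required:
# curl -i -XPOST 'http://localhost:8086/write?db=mydb' --data-binary "$point"
```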

The idea that you can issue a single command to do a full install of InfluxDB and Grafana, and then have it consume data from your own little app in about the time it takes to ingest a cup of coffee says a lot about where we’re headed.

Contrast that with the “open platforms” that require you to sign an NDA, download SDKs, compile DLLs, test on 7 different versions of the server, and still nurse it every time there’s a new version. Those systems will be around for a long time, but I think it’s safe to say they’re way past their prime.


Looping Canned Video For Demos

Here are a few simple(?) steps to stream pre-recorded video into your VMS.

First you need to install an RTMP server that can do RTMP to RTSP conversion. You can use Evostream, Wowza or possibly Nimblestreamer.  Nginx-rtmp won’t work as it does not support RTSP output.

Then get FFmpeg (Windows users can get it here).

Find or create the canned video that you want to use, and store it somewhere accessible.

In this example, I have used a file called R1.mp4 and my RTMP server (Evostream) is located at 192.168.0.109. The command used is this:

ffmpeg -re -stream_loop -1 -i e:\downloads\r1.mp4 -c copy -fflags +genpts -f flv rtmp://192.168.0.109/live/r1

Once this is streaming (and you can verify using VLC and opening the RTMP url you provided), you can go to your VMS and add a generic RTSP camera.

For Evostream, the RTSP output is on a different port, and has a slightly different format, so in the recorder I add:

rtsp://192.168.0.109:5544/r1

Other RTMP servers may have a slightly different transform of the URL, so check the manual.

I now have a video looping into the VMS and I can run tests and benchmarks on the exact same feed w/o needing an IP camera.


Video Database Replication

Video surveillance databases are special: they are written to constantly, they are rarely read from, and the index is very simple (just a timestamp as the key). There’s no reason – really – to use anything fancy, certainly not SQL Server.

I recently saw a marketing blurb for an expensive and cumbersome storage system that integrated to a VMS. It touted that the VMS had a “proprietary database highly optimized for video storage”. I guess “it uses the file system” did not sound fancy enough.

The entertaining puffery was uncovered as I was looking into the feasibility of geo-redundancy for a partner. Basically, they were looking for a fully mirrored backup system: If the primary site was to vanish, the backup site would take over, with all recorded data being readily available.

Database replication is nothing new, but typical database replication systems assume that you have a much higher outbound throughput than inbound. You may have a database with 2 million records, and if you add 1000 records per day, you’ll need those new records to propagate to the replica sets in your cluster – challenging, but a problem that has been solved a thousand times.

Video data is very different; it’s a constant torrent of data streaming into the system, and once in a while someone pulls out a few records to look at an incident. If the database uses the file system for its blocks, it’s almost trivial to provide replication: just make sure the directory on the backup site looks identical to the one on the primary. This can be done with a simple rsync on Linux.

Another option is to use the Distributed Replicated Block Device (DRBD). This (Linux) tool allows you to create a drive that is mirrored 1:1 across a network. In other words, as files are written or changed, the exact same thing will happen on the backup drive. A Windows version appears to exist as well.

Surely, a better solution is to have the VMS determine which files are most valuable, and push them to the remote site first. It might even choose not to mirror files that provide no value (zero-motion files, for example), or send a pruned version of the files to the backup system.

Depending on the sensitivity of the data, a customer might choose to extend/replicate their storage to the cloud. The problem here is that the upstream bandwidth is often limited, so in those cases a prioritization of the data is certainly needed.

Happy replicating…


World’s Shittiest NVR pt. 4.

We now have a circular list of video clips in RAM, we have a way to be notified when something happens, and we now need to move the clips in RAM to permanent storage when something happens.

In part 1 we set up FFmpeg to write to files in a loop; the files were called 0.mp4, 1.mp4 … up to 9.mp4, each file representing 10 seconds of video. We can’t move the file that FFmpeg is currently writing to, so we’ll do the following instead: we will copy the previous file that FFmpeg completed, and we’ll keep doing that for a minute or so. This means that the file (10 seconds) before the event occurred gets copied to permanent storage. Then, when the file that was being written while the event happened is closed, we’ll copy that file over, then the next, and so on.

We’ll use a node module called “chokidar”, so, cd to your working directory (where the SMTP server code resides) and type:

npm install chokidar

Chokidar lets you monitor files or directories and gives you an event when a file has been altered (in our case, when FFmpeg has added data to the file). Naturally, if you start popping your own files onto the RAM disk and editing those files, you’ll screw up this delicate/fragile system (read the title for clarification).

So, for example if my RAM disk is x:\ we can do this to determine which is the newest complete file:

chokidar.watch('x:\\.', {ignored: /(^|[\/\\])\../}).on('all', (event, path) => {

  // we're only interested in files being written to
  if (event != "change")
    return;

  // are we writing to a new file?
  if (currentlyModifiedFile != path)
  {
    // now we have the last file created
    lastFileCreate = currentlyModifiedFile;
    currentlyModifiedFile = path;
  }
});

Now, there’s a slight snag that we need to handle: Node.js’s built-in file handler can’t copy files from one device (the RAM disk) to another (the HDD), so to make things easy, we grab an extension library called “fs-extra”.

Not surprisingly

npm install fs-extra

So, when the camera tries to send an email, we’ll set a counter to some value. We’ll then periodically check if the value is greater than zero. If it is indeed greater than zero, then we’ll copy over the file that FFmpeg just completed and decrement the counter by one.

If the value reaches 0 we won’t copy any files, and just leave the counter at 0.

Assuming you have a nice large storage drive on e:\, and the directory you’re using for permanent storage is called “nvr”, we’ll set it up so that we copy from the RAM drive (x:\) to the HDD (e:\nvr). If your drive is different (it most likely is), edit the code to reflect that change – it should be obvious what you need to change.

Here’s the complete code:

const smtp = require ( "simplesmtp");
const chokidar = require('chokidar');
const fs = require('fs-extra');

// some variables that we're going to need
var currentlyModifiedFile = null;
var lastFileCreate = null;
var lastCopiedFile = null;
var flag_counter = 0;
var file_name_counter = 0;

// fake SMTP server
smtp.createSimpleServer({SMTPBanner:"My Server"}, function(req) {
    req.accept();

    // copy files for the next 50 seconds (5 files)
    flag_counter = 10;
}).listen(6789);

// function that will be called every 5 seconds
// tests to see if we should copy files

function copyFiles ( )
{ 
  if ( flag_counter > 0 ) 
  { 
     // don't copy files we have already copied  
     // this will happen because we check the  
     // copy condition 2 x faster than files are being written 
     if ( lastCopiedFile != lastFileCreate ) 
     { 
        // copy the file to HDD 
        fs.copy (lastFileCreate, 'e:/nvr/' + file_name_counter + ".mp4", function(err) {     
           if ( err ) console.log('ERROR: ' + err); 
        });

        // files will be named 0, 1, 2 ... n 
        file_name_counter++;

        // store the name of the file we just copied 
        lastCopiedFile = lastFileCreate; 
     }
     
     // decrement so that we are not copying files  
     // forever 
     flag_counter--; 
  } 
  else 
  { 
     // counter reached zero; clear the marker so the
     // next event starts a fresh copy sequence.
     lastCopiedFile = null; 
  }
}

// set up a watch on the RAM drive, ignoring the . and .. files
chokidar.watch('x:\\.', {ignored: /(^|[\/\\])\../}).on('all', (event, path) => {
  // we're only interested in files being written to  
  if ( event != "change")  return;
   
  // are we writing to a new file?  
  if ( currentlyModifiedFile != path )  
  {  
     // now we have the last file created  
     lastFileCreate = currentlyModifiedFile;  
     currentlyModifiedFile = path;  
  }
});

// call the copy file check every 5 seconds from now on
setInterval ( copyFiles, 5 * 1000 );

So far, we’ve written about 70 lines of code in total, downloaded ImDisk, FFmpeg, node.js and a few modules (simplesmtp, chokidar and fs-extra), and we now have a pre-buffer fully in RAM and a way to store things permanently. All detection is done by the camera itself, so the amount of CPU used is very, very low.

The UI so far is just the e:\nvr folder in Explorer, filling up with numbered .mp4 clips.

In the next part, we’ll take a look at how we can get FFmpeg and nginx-rtmp to allow us to view the cameras on our phone, without exposing the camera directly to the internet.


World’s Shittiest NVR pt. 3.

Our wonderful NVR is now basically a circular buffer in RAM, but we’d like to do a few things if motion (or other things) occur.

Many cameras support notification by email when things happen; while getting an email is nice enough, it’s not really what we want. Instead, we’ll (ab)use the mechanism as a way for the camera to notify our “NVR”.

First, we need a “fake” SMTP server, so that the camera will think that it is talking to a real one and attempt to send an actual email. When we receive the request to send the email we’ll simply do something else. An idea would be to move the temporary file on the RAM drive to permanent storage, but first, we’ll see if we can do the fake SMTP server in a few lines of code.

Start by downloading and installing node.js. Node.js allows us to run JavaScript code, and to tap into a vast library of modules that we can use via npm (npm used to stand for “Node Package Manager”).

Assuming you’ve got node installed, we’ll open a command prompt and test that node is properly installed by entering this command:

node -v

You should now see the version number of node in the console window. If this worked, we can move on.

Let’s make a folder for our fake SMTP server first; let’s pretend you’ve made a folder called c:\shittynvr. In the command prompt, cd to that directory, and we’re ready to enter a few more commands.

We’re not going to write an entire fake SMTP server from scratch, instead, we’ll be using a library for node. The library is called simplesmtp. It is deprecated and has been superseded by something better, but it’ll work just fine for our purpose.

To get simplesmtp, we’ll enter this command in the prompt:

npm install simplesmtp

You should see the console download some stuff and spew out some warnings and messages; we’ll ignore those for now.

We now have node.js and the simplesmtp library, and we’re now ready to create our “event server”.

Create a text file called “smtp.js”, add this code to the file, and save it.

const smtp = require ( "simplesmtp");
smtp.createSimpleServer({SMTPBanner:"My NVR"}, function(req){
  req.pipe(process.stdout);
  req.accept();
  
  // we can do other stuff here!!!

}).listen(6789);
console.log ( "ready" );

We can now start our SMTP server, by typing

node smtp.js

Windows may ask you if you want to allow the server to open a port; if you want your camera to send events to your PC, you’ll need to approve. If you are using a different firewall of some sort, you’ll need to allow incoming traffic on port 6789.

We should now be ready to receive events via SMTP.

The server will run as long as you keep the console window open, or until you hit CTRL+C to stop it and return to the prompt.

The next step is to set up the camera to send emails when things happen. When you enter the SMTP setup for your camera, you’ll need to enter the IP address of your PC and specify the port 6789. How you set up your camera to send events via email varies with manufacturers, so consult your manual.

Here’s an example of the output I get when I use a Hikvision camera. I’ve set it up so that it sends emails when someone tries to access the camera with the wrong credentials; the raw SMTP traffic gets dumped straight to the console.

Next time, we’ll look at moving files from temporary RAM storage to disk.

World’s Shittiest NVR pt. 2.

In pt. 1 we set up FFmpeg to suck video out of your affordable Hikvision camera. I hope your significant other was more impressed with this feat than mine was.

The issue with writing constantly to the drive is that most of the time, nothing happens, so why even commit it to disk? It obviously depends on the application, but if you’re sure your wonderful VMS will not be stolen or suffer an outage at the time of a (real) incident, you can simply keep things in RAM.

So, how do we get FFmpeg to store in RAM? Well … Enter the wonderful world of the RAM disk.

ImDisk Virtual Disk Driver is a tool that allows us to set up a RAM drive. Once you’ve downloaded the tool, you can create a disk using this command:

imdisk -a -s 512M -m X: -p "/fs:ntfs /q /y"

Do you remember how I said that I had an x: drive? Total lie. It was a RAM drive the whole time!

The command shown creates a 512-megabyte NTFS drive backed by RAM. This means that if the computer shuts down (before committing to physical HDD) the data is gone. On the other hand, it’s insanely fast and it does not screw up your HDD.

When we restart FFmpeg, it will now think that it is writing to an HDD, but in reality, it’s just sticking the data into RAM. To the OS the RAM disk is a legit hard drive, so we can read/write/copy/move files to and from the disk.

In part 3, we’ll set up node.js to respond to events.

Oh, and here’s a handy guide to imdisk.

World’s Shittiest NVR pt. 1

In this tutorial, we’ll examine what it takes to create a very shitty NVR on a Windows machine. The UI will be very difficult to use, but it will support as many cameras as you’d like, for as long as you like.

The first thing we need to do is to download FFmpeg.

Do you have it installed?

OK, then we can move on.

Create a directory on a disk that has a few gigabytes to spare. On my system, I’ve decided that the x: drive is going to hold my video, so I’ve created a folder called “diynvr” on that drive.

Note the IP address of your camera, along with the make and model, and use Google to find the RTSP address of the camera streams. Many manufacturers (wisely) use a common format for all their cameras. Others do not. Use Google (or Bing if you’re crazy).

Axis:
rtsp://[camera-ip-address]/axis-media/media.amp

Hanwha (SUNAPI):
rtsp://[camera-ip-address]/profile[#]/media.smp

Some random Hikvision camera:
rtsp://[camera-ip-address]/Streaming/Channels/[#]

Now we’re almost ready for the world’s shittiest NVR.

The first thing we’ll do is to open a command prompt (I warned you that this was shitty). We’ll then CD into the directory where you’ve placed the FFmpeg files (just to make it a bit easier to type out the commands).

And now – with a single line, we can make a very, very shitty NVR (we’ll make it marginally better at a later time, but it will still be shit).

ffmpeg -i rtsp://[...your rtsp url from google goes here...] -c:v copy -f segment -segment_time 10 -segment_wrap 10 x:/diynvr/%d.mp4

So, what is going on here?

We tell FFmpeg to pull video from the address, that’s this part

-i rtsp://[...your rtsp url from google goes here...]

we then tell FFmpeg not to do anything with the video format (i.e. keep it H.264, don’t mess with it):

-c:v copy

FFmpeg should save the data in segments, with a duration of 10 seconds, and wrap around after 10 segments (100 seconds in all)

-f segment -segment_time 10 -segment_wrap 10

It should be fairly obvious how you change the segment duration and number of segments in the database so that you can do something a little more useful than having just 100 seconds of video.

And finally, store the files in mp4 containers at this location:

 x:/diynvr/%d.mp4

the %d part means that the segment number is used as the filename, so we’ll get files named 0.mp4, 1.mp4 … up to and including 9.mp4.

So, now we have a little circular buffer with 100 seconds of video. If anyone breaks into my house, I just need to scour through the files to find the swine. I can open the files using any video player that can play mp4 files. I use VLC, but you might prefer something else. Each file is 10 seconds long, and with thumbnails, in Explorer, you have a nice little overview of the entire “database”.

In the next part we will improve on these two things:

  • Everything gets stored
  • Constant writing to HDD

Oh, and if your camera is password protected (it should be), you can pass in the credentials in the rtsp url like so:

rtsp://[username]:[password]@[ the relevant url for your camera]


Random Ramblings on RTSP

It stands for “Real Time Streaming Protocol” and it is pretty much the de-facto protocol for IP security cameras. It’s based on a “pull” principle; anyone who wants to get the feed must ask for it first.

Part of the RTSP protocol describes how the camera and the client exchange information about how the camera sends its data. You might assume that the video is sent back via the same socket as the one used for the RTSP negotiation. This is not the case.

So, in the bog standard usage, the client will have to set up ports that it will use to receive the data from the camera. At this point, you could say that the client actually becomes a server, as it is now listening on two different ports. If you were to capture the communication, you might see something like this:

C->S: 
 SETUP rtsp://example.com/media.mp4/streamid=0 RTSP/1.0
 CSeq: 3
 Transport: RTP/AVP;unicast;client_port=8000-8001

S->C: 
  RTSP/1.0 200 OK
 CSeq: 3
 Transport: RTP/AVP;unicast;client_port=8000-8001;server_port=9000-9001;ssrc=1234ABCD
 Session: 12345678

If both devices are on the same LAN (and you tolerate that the client app opens two new ports in the firewall of the PC), the camera will start sending UDP packets to those two ports. The camera has no idea if the packets arrive or not (it’s UDP), so it’ll just keep spewing those packets until the client tears down the connection or the camera fails to receive a keep-alive message (usually just a dummy request via the RTSP channel).

But what if they’re not on the same network?

My VLC player on my home network may open up port 8000 and 8001 locally on my PC, but the firewall in my router has no idea this happened (there are rare exceptions to this). So the VLC player says “hey, camera out there on the internet, just send video to me on port 8000”, but that isn’t going to work because my external firewall will filter out those packets.

To solve this issue, we have RTSP over TCP.

RTSP over TCP (interleaved mode) sends the video back through the same connection the client already opened for the RTSP negotiation, so no extra ports are needed. Most firewalls have no issue accepting data via a connection as long as it was initiated from the inside (UDP hole punching takes advantage of this). RTSP over HTTP is an additional layer to handle hysterical system admins who only tolerate “safe” HTTP traffic through their firewalls.
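In interleaved mode, the SETUP request simply asks for the RTP data to come back over the existing RTSP connection instead of over fresh UDP ports. Using the same notation as the earlier capture, the exchange might look like this:

```
C->S:
 SETUP rtsp://example.com/media.mp4/streamid=0 RTSP/1.0
 CSeq: 3
 Transport: RTP/AVP/TCP;unicast;interleaved=0-1

S->C:
 RTSP/1.0 200 OK
 CSeq: 3
 Transport: RTP/AVP/TCP;unicast;interleaved=0-1
 Session: 12345678
```

The interleaved=0-1 channels carry RTP and RTCP respectively, framed inside the TCP stream, so nothing new has to get through the firewall.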

So, is that the only reason to use TCP?

Well… Hopefully, you know that UDP is unacknowledged; the sender has no idea if the client received the packet or not. This is useful for broad- and multicasting, where getting acks would be impossible. So the server happily spams the network, without a care in the world whether the packets make it or not.

TCP has ack and retransmission; in other words, the sender knows if the receiver is getting the data and will re-transmit if packets were dropped.

Now, imagine that we have two people sitting in a quiet meeting room, one of them reads a page of text to the other. The guy reading is the camera and the guy listening is the recorder (or VLC player).

So the reader starts reading “It was the best __ times, it was ___ worst of times“, and since this is UDP, the listener is not allowed to say “can you repeat that“. Instead, the listener simply has to make do. Since there’s not a lot of entropy in the message, we can make sense of the text even if we drop a few words here and there.

But imagine we have 10 people reading at the same time. Will the listener be able to make sense of what is being said? What about 100 readers? While this is a simplified model, this is what happens when you run every camera on UDP.

Using TCP, you kinda have the listener going “got it, got it, got it” as the reader works his way down the page. If the room is quiet, the listener will rarely have to say “can you repeat that”. In other words, the transmission will be just as fast as UDP.

If you have 10 readers, and some of them are not getting the “got it” message, they may decide to scale back a bit and read a bit more slowly. In the end, though, the listener will have a verbatim copy of what was being read, even if there are 1000 readers.

Modern video codecs are extremely efficient; H.264 and H.265 throw away almost everything that is not needed (and then some). This means that if you drop packets the impact is much greater: without the missing packets, all you get is a gray blur, because that is all the receiver heard when 100 idiots were yelling on top of each other.

So TCP solves the firewall issue, and in a “quiet” environment it is just as efficient as UDP. In a noisy environment, it will slow things down because of retransmissions, but isn’t that a lot better than getting a blurry mess? Would it not be better if the cameras were able to adjust their settings when the receiver can’t keep up? Isn’t it better than spewing a huge amount of HD video packets into the void, never to be seen?

In my opinion, for IP surveillance installations, you should pick RTSP over TCP, and only switch to UDP if you don’t care about losing video.

As an experiment, I set up my phone to blast UDP packets to my server to determine the packet loss on LTE, assuming it would be massive. It turns out that LTE actually has retransmission of packets at the radio layer (at least for data packets); I don’t know if it does the same for voice data.

The difference may be academic for a lot of installations as the network is actually pretty quiet, but for large/poorly architected solutions it may make a real difference.


Docker

I recently decided to take a closer look at Docker. Many of us have probably used virtualization at one point or other. I used Parallels on OSX some time ago to run Windows w/o having to reboot. Then there’s Virtualbox, VMWare, and Hyper-V that all allow you to run an OS inside another.

Docker does virtualization a bit differently.

The problem

When a piece of software has been created, it eventually needs to be deployed. If it’s an internal tool, the dev or a support tech will boot the machine, hunch over the keyboard and download and install a myriad of cryptic libraries and modules. Eventually, you’ll be able to start the wonderful new application the R&D dept. created.

Sometimes you don’t have that “luxury”, and you’ll need to create an automated (or at least semi-automated) installer. The end user will obtain it, usually via a download and then proceed to install it on their OS (often Windows) by running the setup.exe file.

The software often uses 3rd-party services and libraries that can cause the installer to balloon to several gigabytes. Sometimes the application needs a SQL Server installation, but that, in turn, is dependent on some version of something else, and all of that junk needs to be packaged into the installer file – “just in case” the end user doesn’t have it installed.

A solution?

Docker offers something a bit more extreme: there’s no installer (other than Docker itself). Once Docker is running, you run a fully functional OS image with all the dependencies installed. As absurd as this may sound, it actually leads to much smaller “installers”, and it ensures that you don’t have issues with services that don’t play nice with each other. A Debian image takes up about 100 MB (there’s a slim version at 55 MB).

My only quibble with Docker is that it requires Hyper-V on Windows (which means you need a Windows Pro install, AFAIK). Once you install Hyper-V, it’s bye-bye VirtualBox. Fortunately, it is almost trivial to convert your old VirtualBox VM disks to Hyper-V.

There are a couple of things I wish were a bit easier to figure out. I wanted to get a hello-world type app running (a compiled app). This is relatively simple to do, once you know how.

How?

To start a container (with Windows as host), open PowerShell, and type

docker run -it [name of preferred OS]

You can find images on Docker Hub, or just google them. If this is the first time you run an instance of the OS, Docker will attempt to download the image. This can take some time (but once it is installed, it will spawn a new OS in a second or so). Eventually, you’ll get a shell (that’s what the -it does), and you can use your usual setup commands (apt-get install …, make folders, and so on).

When you’re done, you can type “exit”.

The next step is important, because the next time you enter “docker run -it [OS]” all your changes will have disappeared! To keep your changes, you need to commit them. To save your changes, start by entering this command:

docker ps -a

You’ll then see a list of your containers, each with a Container ID.

Take note of the Container ID, and enter

docker commit [Container ID] [image name]:[image tag]

For example

docker commit bbd096b363af debian_1/test:v1

If you take a look at the image list

docker images

You’ll see that debian_1/test:v1 is among them (along with the HDD use for that image).

You can now run the image with your changes by entering

docker run -it [image name]:[image tag]

You’ll have to do a commit every single time you want to save your changes. This can be a blessing (like a giant OS-based undo) and a curse (forgot to save before quitting), but it’s not something the end user would/should care about.

You can also save the docker image to a file, and copy it to another machine and run it there.
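The save/load pair does exactly that. A sketch, reusing the image name from the example above (the tar filename is made up):

```shell
# Write the image, all layers included, to a tar file ...
docker save -o debian_test_v1.tar debian_1/test:v1

# ... copy the tar to the other machine, then restore it there:
docker load -i debian_test_v1.tar
```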

Granted, “Dockerization” and virtualization as a foundation is not for everyone, but for deployments that are handled by tech-savvy integrators it might be very suitable.

It sits somewhere between an appliance and traditional installed software. It’s not as simple as the former, not as problematic as the latter.

Safe, Easy, Advanced

You can only pick 2 though.

Admitting mistakes is hard; it’s so hard that people will pay good money just to be told that they are not to blame for the mistake. That someone else is responsible for their stupidity. And sometimes they’re right, sometimes not.

Anton Yelchin was only 27 when he died; he left his car in neutral on an incline, the car started rolling, and it killed him. Since it would be unbearable to accept that Anton simply made a mistake, lawsuits were filed.

Another plaintiff claimed that the lever was too complex for people to operate, and that the manufacturer was therefore liable for the damage that occurs when people don’t operate it correctly. The car had rolled over her foot, and while there were no broken bones, she was now experiencing “escalating pains” and demanded reparations. One argument was that the car did not have the same feature as a more expensive BMW.

Tragically, every year more than 30 kids are forgotten in cars and die. When I bring this up with people, everyone says “it won’t ever happen to us”, and so there’s zero motivation to spend extra on such a precaution. The manufacturers know this, and since there’s also liability risk, they are not offering it. So, every year, kids bake to death in cars. It’s a gruesome fate for the kids, but the parents will never recover either.

Is it wrong to “blame the victim”?

I think the word “blame” has too many negative connotations attached to be useful in this context. Did the person’s action/inaction cause the outcome? If the answer is a resounding yes, then sure… we can say that we “blame the victim”.

It’s obviously a gray area. If a car manufacturer decides that P should mean neutral and N should mean park, and writes about this in their manual and tells the customers as they sign the contract, then I wouldn’t blame an operator for making the mistake. The question is – would a person of “normal intelligence” be more than likely to make the same mistake?

In our industry, not only are we moving the goalposts of what “normal intelligence” means; some of the most hysterical actors are using the bad practices of the layman to argue that the equipment, therefore, can’t be used by professionals.

It’s like arguing that no-one should drive 18-wheelers because random people bought modified trucks at suspect prices on a grey market and then went ahead and caused problems for a lot of people.

As professionals, we’re expected to have “higher intelligence” when it comes to handling the equipment. You can’t call yourself “professional” if you have to rely on some hack and his gang to educate you online or through their “university”. And you sure as hell can’t dismiss the usability of a device based on what random amateurs do with it.

So what gives? You have a bunch of people who act like amateurs but feel like “professionals” because they are paying good money for this industry’s equivalent of 4chan and got paid to install a few units here and there.

It seems to me that the hysterical chicken-littles of this industry are conflating their own audiences with what actual professionals are experiencing. E.g. if someone suggests using a non-standard port to “protect their installation”, then you know that the guy is not professional (doesn’t mean he’s not paid, just means he’s not competent).

And that’s at the core of this debacle: people that are incompetent, feel entitled to be called professionals, and when they make mistakes that pros would never make, it’s the fault of the equipment and it’s not suitable for professionals either.

So, as I’ve stated numerous times, I have a Hikvision and an Axis camera sitting here on my desk. Both have default admin passwords, yet I have not been the victim of any hack – ever. The Hikvision camera has decent optics (for a surveillance camera) and provides an acceptable image at a much lower cost than the “more secure” option.

And I’ll agree that getting video from that camera to my phone, sans VPN, is not “easy” for the layman. But it doesn’t have to be. It just has to be easy for the thousands of competent integrators who know what to do, and more importantly, what not to do.

That said, the PoC of the Hikvision authentication bypass string should cause heads to roll in Hikvision’s (and Dahua’s) R&D departments. Either there was no code review (bad) or there was, and they ignored it (even worse). There’s just no excuse for that kind of crap to be present in the code. Certainly not in this day and age.