Building a New Thing

Take a look at this drawing.

(rough architectural sketch)

This is an architectural sketch, and it looks as if it was drawn hastily, without much thought.

What you’re not seeing is the other 100 drawings that were discarded. You’re not seeing the light-bulb filaments that didn’t work.

Now take a look at this,

(detailed architectural rendering of a Nordea building)

This is a more refined image (of a different building), and it probably took longer to produce than the sketch.

If you’re building the same building over and over, then you’ll use the #2 drawing and just tweak it a little here and there. If there’s an issue with the ventilation, you’ll create a case, assign it to someone, and then track its progress until you can eventually mark it as “done”.

But if you’re building a new thing, you gotta start with #1. You cannot afford the cost of pretty and detailed drawings when you’re going through 100 different designs and concepts. You can’t use Jira for phase #1; it’s too slow and too cumbersome, just as you won’t use AutoCAD to draw concept sketches. Pen and paper are 100x faster, and you’ll need that speed to go through 100 concepts.

Sadly, what often happens is that the architect shows his sketches to people who do not understand the process, and they’re underwhelmed. They expect the #2 drawing, but demand the agility and speed of the #1 process.

This leads to a situation where just 2 or 3 concepts are tried (or maybe they just go with one), and because the concept phase is now expensive, there’s a tendency to stick with what we’ve got, even if it’s sub-par and doesn’t spark any joy.

A good architect’s sketches are anchored in reality, yet produce remarkable buildings that are pleasant to look at and live around. Bad architects produce ideas that aren’t feasible to actually build or – perhaps even worse – design buildings based solely on knowledge of the technology, with no empathy for or understanding of human nature.

You’re going to need detailed drawings, but not until you’ve done 100 sketches.

Hyperdistributed IP video

Verkada and Meraki are freaking everyone out. They’re selling direct, and they don’t respect the hierarchy. They’re seemingly attracting big $$$ from VCs, and they’re landing million-dollar deals at a fairly impressive clip.

These spry Silicon Valley kids with their TypeScript, Kubernetes, hoodies and sandals don’t give a damn about the gospel we’ve long established as the only truth under the sun.

The jury is still out on their approach. Right now, to me, it smells a lot like snake-oil. There’s a distinct aura of petroleum coming from the flask the energetic salesman is offering. I’ll let someone else drink the tonic and see how it goes before I’m willing to ingest this swill. But, make no mistake – they’re on to something, and they are probably right to taunt the rest of the industry.

So, why is the hyperdistributed model so good?

Let’s say you have a campus and need 500 cameras. With a traditional solution, those 500 cameras would be streaming – non-stop – to a central location (usually a server room). There you’d have a very expensive storage solution, typically with some sort of redundancy. Along with the storage, you’d need a few PCs whose main objective is simply to read from the cameras and store the feeds for later review. A review that 99% of the time never happens. Video is compressed, sent across the network, stored on a disk, and then deleted when it gets too old. If the data center goes down, it takes ALL recordings with it, and you’re basically blind. You can then install redundant data centers (certainly not free!), which brings you a little closer to a distributed model.
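To make the bandwidth-and-storage cost concrete, here’s a back-of-envelope sketch; the per-camera bitrate and retention period are assumptions for illustration, not numbers from any vendor:

```python
# Back-of-envelope storage estimate for a centralized VMS.
# All figures below are illustrative assumptions, not vendor specs.

CAMERAS = 500
BITRATE_MBPS = 4          # per-camera stream bitrate, assumed
RETENTION_DAYS = 30       # assumed retention policy

SECONDS_PER_DAY = 86_400

def storage_terabytes(cameras, bitrate_mbps, days):
    """Raw storage needed, ignoring redundancy and overhead."""
    total_megabits = cameras * bitrate_mbps * SECONDS_PER_DAY * days
    return total_megabits / 8 / 1_000_000  # megabits -> megabytes -> terabytes

print(f"{storage_terabytes(CAMERAS, BITRATE_MBPS, RETENTION_DAYS):.0f} TB")
# -> 648 TB, before you even talk about redundancy
```

Over half a petabyte for one campus, under fairly modest assumptions – which is why the central storage line item dominates these projects.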

It might be that you have an area that doesn’t have great WiFi network coverage, but you might have cellular access. Do you then establish a mesh-network all the way to the shed? Or do you let the camera stream 24/7 across the LTE network? Both are costly, and the cost is sunk if you later decide to move the camera.

I see two major problems with the hyperdistributed “Veraki” solutions: 1) if the camera is damaged, the evidence is gone, and 2) if you go that route, you’re locked in with one vendor or the other.

The first problem is simple enough. Veraki could simply offer a box that physically separates the imager from the storage via a cable, so that the recording is better protected and doesn’t disappear with the destruction of the camera.

The other is harder, because establishing a standard will almost immediately cause commoditization and ultimately loss of margins. I don’t think that’s a solvable problem for them; the hypothetical risk is that someone else actually establishes a standard for this sort of thing. Knowing the industry, that’s not going to happen in the next 10 years.

I don’t see the lack of camera settings, synchronized playback and so on as a long-term problem. Currently, the idea is that all things should be web-based, but they’ll soon learn that the web was not made for video surveillance. If they’ve got a good database design, it shouldn’t be a problem to write a native app that offers a more feature-rich experience for those who crave it. Spotify, for example, requires an installed app on the PC to play; the web site only offers account management.

If memory serves, Milestone actually worked on putting a lightweight version of their recorder inside cameras, and I think they signed some partners, but did it ever go anywhere? Such a solution would be strongly preferred, as “anyone” can communicate with a Milestone VMS and even write a client from scratch based on it.

Some companies will respond to this new threat by digging themselves deeper into a hole, while claiming to be reaching for the stars.

The (Real) Problem With Cybersecurity

Having been in the sausage factory for a long time, I’d like to share some thoughts about what I think is a problem when it comes to cybersecurity.

Contrary to popular belief, programmers are humans; we make stupid mistakes, some big, some small. Some days we are lazy; others we are energetic and highly motivated. So (exploitable) bugs inevitably creep in. This is just a fact of life.

The first step in writing robust and secure code is in the design and architecture. The next step is to have developers with good habits and skills, and finally you run a good selection of automated tests on the modules that make up the product.

But consider that a VMS typically runs on an OS that the vendor has very little control over, or uses a database (SQL Server, MySQL, MaxDB, etc.) that is also outside the manufacturer’s reach. Furthermore, the VMS itself uses libraries from 3rd parties, and with good reason too. Open source libraries are often much better tested and under an extreme amount of scrutiny compared to a homemade concoction reviewed by just a few peers, but they too have bugs (just fewer than the homebrew stuff, usually).

Inevitably, someone finds a way to break the system, and when it happens, it’s a binary event. The product is now insecure. You can argue all you want that the other windows and doors are super-secure, but if the back door is open – who cares about the lock on the window?

To be fair, if the rest of the building is locked down well, then fixing the broken door may be a smaller event.

Contrast this with a system that is insecure by design, where fixing the security issues requires changes to the architecture. We’re no longer talking about replacing a broken lock, but about upheaval of the entire foundation. An end-user doesn’t know if the cracks are due to a fundamental issue, or something that just needs a bit of plaster and paint.

And this brings me to the real issue.

Say a developer politely demands that resources are allocated to fixing these issues. What do you imagine will happen? In some companies, I assume, a task-force is assembled to estimate the severity of the issue, and resources are then allocated to fix it. A statement is issued so that people know to apply the patch (they’re not going to do it, but it’s the right thing to do). This is what a healthy company ought to do. A sick company would make the following statement: “no-one has complained about this issue, and – actually – we have to make money”.

A good way to make yourself unpopular (as a programmer) is to respond that if the issue IS discovered, you can forget about making any money. Your market will be limited to installations that really don’t care about security. The local Jiffy-Lube that replaced their VHS-based recorder with a DVR that just sits on a dusty shelf may truly not care. The system is not exposed in any way – it is a CCTV (Closed being the operative word here). They’re fine. And the root password is written on a post-it note stuck to the monitor. But what about a power plant? What about a bank? An airport?

You might imagine that an honest coder with integrity would resign on the spot, but this doesn’t solve the problem. Employees are often gagged by NDAs and non-disparagement clauses, and while disclosure of security flaws is arguably protected by the First Amendment, it is generally a bad idea to talk about these things. The company may suffer heavy losses, and you are putting (unsuspecting) customers at risk by making these things public. The threat of legal action and the asymmetry (a single person vs. a corporation) ensure that flaws rarely surface.

It’s also conceivable that the dumbass programmer is wrong about the risk of a bug or design issue. A developer may think that a trivial bypass of privilege checks is “dangerous”, but customers might genuinely not care.

Who knows? During the Black Hat convention in 2013, IP cameras from different manufacturers were shown to be hopelessly unsafe. Didn’t seem to make any difference.

I referenced this talk in an earlier post as well.

4 years later, cybersecurity is all the rage, and perhaps people do care – but from what I can tell, it’s just a few SJWs who crave the spotlight and pretend to care. Whether the crazy accusations have merit is irrelevant; all that matters is that viewers tune in, and the show will get increasingly grotesque to keep people entertained. And if the freak-show is not bringing in the crowds, you can always turn it into a sort of “anonymous Facebook” where people can back-stab each other – like the bitchiest teenage girls used to treat each other.

What the industry probably needs to do is pay professional penetration testers to go to work on the systems out there. I’m not talking about the kind of shitty automated tests that are being done today – they are far, far from sufficient. You need people like Craig Heffner in the video to go to town and get to the bottom of things.

Happy hacking.

InfluxDB and Grafana


When buzzards are picking at your eyes, maybe it’s time to move a little. Do a little meandering, and you might discover that the world is larger, and more fun, than you imagined. Perhaps you realize that what was once a thriving oasis has now turned into a putrid cesspool riddled with parasites.

InfluxDB is a time-series database: it collects samples over time. Once a sample is collected, it doesn’t change. Eventually the sample gets too old and is discarded. This is different from traditional databases, where values may change over time, and where deletion of records is not normally based on age.

This sounds familiar doesn’t it?

Now, you probably want to draw some sort of timeline, or graph, that represents the values you popped into InfluxDB. Enter Grafana: a dashboard designer that can interface with InfluxDB (and other databases too) and show pretty graphs and tables in a web page without requiring any HTML/JavaScript coding.

If you want to test this wonderful combination of software, you’ll probably want to run Docker, and visit this link.

Now, I’ve already abandoned the idea of using InfluxDB/Grafana for the kind of stuff I mess around with. InfluxDB’s strength is that it can return a condensed dataset over a potentially large time-range. And it can make fast and semi-complex computations over the samples it returns (usually of the statistical kind). But the kind of timeline information I usually record is not complex at all, and there aren’t really any additional calculations I can do over the data. E.g. what’s the average of “failed to connect” and “retention policy set to 10 days”.
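As an illustration of that strength, here’s a minimal sketch of asking InfluxDB’s 1.x HTTP API for a condensed, aggregated view; the host, database, measurement, and field names are all assumptions for illustration:

```python
# Sketch: querying InfluxDB's 1.x HTTP API for aggregated statistics.
# The host, database, measurement, and field names are assumptions.
import urllib.parse
import urllib.request

def mean_query(measurement, field, window):
    """Build an InfluxQL query averaging `field` into `window`-sized buckets."""
    return (f"SELECT MEAN({field}) FROM {measurement} "
            f"WHERE time > now() - 1h GROUP BY time({window})")

def run_query(db, q, host="http://localhost:8086"):
    """Send the query; requires a running InfluxDB instance."""
    url = f"{host}/query?" + urllib.parse.urlencode({"db": db, "q": q})
    with urllib.request.urlopen(url) as resp:
        return resp.read()

q = mean_query("cpu", "usage", "5m")
# e.g. run_query("telegraf", q) against a live instance
```

One line of InfluxQL condenses an hour of raw samples into twelve averaged buckets – great for CPU load, useless for “failed to connect”.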

InfluxDB is also schema-less. You don’t need to do any pre-configuration (other than creating your database), so if you suddenly feel the urge to create a measurement called “dunning”, you just insert some data into “dunning”. You don’t need to define columns or their types; you just insert data.

And you can do this via a standard HTTP call, so you can use curl on the command line, or libcurl in your C++ app (which is what I did).
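A minimal sketch of such an insert, using only the standard library and InfluxDB’s 1.x write endpoint; the “dunning” measurement, field, and tag names are made up for illustration:

```python
# Sketch: schema-less insert via InfluxDB's 1.x HTTP write API.
# Measurement, field, and tag names below are made up for illustration.
import urllib.request

def line_protocol(measurement, fields, tags=None):
    """Format one point in InfluxDB line protocol: measurement,tags fields."""
    tag_str = "".join(f",{k}={v}" for k, v in (tags or {}).items())
    field_str = ",".join(f"{k}={v}" for k, v in fields.items())
    return f"{measurement}{tag_str} {field_str}"

def write_point(db, line, host="http://localhost:8086"):
    """POST one line-protocol point; requires a running InfluxDB."""
    req = urllib.request.Request(f"{host}/write?db={db}", data=line.encode())
    urllib.request.urlopen(req)

print(line_protocol("dunning", {"value": 42}, {"host": "cam01"}))
# -> dunning,host=cam01 value=42
```

No schema, no table definition – the first write creates the measurement.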

The idea that you can issue a single command to do a full install of InfluxDB and Grafana, and then have it consume data from your own little app in about the time it takes to ingest a cup of coffee says a lot about where we’re headed.

Contrast the “open platforms” that require you to sign an NDA, download SDKs, compile DLLs, test on 7 different versions of the server and still have to nurse it every time there’s a new version. Those systems will be around for a long time, but I think it’s safe to say they’re way past their prime.


Lies, Damn Lies and Video Analytics

Today, object tracking using OpenCV can be done in just a few hours. The same applies to face detection and YOLO. Object tracking and recognition is no longer “magic” and doesn’t require custom hardware. Most coders can whip something together in a day or two that will run on a laptop. Naturally, the research behind these algorithms is the work of some extremely clever guys who, commendably, are sharing their knowledge with the world (the YOLO license is legendary).

But there’s a catch.

During a test of YOLO, it showed me a couple of boxes: one around a face, which YOLO was about 51% certain was a person, and another around my sock, which it was 54% sure was also a person. But there was another face in the frame that was not identified at all.

It’s surprising and very cool that an algorithm can recognize a spoon on the table. But when the algorithm thinks a sock is a person and a face isn’t one, are you going to actually make tactical decisions in a security system based on it?
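To see why this matters, here’s a tiny sketch – with made-up scores mirroring the sock/face anecdote above, not output from any real model – of what happens when you act on near-coin-flip confidences:

```python
# Made-up detection scores mirroring the sock/face anecdote above;
# this is an illustration, not output from a real model.

def confident_people(detections, threshold=0.5):
    """Keep detections labeled 'person' at or above `threshold`."""
    return [d for d in detections
            if d["label"] == "person" and d["score"] >= threshold]

detections = [
    {"label": "person", "score": 0.51, "truth": "face"},
    {"label": "person", "score": 0.54, "truth": "sock"},
    # the second real face never produced a 'person' detection at all
]

hits = confident_people(detections)
# Both detections clear the bar, but one is a sock -- and no threshold,
# however clever, can recover the face the model never proposed at all.
```

Tuning the threshold only trades false alarms for misses; it can’t manufacture the detection that never happened.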

Charlatans will always make egregious claims about what the technology can do, and gullible consumers and government agencies are being sold a dream that eventually turns out to be a nightmare.

Recently I saw a commercial where a “journalist” was interviewing a vendor about their analytics software (it wasn’t JH). Example footage was shown of a terrorist unpacking a gun, and opening fire down the street. This took place in your typical corner store in a middle eastern country. The video systems in these stores are almost always pretty awful, bad cameras, heavy compression.

(heavily compressed, low-quality CCTV frame)

The claim being made in the advert is that their technology would be able to identify the terrorist and determine his path through the city in a few hours. A canned demo of the photographer walking through the offices of the vendor was offered as a demonstration of how easy and fast this could be done.

I call bullshit!

-village fool

First of all, most of the cameras on the path are going to be recording feed at similar quality to what you see above. This makes recognition a lot harder (useless/impossible?).

Second, if you’re not running object tracking while you are recording, you’ll need to process all the recorded video. Considering that there might be thousands of cameras, recorded on different equipment recording in different formats, the task of doing the tracking on the recorded video is going to take some time.
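A rough sketch of that math – the camera count, analysis speed, and worker count are all assumptions pulled out of thin air:

```python
# Back-of-envelope: how long does post-hoc analysis of recorded video take?
# Every figure below is an assumption for illustration.

CAMERAS = 1_000        # cameras along the suspect's possible path
HOURS_EACH = 24        # footage to scan per camera
SPEEDUP = 10           # assumed: analysis runs at 10x realtime
WORKERS = 8            # assumed parallel analysis machines

def processing_hours(cameras, hours_each, speedup, workers):
    """Wall-clock hours to analyze all the footage."""
    return cameras * hours_each / speedup / workers

print(f"{processing_hours(CAMERAS, HOURS_EACH, SPEEDUP, WORKERS):.0f} hours")
# -> 300 hours, i.e. nearly two weeks -- not "a few hours"
```

And that’s before you’ve collected, transcoded, and normalized footage from dozens of incompatible recorders.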

Tracking a single person walking down a well lit hallway, with properly calibrated and high quality cameras is one thing. Doing it on a camera with low resolution, heavily compressed video, and a bad sensor on the street with lots of movement, overlaps, etc. is a totally different ballgame.

You don’t know anything about marketing!

-arbitrary marketing person, yelling at Morten

Sure, I understand that this sort of hyperbole is just how things are done in this business. You come up with things that sound fantastic and plausible to the uneducated user, and hope that it makes someone buy your stuff. And if your magical tool doesn’t work, then it’s probably too late – and who defines “works” anyway? If it can do it 20% of the time, then it “works”, doesn’t it? Like a car that can’t drive in the rain also “works”.

If you want to test this stuff, show up with real footage from your environment and demand a demo on that content (if the vendor/integrator can’t do it, they need to educate themselves!). Keep an eye on the CPU and GPU load, and ask if this will run on 300 cameras in your mall/airport without having to buy 100 new PCs with 3 top-of-the-line GPUs in them.

I’m not saying that it doesn’t ever work. I’m saying that my definition of “works” is probably more dogmatic than a lot of people in this industry.


Managing the Manager-less Process

Fred George has quite a resume – he’s been in the software industry since the 70’s and is still active. His 2017 talk at the GOTO conference is pure gold.

His breakdown of the role of the Business Analyst at 19:20 is spot on. His take on the role of the manager is even saltier (23:12) – “I am the God here”.

Well worth an hour of your life (mostly for coders).

As a side note, there are two characters in the Harry Potter movies called “Fred and George”, making searches for “Fred George” a pain.

Lambs to the Slaughter

When lambs are loaded on trucks to be sent to the slaughterhouse, I doubt the driver, the farmer, or basically anyone tells them where they’re going.

I wonder what the lambs are thinking.

If they could talk, some would probably say “we’re doomed”, and others would stomp on them and say “shut the hell up, can’t you see everyone is getting anxious” or “why can’t you be more positive”.

Maybe anxiety is nature’s way of telling you that something bad may be coming your way. If you’re in a rustling truck, driving far away from the farm, it’s appropriate to feel anxious. It’s telling you to be aware of what’s going on, and to think of an escape route.

The lambs see the butcher, but they don’t know what they’re looking at. The guy is not going to scream “I’M HERE TO KILL YOU ALL”, he’ll whisper reassuringly “come here little friend, follow me”.

Don’t listen to him.

Run away.