Category Archives: thoughts

Listening to Customers

In 2011, BlackBerry peaked at a little more than 50 million devices sold. The trajectory from 2007, when sales were around 10 million devices, was an impressive ~50% CAGR. I am sure the board and chiefs were pleased and expected the trend to continue. Had that CAGR been sustained, one might have expected sales of roughly 380 million devices in 2016 (50 million × 1.5^5). Even linear growth would have been fairly impressive.

Today, in 2017, BlackBerry commands a somewhat unimpressive 0.0% of the smartphone market.

There was also Nokia. The Finnish toilet-paper manufacturer pretty much shared the Scandinavian market with Ericsson and was incredibly popular in many other regions. If I recall correctly, they sold more devices than any other manufacturer in the world. But they were the McDonald’s of mobile phones: cheap and simple (nothing wrong with that per se). They did have some premium phones, but perhaps those were just too expensive, too clumsy or maybe too nerdy?


Talking on a Nokia N-Gage phone

Nokia cleverly tricked Microsoft into buying their phone business, and soon after, Microsoft gave up on that too (having been a contender in the early years with Windows CE/Mobile).

I am confident that BlackBerry was “listening to their customers”. But perhaps they weren’t listening to the market. Every single BlackBerry customer would state that they preferred the physical keyboard and the naive UI that BlackBerry offered. So why do things differently? Listen to your customers!

If BlackBerry were a consulting agency, then sure, do whatever the customer asks you to. If you’re selling hotdogs, and the customer asks for more sauerkraut, then add more sauerkraut, even if it seems revolting to you. But BlackBerry is not selling hotdogs or tailoring each device to each customer. They are making a commodity that goes in a box and is pulled off a shelf by someone in a nice shirt.

As the marginally attached customers are exposed to better choices (for them), they will opt for those, and in time, as the user base dwindles, you’re left with “fans”. Fans love the way you do things, but unless your fan base is growing, you’re faced with the very challenging task of adding things your fans may not like. Employees who bow but do not believe will leave, and eventually you’ll have a group of flat-earth preachers evangelizing to their dwindling flock.

It might work for a small, kooky company that makes an outsider device, but it sure cannot sustain the amount of junk that you tack on over the years. Eventually that junk will drag the company under.

Or perhaps BlackBerry was a popular hotdog stand in a town where people simply lost their appetite for hotdogs and developed a craving for juicy burgers and pizza (or strange hotdogs).


LinkedIn is Worse Than Facebook

I suddenly realized I was spending too much time on LinkedIn, and it dawned on me that LinkedIn is even worse than Facebook.

From time to time, people post virtue-signalling memes that tell other people not to let LinkedIn turn into Facebook. They want to keep LinkedIn “professional”. That makes me wonder: if your primary interaction with business partners is through LinkedIn, are you really a professional?

The feed that LinkedIn thinks I should look at has a few types of posts: politically correct trivialities, annoying riddles, links to wise words written by someone else, and outright ads and praise of yourself or the company you work for.

The ads (not paid ads, but companies hawking something via LinkedIn) are tolerable from my standpoint. It’s pretty easy to filter those out and move on to something with a little more substance. When I see someone saying “See why widget XYZ from SomeCompany is leading/helping/solving…” then you kinda know you don’t need to continue reading. If I see a post that starts with “visit us at…” I just move on. It’s not that I would recommend that the company I (still) work for stop posting these things, but I wonder who is genuinely impressed by them. It seems to me that this is a lot of preaching to the choir, with people – who most likely already know what you’re releasing – hitting “like” on a post that tells them nothing new.

I get pointers to good copy from Twitter, co-workers and friends, and from time to time there’s a good read on LinkedIn, but finding those feels like an online version of walking through a large bazaar looking like a gullible tourist, red-faced from too much sun, complete with selfie stick and tasteless clothing. Every single vendor grabs your arm, telling you about their wonderfully crafted pieces of shit. If you are willing to endure this torture, you might eventually find something worthwhile, but the chances are slim, and I am getting weary of wandering aimlessly around this crazy market.

LinkedIn is considered a “professional” network, i.e. a network of people who only want to engage with others if there’s money to be made. That means the posts are even more self-censored and manipulative than on Facebook, Instagram, Snapchat or what have you. Every word is carefully chosen; you remember to “like” posts, not because of their content, but because of who wrote them. You might even make a positive comment, like a quick kiss on the old sphincter: “Well done”, someone will say, when a CEO praises his own ability to turn an advantage in currency exchange into revenue growth.

Maybe, just maybe, it’s the business that I am in that is fouling up my LinkedIn feed. In any event, the remedy is quite simple: I really shouldn’t go there.

Re: 1-Star Review

Sometimes I make mistakes. This kinda, sorta feels like it, but not exactly (I make them so rarely that I am not sure what they are supposed to feel like).

An old friend and ex-colleague of mine dropped by yesterday, and although the topic discussed was not even remotely related to mobile apps, the discussion must have sown some sort of seed in my head that was then fully processed overnight and during my morning bike ride through the city.

It has to do with asymmetry, and I am hopeful that we can cook something up that isn’t a screwdriver-hammer-toaster.

Time will tell.

My One Star Review

We recently released a new mobile app for Android and iOS.

We promptly got a single 1-star review. It was not that the app was not performing smoothly (it’s a native app, smooth as silk), not that it crashed (it might have happened, but I don’t think it did), nor that it used too much CPU for transcoding (we don’t transcode in live mode, and we support all codecs). Nor was it about responsiveness; the user can handle incoming events within 10 seconds, and that includes grabbing your phone from your pocket. No, the issue was that it was not the same as the old app.

The Android app allows the user to stream video from his phone to the recorder, it shows cameras on Google Maps, and these features will soon be available in the iOS version as well. The app will pick the best matching stream profile for your device, but you can override this if you so desire. Hell, in the Android version you can even add your recorder by scanning a QR code!

We continue to strive to make an app that you will actually want to use in the real world. Hopefully it will be more than just a gimmick that you show your friends (like the remote control I have for rotating my TV – I used it 3 times).

Naturally, we will continue to add features to the app at a steady pace, but I do not want to jeopardize the user experience by shoveling in features that the technology just can’t support. Basically, if it is in the app, it should be a delight to use, and there should be no asterisks in the contract telling you that you need an octa-core Xeon processor to serve 2 or 3 clients simultaneously. To me, it is not good enough that it works in the lab. It has to work in the real world, on bad LTE, and on a cheap, low-power phone too. In fact, I used an LG G2 mini as my reference device; it is small and not very powerful, but if it runs smoothly on that, then it will run smoothly on just about anything.

I also do not want to shove so much shit into the UI that you can’t easily operate the app, and this is where things become really difficult. You have to decide if you want to be a screwdriver or a hammer. You cannot do both and deliver a good experience. Our mobile app is a screwdriver, but this guy wanted a hammer, so our screwdriver was “a huge step backwards”.

Eventually I will be beaten into submission and I will be ordered to make a hammer-screwdriver-toaster that no one really wants to use. It will check the appropriate boxes on a spec sheet though, and it will look impressive on a store shelf, where we will display videos that show the app running under perfect conditions with carefully selected streams and with the phone connected to a fast, dedicated network. You’ll then realize that in the real world, cameras fail to come up, the UI is hard to navigate and use, it’s slow and unresponsive, and then we might get a 2-star review. We can then bribe some people to give us 5-star reviews to up the average rating, because that’s how things are done.

I’ve written about the dangers of being too promiscuous for your own good before. It’s very tempting to add every single dish that anyone ever asks for to the menu, but it is a temptation best ignored. Find out who you are, and be loyal to it. Sure, it is easier said than done – why not add burgers, sushi and ramen noodles to the menu of your hotdog stand? How hard can it be? And aren’t we losing all those customers that crave burgers, sushi or ramen noodles? I think that people who want burgers will go to a place that makes good burgers, rather than go to your elaborate hotdog stand for a mediocre burger on a stale bun.

What if you are wrong about selling hotdogs? Well, if you are wrong, you go out of business fast, but what is the alternative? Is it better to limp along for ages, offering yourself up for all sorts of depraved activity, like some badly aged prostitute?

Codecs and Client Applications

4K and H.265 are coming along nicely, but they are also part of the problems we as developers are facing these days. Users want low latency, fluid video, low bandwidth and high resolution, and they want these things on 3 types of platforms – traditional PC applications, apps for mobile devices and tablets, and a web interface. In this article, I’d like to provide some insights from a developer’s perspective.

Fluid or Low Latency

Fluid and low latency at the same time is highly problematic. To the HLS guys, 1 second is “low latency”, but to us, and the WebRTC hackers, we are looking for latencies in the sub-100 ms area. Video surveillance doesn’t always need super low latency – if a fixed camera has 2 seconds of latency, that is rarely a problem (in the real world). But as soon as any kind of interaction takes place (PTZ or 2-way audio), you need low latency. Optical PTZ can “survive” high latency if you only support PTZ presets or simple movements, but tracking objects will be virtually impossible on a high-latency feed.

Why high latency?

A lot of the tech developed for video distribution is intended for recorded video, not for low-latency live video. The intent is to download a chunk of video, and while that plays, you download the next in the background. This happens over and over, but playback does not commence until at least the entire first chunk has been read off the wire. The chunks are usually 5-10 seconds in duration, which is not a problem when you’re watching Bob’s Burgers on Netflix.
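
To put the arithmetic in code form – this is a sketch of my own, with stub methods standing in for a real player – the latency floor of chunked playback is the chunk duration times the number of chunks you buffer:

using System;
using System.Threading;

// Sketch of chunk-based (HLS-style) playback. The stubs are hypothetical
// stand-ins for a real player; the point is the latency floor.
class ChunkedPlaybackSketch
{
    const int ChunkDurationMs = 5000; // typical HLS segment duration
    const int PrerollChunks = 2;      // players buffer ahead to avoid stalls

    static void Main()
    {
        // Nothing is shown until the first chunk(s) have been fully
        // downloaded, so the viewer is always at least this far behind:
        Console.WriteLine("Latency floor: {0} ms", ChunkDurationMs * PrerollChunks);

        while (true)
        {
            byte[] chunk = DownloadNextChunk(); // runs while the previous chunk plays
            Enqueue(chunk);
        }
    }

    static byte[] DownloadNextChunk()
    {
        Thread.Sleep(ChunkDurationMs); // stand-in for the actual download
        return new byte[0];
    }

    static void Enqueue(byte[] chunk) { /* hand to the decoder - out of scope */ }
}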

The lowest latency you can get is to simply draw the image when you receive it, but due to packetization and network latency, you’re not going to get the frames at a steady pace, which leads to stuttering – and stuttering is NOT acceptable when Bob’s Burgers is being streamed.
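
The usual compromise between those two extremes is a jitter buffer: hold frames briefly so the renderer can present them at a steady pace. Every millisecond of buffer depth buys smoothness and costs exactly that much latency. A minimal sketch of my own (not from any product):

using System;
using System.Collections.Concurrent;
using System.Threading;

// Jitter-buffer sketch: frames arrive at an uneven pace on a network
// thread; the render loop drains them at a steady ~30 fps. The pre-roll
// is the knob: deeper buffer = smoother video = higher latency.
class JitterBufferSketch
{
    static readonly BlockingCollection<byte[]> buffer = new BlockingCollection<byte[]>();

    static void Main()
    {
        new Thread(ReceiveFrames) { IsBackground = true }.Start();

        Thread.Sleep(100); // pre-roll: this IS the added latency.
                           // Draw-on-receive is this line set to zero.

        while (true)
        {
            byte[] frame = buffer.Take(); // blocks (i.e. stutters) only if drained
            Render(frame);
            Thread.Sleep(33);             // steady ~30 fps presentation
        }
    }

    static void ReceiveFrames()
    {
        var rng = new Random();
        while (true)
        {
            Thread.Sleep(rng.Next(5, 70)); // network jitter: arrival is NOT steady
            buffer.Add(new byte[0]);       // stand-in for a decoded frame
        }
    }

    static void Render(byte[] frame) { /* blit to screen - out of scope */ }
}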

How about WebRTC?

If you’ve ever used Google Hangouts, then you’ve probably used WebRTC. It works great when you have a good, low-latency, peer-to-peer connection. The peer-to-peer part is important, because part of the design is that the recipient can tell the sender to adjust its quality on demand. This is rarely feasible on a traditional IP camera, but it could eventually be implemented. WebRTC is built into some web browsers, and it supports H.264 by default, but not H.265 (AFAIK) or any of the legacy formats.

Transcoding

Can transcoding bridge the gap? Yes, and no. Transcoding comes at a rather steep price if you expect to use your system as if it ran without transcoding. The server has to decode every frame and then re-encode it in the proper format. Some vendors transcode to JPEG, which makes it easier for the client to handle, but puts a tremendous amount of stress on the server – not on the encoding side, but the decoding of all those streams is pretty costly. To limit the impact on the transcoding server, you may have to alter the UI to reflect the limitation in server-side resources.
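
To show where the cost lands, here’s a rough sketch of a server-side transcoding loop. The codec interfaces are hypothetical placeholders, not any vendor’s API; the takeaway is the per-stream, per-frame decode:

using System.Collections.Generic;

// Rough sketch of server-side transcoding. IDecoder/IEncoder are
// hypothetical placeholders; the point is the per-stream decode cost.
class TranscodeSketch
{
    public interface IDecoder { byte[] Decode(byte[] compressed); } // e.g. H.265 in
    public interface IEncoder { byte[] Encode(byte[] raw); }        // e.g. JPEG out

    public class StreamContext
    {
        public IDecoder Decoder;
        public IEncoder Encoder;
        public byte[] LatestFrame;
    }

    public static void PumpOneFrameEach(List<StreamContext> streams)
    {
        foreach (var s in streams)
        {
            byte[] raw = s.Decoder.Decode(s.LatestFrame); // the expensive part:
                                                          // a full decode per frame
            byte[] jpeg = s.Encoder.Encode(raw);          // comparatively cheap
            Send(jpeg);
        }
        // 16 cameras at 30 fps = 480 full decodes per second on the server,
        // which is why the UI may need to cap the number of visible streams.
    }

    static void Send(byte[] payload) { /* ship to the client - out of scope */ }
}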

Installed Applications

The trivial case is an installed application on a PC or a mobile device. Large install files are pretty annoying (and often unnecessary), but you can package all the application dependencies, and the developer can do pretty much anything they want. There’s usually a lot of disk space available and fairly large amounts of RAM.

On a mobile device you struggle with OS fragmentation (in the case of Android), but since you are writing an installed application, you are able to include all dependencies. The limitations are in computing power, storage, RAM and physical dimensions. The UI that works for a PC with a mouse is useless on a 5″ screen with a touch interface. The CPUs/GPUs are pretty powerful (for their size), but they are nowhere near the processing power of a halfway decent PC. The UI has to take this into consideration as well.

“Pure” Web Clients

One issue that I have come across a few times is that some people think the native app “uses a lot of resources”, while a web-based client would somehow, magically, use fewer resources to do the same job. The native app uses 90% of the CPU resources to decode video, and it does so a LOT more efficiently than a web client would ever be able to. So if you have a low-end PC, the solution is not to demand a web client, but to lower the number of cameras on-screen.

Let me make it clear: any web client that relies on an ActiveX component being downloaded and installed might as well have been an installed application. ActiveX controls are compiled binaries that only run on the intended platform (IE, Windows, x86 or x64). They are usually implicitly (sometimes explicitly) left behind on the machine, and can be instantiated and used as an attack vector if you can trick a user into visiting a malicious site (which is quite easy to accomplish).

The purpose of a web client is to allow someone to access the system from a random machine on the network, w/o having to install anything. Another advantage is that since there is no installer, there’s no need to constantly download and install upgrades every time you access your system. When you log in, you get the latest version of the “client”. Forget all about “better for low end” and “better performance”.

Technology

Java applets can be installed, but setting up Java for a browser is often a pain in the ass (security issues), and performance can be pretty bad.

Flash apps are problematic too, and suffer from the same issues as Java applets. Flash has a decent H.264 decoder for .flv-formatted streams, but no support for H.265 or legacy formats (unless you write the decoders from scratch… and good luck with that 🙂 ). Furthermore, everyone with a web browser in their portfolio is trying to kill Flash due to its many problems.

NPAPI and other native plugin frameworks (NaCl, Pepper) did offer decent performance, but usually only worked in one or two browsers (Chrome or Firefox), and Chrome later removed support for NPAPI.

HTML5 offers the <video> tag, which can be used for “live” video – just not low-latency video, and codec support is limited.

JavaScript performance is now at a point (in the leading browsers) where you can write a full decoder for just about any format you can think of and get reasonable performance for 1 or 2 720p streams on a modern PC.

Conclusion

To get broad client-side support (that truly works), you have to make compromises on the supported-devices side. You cannot support every legacy codec and device and expect to get a decent client-side experience on every platform.

As a general rule, I disregard arguments that “if it doesn’t work with X, then it is useless”. Too often, this argument gains traction, and to satisfy X, we sacrifice Y. I would rather support Y 100% if Y makes more sense. I’d rather serve 3 good dishes than 10 bad ones. But in this industry, it seems that 6000 shitty dishes at an expensive “restaurant” is what people want. I don’t.

Ryzen

YouTubers are disappointed with Ryzen. They expected it to crush Intel in every single benchmark, and I had hoped that it would too. What was I thinking?

The problem AMD has is that a lot of people seem to think you can get a baby faster if you just add more women.

I’ve been watching a lot of indie coders do a lot of single-loop coding, which obviously will not scale across several cores. They are obsessed with avoiding L1/L2 cache misses, which is fine, but at the same time they rarely talk about multi-threading. Some of the benchmarks I have seen leave the CPU at 30% utilization across all cores, which means there’s a lot of untapped potential.

Games with lots of autonomous and complex entities should scale quite well – if the coder is not spending all his time organizing his data in a way that makes little sense on a multi-core system, and is willing to shed the dogma that threads are bad.
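
For the record, the scalable version isn’t exotic. A sketch of my own (and simplified) of a per-entity update that spreads across however many cores you have, assuming the entities don’t touch shared state:

using System.Threading.Tasks;

class EntityUpdateSketch
{
    struct Entity { public float X, Y, VX, VY; }

    // Each entity is independent, so the runtime can split the range
    // across all cores. The equivalent sequential for-loop is what leaves
    // a 16-thread CPU idling at the 30% utilization mentioned above.
    static void Update(Entity[] entities, float dt)
    {
        Parallel.For(0, entities.Length, i =>
        {
            entities[i].X += entities[i].VX * dt;
            entities[i].Y += entities[i].VY * dt;
        });
    }
}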

I am not replacing my 3770K though. I was very tempted to get something that could substantially reduce compilation times, but I spend <1% of my time on compilation, so even a massive improvement there would not really improve my overall productivity. And I am not thrilled about having to buy new RAM once again…

Marketing Technology

I recently saw a fun post on LinkedIn. Milestone Systems was bragging about how they have added GPU acceleration to their VMS, but the accompanying picture was from a different VMS vendor. My curiosity got the better of me, and I decided to look for the original press release. The image was gone, but the text is bad enough.

Let’s examine:

Pioneering Hardware Acceleration
In the latest XProtect releases, Milestone has harvested the advantages of the close relationships with Intel and Microsoft by implementing hardware acceleration. The processor-intensive task of decoding (rendering) video is offloaded to the dedicated graphics system (GPU) inside the processer [sic], leaving the main processor free to take on other tasks. The GPU is optimized to handle computer graphics and video, meaning these tasks will be greatly accelerated. Using the technology in servers can save even more expensive computer muscle.

“Pioneering” means that you do something before other people. OnSSI did GPU acceleration in version 1.0 of Ocularis, which is now 8 or 9 years old. Even the very old NetSwitcher app used DirectX for fast YUV conversion. Avigilon has been doing GPU acceleration for a while too, and I suspect others have as well. The only “pioneering” here is how far you can stretch the bullshit.

Furthermore, Milestone apparently needs a “close relationship” with Microsoft and Intel to use standard, publicly available Quick Sync tech. They could also have used FFmpeg.

We experimented with CUDA on a high-end NVIDIA card years ago, but came to the conclusion that the scalability was problematic: while the CPU would show 5% utilization, the GPU was being saturated, causing stuttering video when we pushed a lot of frames.

Using Quick Sync is the right thing to do, but trying to pass it off as “pioneering”, and suggesting that you need some special access to Microsoft and Intel to do trivial things, is taking marketing too far.

The takeaway is that I need to remind myself to make my first gen client slow as hell, so that I can claim 100% performance improvement in v2.0.



VR and Surveillance

Nauseated and sweaty, I remove my VR goggles. I feel sick, and I need to lie down. Resident Evil 7 works great in VR because things can sneak up on you from behind. You have to actually turn your head to see what was making that noise behind you.

On a monitor I can do a full panoramic dewarp from several cameras at once, and the only nausea I experience is from eating too many donuts too fast. There’s no “behind”, and I have a superhuman ability to see in every direction, from several locations, at once. A friend of mine who played computer games competitively (before it was a thing) used the maximum FOV available to gain an advantage over mere humans.


One feature that might be of some use is the virtual video wall. It’s very reminiscent of the virtual desktop apps that are already available.

And I am not even sure about the gaming aspect of VR. In the gaming world, people are already wondering if VR is dead or dying. Steam stats seem to suggest that this is the case, and when I went to try the Vive in the local electronics supermarket, the booth was deserted and surrounded by boxes and gaming chairs. Apparently you could book a trial run, but the website for doing so was slow, convoluted and filled with ads.

Time will tell if this takes off. I am not buying into it yet.

 

Cost of Error

When I got my first computer, the language it offered was BASIC. Ask any good programmer, and they’ll tell you that BASIC is a terrible language, but it got worse: my next language was 68K assembler on the Commodore Amiga, and with blatant disregard for what Commodore was telling us, I never used the BIOS calls. Instead, I bought the Amiga Hardware Reference Manual and started hitting the metal directly. During my time in school, I was taught Pascal, then C++, and eventually I picked up a range of other languages.

What I’ve learned over the years is that the cost of a tool depends on two things: the time it takes to implement something, and (often overlooked) the time it takes to debug when something goes wrong.

Take “garbage collection”, for example. The idea is that you will not have memory leaks because there’s no new/delete or malloc/free pair to match up. The GC knows when you are done with something you allocated and frees it when needed. This, surely, must be the best way to do things. You can write a bunch of code and have no worries that your app will leak memory and crash. After all, the GC cleans up.

But there are some caveats. I’ve created an app that will leak memory and eventually crash.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace LeakOne
{
    class Program
    {
        class referenceEater
        {
            public delegate void Evil();
            public Evil onEvil;

            // The finalizer does console I/O, which is slow. Every instance
            // must pass through the single finalizer thread before its
            // memory can be reclaimed.
            ~referenceEater() {
                Console.WriteLine("referenceEater finalizer");
            }
        }

        class junk
        {
            public void noShit() { }

            public void leak() {
                // Allocate finalizable objects much faster than the
                // finalizer thread can retire them.
                for (int i = 0; i < 100000; i++) {
                    referenceEater re = new referenceEater();
                }
            }

            // Even an empty finalizer puts junk on the finalization queue.
            ~junk() {
            }
        }

        static void Main(string[] args) {
            for (int i = 0; i < 1000000; i++) {
                junk j = new junk();
                j.leak();
            }
        }
    }
}

What on earth could cause this app to leak?

The answer is the innocent-looking Console.WriteLine statement in the referenceEater finalizer. Finalizers run on a dedicated thread, and because Console.WriteLine takes a bit of time, the main thread creates millions of referenceEater objects faster than the finalizer thread can retire them. In other words, a classic producer/consumer problem, leading to a leak, and eventually a crash.

Running this app, the leak is fairly apparent just by looking at Task Manager. On my laptop it only takes 5-10 minutes for the app to crash (in 32-bit mode), but in 64-bit mode the app would probably run for days, slowing things down all the while, until eventually crashing.

It’s a bitch to debug, because memory usage over a loop is expected to rise until the GC kicks in. So you get this see-saw pattern that you need to keep running for a fairly long time to determine, without a doubt, that you have a leak. To make matters worse, the leak may show up on busy systems, but not on the development machine, which may have more cores or be less busy. It’s basically a nightmare.


There are other ways for .NET apps to leak – a good example is forgetting to unsubscribe from a delegate, which means that instead of matching new/delete, you now have to match subscription and unsubscription. Another is fragmentation of the Large Object Heap (not really a leak, but it will cause memory use to grow, and ultimately kill the app).
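
Here is a minimal sketch of that delegate variant (my own example, not from our codebase): the publisher’s event holds a reference to every subscriber, so += without a matching -= is the new unmatched malloc:

using System;

class Publisher
{
    // The event's invocation list holds a reference to every subscriber.
    public event EventHandler SomethingHappened;

    public void Poke()
    {
        var handler = SomethingHappened;
        if (handler != null) handler(this, EventArgs.Empty);
    }
}

class Subscriber
{
    byte[] payload = new byte[1024 * 1024]; // 1 MB, to make the leak visible

    public Subscriber(Publisher p)
    {
        p.SomethingHappened += OnSomething;
        // Without a matching "p.SomethingHappened -= OnSomething;" this
        // object (and its megabyte) lives as long as the publisher does.
    }

    void OnSomething(object sender, EventArgs e) { }
}

class EventLeakSketch
{
    static void Main()
    {
        var publisher = new Publisher(); // long-lived object
        for (int i = 0; i < 1000; i++)
            new Subscriber(publisher);   // every one stays reachable

        GC.Collect(); // collects nothing: ~1 GB is still live via the event
        Console.WriteLine("All subscribers are still reachable.");
        GC.KeepAlive(publisher);
    }
}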

The C++ code I have can be tested for leaks by just creating a loop: run the loop a million times, and when it’s done, we should have exactly the same amount of memory as before the loop.
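
For what it’s worth, the closest C# equivalent of that loop test I know of is to force full collections and drain the finalizer queue around the loop. A sketch (it assumes the junk class from the example above is visible; finalizable objects can still blur the numbers):

using System;

class LeakCheckSketch
{
    static void Main()
    {
        long before = StableMemory();

        for (int i = 0; i < 1000000; i++)
        {
            var j = new junk(); // the suspect code under test (from above)
            j.leak();
        }

        long after = StableMemory();
        Console.WriteLine("Delta: {0} bytes", after - before); // ~0 if no leak
    }

    // Force a full collection AND drain the finalizer queue before
    // measuring; without this, the see-saw makes the numbers meaningless.
    static long StableMemory()
    {
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();
        return GC.GetTotalMemory(true);
    }
}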

I am not proposing that we abandon garbage collection, or that everything should be written in C++ – not even by a long shot. As an example, the stress test for our web server (written in C++) was done using node.js. It took less than 2 hours to put together, and I don’t really care if the test app leaks (it doesn’t). There are a myriad of places where I think C# and garbage collection are appropriate. I use C# to make COM objects that get spawned and killed by IIS, and it’s a delight to write those and not have to worry about the many macros I would need if I had done the same in C++.

And with great care and attention, C# apps would never leak, but the reality is that programmers will make mistakes, and the cost of this has to be taken into consideration when trying to determine which tool is appropriate for which task.


Armageddon

Oh, video surveillance industry, I have failed ye. And I apologize. I did my best.

The false prophet is constantly preaching to his obedient and subservient flock. Tail wagging, eyes wide open, listening to the dog-whistle playing tunes of fear, uncertainty, and doubt.

All we can do is sit back and watch as the industry gets destroyed by consuming the vile soup consisting of equal parts arrogance and ignorance, served up by his highness.

I shall never forget the time, about 13 years ago, when a store manager asked why the hell it had to be so advanced. He fondly remembered his VCR, which had a nice red button and just worked. Plug in the camera, and you had video. It was that simple.

Pretty much anyone could install these systems. Video quality was shit and tapes wore out, but it was simple and most people could operate it. Once we moved to IP we fucked it all up. It became a nightmare to install and operate, and you had to have a degree in engineering to make sense of any of it.

In this complex world, some people are now shitting their pants over the ownership of a technology company by a government entity. Perhaps I am wrong. Maybe the encopresis is not related to the new gospel, but is a more permanent state of affairs, who knows? But I am starting to notice the smell.

We’re past reasoning here. We’re past the point where the accuser delivers the proof; instead, the accused now has to prove his innocence. Apparently, the Court of Oyer and Terminer has been re-established, and our present-day version of Thomas Newton is presenting his evidence for all to see – “The coat is cut or torn in two ways”.

There’s a reason why, in civilized societies, the accused does not carry the burden of proving their innocence – it’s damn near impossible to do so. Proving guilt, on the other hand, provided there is any, may be hard, but certainly not impossible. So far, I have yet to see more compelling evidence than oddly torn coats.

Perhaps the leap from analog and coax cables to IP and CAT5 is a leap too far for some people, and in the whirlwind of technobabble, they desperately grasp for something to hold on to. Perhaps in time they will find out that they are clinging to the branches of an old, but potent, poison ivy that has spread all over the garden.

I’m not willing to pass judgment on any camera manufacturer right now. I am willing to accept that people make mistakes. NASA burned up the Mars Climate Orbiter because someone at Lockheed Martin used imperial units! People “carelessly” installed software that contained OpenSSL, which in turn was vulnerable to the Heartbleed bug, and I could go on.

Maybe I am the ignorant one. Maybe I am not “connecting the dots”. I do see the dots, and I do see how someone is trying to make me connect them. But without evidence, I am not going to draw that line. I do have ample evidence that “the flock” are ignorant fools, so I am judging members of that flock by association (fairly or not 🙂 )