Ryzen

YouTubers are disappointed with Ryzen. They expected it to crush Intel in every single benchmark, and I had hoped that it would too. What was I thinking?

The problem that AMD has is that a lot of people seem to think that you can get a baby faster if you just add more women.

I’ve been watching a lot of indie coders do a lot of single loop coding, which obviously will not scale across several cores. They are obsessed with avoiding L1/L2 cache misses, which is fine, but at the same time they rarely talk about multi-threading. Some of the benchmarks I have seen leave the CPU at 30% utilization across all cores, which means that there’s a lot of untapped potential.

Games with lots of autonomous and complex entities should scale quite well – if the coder is not spending all his time organizing his data in a way that makes little sense on a multi-core system, and is willing to shed the dogma that threads are bad.
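To make the idea concrete, here is a minimal sketch of the kind of scaling I mean. The Entity type and function names are made up for illustration; the only assumption is that each entity's per-frame update is independent of the others, so the list can be partitioned across cores with plain std::thread:

```cpp
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

// Hypothetical entity: each one updates independently of the others,
// so a frame's worth of updates can be partitioned across cores.
struct Entity {
    float x = 0.0f, vx = 1.0f;
    void update(float dt) { x += vx * dt; }
};

// Update entities in [begin, end) on one thread.
static void updateRange(std::vector<Entity>& entities,
                        std::size_t begin, std::size_t end, float dt) {
    for (std::size_t i = begin; i < end; ++i)
        entities[i].update(dt);
}

// Split the entity list into one contiguous chunk per hardware thread.
void updateAll(std::vector<Entity>& entities, float dt) {
    const std::size_t workers =
        std::max(1u, std::thread::hardware_concurrency());
    const std::size_t chunk = (entities.size() + workers - 1) / workers;

    std::vector<std::thread> threads;
    for (std::size_t begin = 0; begin < entities.size(); begin += chunk) {
        const std::size_t end = std::min(begin + chunk, entities.size());
        threads.emplace_back(updateRange, std::ref(entities), begin, end, dt);
    }
    for (auto& t : threads)
        t.join();
}
```

Contiguous chunks keep each thread walking its own stretch of memory, so the cache-friendliness the indie coders care about is not lost by going wide.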

I am not replacing my 3770K though. I was very tempted to get something that could substantially reduce compilation times, but I spend less than 1% of my time compiling, so even a massive improvement there would not really improve my productivity overall. And I am not thrilled about having to buy new RAM once again…

 

Marketing Technology

I recently saw a fun post on LinkedIn. Milestone Systems was bragging about how they have added GPU acceleration to their VMS, but the accompanying picture was from a different VMS vendor. My curiosity got the better of me, and I decided to look for the original press release. The image was gone, but the text is bad enough.

Let’s examine:

Pioneering Hardware Acceleration
In the latest XProtect releases, Milestone has harvested the advantages of the close relationships with Intel and Microsoft by implementing hardware acceleration. The processor-intensive task of decoding (rendering) video is offloaded to the dedicated graphics system (GPU) inside the processer [sic], leaving the main processor free to take on other tasks. The GPU is optimized to handle computer graphics and video, meaning these tasks will be greatly accelerated. Using the technology in servers can save even more expensive computer muscle.

“Pioneering” means that you do something before other people. OnSSI did GPU acceleration in version 1.0 of Ocularis, which is now 8 or 9 years old. Even the very old NetSwitcher app used DirectX for fast YUV conversion. Avigilon has been doing GPU acceleration for a while too, and I suspect others have as well. The only “pioneering” here is how far you can stretch the bullshit.

Furthermore, Milestone apparently needs a “close relationship” with Microsoft and Intel to use standard and publicly available Quick Sync technology. They could also have used FFmpeg.

We experimented with CUDA on a high-end nVidia card years ago, but came to the conclusion that the scalability was problematic: while the CPU would show 5% utilization, the GPU was being saturated, causing stuttering video when we pushed a lot of frames.

Using Quick Sync is the right thing to do, but trying to pass it off as “pioneering”, and suggesting that you need some special access to Microsoft and Intel to do trivial things, is taking marketing too far.

The takeaway is that I need to remind myself to make my first gen client slow as hell, so that I can claim 100% performance improvement in v2.0.



Open Systems and Integration

Yesterday I took a break from my regular schedule and added a simple, generic HTTP event source to Ocularis. We’ve had the ability to integrate to IFTTT via the Maker Channel for quite some time. This would allow you to trigger IFTTT actions whenever an event occurs in Ocularis. Soon, you will be able to trigger alerts in Ocularis via IFTTT triggers.

For example, IFTTT has a geofence trigger, so when someone enters an area, you can pop the appropriate camera onto a blank screen. The response time of IFTTT is too slow, and I don’t consider it reliable enough for serious surveillance applications, but it’s a good illustration of the possibilities of an open system. Because I am lazy, I made a trigger based on Twitter; that way I did not have to leave the house.

Making an HTTP event source did not require any changes to Ocularis itself; it could be trivially added to previous versions if one wanted to. But even a completely open system doesn’t mean that people will utilize it.
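For illustration, the core of such a generic HTTP event source is little more than parsing the request line. This is a hypothetical sketch: the /event?name=... URL scheme is invented here and is not the actual Ocularis endpoint.

```cpp
#include <optional>
#include <string>

// Extract an event name from an HTTP request line such as
//   GET /event?name=DoorOpen HTTP/1.1
// The "/event?name=" scheme is made up for this sketch.
std::optional<std::string> parseEventName(const std::string& requestLine) {
    const std::string prefix = "GET /event?name=";
    if (requestLine.compare(0, prefix.size(), prefix) != 0)
        return std::nullopt;
    const std::size_t start = prefix.size();
    const std::size_t end = requestLine.find(' ', start);
    if (end == std::string::npos || end == start)
        return std::nullopt;               // malformed or empty event name
    return requestLine.substr(start, end - start);
}
```

Anything that can issue a GET request – IFTTT, curl, a camera's action rules – can then act as an event source, which is the whole point of keeping the interface this dumb.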

 

 


VR and Surveillance

Nauseated and sweaty, I remove my VR goggles. I feel sick and need to lie down. Resident Evil 7 works great in VR because things can sneak up on you from behind; you have to actually turn your head to see what was making that noise behind you.

On a monitor I can do a full panoramic dewarp from several cameras at once, and the only nausea I experience is from eating too many donuts too fast. There’s no “behind”, and I have a superhuman ability to see in every direction, from several locations, at once. A friend of mine who played computer games competitively (before that was a thing) used the maximum FOV available to give himself an advantage over mere humans.


One feature that might be of some use is the virtual video wall. It’s very reminiscent of the virtual desktop apps that are already available.

And I am not even sure about the gaming aspect of VR. In the gaming world, people are already wondering if VR is dead or dying. Steam stats seem to suggest that this is the case, and when I went to try the Vive at the local electronics supermarket, the booth was deserted, surrounded by boxes and gaming chairs. Apparently you could book a trial run, but the website for doing so was slow, convoluted, and filled with ads.

Time will tell if this takes off. I am not buying into it yet.

 

MxPEG to H.264

Get a Raspberry Pi, or one of the very cheap clones. Then install FFmpeg and an RTMP server with RTSP capability (EvoStream, Wowza).

Make sure the RTMP server is operational.

Ask FFmpeg to convert from MxPEG to H.264 and broadcast to the RTMP server by entering this command (shown wrapped here; the backslashes continue the line):

ffmpeg \
  -f mxg \
  -i "http://[user]:[pass]@[camera-host]/cgi-bin/faststream.jpg?stream=mxpeg" \
  -codec:v libx264 \
  -b:v 500k \
  -maxrate 500k \
  -bufsize 1000k \
  -vf scale=-1:480 \
  -threads 0 \
  -an \
  -f flv [rtmp address]

If you are using EvoStream, you might have entered something like this for the RTMP address:

rtmp://[ip of evostream]/live/mobotix

If that is the case, you can add a generic RTSP camera to your VMS using this address:

rtsp://[ip of evostream]/mobotix

The MxPEG stream will now be converted to H.264 and recorded as such. You’ll miss out on the advantages of MxPEG, but sometimes there’s no other way around the issue.


Facebook vs Zenimax

John Carmack is arguably a genius, and when Facebook lost to ZeniMax, he vented his frustration on Facebook. When programmers meet lawyers, the programmer usually ends up frustrated. When someone argues that “doing nothing can be considered doing something”, you start wondering if you are the only sane person in the room, or if you are being gaslighted by someone in a suit.

I think John Carmack failed to realize just how far-reaching non-compete covenants can be. In some states, an employment contract can contain elements that severely limit your ability to work in related industries, and – perhaps surprisingly – the company often owns everything you create while under contract, even if it was made in your spare time. In many cases, the company does the right thing and lets you own that indie game you wrote on weekends and nights, but when Facebook buys a company for $2 billion, someone might catch the smell of sweet, sweet moolah.

Here’s how I see it.

In April 2012, John Carmack is working for ZeniMax and engages with Palmer Luckey regarding the initial HMD. It seems to me that John Carmack probably thought that what he did in his spare time was of no concern to ZeniMax, and that he was free to have fun with VR since ZeniMax was not interested in any of it.

At QuakeCon 2012 (August), Palmer Luckey is on stage with both Michael Abrash and John Carmack, talking about VR. Carmack, at this point, is clearly a ZeniMax employee, and I have a very hard time believing that Carmack wasn’t working on VR-related research at this time.

In August 2013, John Carmack leaves ZeniMax, joins Oculus, and starts working full time on VR. ZeniMax doesn’t seem to care. Perhaps they expected that Oculus would soon crash and burn (it probably would have without Facebook intervening).

Less than a year later, in July 2014, two years after Palmer Luckey and John Carmack exchanged a few words on a message board, Oculus is worth $2 billion to Facebook (according to Mark Zuckerberg, they are now $3.5 bn in the hole on this acquisition).

John Carmack says that not a single line of ZeniMax code was used, and while that may be technically true, you could say that ZeniMax funded the research to figure out the 700 ways not to make a lightbulb; Carmack then moves to Oculus, bangs the code together, and the rest is history.

It’s pretty easy to convince someone who is not a programmer that code was copied. It’s pretty easy to find a technically competent person who will say that code was copied, even though all programmers know that a lot of code looks kind of similar. Rendering a quad using OpenGL looks pretty much the same in every app, but is that copyright infringement?

Time will tell if Facebook/Oculus wins the appeal. I think the current verdict is fair (the initial $6 bn claim was idiotic).


ASUS Joins the Fray

With the ASUS “Tinker Board”. While I believe it is perhaps overpowered and possibly even too expensive (for my purposes), it is nice to see a major manufacturer chime in. I can’t wait to get my hands on one.


Sing Along, Everyone!

An old colleague of mine made a post about trust, freedom and having everyone humming along to the same tune, and how the company’s (I suppose superior) culture could not easily be emulated by competitors.

It made me think of a book I read this summer: “Why Smart Executives Fail” by Sydney Finkelstein. Chapter 7 is called “Delusions of a Dream Company”.

Here’s a choice excerpt:

When businesses start losing touch with reality because of an arrogant belief in their own superiority and their company mission, they tend to adopt a pervasively positive attitude. The more insular the company’s outlook, the more buoyant its managers will tend to be about the company’s prospects.



Product Management

In May 2008, Mary Poppendieck did a presentation on leadership in software development at Google. In it, she points out that at Toyota and 3M the product champions are “deeply technical” and “not from marketing”. The reason this works, she states, is that you need to “marry the technical possibilities with what the market wants”. If the products are driven by marketing people, the engineers will constantly be struggling to explain why perpetual motion machines won’t work, even if the market is screaming for them. So, while other companies are building all-electric vehicles and hybrids, your company is chasing a pipe dream.

Innovative ideas are not necessarily technically complex, and may not always require top technical talent to implement. However, such ideas are often either quickly duplicated by other players, or rely on user lock-in to keep the competition at bay. E.g. Facebook and Twitter are technically simple to copy (at small scale), but good luck getting even 100 users to sign up. Nest made a nice thermostat, but soon after the market offered cheaper alternatives. Same with Dropcam. With no lock-in, there is no reason for a new customer to pick Dropcam over something cheaper.

To be truly successful, you therefore need the ability to see what the market needs, even if the market is not asking for it. If the market is outright asking, then everyone else hears it too, and thus it’s hardly innovation. That doesn’t mean that you should ignore the market: obviously you have to listen to it and offer solutions for the trivial requests that do make sense (cheaper, better security, faster processing, higher resolution and so on), and weed out the ones that don’t (VR, Blackberry, Windows Mobile). It doesn’t matter how innovative you are if you don’t meet the most basic requirements.

However, it’s not just a question of whether something is technically possible; it’s also a question of whether your organization possesses the technical competency and time to provide a solution. If your team has an extremely skilled SQL programmer, but the company uses 50% of her time pursuing pipe dreams or working on trivialities (correcting typos, moving a button, adding a new field), then obviously less time is available to move the innovation along.

Furthermore, time is wasted by doing things in a non-optimal sequence and failing to group related tasks into one update whenever possible. This seems to happen when technical teams are managed by non-technical people (or “technical” people trained in unrelated areas). Eventually, the team will stop arguing that you really should install the handrail before the hydrant, and will simply procrastinate or do what they are told, at great direct (and indirect!) cost.


At GOTO 2016, Mary states that 50% of decisions made by product managers are wrong, and that 2/3 of what is being specced is unnecessary and provides no value to the end user. Therefore, she argues, development teams must move from being “delivery teams” to “problem solving teams”, and discard the notion that the product manager is a God-like figure who provides a long list of do-this and do-that to his subordinates. Instead, the product manager must

  • be able to listen to the market
  • accurately describe the technical boundaries and success criteria for a given problem
  • be able to make tradeoffs when necessary.

To do this, I, too, believe the PM must be highly technical, so that they can propose possible solutions to the team (when needed). Without technical competency (and I am not talking about the ability to use Excel like a boss), the PM will not be able to make the appropriate tradeoffs and will instead engage in very long and exhausting battles with developers who are asked to do something impossible.

Is Mary correct? Or does she not realize that developers are oddballs and mentally incapable of “understanding the market”? Comments welcome.


Cost of Error

When I got my first computer, the language it offered was BASIC. Ask any good programmer, and they’ll tell you that BASIC is a terrible language, but it got worse: my next language was 68K assembler on the Commodore Amiga, and with blatant disregard for what Commodore was telling us, I never used the BIOS calls. Instead, I bought the Amiga Hardware Reference Manual and started hitting the metal directly. In school I was taught Pascal and C++, and eventually I picked up a range of other languages.

What I’ve learned over the years is that the cost of a tool depends on two things: the time it takes to implement something, and (often overlooked) the time it takes to debug when something goes wrong.

Take garbage collection, for example. The idea is that you will not have memory leaks because there’s no new/delete or malloc/free pair to match up; the GC knows when you are done with something you allocated and frees it when needed. This, surely, must be the best way to do things. You can write a bunch of code and have no worries that your app will leak memory and crash. After all, the GC cleans up.

But there are some caveats. I’ve created an app that will leak memory and eventually crash.

using System;

namespace LeakOne
{
    class Program
    {
        class referenceEater
        {
            public delegate void Evil();
            public Evil onEvil;

            ~referenceEater() {
                Console.WriteLine("referenceEater finalizer");
            }
        }

        class junk
        {
            public void noShit() { }

            public void leak() {
                for (int i = 0; i < 100000; i++) {
                    referenceEater re = new referenceEater();
                }
            }

            ~junk() {
            }
        }

        static void Main(string[] args) {
            for (int i = 0; i < 1000000; i++) {
                junk j = new junk();
                j.leak();
            }
        }
    }
}

What on earth could cause this app to leak?

The answer is the innocent-looking Console.WriteLine statement in the referenceEater finalizer. Finalizers run on their own thread, and because Console.WriteLine takes a bit of time, the main thread creates millions of referenceEater objects faster than the finalizer thread can keep up. In other words, a classic producer/consumer problem, leading to a leak and eventually a crash.

Running this app, the leak is fairly apparent just by looking at Task Manager. On my laptop it only takes 5-10 minutes for the app to crash (in 32-bit mode); in 64-bit mode the app would probably run for days, slowing things down the whole time, until eventually crashing.

It’s a bitch to debug, because the memory usage over a loop is expected to rise until the GC kicks in. You get a see-saw pattern that you need to watch for a fairly long time to determine, without a doubt, that you have a leak. To make matters worse, the leak may show up on busy systems but not on the development machine, which may have more cores or be less busy. It’s basically a nightmare.


There are other ways for .NET apps to leak. A good example is forgetting to unsubscribe from a delegate, which means that instead of matching new/delete, you now have to match subscriptions and unsubscriptions. Another is fragmentation of the Large Object Heap (not really a leak, but it will cause memory use to grow and ultimately kill the app).
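The delegate problem is not unique to .NET. Here is a hypothetical C++ sketch of the same failure mode (the EventSource class and its names are made up): a publisher that holds every subscribed callback, and whatever that callback captures, until someone explicitly unsubscribes.

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// A publisher that never forgets its subscribers. Every subscribe()
// without a matching unsubscribe() keeps the callback (and anything it
// captures) alive for the lifetime of the publisher -- the same failure
// mode as forgetting to -= a delegate in C#.
class EventSource {
public:
    // Returns an id the caller must remember in order to unsubscribe.
    std::size_t subscribe(std::function<void()> cb) {
        callbacks_.push_back(std::move(cb));
        return callbacks_.size() - 1;
    }

    // Drops the callback and everything it captured.
    void unsubscribe(std::size_t id) {
        callbacks_[id] = nullptr;
    }

    // How many callbacks are still being kept alive.
    std::size_t liveCount() const {
        std::size_t n = 0;
        for (const auto& cb : callbacks_)
            if (cb) ++n;
        return n;
    }

private:
    std::vector<std::function<void()>> callbacks_;
};
```

If a short-lived object subscribes and never unsubscribes, its captured state lives as long as the publisher does, which in a VMS or server process usually means forever.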

My C++ code I can test for leaks by simply creating a loop: run the loop a million times, and when it’s done we should have exactly the same amount of memory allocated as before the loop.
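One way to sketch such a loop test, assuming you are willing to instrument allocation yourself (the counters and the Counted type here are made up for illustration), is to count matched new/delete pairs:

```cpp
#include <cstddef>

// Global allocation counters; instrumenting new/delete like this is a
// crude but effective way to assert "same memory before and after".
static std::size_t g_allocs = 0;
static std::size_t g_frees  = 0;

struct Counted {
    int payload[16];
    static void* operator new(std::size_t n) { ++g_allocs; return ::operator new(n); }
    static void  operator delete(void* p)    { ++g_frees;  ::operator delete(p); }
};

// Run the unit under test a million times; if allocations and frees
// do not match afterwards, something in the loop leaks.
bool leakFree() {
    for (int i = 0; i < 1000000; ++i) {
        Counted* c = new Counted();
        delete c;   // matched pair; remove this line to watch the check fail
    }
    return g_allocs == g_frees;
}
```

In a garbage-collected runtime the same check is much harder to write, because "how much memory is in use right now" depends on when the collector last ran.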

I am not proposing that we abandon garbage collection, or that everything should be written in C++, not even by a long shot. For example, a stress test for our web server (written in C++) was done using node.js. It took less than two hours to put together, and I don’t really care if the test app leaks (it doesn’t). There are a myriad of places where I think C# and garbage collection are appropriate. I use C# to make COM objects that get spawned and killed by IIS, and it’s a delight to write those without worrying about the many macros required if I had done the same in C++.

With great care and attention, C# apps would never leak, but the reality is that programmers make mistakes, and that cost must be taken into consideration when determining which tool is appropriate for which task.
