Looping Canned Video For Demos

Here are a few simple(?) steps to stream pre-recorded video into your VMS.

First, you need to install an RTMP server that can do RTMP-to-RTSP conversion. You can use Evostream, Wowza, or possibly Nimble Streamer. Nginx-rtmp won’t work, as it does not support RTSP output.

Then get FFmpeg (prebuilt Windows binaries are available from the FFmpeg site).

Find or create the canned video that you want to use, and store it somewhere accessible.

In this example, I have used a file called R1.mp4 and my RTMP server (Evostream) is located at 192.168.0.109. The command used is this:

ffmpeg -re -stream_loop -1 -i e:\downloads\r1.mp4 -c copy -fflags +genpts -f flv rtmp://192.168.0.109/live/r1
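One practical addition (not part of the original recipe): if the demo machine needs to run unattended, a small supervisor script can relaunch ffmpeg whenever it exits. This sketch assumes ffmpeg is on the PATH and simply reuses the exact command above:

import subprocess
import time

# The command from above; adjust the file path and RTMP URL to your setup.
cmd = [
    "ffmpeg",
    "-re",                   # read the input at its native frame rate
    "-stream_loop", "-1",    # loop the input indefinitely
    "-i", r"e:\downloads\r1.mp4",
    "-c", "copy",            # remux only, no re-encoding
    "-fflags", "+genpts",    # regenerate timestamps across loop boundaries
    "-f", "flv",             # RTMP carries FLV
    "rtmp://192.168.0.109/live/r1",
]

while True:                  # if ffmpeg exits for any reason, start it again
    subprocess.run(cmd)
    time.sleep(2)            # brief pause before relaunching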

Once this is streaming (you can verify by opening the RTMP URL in VLC), you can go to your VMS and add a generic RTSP camera.

For Evostream, the RTSP output is on a different port, and has a slightly different format, so in the recorder I add:

rtsp://192.168.0.109:5544/r1

Other RTMP servers may have a slightly different transform of the URL, so check the manual.

I now have a video looping into the VMS and I can run tests and benchmarks on the exact same feed w/o needing an IP camera.



Listening to Customers

In 2011, BlackBerry peaked at a little more than 50 million devices sold. The trajectory had an impressive ~50% CAGR from 2007, when sales were around 10 million devices. I am sure the board and chiefs were pleased and expected this trend to continue. One might expect roughly 250 million devices to be sold by 2015 if the CAGR could be sustained. Even linear growth would be fairly impressive.

Today, in 2017, BlackBerry commands a somewhat unimpressive 0.0% of the smartphone market.

There was also Nokia. The Finnish toilet-paper manufacturer pretty much shared the Scandinavian market with Ericsson and was incredibly popular in many other regions. If I recall correctly, they sold more devices than any other manufacturer in the world. But they were the McDonald’s of mobile phones: cheap and simple (nothing wrong with that per se). They did have some premium phones, but perhaps they were just too expensive, too clumsy, or maybe too nerdy?

[Image: Talking on a Nokia N-Gage phone]

Nokia cleverly tricked Microsoft into buying their phone business, and soon after, Microsoft gave up on that too (having been a contender in the early years with Windows CE/Mobile).

I am confident that BlackBerry was “listening to their customers”. But perhaps they didn’t listen to the market. Every single BlackBerry customer would state that they preferred the physical keyboard and the naive UI that BlackBerry offered. So why do things differently? Listen to your customers!

If BlackBerry were a consulting agency, then sure, do whatever the customer asks you to. If you’re selling hot dogs, and the customer asks for more sauerkraut, then add more sauerkraut, even if it seems revolting to you. But BlackBerry is not selling hot dogs or tailoring each device to each customer. They are making a commodity that goes in a box and is pulled off a shelf by someone in a nice shirt.

As the marginally attached customers are exposed to better choices (for them), they will opt for those, and in time, as the user base dwindles, you’re left with “fans”. Fans love the way you do things, but unless your fan base is growing, you’re faced with the very challenging task of adding things your fans may not like. Employees who bow but no longer believe will leave, and eventually you’ll have a group of flat-earth preachers evangelizing to their dwindling flock.

It might work for a small, kooky company that makes an outsider device, but it sure cannot sustain the amount of junk that you tack on over the years. Eventually that junk will drag the company under.

Or perhaps BlackBerry was a popular hot-dog stand in a town where people had just lost their appetite for hot dogs and developed a craving for juicy burgers and pizza (or strange hot dogs).

Taste and Craftsmanship

I think it would be unfair to say that blue is a better color than green, or that flat design is better than skeuomorphic. And I don’t think we’d appreciate flat design w/o having been through skeuomorphic design first (and poor, misguided attempts at it too).

The website craigslist.org is #50 in the world (#10 in the US), and I think most designers will agree that it looks pretty plain. Amazon.com is #11 and sports a pretty chaotic design. So it would seem that design doesn’t make or break a product; a product that works really well can succeed in spite of not being pretty. I suppose the design just has to be appropriate. I wonder if craigslist would be where it is today if Craig Newmark had tried to keep up with design trends over the years. I think what’s key for craigslist is that the design looks almost bohemian, which may resonate quite well with the self-image of its users.

But craftsmanship is something slightly different. Poor craftsmanship can be a dealbreaker, and I believe it carries more weight than aesthetic preferences. Even if craigslist looks rather plain, it does follow some fairly well-established design rules. It would appear that the craftsmanship is good – the designers know what they are doing, and while you may not be particularly attracted to the site (the taste), the design feels deliberate and done on purpose.

[Image: craigslist]

As an example of the polar opposite, take a look at the Yahoo! classified registry. I don’t know what those pages are for, but it seems as if Yahoo! has no taste. They’ve just mashed a bunch of random ingredients into a very tasteless pie. I wonder if they are maintaining it at all – it sure looks like they abandoned it a while ago. I think the Yahoo! page is an example of no taste and bad craftsmanship. The “New” icon seems completely out of place, and it looks pretty bad.

[Image: Yahoo! classified registry]

 

So, I think it’s safe to say that Yahoo! has failed here: a poorly designed page with no meaningful purpose (that I can see). This sort of thing must be avoided at all costs. Yet it seems that a lot of companies end up with something similar to Yahoo!’s abomination. A lot of the time, as developers discover a new technique, they can’t wait to use it – somewhere – anywhere. When I was first dabbling in WPF, I did a (terrible) mirror-floor effect. I immediately popped it into the administrator app. As it was pretty cool, no one told me to remove it. It still sits there, inappropriate and annoying to me. And as time passes, my preferences change. I went from skeuomorphic to flat, but I never had the time to redo all the assets. As a result, I have a mix of both styles.

So, I understand why design rot happens, and I am pretty sure that I know how to remedy the situation, but I can’t prove that there’s an ROI on cleaning it up. People who didn’t buy our product are not going to complain about the lack of consistency in the design, and people who did probably don’t care enough to complain. Thus, the problem appears smaller than it might be. Furthermore, a new, overhauled design may not sit well with our existing customers, and there is clearly a tendency to want to do everything vastly differently the second time around (the second-system effect).

 

One Auga Per Ocularis Base*

In the Ocularis ecosystem, Heimdall is the component that takes care of receiving, decoding and displaying video on the screen. The functionality of Heimdall is offered through a class called Auga. So, to render video, you need to create an Auga object.

Ocularis was designed with the intent of making it easy for a developer to get video into their own application. Initially it was pretty simple – instantiate an Auga instance, pass in a URL, and voila, you had video. But as we added support for a wider range of NVRs, things became a little more complex. Now you need to instantiate an OCAdapter, log into an Ocularis Base server, and then pass the cameras to Auga via SetCameraIDispatch before you can get video. The OCAdapter, in turn, depends on a few NVR drivers, so deployment became more complex too.
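As a rough sketch of that flow – and to be clear, the stub classes below are stand-ins I made up for illustration; apart from SetCameraIDispatch, which is named above, none of this is the actual SDK API – the intended shape is one OCAdapter and one Auga, shared across all cameras:

# Illustrative stubs only. The real OCAdapter/Auga are native components;
# everything here except the name SetCameraIDispatch is an assumption.

class OCAdapter:
    def login(self, base, user, password):
        print(f"logged into {base} as {user}")    # one login per Base
    def cameras(self):
        return ["front-door", "parking-lot"]      # placeholder camera handles

class Auga:
    def SetCameraIDispatch(self, camera):
        print(f"camera registered: {camera}")

adapter = OCAdapter()                             # ONE adapter...
adapter.login("192.168.0.109", "admin", "secret") # ...and ONE login

auga = Auga()                                     # ONE Auga instance
for cam in adapter.cameras():
    auga.SetCameraIDispatch(cam)                  # many cameras, same Auga

# The anti-pattern described below -- a new OCAdapter/Auga pair per
# camera -- multiplies logins and duplicates font/graphics memory.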

One of the most common problems I see today is that people instantiate one OCAdapter and one Auga instance per camera. This causes all sorts of problems: each instance counts as one login (which is a problem on login-restricted systems), every instance consumes memory, and memory for fonts and other graphics is not shared between instances. In many ways, I should have anticipated this type of use, but on the other hand, the entire Ocularis Client uses Heimdall/Auga as if it were a 3rd-party component, and that seems to work pretty well (it’s getting a little dated to look at, but hey…).

Heimdall also offers a headless mode. We call it video hooks, and it allows you to instantiate an Auga object and get decoded frames via a callback or a DLL, instead of having Auga draw them on the screen. The uses for this are legion: I’ve used the video hooks to create a web interface, until recently we used them for OMS too, and video analytics can use the hooks to get live video in less than 75 lines of code. Initially the hooks only supported live video, but they now support playback of recorded video too. But even when using Auga for hooks, you should only ever create one Auga instance per Ocularis Base. One Auga instance can easily stream from multiple cameras.

However, while Heimdall is named after a god, it does not have magical capabilities. Streaming 16 cameras at 5 MP and 30 fps will tax the system enormously – even on a beefy machine. One might be tempted to say, “Well, the NVR can record it, so why can’t Auga show it?” Well, the NVR does not have to decode every single frame completely to determine the motion level, but Auga has to decode everything, fully, all the way to the pixel format you specify when you set up the hook. If you specify BGR as your expected pixel format, we will give you every frame as a fully decoded BGR frame at 5 MP. Unfortunately, there is no way to decode only every second or third frame. You could go to I-frame-only decoding (we do not support that right now), but that lowers the framerate to whatever the I-frame interval allows, typically 1 fps.
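To put a number on that worst case (my own back-of-the-envelope arithmetic, using the figures from the paragraph above):

# 16 cameras, 5 MP per frame, 30 fps, fully decoded 3-byte BGR pixels.
cameras = 16
pixels_per_frame = 5_000_000
fps = 30
bytes_per_pixel = 3

raw_bytes_per_second = cameras * pixels_per_frame * fps * bytes_per_pixel
print(f"{raw_bytes_per_second / 1e9:.1f} GB/s")   # 7.2 GB/s of raw pixels

That is 7.2 GB/s of raw pixel output alone – before your own analytics touch a single frame.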

If you are running Auga in regular mode, you can show multiple cameras by using the LoadLayoutFromString function. It allows you to create pretty much any layout that you can think of, as you define the viewports via a short piece of text. Using LoadLayoutFromString (the SDK documentation requires an account), you do not have to handle maximizing of viewports etc.; all of that is baked into Auga already. Using video hooks, you can set up (almost) any number of feeds via one Auga instance.

[Image: sdk_loadlayout – a layout defined via LoadLayoutFromString]

Granted, there are scenarios where making multiple Augas makes sense – certainly, you will need one per window (hence the asterisk in the headline), and clearly, if you have multiple processes running, you’d make one instance per process.

I’ll talk about the Direct3D requirement in another post.

Pareto Principle

…or the 80-20 rule, as it is also known. I am not fully convinced that it holds true: that 80% of your profits come from 20% of your clients, that 80% of the work is done by 20% of the staff, and that 80% of the peas come from just 20% of the pods. But when designing software, I think you need to keep the Pareto principle in mind.

As we add features to our product, we usually sit down and have a meeting about how to create the UI that enables a user to accomplish some task. As we wireframe the UI, people around the table come up with additional ideas and point out weaknesses in the design. But quite often, the longest discussions are about what I call “fringe use”. The reality is that people tend to imagine that they are going to use a lot more functionality than they actually do. Since the feature is not in the product yet, we really don’t know if they are going to use it, and there isn’t any scientific way of knowing in advance. Simply asking people doesn’t work, and we can’t really do A/B testing on software such as ours.

We might not spend 80% of our time discussing and designing UIs for the 20% (or less) who will actually use the feature, but we certainly spend a lot more time designing for the minority than the expected revenue from these border cases seems to justify.

While the 20% might be getting a good deal, there are more serious consequences. The 80% who really don’t care about the 20-percenters’ special needs are getting a shittier deal. Time is a limited resource, and every minute we spend on the fringe is a minute stolen from the normal user’s experience. At times it also means that the interface for Mr. Normal becomes cluttered with a lot of irrelevant options, and more options means a UI that is more taxing on the old brain.

I think we need to get back to spending 80% of the design time on the 80% of the users. Am I wrong?

Let’s Ask the User!

It has been demonstrated over and over again that “asking the users” can be a terrible idea when it comes to user interface design, yet you still encounter people who fancy themselves UI designers but feel that the users should tell you how to solve a UI problem.

You can’t design an application without interacting with users, but I never, ever ask people to tell me how to design things. I often ask, “In your opinion, what is the most difficult or annoying thing about XYZ?” If there is a consensus that something is difficult to accomplish, I will do a mock-up and show it to the interested parties. When a designer says that he has found a usability problem but is asking the users how to resolve it, I get lightheaded and have to sit down for a while.

So please, if you fancy yourself a designer and there is a problem, you find the solution, not the users.

Live Layouts vs. Forensic Layouts

In our client, if you hit browse, you get to see the exact same view, only in browse mode. The assumption is that the layout of the live view is probably the same as the one you want when you are browsing through video.

I am now starting to question if that was a wise decision.

When clients ask for a 100-camera view, I used to wonder why. No one can look at 100 cameras at the same time. But then I realized that they don’t actually look at them the way I thought. They use the 100-camera view as a “selection panel”: they scan across the video, and if they see something out of place, they maximize that feed.

I am guessing here, but I suspect that in live view, you want to see “a little bit of everything” to get a good sense of the general state of things. When something happens, you need a more focused view – suddenly you go from 100 cameras to 4 or 9 cameras in a certain area.

Finally, when you go back and do forensic work, the usage pattern is completely different. You might be looking at one or two cameras at a time, zooming in and navigating to neighboring cameras quite often.

Hmmm… I think we can improve this area.