TileMill and Ocularis

A long, long time ago, I discovered TileMill. It's an app that lets you import GIS data, style the map, and create a tile pyramid, much like the tile pyramids used for maps in Ocularis.

[Screenshot: TileMill]

There are two ways to export the map:

  • Huge JPEG or PNG
  • MBTiles format

So far, the only supported mechanism for getting maps into Ocularis is via a huge image, which is then decomposed into a tile pyramid.

Ocularis reads map tiles the same way Google Maps (and most other mapping apps) does: the client asks for the tile at (x, y, z), and the server returns the tile at that location.
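To make the (x, y, z) scheme concrete, here is the standard Web Mercator ("slippy map") tile math these servers use: given a longitude, latitude, and zoom level, it computes which tile covers that point. This is a generic illustration, not Ocularis code:

```javascript
// Standard Web Mercator tile addressing: zoom z has 2^z by 2^z tiles,
// x counted from the west edge, y from the north edge.
function lonLatToTile(lon, lat, z) {
  const n = 2 ** z;
  const x = Math.floor(((lon + 180) / 360) * n);
  const latRad = (lat * Math.PI) / 180;
  const y = Math.floor(
    ((1 - Math.log(Math.tan(latRad) + 1 / Math.cos(latRad)) / Math.PI) / 2) * n
  );
  return { x, y, z };
}
```

A tile server then maps those three numbers to one image, typically via a URL like /z/x/y.png.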

We’ve been able to import Google Map tiles since 2010, but we never released it for a few reasons:

  • Buildings with multiple levels
  • Maps that are not geospatially accurate (subway maps for example)
  • Most maps in Ocularis are floor plans; going through Google Maps is an unnecessary extra step
  • Reliance on an external server
  • Licensing
  • Feature creep

If the app relies on Google's servers to provide the tiles, and your internet connection is slow, or perhaps goes offline, then you lose your mapping capability. To avoid this, we cache a lot of the tiles. This is very close to bulk downloading, which is not allowed. In fact, at one point I downloaded many thousands of tiles, which got our IP blocked on Google Maps for 24 hours.

Using MBTiles

Over the weekend I brought back TileMill and decided to take a look at the MBTiles format. It's basically a SQLite DB file, with each tile stored as a BLOB. Very simple stuff, but how do I serve the individual tiles over HTTP so that Ocularis can use them?

Turns out, Node.js is the perfect tool for this sort of thing.

Creating an HTTP server is trivial, and opening a SQLite database file is just a couple of lines. So with less than 50 lines of code, I had made myself an MBTiles server that would work with Ocularis.

[Screenshot: tile server]

A few caveats: Ocularis has the Y axis pointing down, while MBTiles has it pointing up; flipping the Y axis is simple. Ocularis also puts the highest-resolution layer at layer 0, while MBTiles has that inverted, so the "world tile" is always at zoom level 0.
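Both conversions are one-liners. A sketch (the function names and the maxZoom parameter are mine, just to spell out the two mappings):

```javascript
// MBTiles uses the TMS scheme, where row 0 is the bottom row of the
// pyramid; a client that counts rows from the top flips it like this.
function flipY(y, z) {
  return Math.pow(2, z) - 1 - y;
}

// Layer numbering: if layer 0 is the highest-resolution level, it maps
// to the MBTiles maximum zoom, and counts down toward the world tile.
function layerToZoom(layer, maxZoom) {
  return maxZoom - layer;
}
```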

So with a few minor changes, this is what I have.
I think it would be trivial to add support for ESRI tile servers, but I don't really know if this belongs in a VMS client. The question is whether the time would be better spent making it easy for the GIS guys to add video capabilities to their apps, rather than having the VMS client attempt to be a GIS planning tool.
Camera Thumbnails

In the previous version of the administrator tool, we relied heavily on camera thumbnails. In the newest version, we have opted for a more compact tree control. We experimented with thumbnails in the tree, but the UI started to look more like an abstract painting. Sometimes you need a little visual reminder though, so we added a thumbnail panel.

In the lower left corner of any camera picker control, you will see a little triangle. Clicking it pops up a panel that shows a thumbnail, the camera label, and any comment you may have associated with the camera.

Here’s how it works.

Live Layouts vs. Forensic Layouts

In our client, if you hit browse, you get to see the exact same view, only in browse mode. The assumption is that the layout of the live view is probably the same as the one you want when you are browsing through video.

I am now starting to question if that was a wise decision.

When clients ask for a 100-camera view, I used to wonder why. No one can watch 100 cameras at the same time. But then I realized that they don't actually look at them the way I thought. They use the 100-camera view as a "selection panel": they scan across the video, and if they see something out of place, they maximize that feed.

I am guessing here, but I suspect that in live view, you want to see “a little bit of everything” to get a good sense of the general state of things. When something happens, you need a more focused view – suddenly you go from 100 cameras to 4 or 9 cameras in a certain area.

Finally, when you go back and do forensic work, the use pattern is completely different. You might be looking at one or two cameras at a time, zooming in and navigating to neighboring cameras quite often.

Hmmm… I think we can improve this area.

NVR Integration and PSIM

Axis makes cameras, and now they make access control systems and an NVR too. Should the traditional NVR vendors be concerned?

Clearly, the Axis NVR team is better positioned to fully exploit all the features of the cameras. Furthermore, the Axis-only NVR does not have to spend (waste?) time trying to fit all sorts of shapes into a round hole. It ONLY has to deal with Axis's own hardware platforms. This is similar to Apple's OS X, which runs so smoothly because the number of hardware permutations is fairly limited.

What if Sony did the same? Come to think of it, Sony already has an NVR. But, it’s no stretch of the imagination to realize that a Sony NVR would support Sony cameras better than the “our NVR does everything”-type.

In fact, when you really look at the problem, an NVR is a proprietary database plus some protocol conversion. To move a PTZ camera, you need to issue different commands depending on the camera, but the end result is always the same: the camera moves up. Writing a good database engine is really, really hard, but once you have a good one, it is hard to improve. The client and administration tools, meanwhile, continue to evolve and become increasingly complex.
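The protocol-conversion part can be sketched as a table of per-vendor drivers behind one generic command. The vendor names and command strings below are made up for illustration; real cameras each have their own API:

```javascript
// Each driver translates the generic command into that camera's own
// protocol; the caller never sees the difference.
const drivers = {
  vendorA: { moveUp: () => 'GET /ptz?cmd=up' },
  vendorB: { moveUp: () => 'GET /cgi-bin/ptz.cgi?move=tiltup' },
};

// Generic command: the end result is the same regardless of vendor.
function moveUp(camera) {
  return drivers[camera.vendor].moveUp();
}
```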

Once it becomes trivial to do the conversion, any bedroom developer will be able to create a small NVR. Probably with very limited functionality, but very cheap.

The cheap NVR might have a simple interface, but what if you could get an advanced interface on top of that cheap NVR? What if you could mix cheap NVRs with NVRs provided by the camera manufacturers, and then throw in access control to the mix? You get the picture.

If you are an NVR vendor, it is going to be an uphill battle to support “foreign” NVRs. If Milestone decided to support Genetec, it would take 5 minutes for Genetec to break that compatibility and have Milestone scramble to update their driver. Furthermore, the end user would have to pay for two licenses, and the experience would probably be terrible.

The next time an NVR vendor says "we are an open architecture", take a look at their docs. If the docs do not describe interoperability with a foreign client, then they are not open. An ActiveX control does NOT equate to "open". Genetec could easily support the Milestone recorders too, but it would be cheaper and easier for Genetec to simply replace existing Milestone recorders for free (like a cuckoo chick).

In this market, you cannot get a monopoly and charge a premium. The “need to have” threshold is pretty low, and if you charge too much, someone will make a new, sufficient system and underbid you. Ultimately, NVRs might come bundled with the camera for free.

So, what about PSIM? Well, we started with DVR (which is really a good enough description of what it is), but then we decided to call it an NVR to distance ourselves from the clearly inferior DVR. Then we weren't satisfied with that, so it became VMS. Sure, we added some bells and whistles along the way (we had mapping and video wall in NetSwitcher eons ago), so now we call it PSIM. It does the same as the old DVR. I think this kind of thing is called "marketing".

Mindless Consistency Is The Hobgoblin…

When you work with WPF for a while, you realize that a lot of the people who decide to share their knowledge are probably dogmatists. The dogma seems to be that separation of presentation and data is paramount, and that any mixture of code and presentation is evil and must be avoided at all cost.

I might be wrong about this, but I think a lot of the examples hinge on the facade pattern: the idea that you create different facades depending on the presentation context. E.g. a geometric object – say a circle – does not know anything about presentation. To present a circle to a user, a facade is created. If the context is a web page, the facade might create a PNG and return it to the user; in a regular app, we might just use GDI to draw a circle on the screen.

This frees the circle developer from knowing anything about HTML and MIME types, or GDI, and if a new technology comes along, we just create a new facade. But the flip side is that the interface between the circle and the facades needs to be fairly static. The circle dev might decide to add color to the circle, but now the facade developers at the other end of the world need to update all their circle-presenting facades… ugh.
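The facade arrangement might look like this in JavaScript (names are illustrative; note that every facade must change if Circle grows a color property):

```javascript
// The model knows nothing about presentation.
class Circle {
  constructor(radius) { this.radius = radius; }
}

// One facade per presentation context.
class CircleWebFacade {
  constructor(circle) { this.circle = circle; }
  render() { return `<svg><circle r="${this.circle.radius}"/></svg>`; }
}

class CircleScreenFacade {
  constructor(circle) { this.circle = circle; }
  render(gfx) { gfx.drawEllipse(0, 0, this.circle.radius); }
}
```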

I like to have objects "render themselves". I think this is a little more object-oriented. If I decide to add color to my circle object, then I will need to fix "drawToBitmap" and "drawToScreen" in my class (or inherited classes). But it seems logical to me that when I change the object, I will usually know whether the change also affects the presentation, and therefore requires editing the presentation FUNCTIONS (rather than classes).
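A sketch of the "render themselves" alternative (again, names are illustrative, and the bitmap output is a stand-in for real encoding): the presentation functions live on the class, so adding color means fixing two functions in one place rather than hunting down every facade:

```javascript
class Circle {
  constructor(radius, color) {
    this.radius = radius;
    this.color = color; // new property: only this class needs editing
  }
  drawToBitmap() {
    // Stand-in for producing an actual image.
    return `PNG circle r=${this.radius} ${this.color}`;
  }
  drawToScreen(gfx) {
    gfx.drawEllipse(0, 0, this.radius, this.color);
  }
}
```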

One example I came across was showing and hiding an element based on whether the mouse was over ANOTHER element. This is fairly simple to implement in code (a.onMouseOver = show b), but in "pure" XAML it becomes a hellish brew of ancestral selectors and relative bindings. In JavaScript this would be an absolute piece of cake, and we'd be mixing "code" with "presentation" like there was no tomorrow. WPF does not prevent you from doing this (at all, and that's the route I take), but naturally I have to be aware of whether the code I am writing is "logic code" or "presentation code".
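For comparison, the hover case in plain JavaScript. In a browser, a and b would be DOM elements; the wiring is the whole trick:

```javascript
// Show b while the mouse is over a; hide it again on leave.
function wireHover(a, b) {
  a.onmouseover = () => { b.style.display = 'block'; };
  a.onmouseout = () => { b.style.display = 'none'; };
}
```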

If XAML is a hammer, some XAML tutors are seeing nails all over the place.