The timeslicer allows you to view the camera at different times, all in the same view. There’s nothing magical about that. What we added was “synchronized digital PTZ”. Basically, you can select a ROI and all the slices will show that ROI too (maybe that isn’t magical either).
As a UI designer I heard this one a lot: “If my mom can’t use it the first time she tries, it is not intuitive”. First of all, it is not your mother who is confused – it is you 🙂 But if we were to follow that logic, we wouldn’t have a computer mouse or even a touch screen. It may be semantics, but to me “intuitive” means that 99.9% of all people can use the system after being shown how to use it ONCE. Trying to design a system that everyone can use with no introduction is not cost-efficient. As soon as people become familiar with the system, they are annoyed that they have to use a wizard to add a camera, and annoyed that the interface is cluttered with labels (100 labels are as confusing as none). You could do a “let’s get to know each other” introduction like on a lot of mobile devices and video games these days, but so far no-one is willing to make that investment.
We looked at the traditional systems; most have an L-type layout (menu on the left and some controls at the bottom, with video squeezed into the top/right corner). In a 16-camera view, you may have the controls in the left or bottom panel, but the video panel is not adjacent to the controls, which – to me – feels strange. My other concern is that the vast majority of the time, people are simply looking at video, not using the controls at all, yet the controls take up more than 30% of the screen real estate. Frequently, functions that are never, ever used are prominently displayed (along with a vendor logo!), while the many functions that actually need to be visible are grouped in tabs, hidden in panels that scroll below the edge of the screen, and so on.
Very early on, we decided to make some tough decisions on the design. The toughest task is deciding NOT to show something. Any design committee can fall into the trap of “show everything”. As people complain they can’t find a particular button, the solution is to “make it bigger”, “make it red”, “make it blink”, and before you know it… you’ve got Times Square.
Granted, we did go to extremes at times, and we had heated arguments about just about every button on the screen. It would have been easier on the team to just copy what everyone else is doing, but luckily we were able to try something new.
Need Camera Grouping?
Some systems allow you to group cameras. You can create a “lobby group”, a “1st floor group” and so on. This is functionally very close to creating a view. All the cameras in the lobby could be added to a view called “lobby view”. The user could pick that view and get an overview, and then maximize each camera to view what happens on that camera in detail. In a way, the view is a group of cameras in itself. This means that there is no need for “camera groups” in the client.
If you wanted to bypass the “select view, maximize camera”, we could make an additional node on a view in the tree. This node would contain automatically generated 1×1 views of each camera in the parent view. The user could then either select “lobby view”, or directly from the view selection menu, pick “lobby camera 1”.
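The idea above can be sketched in a few lines. This is a minimal, hypothetical illustration (the class and function names are mine, not Ocularis internals): a view is just a named list of cameras, and the extra tree node is simply an automatically generated 1×1 view per camera in the parent view.

```python
class View:
    """A named group of cameras, as discussed above - the view IS the group."""
    def __init__(self, name, cameras):
        self.name = name              # e.g. "lobby view"
        self.cameras = list(cameras)  # camera names shown in this view

def single_camera_views(parent):
    """Auto-generate one 1x1 child view per camera in the parent view."""
    return [View(name=cam, cameras=[cam]) for cam in parent.cameras]

lobby = View("lobby view", ["lobby camera 1", "lobby camera 2"])
children = single_camera_views(lobby)
print([v.name for v in children])  # ['lobby camera 1', 'lobby camera 2']
```

The point of the sketch is that no separate “camera group” concept is needed: the 1×1 views are derived from the parent view, so they stay in sync with it for free.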
Another alternative to groups
As an experiment I’ve added a filtering mechanism to our camera selector: when the camera selection list is visible, you can start typing on the keyboard, and the camera list is filtered by what you enter. Say you type L… O… B… – now all cameras with the string “lob” in their name will be displayed. That means “Lobby East Camera 1” but also “Lobster Bay Camera”. Keep typing… B… Y, and now the list is filtered even further. In most cases, you don’t really need to type the entire word.
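The filter itself is nothing exotic; a minimal sketch (again, hypothetical names, not the actual implementation) is just a case-insensitive substring match that narrows as the user types:

```python
def filter_cameras(cameras, typed):
    """Keep cameras whose name contains the typed text (case-insensitive)."""
    needle = typed.lower()
    return [c for c in cameras if needle in c.lower()]

cameras = ["Lobby East Camera 1", "Lobster Bay Camera", "Parking Camera"]
print(filter_cameras(cameras, "lob"))
# -> ['Lobby East Camera 1', 'Lobster Bay Camera']
print(filter_cameras(cameras, "lobby"))
# -> ['Lobby East Camera 1']
```

Each keystroke just re-runs the filter with the longer string, which is why the list keeps narrowing from “lob” to “lobby”.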
The camera menu can be invoked from the keyboard (the tilde key), which saves another few mouse clicks.
Keep in mind that this is a preview, and there is no guarantee that this feature will be in the next release (you can tell that Ocularis is hosted in my test harness too).
To me, tool tips are a designer’s crutch. Can’t think of an icon? Just add a tool tip! Then you get tool tips that are really miniature manuals: “Click the button below to play video from the selected camera in the forward direction at the speed selected on the playback speed widget in the left panel”. Others are meaningless; you have a button labeled “open”, and when you hover the mouse, the tool tip says “open”.
But in this case, we needed tool tips – no question about that – so we added them a while back 🙂
Instant Playback in Live Mode
We added instant playback to make it possible for the operator to back up and review something that just happened without having to switch into full browse mode, and thus pausing all the other cameras. It is not a tool meant to perform forensics, but to allow the operator to make a quick decision on whether to investigate further. Naturally, this makes no sense in a 1×1 view where there are no other cameras (not sure why John Honovich decided to demonstrate this feature in a 1×1 view 🙂 – why, John, WHYYYYY???)
Playing back exported video
In the past, Ocularis did not have a separate, low-spec database player. We do now. I felt that most external entities would prefer individual snapshots or AVI files, rather than some proprietary player. If a TV network is putting out an APB, they need to prepare the video for broadcast, and in that case an external player is useless. Sending evidence to law enforcement would probably also be in JPEG/AVI format rather than some binary executable (although a friend of mine tried to email the local police, but had to use the handling officer’s private Gmail account as the police did not allow emails with attachments!)
My impression was that we’d made up this use case, where the judge needs to see the video, and it HAS to be in the original format, and the judge has a low-spec PC (or something like that). I felt that the truth was that the client we had in the past was unable to open exported video by itself, so we made a “player”. Furthermore, to run the client, you needed to “log in” somewhere, which is not always possible. If the client could run in “disconnected” mode, then there would be no need for an external player.
The player also serves another purpose. If I export the evidence as AVI, then I have no really good tools to seek through the video frame by frame, nor can I print a “report” from Media Player, export a single frame (without a fair amount of skill), or perform digital zoom on an area. A simple AVI player offering some rudimentary forensics would probably do just as well.
Multiple Camera Export
In a 2×2 view, you can actually export any or all 4 cameras at once. The export dialog allows you to select which feeds to include, and exporting happens in the background. Once the export has started, you can go back to live view, perform further investigation and so on.
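The “export in the background” part boils down to running the export off the UI thread. Here is a minimal sketch of that pattern (the names and the simulated work are mine, not the Ocularis code): the operator kicks off an export of the selected feeds and immediately gets control back.

```python
import threading
import time

def export_cameras(selected, on_done):
    """Start exporting the selected feeds on a background thread,
    so the caller (the UI) stays responsive while it runs."""
    def worker():
        for cam in selected:
            time.sleep(0.01)  # stand-in for encoding/writing one feed
        on_done(selected)     # notify when the whole export is finished
    t = threading.Thread(target=worker, daemon=True)
    t.start()
    return t                  # caller may join() or just carry on

done = []
t = export_cameras(["cam1", "cam2"], done.extend)
# ...the operator can go back to live view here while the export runs...
t.join()
print(done)  # ['cam1', 'cam2']
```

In a real client the completion callback would have to be marshalled back onto the UI thread, but the principle is the same: the export owns its own thread, and live viewing continues undisturbed.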
Yes, I am shamelessly promoting the product that quite a few people put a lot of effort into making. I think there are some misconceptions about why we did things the way we did; perhaps I can offer some insights. All of us care very much about external input and critique from peers. That does not mean that I have to agree, but trust me – we do take it very seriously.
No, Ocularis is not perfect, never will be. There will always be things that we can improve, and we certainly can’t be perfect to everyone. Hopefully we can be close to perfect for someone.
When I first co-wrote Milestone Surveillance Lite and XXV, we had a performance problem. My PC was a Celeron 300, and the Axis 200+ was unable to stream more than a couple of frames per second. Analog matrix systems would run full framerate (25 or 30 fps), show 9 or even 16 cameras at any given time, and have virtually zero lag for joystick control.
As the hardware became more powerful, we were able to add more cameras. Few people ran XXV (named after its ability to show 25 cameras) at full capacity, but 25 was more than 16, and more is better. People had the theoretical option to run 25 cameras, which was a good selling point. People understood the argument instantly.
Since the jump in cameras on the screen was such a good story, we went on and said: why not place 64 cameras on the screen at once? Again, few people ever ran 64, but they had the option. Again, 64 is better than 25, and it is such a simple principle to explain: more = better.
Now we can do hundreds of cameras on the screen at once. No-one can make sense of what is going on, but more is better…
What would happen if we released software that went back to 16 cameras? Would anyone buy such a system? Since we’ve kept preaching that more is better, 16 must surely be vastly inferior to a 200-camera layout.
That’s a difficult sales-pitch!
We’ve painted ourselves into a corner by leading the clients to believe that “more is better” – more features, more cameras, more frames per second and so on.
Which would be true if we had infinite resources.
When a company decides to spend time on A, they are NOT spending time on B. Adding one more camera driver might mean that the IP auto-detection function does not get done; spending a lot of time optimizing the decoding pipeline means NOT spending time simplifying the UI, and so on.
I think people like the idea that they CAN go to 100 cameras, just like the speedometer suggests that I can go to 160 mph if I so desire.
Truth is, we never do, and we really can’t – even if we tried.
The iPhone showed the world that people will trade more for less, if things are done right. The world was awash with phones that had myriad features. Microsoft laughed at Apple – a phone with no Bluetooth, no Exchange server support, no cut-and-paste! Microsoft had long followed the strategy that five shitty features had to be better than one good one, and now a newcomer was going to do things totally differently. No chance they would succeed.
Perhaps video surveillance is different.
Why do people REALLY need a 64 camera view? Help me out here!
One of the things I was taught before I started gambling in the stock market was to never “fall in love with a company”. The flip side is probably just as bad; short-selling AIG out of hatred is not a rational thing to do.
If you are going to pretend to be a journalist, you have two options: pretend to be objective, or acknowledge that you are totally biased and confess to it. Consider Paul Krugman vs. Glenn Beck. Krugman does not pretend to be objective at all, and when you read him you KNOW that you are going to get a dose of Democratic and/or neo-Keynesian propaganda. That is Paul Krugman’s vantage point. There are no hidden agendas, but plenty of obvious ones. Beck, on the other hand, pretends to be objective. Opinions can’t be objective – ever. Even opinions based on facts are subjective. Some opinions are not based on facts at all, and we have a tendency to attribute more value to facts that support our opinion, while dismissing facts that do not (it’s called confirmation bias).
Stating a number of verifiable facts that support your opinion does not make it objective. Lots of people think it does. They forget all the other factual information that does not support the stated opinion. Some facts are not brought to the table at all; others are intentionally forgotten, dismissed, or disregarded as mere fantasy.
If you, as a blogger, pretend to be objective – and you might even believe that your opinions truly are objective (after all, they are based on facts) – you’ll find that your audience becomes self-radicalized. Since what you are presenting is facts, it’s extremely hard to argue. Host: “ARE you denying the fact that…”; guest: “No, but we also found that…”; host: “Those facts are not important, MY facts are!” (or alternatively, the host might just dispute the facts as being lies or fiction). As time passes, your only audience will be people who agree with you, and since “everyone agrees”, it only reaffirms your opinion as being objective. Lou Dobbs and Bill O’Reilly like to run quote panels from “regular folks” who praise their wisdom as a sort of ego booster.
So, opinions based solely on facts are not objective. Nor are opinions based on fiction. That does not mean that I take opinions based on fiction as seriously as opinions based on facts. I am also painfully aware that I have to trust someone, which adds a secondary layer to the opinion shaping. Because of confirmation bias, we will trust whoever presents information that supports our own ideas. If I believe the earth is flat, I will consider a scientist who says it is round a fool. Why should I listen to a fool at all?
Yes, but we are moving into a new house, and we are working on some fairly cool new stuff. I took notice of a few things.
Groupon rejected a giant offer from Google. Hard to fathom, but I suppose that quality of life does not improve much from $100 million to $1,000 million, while giving up your company might actually make you miserable.
Grooveshark changed its interface; they ditched Flash and are now HTML-based. I am constantly amazed by these guys.