When the UI Masks “Real” Issues

Let’s be clear: UI issues are real issues and should be treated as such. But they are different in one sense: a poor UI can still be usable, given enough practice and training, whereas technical issues (such as crashes) render a product useless.

For the NVR writer, the test is pretty simple. The NVR must acquire video and audio when instructed to do so, and it should be able to deliver the acquired data to the end user whenever they feel compelled to gaze at it. Sure, there are plenty more things the NVR needs to do reliably, but the crux is that the testing is fairly static and deterministic.

Enter the grey area of analytics – it is much more difficult to test an analytics solution. Sure, we could set up a very wide range of reference tests that might be used to measure the quality of the algorithms (assuming the thing does not crash), but these tests might be “gamed” by the vendor, or the number of required tests would make the testing prohibitively expensive. I am sure there would be a market for an analytics test generator, simulating fog, rain, reflections, shadows, vibration and so on ad infinitum at almost no cost, but that’s beside the point.

My concern is that sometimes the actual, real performance is masked by a poor UI: the operator blames the poor performance not on the algorithms, but on themselves for not configuring, calibrating and nursing the system enough. When the vendor’s techies come on site, they are always able to tweak the system to capture the events – “see, you just have to mask out this area, set up this rule, disable this other rule” and so on. The problem is that two days later, when it’s raining AND there is fog, the analytics come up short again. You tweak the rules, and now you’ve broken the setup for reflections and shadows, and so it goes on and on.

This dark merry-go-round of tweaking is sometimes masked by the poor (crap?) UI. So this brings us to my argument: if the UI had been stellar, logical, easy to use, intuitive and all that, then the operator would quickly realize that the system is flawed and should be tossed. But if the UI is complex, weird and counterintuitive, it takes longer for the end user to realize that something is fundamentally wrong, and the vendor might consequently stay in business for longer.

Sure, at times things are just inherently complex (the complexity of a program can never be removed by replacing text entry with drag-and-drop): NAT setups and port forwarding are always a little difficult to handle elegantly, and analytics naturally require a fair bit of massaging before they work entirely as intended, provided you have realistic expectations! Which reminds me of a book I read recently on the financial meltdown: if people demand snake oil, snake oil will be provided.

End rant.


2 thoughts on “When the UI Masks “Real” Issues”

  1. The topic of video analytics configuration is a favorite of mine. When developing our analytics, we realized we didn’t know how to test them deterministically from release to release. We wanted to make sure that changes to the algorithms didn’t change their behavior, but quickly realized that if we relied only on functional testing, different operators (that is, the test tech who was configuring the analytics algorithm for a test) had different levels of skill at setup, and thus could produce quite different results, even using the same test procedures.

    Also, considering that analytics are often applied to mission-critical applications, there was a property of “trustworthiness” we had to convey to the user: that the analytic would in fact work as *they* intended during live operation. In other words, how confident the customer felt about the configuration experience was an important aspect of operation.

    The big ah-ha came when it was pointed out to us that the user was a component of the system under test, and thus their training was being scrutinized as well as the algorithm. (Think of a pilot flying an airplane. The pilot is effectively part of the system).

    Eventually we decided to apply strict standards to the context of use in testing (which includes the user). We normalized operator training to remove variability (this was preceded by usability testing designed to identify the UI issues that might cause problems with effectiveness, and training was designed to optimize around those issues). We also standardized a video clip library to use in testing (in our case, i-Lids). We then adopted guidelines and metrics to determine “quality in use”, which included effectiveness, productivity, safety and satisfaction (see ISO 9241-11).

    Through the use of both functional testing and QIU metrics, we are able to deterministically test video analytics with greater confidence.

    (Credit is due to Steve Loveless and Sukhpreet Gill for the majority of this work at Pelco. And to Brent Auernheimer for his guidance)

  2. Good points! I especially like the airplane analogy; even if the user recognizes the hardware (the PC, the mouse, keyboard and screen), the system is highly specialized and WILL require training. No airplane manufacturer is expected to do a usability test on a random population – everyone operating an airplane is expected to have been through training, and so the UI and training go hand in hand.
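The first comment above essentially describes a regression harness: run each release against the same annotated clip library and compare the scores. Below is a minimal sketch of that idea in Python, assuming a deliberately simple format (one JSON list of event timestamps per clip, with one annotation file and one results file per clip per release); the directory layout and the `score_clip`/`score_library` helpers are hypothetical illustrations, not the actual Pelco tooling or the i-LIDS format.

```python
# Minimal sketch: release-to-release regression scoring for video analytics,
# run against a fixed, annotated clip library. File names, layout and the
# JSON-list-of-timestamps format are assumptions made for illustration.
import json
from dataclasses import dataclass
from pathlib import Path


@dataclass
class Score:
    true_positives: int = 0
    missed: int = 0
    false_alarms: int = 0


def score_clip(detected, ground_truth, tolerance_s=2.0):
    """Match detected event timestamps (seconds) against annotated ones.

    A detection counts as a hit if it falls within `tolerance_s` of an
    as-yet-unmatched ground-truth event; anything else is a false alarm,
    and unmatched ground-truth events are misses.
    """
    remaining = list(ground_truth)
    score = Score()
    for t in sorted(detected):
        match = next((g for g in remaining if abs(g - t) <= tolerance_s), None)
        if match is not None:
            remaining.remove(match)
            score.true_positives += 1
        else:
            score.false_alarms += 1
    score.missed = len(remaining)
    return score


def score_library(results_dir, annotations_dir):
    """Aggregate scores over every clip in the standardized library.

    Expects one `<clip>.json` per clip in each directory, each file holding
    a plain list of event timestamps in seconds.
    """
    total = Score()
    for ann_file in sorted(Path(annotations_dir).glob("*.json")):
        truth = json.loads(ann_file.read_text())
        res_file = Path(results_dir) / ann_file.name
        detected = json.loads(res_file.read_text()) if res_file.exists() else []
        clip_score = score_clip(detected, truth)
        total.true_positives += clip_score.true_positives
        total.missed += clip_score.missed
        total.false_alarms += clip_score.false_alarms
    return total


if __name__ == "__main__":
    # Compare two releases on the same clips; flag a regression if the
    # candidate misses more events or raises more false alarms than baseline.
    baseline = score_library("results/release_1.0", "annotations")
    candidate = score_library("results/release_1.1", "annotations")
    regressed = (candidate.missed > baseline.missed
                 or candidate.false_alarms > baseline.false_alarms)
    print(baseline, candidate, "REGRESSION" if regressed else "OK")
```

The point is simply that once the clips and their annotations are fixed, “did release 1.1 regress relative to 1.0” becomes a deterministic, repeatable question, independent of how skilled the person running the test happens to be.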
