Building a New Thing

Take a look at this drawing.

[Image: a rough architectural sketch]

This is an architectural sketch, and it looks as if it were drawn hastily, without much thought.

What you’re not seeing is the other 100 drawings that were discarded. You’re not seeing the light-bulb filaments that didn’t work.

Now take a look at this:

[Image: a more refined architectural drawing]

This is a more refined image (of a different building), and it probably took longer to produce than the sketch.

If you’re building the same building over and over, then you’ll use the #2 drawing and just tweak it a little here and there. If there’s an issue with the ventilation, you’ll create a case, assign it to someone, and then track its progress. Eventually you’ll mark it as “done”.

But if you’re building a new thing, you gotta start with #1. You cannot afford the cost of pretty, detailed drawings when you’re going through 100 different designs and concepts. You can’t use Jira for phase #1; it’s too slow and too cumbersome, just as you wouldn’t use AutoCAD to draw concept sketches. Pen and paper is 100x faster, and you’ll need that speed to get through 100 concepts.

Sadly, what often happens is that the architect shows his sketches to people who do not understand the process, and they’re underwhelmed. They expect the #2 drawing, but demand the agility and speed of the #1 process.

This leads to a situation where just 2 or 3 concepts are tried (or maybe they just go with one), and because the concept phase is now expensive, there’s a tendency to stick with what we’ve got, even if it’s sub-par and doesn’t spark any joy.

A good architect’s sketches are anchored in reality, yet produce remarkable buildings that are pleasant to look at and live around. Bad architects produce ideas that aren’t feasible to actually build or – perhaps even worse – design buildings based solely on knowledge of the technology, with no empathy for or understanding of human nature.

You’re going to need detailed drawings, but not until you’ve done 100 sketches.

Pareto Principle

…or the 80-20 rule as it is also known. I am not fully convinced that it holds true; that 80% of your profits come from 20% of your clients, that 80% of the work is done by 20% of the staff, and that 80% of the peas come from just 20% of the pods. But when designing software, I think you need to keep the Pareto principle in mind.

As we add features to our product, we usually sit down and have a meeting about how to design the UI that lets a user accomplish some task. As we wireframe the UI, people around the table come up with additional ideas and point out weaknesses in the design. But quite often, the longest discussions are about what I call “fringe use”. The reality is that people tend to imagine they will use a lot more functionality than they actually do. Since the feature is not yet in the product, we don’t know whether they will use it, and there isn’t any scientific way of finding out: simply asking people doesn’t work, and we can’t really do A/B testing on software such as ours.

We might not spend 80% of our time discussing and designing UIs for the 20% (or less) who will actually use the feature, but we certainly spend far more time designing for the minority than the expected revenue from these border cases seems to justify.

While the 20% might be getting a good deal, there are more serious consequences. The 80% who really don’t care about the 20-percenters’ special needs are getting a shittier deal. Time is a limited resource, and every minute we spend on the fringe is a minute stolen from the normal user’s experience. At times it also means that the interface for Mr. Normal becomes cluttered with a lot of irrelevant options, and more options means a UI that is more taxing on the old brain.

I think we need to get back to spending 80% of the design time on the 80% of the users. Am I wrong?

Developing a Great UI

Apart from the seemingly obvious requirement that a UI has pleasing aesthetics and meaningful mechanics, along with features that let the user feel in control, there is also a requirement that the technical foundation is sound. The term “polishing a turd” is sometimes fitting when it comes to software.

A very common principle of software design is separation of presentation and data. This made a lot of sense in the good old days when you would need to write your own combobox (drop-down list). It would be completely insane to put the code that stores a value on a server inside the combobox code; if you did, your combobox would be good for one purpose only. Today, we create standard comboboxes without even considering the amount of code behind them. We re-use the combobox to show vehicle makes, user types, colors and so on.
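To make that concrete, here’s a minimal sketch of such a reusable control in TypeScript. The names (ComboBoxProps, bindComboBox) are made up for illustration – the point is only that the control knows nothing about what the items mean or where they are stored.

```typescript
// A reusable combobox: it only knows how to show items and report a pick.
interface ComboBoxProps<T> {
  items: T[];                  // the data to show – vehicle makes, user types, colors...
  label: (item: T) => string;  // how to render each item as text
  onSelect: (item: T) => void; // what to do when the user picks one
}

function bindComboBox<T>(select: HTMLSelectElement, props: ComboBoxProps<T>): void {
  select.innerHTML = "";
  props.items.forEach((item, index) => {
    const option = document.createElement("option");
    option.value = String(index);
    option.text = props.label(item);
    select.appendChild(option);
  });
  select.onchange = () => props.onSelect(props.items[Number(select.value)]);
}

// The same control serves very different data (the variables below are hypothetical):
// bindComboBox(makesSelect,  { items: ["Volvo", "Saab"], label: m => m,      onSelect: saveMake  });
// bindComboBox(colorsSelect, { items: colors,            label: c => c.name, onSelect: saveColor });
```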

But how far should this principle be pushed?

Say you have a popup dialog: is it OK to write the “store on server” code in the OnClick handler of the OK button? I think it is a bad idea. The dialog is a representation of some known object, so I’d much prefer that the code to save the object to a server is part of that object’s class, and the OnClick simply calls “dataobject.Save()”.
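Here’s a minimal sketch of that shape – the Account class and the /api/accounts endpoint are invented for illustration, not taken from any real codebase.

```typescript
class Account {
  constructor(public name: string, public accountType: string) {}

  // The object knows how to persist itself; the dialog does not.
  async save(): Promise<void> {
    await fetch("/api/accounts", {                 // hypothetical endpoint
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ name: this.name, accountType: this.accountType }),
    });
  }
}

declare const okButton: HTMLButtonElement;  // the dialog's OK button
declare const dialogAccount: Account;       // the object the dialog is editing

// The click handler stays a one-liner and knows nothing about servers:
okButton.onclick = () => { void dialogAccount.save(); };
```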

The principle can be pushed even further, but this is where I start to get uneasy. Say you have an object that represents an account. The account has various attributes such as name and password. Access to the data happens through what is sometimes called a Controller, a Presenter or a ViewModel. The idea is that the controller adds an additional layer of information that pertains to the presentation. Say you change the account-type attribute to “guest”; then we might want to disable the “create account” button. To make this happen, the controller tells the view that the “IsCreateAccountEnabled” attribute has changed whenever the account type changes.
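A minimal sketch of such a controller, with invented names (AccountViewModel, onChange) rather than any particular MVVM framework’s API:

```typescript
type Listener = (property: string) => void;

class AccountViewModel {
  private listeners: Listener[] = [];
  private _accountType = "user";

  onChange(listener: Listener): void { this.listeners.push(listener); }
  private notify(property: string): void { this.listeners.forEach(l => l(property)); }

  get accountType(): string { return this._accountType; }
  set accountType(value: string) {
    this._accountType = value;
    this.notify("accountType");
    this.notify("isCreateAccountEnabled");  // presentation-level attribute derived from the data
  }

  // "guest" accounts may not create new accounts, so the button should be disabled.
  get isCreateAccountEnabled(): boolean { return this._accountType !== "guest"; }
}
```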

This allows you to write automated tests that check to see if IsCreateAccountEnabled truly becomes false when the account type is “guest”. 
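The check might look like this – plain Node assertions, no particular test framework assumed, building on the AccountViewModel sketch above:

```typescript
import { strict as assert } from "assert";

const vm = new AccountViewModel();
assert.equal(vm.isCreateAccountEnabled, true);   // normal accounts may create accounts

vm.accountType = "guest";
assert.equal(vm.isCreateAccountEnabled, false);  // guests may not – the button should be disabled
```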

The fantasy is that you can then hand the controller to an awesome UI designer who will wire the different things together and it will all be cool. But here’s the catch. What if changing the account type to “admin” requires a roundtrip to the server that could take seconds? Suddenly, when you click the dropdown box, you have to wait for the server to respond. You did not expect that a dropdown would cause a one-second holdup, locking the UI. You click again before the UI becomes responsive; Windows queues that mouse click, triggering a second roundtrip and wasting another second of your life. Why on earth did the designer not take this into consideration?

How would the designer know that setting accounttype to “admin” would take 1 second? Nothing in the controller tells the designer that the accounttype attribute is “slow”. Perhaps the designer should assume that all attributes are slow. But really? Is that what we should do? There is a cost to making things asynchronous, and a lot of designers won’t know how to wire things that happen asynchronously.
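One possible answer, sketched here with invented names and a made-up endpoint, is to stop pretending the slow attribute is a plain property and make the roundtrip explicit in the contract, so the view can show a spinner and ignore clicks while the call is in flight:

```typescript
class AccountController {
  private _accountType = "user";
  busy = false;  // the view can bind a spinner to this and ignore clicks while it's true

  get accountType(): string { return this._accountType; }

  // Changing the type requires a server roundtrip, so it cannot be a simple setter.
  async setAccountType(value: string): Promise<void> {
    this.busy = true;
    try {
      await fetch("/api/account/type", {           // hypothetical slow call (~1 second)
        method: "PUT",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ accountType: value }),
      });
      this._accountType = value;
    } finally {
      this.busy = false;
    }
  }
}
```

A signature like that at least tells the designer that something is slow; whether that ceremony is worth paying for every attribute is exactly the trade-off above.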

The same applies to arrays. Unless the designer knows a) how long the array is expected to be, b) how frequently it changes, and c) how long it takes to modify, the UX is dependent on nothing but luck.
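One way to write those expectations down, rather than leaving them as tribal knowledge, is to put them in the contract itself. The interface below is invented for illustration – the doc comments carrying the expectations are the point:

```typescript
interface LogEntryList {
  /** Expected to hold thousands of rows – the view should virtualize rather than render them all. */
  readonly entries: ReadonlyArray<string>;

  /** Fires several times per second – the view should batch or throttle redraws. */
  onEntriesChanged(listener: () => void): void;
}
```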

A truly awesome UI requires that the designer and the developer talk to each other every single day. It requires that the designer understands which arrays will be “long”, what side effects every attribute assignment has, and so on. In essence, what I am saying is that there is a much stronger dependency between the data, the controller and the view, and software shops need to take that into consideration.