As a programmer, and as a human being, I make mistakes every day (I know it’s hard to believe… 🙂 ). My programming mistakes are extremely visible – a giant “Unhandled Exception” box usually tells me, the clients and everyone else that I made a mistake. When a salesperson quotes the wrong price, or sends the wrong receipt, the errors are also visible (if not immediately, then in time).
But what about marketing people? Typos, sure, but those are trivial to detect and fix before release. What about picking the “wrong” color, font or layout? It is very hard to argue that picking one font over another (say Helvetica vs. Garamond) is objectively right or wrong. Granted, if you’ve decided on a color for your corporate identity, then you have to pick the same hue over and over again, but what is the CORRECT hue? It’s all very touchy-feely, and as a programmer I have a really hard time tolerating it when marketing people claim that this or that choice is the “right one”.
We use metrics all the time, and marketing people are no different, but do they understand that the ruler they are using may not be a reliable measuring instrument?
In physics we are generally careful not to report more digits than the measuring instrument warrants. A good example is a digital bathroom scale. It might show that you weigh 192.43112456 lbs, whereas an analog scale might be just as exact mechanically, but because you can’t read the needle very precisely you would report something like 192.5 lbs. Now assume the digital scale has a systematic bias: it adds 3 pounds to every measurement. You might be fooled by the assertive, seemingly accurate readout, and if the scale were perfectly consistent (apart from the bias) you might feel the reading was extremely accurate when, in fact, it is not. The analog scale might well be the more accurate of the two.
Now, if the digital scale is very expensive, we might be even more reluctant to use the cheap analog one as a check on whether something is wrong with our digital tool. We’d be even less inclined to perform the test at all if we are extremely confident in our first measurement and the cost of measuring with the analog scale is also high. And perhaps we’d trust the numbers even more if the digital scale gave us the result we already expected to be true.
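The precision/accuracy trap above is easy to see in a quick simulation. This is a minimal sketch with made-up numbers (the 3 lb bias is from the example; the noise levels and the “true” weight are my own assumptions): the digital scale repeats to a fraction of a pound, so it *looks* trustworthy, yet every reading carries the hidden bias, while the crude analog readings scatter around the truth.

```python
import random

random.seed(42)

TRUE_WEIGHT = 189.5  # the weight we are actually trying to measure, in lbs

def digital_scale(true_weight):
    """Precise to many digits, but with a hidden +3 lb systematic bias."""
    bias = 3.0
    noise = random.gauss(0, 0.001)  # tiny random error: readings are very repeatable
    return true_weight + bias + noise

def analog_scale(true_weight):
    """No bias, but you can only read the needle to the nearest half pound."""
    noise = random.gauss(0, 0.2)  # eyeballing error when reading the dial
    return round((true_weight + noise) * 2) / 2  # round to nearest 0.5 lb

digital = [digital_scale(TRUE_WEIGHT) for _ in range(5)]
analog = [analog_scale(TRUE_WEIGHT) for _ in range(5)]

# The digital readings agree with each other to several decimal places
# (high precision), yet each one is about 3 lbs off (low accuracy).
print([f"{w:.4f}" for w in digital])

# The analog readings look crude, but they center on the true weight.
print(analog, "average:", sum(analog) / len(analog))
```

The point of the sketch: repeatability is not the same thing as correctness, and no number of extra digits on the readout will reveal a systematic bias. Only comparing against an independent measurement (the cheap analog scale) can do that.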
Some people have this idea that since they’ve spent a lot of time/money on something, their metrics ought to be more reliable than – say – asking a cab driver. Just because something is expensive and cumbersome does not mean that it is effective or accurate.
I think that we need to listen to the market, and take our cues from it, but we need to thoroughly understand WHY the “market is demanding it”. If we cannot provide a useful solution, or if providing one will jeopardize other features, then we have to decide what counts as important and what doesn’t.
To the mediocre marketing person the solution is quite simple – receive emails about features, ask R&D to create perfect solutions, and then advertise that you’ve done so. But that is both dogmatic and trivial, to the point that I doubt we need to pay anyone to perform these tasks. The marketing people will most likely spend a lot of money and time justifying their stance, which brings us back to the analog vs. digital scale. I suspect that some people will keep “measuring” until they find a scale that supports their initial claim.
It also seems to me as if marketing people insist on “right vs. wrong” when clearly some of them must be getting it wrong all the time! Just as we hold postmortems on bugs and errors, where it becomes obvious what went wrong, we understand – intuitively – that we are going to continue to make mistakes, and that just because hindsight is 20/20 it does not mean that our foresight will be.
To quote Steve Jobs, quoting Henry Ford:

“If I asked people what they wanted, they’d ask for a faster horse.”