Do Managers in Software Companies Need to Code?

I think so.

The horrible truth is that there are good and bad coders, good and bad managers, and easy and hard projects.

A project taken on by good coders and good managers can fail simply because it was too complex and too intertwined with systems the team had no control over. You could argue that the team never should have taken on the task, but that’s why you warn the customer of the risk of non-completion and bill by the hour.

When you research the skills needed to be a good software project manager, there seems to be an implied truth that the coders simply do what they are told, and that coding/design errors are always the manager’s fault. At the same time, you’ll find people complaining about micromanagement and about not letting the coders find their own solutions. I find these two statements at odds with one another.

Coders will sometimes do things that are just wrong, yet the result still “works”. How do you handle these situations? Do you, as a manager, insist that the work be done “correctly”, which the coder may consider a matter of taste rather than correct vs. incorrect? Or do you leave the smelly code in there and keep the peace?

If you don’t know how to code and you’re the manager, you won’t even notice that the code is bad. You’ll be happy that it “works”. Over time, though, the cost of bad code weighs down productivity, the errors pile up, and the good coders leave, because there is no reward for quality and they’re fed up with refactoring shitty code. If you have great coders, you might not run into that situation, but how do you know whether you have great coders if you can’t code?

Maybe you’re the best coder in the world in a managerial position, facing some smelly code. You might consider two approaches: scold the coder(s) and demand that they do it the “correct” way (which is then interpreted as micromanagement), or, if you’re exhausted from the discussions, just do the refactor yourself on a Sunday while the kids are in the park.

In the real world, though, the best solution is for the manager to have decent coding skills and to possess that rare ability to argue convincingly. The latter is very hard to do if you do not understand the art of coding. Furthermore, I don’t think coders are uniquely handicapped at being persuasive, certainly not when dealing with other coders (n00b managers wearing a tie are universally despised in the coding world).

Every coder is different, and acts differently depending on the time of day, week or year. Some coders have not fully matured, some are a little too ripe, and some just like to do things the way they always did (or “at my old job we…”). Different approaches are needed to persuade different people.

I must confess that this is only what I have observed. The few times I have worn anything resembling a managerial hat, I have walked away universally despised and feared as some sort of “Eye of Sauron” who picks up on the smallest error and shows no mercy when dishing out insults. But in theory, at least, I think I know how things ought to be.

So, if you are managing software projects and interacting with coders, you need to know how to code.


Debtors Prison

There’s a wonderful term called “technical debt”. It’s what you accrue when you make dumb mistakes and, instead of correcting the mistake and taking the hit up front, you take out a small loan, patch up the crap with spittle and cardboard, and ship the product.

Yay! Free money!!!

Outside R&D, technical debt doesn’t seem to matter. It’s like taking your family to a restaurant and racking up more debt; the kids don’t care. To them, the little credit card is a magical piece of plastic, and they wonder why you don’t use it more often. If they had the card, it would be new PlayStations and drones every day.

Technical debt is a product killer. As the competition heats up, the company wants to “rev the engine”, but all the hacks and quick fixes mean that as soon as you step on the gas, the damn thing falls apart. The gunk and duct tape gave you a small lead out of the gate, but in the long run the weight of all that debt will catch up. It’s like a car that does 0-60 in 3 seconds but then dies after 1 mile of racing. Sure, it might enter the race again, limp along for a few rounds, then head back to the garage, until it eventually gives up and drops out.

[Image: a duct-taped car fix. Might get you home, but you won’t win the race with this fix.]

Why does this happen?

A company may masquerade as a software company and simply pile more and more resources into “just fix it” and “we need” tasks that ignore the real need to properly replace the intake pipe shown above. “If it works, why are you replacing it?”, the suit will ask. “My customer needs a sunroof, and you’re wasting time on fixing something that already works!”

So, it’s probably wise to look at the circumstances that caused the company to take on the debt in the first place. An actual software company might take technical debt very seriously, and very early on it will schedule time for three distinct tasks:

  1. Ongoing development of the existing product (warts and all),
  2. Continued re-architecting and refactoring of modules,
  3. Development of the next generation product/platform

Any given team (dependent on size, competency, motivation, and guidance) will be able to deliver some amount of work X. The company sells a solution that requires the work Y. Given that Y < X, the difference can be spent on #2 and #3. The bigger the difference, the better the quality of subsequent releases of the product. If the difference is small, then (absent team changes), the product will stagnate. If Y > X then the product will not fulfill the expectations of the customer. To bridge the gap until the team can deliver an X > Y, you might take on some “bridge debt”. But if the bridge debt is perpetual (Y always grows as fast or faster than X), then you’re in trouble. If Y > X for too long, then X might actually shrink as well, which is a really bad sign.

Proper software architecture is designed so that when more (competent) manpower is added, X grows. Poor architecture can lead to the opposite result. And naturally, incompetent maintenance of the architecture itself (an inevitable result of a quick-fix culture) will eventually lead to the problematic situation where adding people leads to lower throughput.

A different kind of “debt” is the inability to properly value the IP you’ve developed. The cost of development is very different from the value of the outcome. E.g. a company may spend thousands of hours developing a custom log handler, but the value of such a thing is probably very low. This is hard to accept for the people involved, and it often leads to friction when someone points out that the outcome of 1000 hours of work is actually worthless (or possibly even provides a net negative value for the product). A lot of (additional) time may be spent trying to persuade ourselves that we didn’t just flush 1000 hours down the drain, as we’re more inclined to believe a soothing lie than the painful truth.


A company that wants to solve the debt problem must first take a good look at its core values. Not the values it pretends to have, but the actual values: what makes management smile, and how it handles the information given to it. Does management frown when a scalability issue is discovered? Do they yell and slam doors, and point out 20 times that “we will lose the customer if we don’t fix this now!”? The team lead hurries down the hallway, and the team pulls out cans of Pringles and starts ripping off pieces of tape.

The behavior might make the manager feel good. The chest-beating alpha-manager put those damn developers in their place and got this shit done! Over the long run, however, it will lead to three things: 1) developers will do a “quick fix”, because management wants this fixed quickly rather than correctly, 2) developers will stop providing “bad news”, and 3) developers who value correctness and quality will leave.

To the manager, the “quality developer” is not an asset at all. It’s just someone who wants to delay everything to fix an intake that is already working “perfectly”. So over time, the company will get more and more duct-tapers and hacks, and fewer craftsmen and artisans.

The only good thing about technical debt (for a coder) is that it belongs to the company, and not to the employees. Once they’re gone, they don’t have to worry about it anymore. Those that remain do, and they now have to work even harder to pay it back.


Why Products Go Bad

The simpleton will equate commercial success with quality.

I don’t.

A product can be well made even if it is not commercially successful, and vice versa. The Microsoft Zune HD, for example, was a great product. Hell, Microsoft’s Phone OS is/was good too. In contrast, Kinect is/was a terrible product. It promised the world, and it was shit. Johnny Lee proved that Nintendo’s controllers were fucking awesome, and Microsoft wanted some of that goodness. Most people at Microsoft knew how piss poor Kinect was, most devs knew too, but management did not want to be upstaged by Nintendo, so they released this fine piece of junk. Molyneux flat out lied about the capabilities of the thing (and he was not the only one, I’m sure).

Sometimes, and perhaps too often, we see products that have the potential to be “good”, and perhaps already are good, but then, gradually, as new generations of the product are released, they turn to utter crap. Why does this happen? You would expect the opposite: that the next generation of a product improves on the old.

My own experience is that I am generally considered “an overthinker”. Instead of just shutting up and doing what “the customer asks”, I think about the ramifications over the longer term. I try to interpret what the real problem is, and I spend a long time thinking about a good solution. I spend a lot of time talking about the problem with my peers, drawing on whiteboards. I think about the issues as I drive to the office, while I fly across the Atlantic. And sometimes, I change my mind. Sometimes, after long discussion, after “everyone agrees”, I see things in a new light and change my mind. And it pisses people off.

In the general population, I believe that there is a large percentage who just want to be told what to do, do what they are told and then at 5.15 pm drive home and watch TV, happy and content that they did what they were told all day. To the majority, “a good day” is doing as much of what you’re being told as possible, regardless of what the task is. They do not want to be interrupted by assholes that can’t offer them a promotion or a raise, who critique the “what” or the “how” – regardless of merit. The “customer” to them, is not the user of the product, the “customer” is their immediate supervisor. Make that guy happy, and you move upwards.

Telling people that unchecking “always” does not mean “never” makes them angry. They can understand the logic (not always = sometimes), but they are angry that you can’t understand that their career would be jeopardized if they pointed that out when their supervisor told them to make the change. They will correct the problem if a supervisor tells them to, even if it is staring them in the face that the change is useless to the end user. Doesn’t matter. The end user does not dish out promotions or raises.

As these non-thinkers move up, they get to supervise people like me (JH: no, this has not happened at OnSSI). And that’s where it gets really bad. Now they are in a position where they are told what to do, and they are telling someone else to do that thing (nirvana). Then they learn that the asshole doesn’t want to listen and do what he is told like “everyone else” does, so eventually the “overthinker” is replaced with a non-thinker. This continues until all the thinkers are gone, and the company or branch then does exactly what the customer asks.

When you see features that flat out do not work and never did work, and there’s no motivation to fix that issue, then you have to pause, and consider if you have enough thinkers among the non-thinkers.

Because you need both.

You need lying sales and marketing people (who know just how far the truth can be stretched, or who can conjure a reality distortion field), you need asshole genius programmers who know iOS, gstreamer, ffmpeg and Qt, you need vain and arrogant designers who can draw the best damn icons and keep everything consistent across the apps, and you need dried up, mummified sysops to run IT.

But most of all, you need to make sure that these people think, and care about the end user, instead of just the title on their business card.


Sing Along, Everyone!

An old colleague of mine made a post about trust, freedom and having everyone humming along to the same tune, and how the company’s (I suppose superior) culture could not easily be emulated by competitors.

It made me think of a book I read this summer: “Why Smart Executives Fail” by Sydney Finkelstein. Chapter 7 is called “Delusions of a Dream Company”.

Here’s a choice excerpt:

When businesses start losing touch with reality because of an arrogant belief in their own superiority and their company mission, they tend to adopt a pervasively positive attitude. The more insular the company’s outlook, the more buoyant its managers will tend to be about the company’s prospects.


Product Management

In May 2008, Mary Poppendieck did a presentation on leadership in software development at Google. In it, she points out that at Toyota and 3M the product champions are “deeply technical” and “not from marketing”. The reason this works, she states, is that you need to “marry the technical possibilities with what the market wants”. If the products are driven by marketing people, the engineers will constantly be struggling to explain why perpetual motion machines won’t work, even if the market is screaming for them. So, while other companies are building all-electric vehicles and hybrids, your company is chasing a pipe-dream.

Innovative ideas are not necessarily technically complex, and may not always require top technical talent to implement. However, such ideas are often either quickly duplicated by other players, or rely on user lock-in to keep the competition at bay. E.g. Facebook and Twitter are technically simple to copy (at small scale), but good luck getting even 100 users to sign up. Nest made a nice thermostat, but soon after the market offered cheaper alternatives. Same with Dropcam. With no lock-in, there is no reason for a new customer to pick Dropcam over something cheaper.

To be truly successful, you therefore need to have the ability to see what the market needs, even if the market is not asking for it. If the market is outright asking, then everyone else hears that, and thus it’s hardly innovation. That doesn’t mean that you should ignore the market, obviously you have to listen to the market and offer solutions for the trivial requests that do make sense (cheaper, better security, faster processing, higher resolution and so on), and weed out the ones that don’t (VR, Blackberry, Windows Mobile). It doesn’t matter how innovative you are, if you don’t meet the most basic requirements.

However, it’s not just a question of whether something is technically possible; it’s also a question of whether your organization possesses the technical competency and time to provide a solution. If your team has an extremely skilled SQL programmer, but the company uses 50% of her time to pursue pipe-dreams or work on trivialities (correcting typos, moving a button, adding a new field), then obviously less time is available to help the innovation along.

Furthermore, time is wasted by doing things in a non-optimal sequence and by failing to group related tasks into one update whenever possible. This seems to happen when technical teams are managed by non-technical people (or “technical” people who are trained in unrelated areas). Eventually, the team will stop arguing that you really should install the handrail before the hydrant, and simply either procrastinate or do what they are told, at great direct (and indirect!) cost.


At GOTO 2016, Mary stated that 50% of decisions made by product managers are wrong, and that 2/3 of what is specced is unnecessary and provides no value to the end user. Therefore, she argues, development teams must move from being “delivery teams” to “problem solving teams”, and discard the notion that the product manager is a God-like figure able to hand a long list of do-this and do-that to his subordinates. Instead, the product manager must

  • be able to listen to the market,
  • accurately describe the technical boundaries and success criteria for a given problem, and
  • be able to make tradeoffs when necessary.

To do this, I too, believe the PM must be highly technical so that they have the ability to propose possible solutions to the team (when needed). Without technical competency (and I am not talking about the ability to use Excel like a boss here), the PM will not be able to make the appropriate tradeoffs and will instead engage in very long and exhaustive battles with developers who are asked to do something impossible.

Is Mary correct? Or does she not realize that developers are oddballs and mentally incapable of “understanding the market”? Comments welcome.


Cost of Error

When I got my first computer, the language it offered was BASIC. Ask any good programmer, and they’ll tell you that BASIC is a terrible language, but it got worse: my next language was 68K assembler on the Commodore Amiga, and with blatant disregard to what Commodore was telling us, I never used the BIOS calls. Instead, I bought the Amiga Hardware Reference Manual and started hitting the metal directly. During my time in school, I was taught Pascal, C++ and eventually I picked up a range of other languages.

What I’ve learned over the years is that the cost of a tool depends on two things: The time it takes to implement something, and (often overlooked) – the time it takes to debug when something goes wrong.

Take garbage collection, for example. The idea is that you will not have memory leaks because there’s no new/delete or malloc/free pair to match up; the GC knows when you are done with something you allocated, and frees it when needed. This, surely, must be the best way to do things. You can write a bunch of code and have no worries that your app will leak memory and crash. After all, the GC cleans up.

But there are some caveats. I’ve created an app that will leak memory and eventually crash.

using System;

namespace LeakOne
{
    class Program
    {
        class referenceEater
        {
            public delegate void Evil();
            public Evil onEvil;

            ~referenceEater()
            {
                Console.WriteLine("referenceEater finalizer");
            }
        }

        class junk
        {
            public void noShit() { }

            public void leak()
            {
                for (int i = 0; i < 100000; i++)
                {
                    referenceEater re = new referenceEater();
                }
            }

            ~junk() { }
        }

        static void Main(string[] args)
        {
            for (int i = 0; i < 1000000; i++)
            {
                junk j = new junk();
                j.leak(); // flood the finalizer queue with referenceEaters
            }
        }
    }
}
What on earth could cause this app to leak?

The answer is the innocent looking “Console.WriteLine” statement in the referenceEater finalizer. The GC runs in its own thread, and because Console.WriteLine takes a bit of time, the main thread will create millions of referenceEater objects and the GC simply can’t keep up. In other words, a classic producer/consumer problem, leading to a leak, and eventually a crash.

Running this app, the leak is fairly apparent just by looking at the task manager. On my laptop it takes only 5-10 minutes for the app to crash (in 32-bit mode); in 64-bit mode the app would probably run for days, slowing things down, until eventually crashing.

It’s a bitch to debug, because the memory usage over a loop is expected to rise until the GC kicks in. So you get a see-saw pattern that you need to watch for a fairly long time to determine, without a doubt, that you have a leak. To make matters worse, the leak may show up on busy systems but not on the development machine, which may have more cores or be less busy. It’s basically a nightmare.


There are other ways for .NET apps to leak. A good example is forgetting to unsubscribe from a delegate, which means that instead of matching new/delete, you now have to match subscriptions and unsubscriptions. Another is fragmentation of the Large Object Heap (not really a leak, but it will cause memory use to grow and can ultimately kill the app).

The C++ code I have can be tested for leaks by just wrapping it in a loop: run the loop a million times, and when it’s done, you should have exactly the same amount of memory allocated as before the loop.

I am not proposing that we abandon garbage collection, or that everything should be written in C++, not by a long shot. As an example, the stress test for our web-server (written in C++) was done using node.js. It took less than 2 hours to put together, and I don’t really care if the test app leaks (it doesn’t). There are a myriad of places where I think C# and garbage collection are appropriate. I use C# to make COM objects that get spawned and killed by IIS, and it’s a delight to write those without having to worry about the many macros that would be required if I had done the same in C++.

With great care and attention, a C# app need never leak, but the reality is that programmers make mistakes, and the cost of those mistakes should be taken into consideration when determining which tool is appropriate for which task.

Brainteasers at SpaceX

Allegedly, Elon Musk asks people a brain-teaser during the hiring process at SpaceX. I’ve been asked a few myself in my day. Failed them all. “How do you determine if a figure is convex or concave?”, “You have 2 buckets…”, “Three fishermen return from the sea”.

What I now understand is that there are some basic principles involved in these: the earth is a sphere, and – the most difficult one to grasp – the fact that someone doesn’t know also has informational value. So I am not really impressed by someone posing these questions as if they had come up with them right then and there. In fact, my old boss (John Blem) posed a very similar question to me when I was hired. I suspect he got the question from Mensa (where he is a member), and now, 10 years later, Elon Musk is trying to weed out the posers with a variation of that same old riddle. The man must be a genius.

The real question to ask during an interview is this: “Do you make a new pot of coffee when you take the last cup?”. If the applicant answers “yes”, the interview ends right there. You do not want to hire an immoral liar. I never make a new pot if I take the last cup. I proudly pour the last cup and turn the damn coffee machine off. And I stand by my choice; the truth is that 99% of people walk in, see that there is just about one cup left (if there are 1.5 cups left, they are OK), and decide that the hassle of brewing a new pot is not worth it. Not only that. If they walk out and wait 30 minutes, then some other poor fool must have drunk the last cup of stale coffee, and therefore new, fresh coffee is available. Alas, it’s better to just walk back to your desk and wait.

So, what happens is that the schmuck who takes the last cup, which by now is thick as molasses, gets punished. He has to make new, fresh coffee for all the assholes who walked in, saw the trap, and backed out. So while he is trying to down a cup of revolting goo, everyone else gets to party with delicious, freshly brewed joe. Right until there is just one cup left. That gets to sit in the pot, until it too is strong enough to awaken the dead.

This is a tremendous waste of resources. The coffee sits on the burner for hours on end, wasting precious energy like there’s no tomorrow. I could probably charge a few Teslas per day with the amount of energy being wasted on that damn coffee machine.

Yet, in every office I’ve been to, someone took the time to write “If you take the last cup of coffee, make a new pot”, print that shit out, and hang it by the coffee maker. It’s as common as the “wash your own dishes” or “clean up after yourself” printouts. I’d like to know if that ever worked, for anyone, anywhere. I very much doubt that the sign (usually written in Comic Sans – to make it look hand-written, yet I-am-using-a-computer-and-printer-because-I-am-a-professional-and-I-expect-to-be-taken-seriously), will make people go “oh, I wasn’t aware of that rule, but now that I see the words written down, I will change my ways”.

Companies must get one of those machines that brew one cup at a time. For the environment. Both of them.

If you are a job seeker and a company has one of those old machines and one or more of those “your mom doesn’t work here” signs, it’s a clear sign that you should run for the hills. Run, and don’t look back.