Enter the ASUS “Tinker Board”. While I believe it is perhaps overpowered and possibly even too expensive (for my purposes), it is nice to see a major manufacturer chime in. I can’t wait to get my hands on one.
Sing Along, Everyone!
An old colleague of mine made a post about trust, freedom and having everyone humming along to the same tune, and how the company’s (I suppose superior) culture could not easily be emulated by competitors.
It made me think of a book I read this summer: “Why Smart Executives Fail” by Sydney Finkelstein. Chapter 7 is called “Delusions of a Dream Company”.
Here’s a choice excerpt:
When businesses start losing touch with reality because of an arrogant belief in their own superiority and their company mission, they tend to adopt a pervasively positive attitude. The more insular the company’s outlook, the more buoyant its managers will tend to be about the company’s prospects.
Product Management
In May 2008, Mary Poppendieck did a presentation on leadership in software development at Google. In it, she points out that at Toyota and 3M the product champions are “deeply technical” and “not from marketing”. The reason this works, she states, is that you need to “marry the technical possibilities with what the market wants”. If the products are driven by marketing people, the engineers will constantly be struggling to explain why perpetual motion machines won’t work, even if the market is screaming for them. So, while other companies are building all-electric vehicles and hybrids, your company is chasing a pipe-dream.
Innovative ideas are not necessarily technically complex, and may not always require top technical talent to implement. However, such ideas are often either quickly duplicated by other players, or rely on user lock-in to keep the competition at bay. E.g. Facebook and Twitter are technically simple to copy (at small scale), but good luck getting even 100 users to sign up. Nest made a nice thermostat, but soon after the market offered cheaper alternatives. Same with Dropcam. With no lock-in, there is no reason for a new customer to pick Dropcam over something cheaper.
To be truly successful, you therefore need the ability to see what the market needs, even if the market is not asking for it. If the market is outright asking, then everyone else hears that too, and thus it’s hardly innovation. That doesn’t mean you should ignore the market: obviously you have to listen to it and offer solutions for the trivial requests that do make sense (cheaper, better security, faster processing, higher resolution and so on), and weed out the ones that don’t (VR, Blackberry, Windows Mobile). It doesn’t matter how innovative you are if you don’t meet the most basic requirements.
However, it’s not just a question of whether something is technically possible; it’s also a question of whether your organization possesses the technical competency and time to provide a solution. If your team has an extremely skilled SQL programmer, but the company uses 50% of her time to pursue pipe-dreams or work on trivialities (correcting typos, moving a button, adding a new field), then obviously less time is available to help the innovation along.
Furthermore, time is wasted by doing things in a non-optimal sequence and failing to group related tasks into one update whenever possible. This seems to happen when technical teams are managed by non-technical people (or “technical” people who are trained in unrelated areas). Eventually, the team will stop arguing that you really should install the handrail before the hydrant, and simply either procrastinate or do what they are told, at great direct (and indirect!) cost.
At GOTO 2016, Mary states that 50% of decisions made by product managers are wrong, and that 2/3 of what is being specced is not necessary and provides no value to the end user. Therefore, she argues, development teams must move from being “delivery teams” to “problem solving teams”, and discard the notion that the product manager is a God-like figure who is able to provide a long list of do-this and do-that to his subordinates. Instead, the product manager must:
- be able to listen to the market
- be able to accurately describe the technical boundaries and success criteria for a given problem
- be able to make tradeoffs when necessary.
To do this, I too believe the PM must be highly technical, so that they have the ability to propose possible solutions to the team (when needed). Without technical competency (and I am not talking about the ability to use Excel like a boss here), the PM will not be able to make the appropriate tradeoffs and will instead engage in very long and exhausting battles with developers who are asked to do something impossible.
Is Mary correct? Or does she not realize that developers are oddballs and mentally incapable of “understanding the market”? Comments welcome.
Cost of Error
When I got my first computer, the language it offered was BASIC. Ask any good programmer, and they’ll tell you that BASIC is a terrible language, but it got worse: my next language was 68K assembler on the Commodore Amiga, and with blatant disregard for what Commodore was telling us, I never used the BIOS calls. Instead, I bought the Amiga Hardware Reference Manual and started hitting the metal directly. During my time in school I was taught Pascal and C++, and eventually I picked up a range of other languages.
What I’ve learned over the years is that the cost of a tool depends on two things: the time it takes to implement something, and (often overlooked) the time it takes to debug when something goes wrong.
Take “Garbage Collection”, for example. The idea is that you will not have memory leaks because there’s no “new/delete” or “malloc/free” pair to match up. The GC will know when you are done with something you new’ed/malloc’ed and will call delete/free when needed. This, surely, must be the best way to do things. You can write a bunch of code and have no worries that your app will leak memory and crash. After all, the GC cleans up.
But there are some caveats. I’ve created an app that will leak memory and eventually crash.
using System;

namespace LeakOne
{
    class Program
    {
        class referenceEater
        {
            public delegate void Evil();
            public Evil onEvil;

            // Deliberately slow finalizer: Console.WriteLine takes just long
            // enough that the finalizer thread cannot keep up with allocation.
            ~referenceEater()
            {
                Console.WriteLine("referenceEater finalizer");
            }
        }

        class junk
        {
            public void noShit() { }

            // Allocates 100,000 finalizable objects as fast as possible.
            public void leak()
            {
                for (int i = 0; i < 100000; i++)
                {
                    referenceEater re = new referenceEater();
                }
            }

            ~junk() { }
        }

        static void Main(string[] args)
        {
            for (int i = 0; i < 1000000; i++)
            {
                junk j = new junk();
                j.leak();
            }
        }
    }
}
What on earth could cause this app to leak?
The answer is the innocent-looking “Console.WriteLine” statement in the referenceEater finalizer. Finalizers run on the GC’s dedicated finalizer thread, and because Console.WriteLine takes a bit of time, the main thread creates millions of referenceEater objects faster than the finalizer thread can process them. In other words, a classic producer/consumer problem, leading to a leak, and eventually a crash.
Running this app, the leak is fairly apparent just by looking at the task manager. On my laptop it only takes 5-10 minutes for the app to crash (in 32-bit mode), but in 64-bit mode the app would probably run for days, slowing things down all the while, until eventually causing a crash.
It’s a bitch to debug, because the memory usage over a loop is expected to rise until the GC kicks in. So you get this see-saw pattern, and you need to keep the app running for a fairly long time to determine, without a doubt, that you have a leak. To make matters worse, the leak may show up on busy systems, but not on the development machine, which may have more cores or be less busy. It’s basically a nightmare.
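One way to test the finalizer-backlog theory (a quick sketch of mine, not something from the original app) is to periodically force a collection and block until the finalizer thread has caught up. If memory stays flat with these lines added to Main’s loop, the finalization queue really was the bottleneck:

// Hypothetical check: add to Main's loop, e.g. every 100 iterations.
GC.Collect();                   // push finalizable objects onto the queue
GC.WaitForPendingFinalizers();  // block until the finalizer thread drains it
GC.Collect();                   // reclaim the now-finalized objects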
There are other ways for .NET apps to leak. A good example is forgetting to unsubscribe from a delegate, which means that instead of matching new/delete, you now have to match subscription and unsubscription of delegates. Another is fragmentation of the Large Object Heap (not really a leak, but it will cause memory use to grow, and ultimately kill the app).
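To make the delegate case concrete, here is a minimal sketch of my own (the Publisher/Subscriber names are hypothetical, not code from this post). The publisher’s event holds a reference to every subscriber through its invocation list, so a subscriber that never unsubscribes stays reachable for as long as the publisher lives:

using System;

class Publisher
{
    // The event's invocation list keeps every subscriber alive.
    public event EventHandler SomethingHappened;

    public void Raise() => SomethingHappened?.Invoke(this, EventArgs.Empty);
}

class Subscriber
{
    public Subscriber(Publisher p)
    {
        // Subscribing stores a delegate (which references 'this') in the
        // publisher's invocation list.
        p.SomethingHappened += OnSomethingHappened;
        // Without a matching "p.SomethingHappened -= OnSomethingHappened;"
        // this Subscriber remains reachable for as long as 'p' is alive.
    }

    private void OnSomethingHappened(object sender, EventArgs e) { }
}

class Demo
{
    static void Main()
    {
        var publisher = new Publisher();   // long-lived object
        for (int i = 0; i < 1000000; i++)
        {
            new Subscriber(publisher);     // "forgotten" subscription
        }
        // All one million Subscribers are still reachable via the event,
        // so the GC cannot collect any of them.
    }
}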
The C++ code I have can be tested for leaks by just creating a loop: run the loop a million times, and when we are done we should have exactly the same amount of memory in use as before the loop.
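For comparison, the closest C# analogue of that loop test I can think of (a rough sketch of my own, not the author’s C++ harness) forces a full collection before and after the loop and compares GC.GetTotalMemory. As described above, the GC’s see-saw behavior makes the resulting numbers far less conclusive than in C++:

using System;

class LeakTest
{
    // Force a full collection and drain the finalizer queue, so the
    // measurement is not polluted by garbage that merely hasn't been
    // collected yet.
    static long StableMemory()
    {
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();
        return GC.GetTotalMemory(forceFullCollection: true);
    }

    static void Main()
    {
        long before = StableMemory();

        for (int i = 0; i < 1000000; i++)
        {
            var temp = new byte[128];   // stand-in for the code under test
        }

        long after = StableMemory();
        Console.WriteLine("Delta: " + (after - before) + " bytes");
    }
}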
I am not proposing that we abandon garbage collection, or that everything should be written in C++; not even by a long shot. As an example, the stress test for our web-server (itself written in C++) was done using node.js. It took less than 2 hours to put together, and I don’t really care if the test app leaks (it doesn’t). There are a myriad of places where I think C# and garbage collection are appropriate. I use C# to make COM objects that get spawned and killed by IIS, and it’s a delight to write those and not have to worry about the many macros that would be required if I had done the same in C++.
And with great care and attention, C# apps would never leak, but the reality is that programmers will make mistakes, and the cost of those mistakes should be taken into consideration when trying to determine which tool is appropriate for which task.