Interview Process

It’s strange: every manager has a good idea of how to weed out bad programmers during the interview process, but how do we weed out bad managers?

During an interview I might ask a self-professed OOD expert about abstract classes, virtual members and the like; if the prospect does not know what I am talking about, the remainder of the interview is usually just idle chat. But what about managerial positions? I usually ask if people know the books “Mythical Man Month” and “Peopleware”, since I consider these books as important to a software manager as C# is to a Windows developer. Can you call yourself an “expert manager” if you don’t know these books?

There are plenty of average coders who never wrote a template class or spewed out a regex to parse some weird format, but you really cannot be considered more than a mediocre manager if you are not familiar with those two books. You don’t have to agree with the books (not that I recommend disagreeing), but ignorance?!

The Peter Principle causes clueless people to rise to the top. How many Peter-type managers take credit for the success of a company, when that success was attained in spite of their performance? I imagine quite a lot of managers fancy themselves responsible for successes; after all:

Success has many fathers, while failure is an orphan.

Just take a look at LinkedIn: no one there is the father of some derailed product, and managers (especially managers!) always have a “proven track record”.

A bad manager can poison a functioning team in a couple of months, but by the time the team falls apart it is usually too late to correct the problem, and the damage to the company may be irreversible.

A broken department can come in many guises, but the first sign that things are headed for the crapper is low morale in the team. As the manager prevents the members from entering the flow, enforces ideas that are meaningless (at least to the team), or simply treats the team like children, members will start to leave. But many don’t, so obviously the ones who leave are just “divas that could not take the tough love”. No, the reality is that a lot of people are “clockwatchers”: they go to work, get their paycheck, and do just enough to avoid getting fired. They will stay behind because they really, really don’t give a rat’s ass about anything, including the product.

Don’t get me wrong: they enjoy it when the product is successful, but if it isn’t, if it “just gets by”, keeps the company afloat and keeps the paychecks rolling, then that is enough for them.

Once this purging process begins, it will soon spread, and in the end the only way to cash in is to get a venture capitalist on board and spend money on marketing campaigns, external “genius coders who will save the day” and other misdirected ideas. Sometimes that works out for the owners, but in the long run?

Take a look at Apple: when the bean-counters took over, they damn near killed the company. Just something to consider.

Trainspotting

Video surveillance is now being deployed in NYC’s subway system. New York is far from the first city to retrofit its subway with surveillance gear, and the MTA (the agency that runs NYC’s subway) wants to start with a pilot. A sensible choice, but why not run 5 or even 10 pilots at once? Let the vendors install their systems on 10 different sets of cars, and perhaps the MTA could stage 10 or 20 “incidents” that the operators would need to find in the database, or respond to if real-time deployment is the purpose.

Read more about the project here.

Video Analytics

ObjectVideo has a blog, which I find refreshing (Genetec has one too, and so does Exacq, and I am sure plenty of other vendors do as well). In this post they talk about legislation being a hindrance to the application of new technology; the post was picked up by John Honovich, who asks some questions about the 98% figure in this article.

John raises a good point: what does a 98% success rate mean exactly? If the system fails miserably in the last 2% (1.3% according to OV’s own numbers), then 98.7% might justifiably be totally unacceptable. Another argument could be that a benchmark was set, and OV flat-out failed to beat that number.
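To put that percentage in perspective, here is a trivial back-of-the-envelope calculation; the daily event count is a made-up assumption, only the 98.7% comes from OV’s numbers.

```python
# Hypothetical illustration: a high success rate can still mean a lot of misses.
events_per_day = 1_000      # assumed volume, not a real deployment figure
success_rate = 0.987        # the figure derived from OV's own numbers

missed = events_per_day * (1 - success_rate)
print(f"~{missed:.0f} missed events per day")   # roughly 13, every single day
```

If those are precisely the events that matter, the headline percentage is meaningless.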

In the medical industry I was stunned when I realized how benchmarks were met: a reference study was done, and the algorithms were tweaked until the numbers met the criteria, but on the same set of samples! How the system would perform on a new set of samples was largely unknown. The same thing seems to apply to some analytics systems: the system is tweaked to beat the numbers in the trial phase, only to fail miserably when the conditions change. Test in the summer, and be prepared for failure when the leaves start to fall; test during the low season, only to fail miserably when the tourists start flooding the entrances.
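Here is a minimal sketch of that failure mode, with made-up data and a single threshold standing in for a vendor’s tuning knobs: the detector is tweaked until it looks great on the trial footage, and then the conditions change.

```python
import random

random.seed(1)

def make_samples(n, separation):
    """Generate (score, is_event) pairs; 'separation' controls how easy detection is."""
    samples = []
    for _ in range(n):
        is_event = random.random() < 0.5
        score = (separation if is_event else 0.0) + random.gauss(0, 1)
        samples.append((score, is_event))
    return samples

def accuracy(samples, threshold):
    return sum((score > threshold) == is_event for score, is_event in samples) / len(samples)

trial = make_samples(500, separation=2.5)   # the footage used for the benchmark
later = make_samples(500, separation=1.0)   # leaves fall, tourists flood the entrances

# "Tune" the detector: sweep the threshold until it beats the benchmark on the trial set.
best = max(range(-30, 31), key=lambda t: accuracy(trial, t / 10)) / 10

print(f"trial set: {accuracy(trial, best):.1%}")   # the number in the brochure
print(f"later on:  {accuracy(later, best):.1%}")   # the number the integrator lives with
```

The exact numbers are irrelevant; the point is that a threshold tuned on the trial samples says very little about next season.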

Frequently the performance of analytics systems is wildly oversold (remember facial recognition?), so the cards are stacked against the integrator even before the system is deployed. Once in place, there is a plethora of parameters that affect performance: horsepower, frame rate, compression, lighting, weather, color calibration, network performance and so on. Raise the frame rate to make object tracking more reliable, and the bandwidth goes up; place the analysis on the edge, and you can discard all those old cameras already installed. The truth is that the system has to be designed around analytics from the get-go, and more often than not, it isn’t.
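To illustrate just one of those knobs, here is a rough bitrate estimate; the resolution and bits-per-pixel figures are assumptions picked for illustration, not vendor specifications.

```python
def stream_bandwidth_mbps(width, height, fps, bits_per_pixel):
    """Very rough bitrate estimate for one compressed camera stream."""
    return width * height * fps * bits_per_pixel / 1_000_000

for fps in (8, 15, 30):
    # 0.1 bits per pixel is a loose stand-in for typical compression efficiency.
    mbps = stream_bandwidth_mbps(704, 480, fps, bits_per_pixel=0.1)
    print(f"{fps:>2} fps -> ~{mbps:.1f} Mbit/s per camera")
```

Multiply by a few hundred cameras, and “just raise the frame rate” quietly becomes a network upgrade.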

ObjectVideo has a good product; it’s just that the expectations have to be adjusted a little bit, not necessarily the benchmarks.