Monolith

20 years ago, the NVR we wrote was a monolith. It was a single executable, and the UI ran directly on the console. Rendering the UI, doing (primitive) motion detection and storing the video were all done within the same executable. From a performance standpoint, it made sense: to do motion detection we needed to decode the video, and we needed to decode the video to render it on the screen, so decoding the video just once made sense. We’d support up to a mind-blowing 5 cameras per recorder. As hardware improved, we upped the limit to 25; in Roman numerals, 25 is XXV, and hence the name XProtect XXV (people also loved X’s back then; fortunately, we did not support 30 cameras).


I’m guessing that the old monolith would be pretty fast on today’s PC, but it’s hard/impossible to scale beyond a single machine. Supporting 1000 cameras is just not feasible with the monolithic design. That said, if your system is < 50 cameras, a monolith may actually be simpler, faster and just better, and I guess that’s why cheap IP recorders are so popular.

You can do a distributed monolith design too; that’s where you “glue” several monoliths together. The OnSSI Ocularis system does this; it allows you to bring in many autonomous monoliths and lets the user interact with them via one unified interface. This is a fairly common approach. Instead of completely re-designing the monolith, you basically allow remote control of the monolith via a single interface. This allows the design to scale to several thousand cameras across many monoliths.

One of the issues of the monolithic design is that the bigger the monolith, the more errors/bugs/flaws you’ll have. As bugs are fixed, all the monoliths must be updated. If the monolith consists of a million lines, chances are that the monolith will have a lot of issues, and fixes for these issues introduce new issues and so on. Eventually, you’re in a situation where every day you have a new release that must be deployed to every machine running the code.

The alternative to the monolith is the service based architecture. You could argue that the distributed monolith is service based; except the “service” does everything. Ideally, a service based design ties together many different services that have a tightly defined responsibility.

For example; you could have the following services: configuration, recorder, privileges, alarms, maps, health. The idea being that each of these services simply has to adhere to an interface contract. How the team actually implements the functionality is irrelevant. If a faster, lighter or more feature rich recorder service comes along, it can be added to the service infrastructure as long as it adheres to the interface. Kinda like ONVIF?
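To make the interface-contract idea concrete, here is a minimal sketch in Python. The `RecorderService` protocol and its methods are hypothetical names invented for illustration; the point is simply that callers depend only on the contract, so any implementation that honors it can be swapped in.

```python
from typing import Protocol


class RecorderService(Protocol):
    """Hypothetical interface contract: any recorder service must
    provide these operations, however it implements them internally."""

    def start_recording(self, camera_id: str) -> None: ...
    def stop_recording(self, camera_id: str) -> None: ...
    def is_recording(self, camera_id: str) -> bool: ...


class BasicRecorder:
    """One team's implementation. A faster or more feature-rich recorder
    could replace this class without touching any calling code."""

    def __init__(self) -> None:
        self._active: set[str] = set()

    def start_recording(self, camera_id: str) -> None:
        self._active.add(camera_id)

    def stop_recording(self, camera_id: str) -> None:
        self._active.discard(camera_id)

    def is_recording(self, camera_id: str) -> bool:
        return camera_id in self._active


def record_all(recorder: RecorderService, cameras: list[str]) -> None:
    # The caller is written against the contract, not the concrete class.
    for cam in cameras:
        recorder.start_recording(cam)


recorder = BasicRecorder()
record_all(recorder, ["cam-1", "cam-2"])
print(recorder.is_recording("cam-1"))  # → True
```

The “city planner” defines the protocol; each team only has to satisfy it.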

This allows for a two-tiered architectural approach. The “city planner” who plans out what services are needed and how they communicate, and the “building architect” who designs/plans what goes into the service. Smaller services are easier to manage, and thus, hopefully, do not require constant updates. To the end user though, the experience may actually be the same (or even worse). Perhaps patch 221 just updates a single service, but the user has to take some action. Whether patch 221 updates a monolith or a service doesn’t make much difference to the end-user.

Just like cities evolve over time, so do code and features. 100 years ago when this neighborhood was built, a sewer pipe was installed with the house. Later, electricity was added; it required digging a trench and plugging into the grid. Naturally, it required planning and it was a lot of work, but it was done once, and it very rarely fails. Services are added to the city, one by one, but they all have to adhere to an interface contract. Electricity comes in at 50 Hz and 220 V at the socket, and the sockets are all compatible. It would be a giant mess if some providers used 25 Hz, some 100 Hz, some gave 110 V, some 360 V etc. There’s not a lot of room for interpretation here; 220 V 50 Hz is 220 V 50 Hz. If the spec just said “AC”, it’d be a mess. Kinda like ONVIF?


In software, the work to define the service responsibilities, and actually validate that services adhere to the interface contract, is often overlooked. One team does a proprietary interface, another uses WCF, a third uses HTTPS/JSON, and all teams think that they’re doing it right and everyone else is wrong. 3rd parties have to juggle proprietary libraries that abstract the communication with the service, or deal with several different interface protocols (never mind the actual data). So imagine a product that has 20 different 3rd party libraries, each with bugs and issues, and each of those 3rd parties issues patches every 6 months. That’s 40 times a year that someone has to decide whether to update or not; “Is there anything in patch 221 that pertains to my installation? Am I using a service that is dependent on any of those libraries?” and so on.

This just deals with the wiring of the application. Often the UI/UX language differs radically between teams. Do we drag/drop things, or hit a “transfer” button? Can we always filter lists, etc.? Once again, a “city planner” is needed. Someone willing to be the a-hole when a team decides that deviating from the UX language is just fine, because this new design is so much better.

I suppose the problem, in many cases, is that many people think this is the fun part of the job, and everyone has an opinion about it. If you’re afraid of “stepping on toes”, then you might end up with a myriad of monoliths glued together with duct-tape communicating via a cacophony of protocols.

OK, this post is already too long;

Monoliths can be fine, but you probably should try to do something service based. You’re not Netflix or Dell, but service architecture means a more clearly defined purpose of your code, and that’s a good thing. But above all, define and stick to one means of communication, and it should not be via a library.
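A sketch of what “one means of communication, not a library” can look like: a plain JSON envelope on the wire that any language or toolchain can produce and parse, with the contract validated at the boundary. The field names and `encode`/`decode` helpers here are hypothetical, chosen for illustration.

```python
import json

# Hypothetical wire contract: every service exchanges JSON envelopes
# with exactly these fields. No proprietary client library is needed;
# anything that can speak JSON can talk to the service.
REQUIRED_FIELDS = {"service", "operation", "payload"}


def encode(service: str, operation: str, payload: dict) -> bytes:
    """Serialize a request envelope as UTF-8 JSON."""
    envelope = {"service": service, "operation": operation, "payload": payload}
    return json.dumps(envelope).encode("utf-8")


def decode(raw: bytes) -> dict:
    """Parse an incoming envelope and validate it against the contract."""
    msg = json.loads(raw.decode("utf-8"))
    missing = REQUIRED_FIELDS - msg.keys()
    if missing:
        raise ValueError(f"envelope violates contract, missing: {missing}")
    return msg


wire = encode("recorder", "start", {"camera_id": "cam-7"})
msg = decode(wire)
print(msg["operation"])  # → start
```

The validation step is the software equivalent of “220 V 50 Hz is 220 V 50 Hz”: a message either conforms to the contract or is rejected at the door.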

 

Agile is like Communism

Communism can work. For a short duration, and with a limited number of like-minded participants, real communism can work (or at least appear to work). In most other cases, communism just doesn’t pan out.

communism_worker_flag_mini

When faced with the long list of failed communist experiments, hardliners will always say “well, that was not real communism”. Which is true. But when you consider the nature of man, there really are just two options “bad communism” or “no communism”. I prefer the latter.

Same goes for Agile.

Observing a jelled team that is firing on all cylinders, you’ll see that dogmatic adherence to “process” is not enforced, that there is a lot of informal communication (on the technical topics), and that the tasks are broken down into manageable chunks with a clear scope. The team can quickly adapt to changes in the environment simply because it is agile. Wouldn’t it, then, be nice if we could write down how these guys are doing things, and then apply it to everyone writing software?

Here’s where reality sets in.

Some people are simply not fit to write code, and some people are not fit to write specs.

It doesn’t really matter what process you follow, inept coders and managers will never be agile.

But they can do Agile.

I suppose the rationale is that the group eventually acknowledges that it is not being productive. Perhaps it has gone through some dead sea effect for some time, and there is increasing frustration with delays, shipping defects and surprising side-effects discovered late in the cycle.

Given two options: a) we are simply incompetent or b) there’s something wrong with our process. Most teams pick option b).

Agile’s pitch is that bad productivity is simply due to the wrong process. And this is true; for competent teams, the wrong type and amount of bureaucracy slows things down. Limiting needless paperwork speeds things up. But it requires competent and honest people and an appropriate type of project. You don’t find a cure for cancer just by doing a bunch of epics, sprints and retrospectives.

The bad team then picks up Agile, but never bothers reading the manifesto, and the concept is applied indiscriminately to all types of projects.

Informal inquiries and communication are shunned, and the team instead insists on strict adherence to “process”, because deviation from the process is “what led to disaster the last time”, the argument goes. The obvious contradiction between refusing ad-hoc communication while insisting on “following process”, and the stated principles of Agile, is often completely lost on bad teams.

The web is overflowing with disaster stories of Agile gone wrong (and now I just added one to the growing pile), just as history books overflow with stories of communism gone wrong. And for every story, there’s one where an Agile proponent explains why they just weren’t doing Agile the right way, or that a different kind of Agile is needed, like in this piece, where a comment then reads:

This insane wishy-washy process-worshipping religion is __BULLSHIT__ of the highest order. What you really need is a competent team that isn’t sabotaged by over-eager, incompetent management and hordes of process-masturbators every step of the way.

The Agile process will not fix problems that are due to incompetence. Competent, jelled teams, are probably already agile. Spend more time identifying what value each member brings to the team. Keep score. Cull the herd.