Lambs to the Slaughter

When lambs are loaded onto trucks bound for the slaughterhouse, I doubt the driver, the farmer, or anyone else tells them that this is where they’re going.

I wonder what the lambs are thinking.

If they could talk, some would probably say “we’re doomed”, and others would stomp on them and say “shut the hell up, can’t you see everyone is getting anxious” or “why can’t you be more positive”.

Maybe anxiety is nature’s way of telling you that something bad may be coming your way. If you’re in a rattling truck, driving far away from the farm, it’s appropriate to feel anxious. It’s telling you to be aware of what’s going on, and to think of an escape route.

The lambs see the butcher, but they don’t know what they’re looking at. The guy is not going to scream “I’M HERE TO KILL YOU ALL”; he’ll whisper reassuringly, “come here little friend, follow me”.

Don’t listen to him.

Run away.


IBM

In 2017, IBM spent $5.4 billion on R&D (sources vary on the exact number). That’s a lot of money. Not as much as Amazon ($23 billion) or Alphabet ($16 billion), but still a pretty good chunk of money. Instead of saying billion, let’s say thousand-million: IBM spent five-thousand-four-hundred million dollars on R&D.

It’s roughly the same as what they spent in 2005, and their revenue is roughly the same as back then ($88 billion in 2006, $80 billion in 2017).

They’re doing a bunch of things to stay relevant, but while most people have heard about AWS or Azure, I don’t often bump into people who know or use IBM Bluemix (now “IBM Cloud”). Is it any good? I guess it must be; at the time of writing (Jan 2019), the revenue was about the same as that of AWS and Azure (between $7 billion and $9 billion), but AWS and Azure are growing very, very fast (~50% growth per year). As a developer, trying out AWS or Azure, for real, can be accomplished in a few hours. I tried Bluemix years ago. I gave up. I’m sure I could have gotten it off the ground, but why should I spend days on something that I can do in hours with the other vendors?

Most have heard about IBM’s Watson project. Watson is a project to make a computer that knows everything; it can play Jeopardy, diagnose patients, and judge your wardrobe. Reading about the intended purposes of Watson, it seems as if they’re constantly trying new (random) things, only to get beaten by Amazon, Apple, Microsoft or Google in the areas that matter. Morgan Stanley asked ~100 CIOs about their interest in AI, and 43 of them were considering using AI. Of those, 10 preferred IBM Watson. I don’t know what “preferred” means, and I don’t know what they plan to do, but just because IBM has plowed a lot of money into building something that runs on their mainframes doesn’t mean it’s valuable. As a side note, the survey (in the link) converted the numbers to percentages to make them seem more significant, but really, it was just 10 dudes out of 100 who said they “preferred Watson”. SO FAKE NEWS!!! (or at least, take it with a grain of salt).

IBM’s revenue did grow recently, but the growth was driven largely by mainframes (the cloud business also saw an uptick, about 20% growth), and most people are wondering if this is sustainable. Aren’t we all moving towards serverless (à la AWS Lambda), which basically means “sell me cycles as cheap as possible”? It smells like commoditization, narrow margins and huge volume – running on the cheapest possible hardware. A game in which IBM’s expensive mainframes will probably struggle. It seems as if IBM is anticipating this, and just took a major step in that direction by paying $34 billion for Red Hat.

IBM basically went from being front and center of the PC revolution to being a large, but mostly invisible, company. It used to be that “PC software” would, by definition, work on an IBM PC, but then things got bad. For example, 20 years ago there was just one person at the dorm with an actual IBM computer, and it was not compatible with any of the clones the rest of us had. The world had taken the parts that were useful (a common platform) and moved on. IBM thought they were still in control of the platform and could deviate from it. Turned out they were wrong.

Is there a need for IBM any longer?

Sure, banks, governments and insurance companies with huge legacy systems that can’t be moved will keep buying mainframes forever. So there’s certainly a need. But it’s the same sort of need a drug addict has when they’re coming off their drugs. It’s not a “good” need. Does IBM have anything unique and valuable they can offer to the “cheap, reliable cycles” world of the cloud? Do they have anything in the AI space that isn’t being beaten by the usual suspects, or at least won’t be beaten as soon as it becomes commercially viable?

We need competition. If we’re left with Azure and AWS, the world will be a boring place. IBM could maybe compete with AWS. I don’t see why they wouldn’t be able to. But perhaps they aren’t willing or capable.


Conway’s Law

I was re-watching this video about the (initially failed) conversion of an online store from a monolithic design to a microservice-based architecture. During the talk, Conway’s Law is mentioned. It’s one of those laws that you really should keep in mind when building software.

“organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations.”
— M. Conway

The concept was beautifully illustrated by a conversation I had recently; I was explaining why I disliked proprietary protocols, and hated the idea of having to rely on a binary library as the interface to a server. If a server uses HTTPS/JSON as its external interface, it allows me to use a large number of libraries – of my choice, for different platforms (*nix, Windows) – to talk to the server. I can trivially test things using a common web browser. If there is a bug in any of those libraries, I can use another library, I can fix the error in the library myself (if it is OSS), etc. Basically, I become the master of my own destiny.
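To make it concrete, here’s a minimal sketch in Python of what an open HTTPS/JSON interface buys you (the /api/status endpoint and the response fields are made up for illustration); any stock HTTP library can act as a test client, no vendor code required:

import json
import urllib.request

# Any HTTP library would do; urllib just happens to ship with Python.
# The endpoint and the response fields are hypothetical.
with urllib.request.urlopen("https://server.example.com/api/status") as resp:
    status = json.load(resp)

print(status.get("version"), status.get("uptime"))

If this library misbehaves, I can swap in another, or just paste the URL into a browser and eyeball the JSON.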

If, on the other hand, there is a bug in the library provided to me, required to speak some bizarre proprietary protocol, then I have to wait for the vendor/organizational unit to provide a bug-fixed version of the library. In the meantime, I just have to wait. It’s also much harder to determine whether the issue is in the server or in the library, because I may not have visibility into what’s inside the library, and I can’t trivially use a different means of testing the server.

But here’s the issue: the bug in the communication library that is affecting my module might not be seen as a high-priority issue by the unit in charge of said library. It might be that the author left and it takes considerable time to fix the issue, etc. This dramatically slows down progress and the time it takes to deliver a solution to a problem.


The strange thing is this: the idea that all communication has to pass through a single library, making the library critically important (but slowing things down), was actually 100% mirrored in the way the company communicated internally. Instead of encouraging cross-team communication, there was an insistence that all communication pass through a single point of contact.

Basically, the crux is this: if the product is weird, take a look at the organization first. It might just be the case that the product is the result of a sub-optimal organizational structure.

Crashing a Plane

Ethiopian Airlines Flight 961 crashed into the Indian Ocean. It had been hijacked en route from Addis Ababa to Nairobi. The hijackers wanted to go to Australia. The captain warned that the plane only had enough fuel for the scheduled flight and would never make it to Australia. The hijackers disagreed: the 767-200ER had a maximum flight endurance of 11 hours, enough to make it to Australia, they argued. 125 people died when the plane finally ran out of fuel and the pilots had to attempt an emergency landing on water.

Korean Air Flight 801 was under the command of the very experienced Captain Park Yong-chul. During heavy rain, the Captain erroneously thought that the glideslope instrument landing system was operational, when in fact it wasn’t. The Captain sent the plane into the ground about 5 km from the airport, killing 228 people.

In the case of Ethiopian Airlines, there’s no question that the people in charge of the plane (the hijackers), had no idea what they were doing. Their ignorance, and distrust of the crew, ultimately caused their demise. I am certain that up until the last minute, the hijackers believed they knew what they were doing.

For Korean Air 801, the crew was undoubtedly competent. The Captain had 9,000 hours logged, and during the failed approach, we can safely assume that he felt he knew what he was doing. In fact, he might have been so good that everyone else stopped second-guessing Captain Park, even though their instruments were giving them readings that said something was seriously wrong. Only the 57-year-old flight engineer, Nam Suk-hoon, with 13,000 hours logged, dared speak up.

I think there’s an analogy here; we see companies crash due to gross incompetence, inexperience and failure to listen to experienced people, but we also see companies die (or become zombies) because they have become so experienced that they feel they can’t make any fatal mistakes. Anyone suggesting they are short on approach is ignored. The “naysayers” can then leave the plane on their own, get thrown out for not being on board with the plan, or meet their maker when the plane hits the ground.

Yahoo comes to mind; its history is a veritable horror-show of bad decisions.


The people making these mistakes were not crazed hijackers with an insane plan. These were people in expensive suits, with many, many years of experience. They all had a large swarm of people doing their bidding and showing them Excel sheets and PowerPoint presentations from early morning to late evening. Yet, they managed to crash the plane into the ground.

So, I guess the moral is this: if you’re reading the instruments, and they all say that you’re going to crash into the ground, then maybe, just maybe, the instruments are showing the true state of things. If the Captain refuses to acknowledge the readings and dismisses the reports, then the choices are pretty clear.

The analogy’s weakness is that in most cases, no one dies when the Captain sends the product in the wrong direction. The “passengers” (customers) will just get up from their seats and step into another plane or mode of transportation, and (strangely) in many cases the Captain and crew will move on and take over the controls of another plane. We can just hope that the new plane stays in the air until it reaches its intended destination safely.

Looking Forward

If I had a nickel for every time someone told me to “look forward”, I’d have a dime by now. I’d be a lot richer if “looking forward” was actually a viable strategy.

Say you’re one of those annoying back-seat drivers, and you observe that the driver is doing 40 mph on the freeway, but he’s in second gear, redlining the revs. You lean forward and suggest changing gears. “I know what I am doing,” the driver snarls back at you, and you lean back and look out the window, shaking your head in disbelief. 20 minutes later, you see the temp gauge creeping up, and you lean forward, suggesting you pull over and let the engine cool down. The response is the same, and once again you sit back, and bring up Google Maps. You’re in the middle of nowhere, going in the opposite direction of your goal. You ponder bringing it up, but you already know how it will be received. As you’re driving uphill, the engine finally gives out. A huge bang, vapor steaming from the engine, oil splattered all over the windscreen and all over the road.

Everyone is on the side of the road now – some start hiking and are picked up by passing cars, others just start walking. You, however, stick around. You’ve put a lot of work into the car, and for some bizarre reason you still feel some attachment to it. It’s far from the car you set out to build, but you still have this dream about what it could be.

Finally roadside assistance shows up. Everyone’s out of money, but the owner of the tow company offers to tow it for free, if you will sell your broken down vehicle. He’ll pay the scrap value of the thing, but promises to bring the car back in working condition (the guy is probably insane). After a few months, the car finally rolls out of the shop.

The driver prepares to get back in the driver’s seat. You object. “This guy wrecked two cars already,” you say, your voice trembling with frustration, “and now you’re putting him back in the driver’s seat?”

The owner takes a deep breath. He says “everything stays the way it was”. A sigh of relief can be heard. He looks straight at you: “we have to forget the past, and look forward”.

But you remember the past. You remember the warnings you gave; you remember the lack of attention to detail, proper operation and maintenance. And now you’re supposed to put all that aside and just “look forward”.

“Looking forward” is not a business strategy. It’s great advice for the individual though; you made a mistake, learn from it, and then move forward. If you’re in the wrong job, don’t dwell on it for too long, just recognize that you are, and move on.


Paranoia?

Some time ago, Bloomberg ran an article claiming that Chinese computer components (in this case a motherboard) were being intercepted en route to customers and modified to host a small chip that would allow the (evil) Chinese government to spy on the righteous.

It was an unusually sensational piece for Bloomberg, complete with a fake animation zooming in on a cartoon-styled motherboard, suggesting that Bloomberg knew, as a matter of fact, where the alleged chip was placed. They even showed the chip placed on top of a finger. I’d call it deceptive, because Bloomberg demonstrably did not have any physical evidence of the chip, so the motherboard zoom-in and finger-chip were fabrications. If I discovered a “rogue” chip on any of my devices, I can assure you, I would keep the evidence around. What person discovers a rogue chip on a motherboard, and then just discards it?

Because it’s very difficult, and often impossible, to prove a negative, the burden of proof is on the accuser. It’s too easy to claim that a certain internet forum is actually a front for the exchange of immoral and perverse videos. The admin of the forum and its members would deny the allegations, and I’d just say “of course they are denying it; it would destroy their business and reputation if they didn’t”, and I would then demand that they prove they never exchanged sick videos. Can’t be done.

It all brings memories of Stephen Glass.

Does that mean that it is inconceivable that hardware from China is bugged? No, nor does it mean that evidence will never surface. All it means is that if you’re buying into the Bloomberg story, then you’re probably part of the problem.

It’s a problem when people start believing gossip simply because it supports their beliefs. If you don’t like, or can’t compete with, the Chinese, then you’re likely to believe some gossip about “spy chips” that no one has so far been able to prove existed.

At the same time, when there are vulnerabilities in chipsets from Intel, that’s just an honest mistake.

I don’t trust anything, and you shouldn’t either. Spend less time obsessing over gossip (as entertaining as it might be), and instead educate yourself on how to protect yourself from eavesdropping. I’m not suggesting you’ll ever get 100% security when dealing with computers – and I don’t care who the manufacturer is. Things are put together by humans, and we make mistakes (or perhaps we have a falling-out with former allies who then promptly leak our secrets), so it’s on you to take precautions.

Stay safe, and don’t spread rumors and gossip. Reserve judgment until you see the evidence, not before.


Agile is like Communism

Communism can work. For a short duration, and with a limited number of like-minded participants, real communism can work (or at least appear to work). In most other cases, communism just doesn’t pan out.


When faced with the long list of failed communist experiments, hardliners will always say “well, that was not real communism”. Which is true. But when you consider the nature of man, there really are just two options “bad communism” or “no communism”. I prefer the latter.

Same goes for Agile.

Observing a jelled team that is firing on all cylinders, you’ll see that dogmatic adherence to “process” is not enforced, that there is a lot of informal communication (on technical topics), and that tasks are broken down into manageable chunks with a clear scope. The team can quickly adapt to changes in the environment simply because it is agile. Wouldn’t it, then, be nice if we could write down how these guys are doing things, and then apply it to everyone writing software?

Here’s where reality sets in.

Some people are simply not fit to write code, and some people are not fit to write specs.

It doesn’t really matter what process you follow, inept coders and managers will never be agile.

But they can do Agile.

I suppose the rationale is that the group eventually acknowledges that it is not being productive. Perhaps it has suffered from the Dead Sea effect for some time, and there is increasing frustration with delays, shipped defects and surprising side-effects discovered late in the cycle.

Given two options – a) we are simply incompetent, or b) there’s something wrong with our process – most teams pick option b).

Agile’s pitch is that bad productivity is simply due to the wrong process. And this is true; for competent teams, the wrong type and amount of bureaucracy slows things down. Limiting needless paperwork speeds things up. But it requires competent and honest people and an appropriate type of project. You don’t find a cure for cancer just by doing a bunch of epics, sprints and retrospectives.

The bad team then picks up Agile, but never bothers reading the manifesto, and the concept is applied indiscriminately to all types of projects.

Informal inquiries and communication are shunned, and the team instead insists on strict adherence to “process”, because deviation from the process is “what led to disaster the last time”, the argument goes. The obvious contradiction between refusing ad-hoc communication while insisting on “following process”, and the stated principles of Agile, is often completely lost on bad teams.

The web is overflowing with disaster stories of Agile gone wrong (and now I just added one to the growing pile), just as history books overflow with stories of communism gone wrong. And for every story, there’s one where an Agile proponent explains why they just weren’t doing Agile the right way, or that a different kind of Agile is needed, like in this piece, where a comment then reads:

This insane wishy-washy process-worshipping religion is __BULLSHIT__ of the highest order. What you really need is a competent team that isn’t sabotaged by over-eager, incompetent management and hordes of process-masturbators every step of the way.

The Agile process will not fix problems that are due to incompetence. Competent, jelled teams, are probably already agile. Spend more time identifying what value each member brings to the team. Keep score. Cull the herd.

The Singleton Anti-Pattern

In programming, the whole idea is to avoid re-inventing the wheel, and to re-use as much as possible. Some clever coders discovered that there were certain mechanisms that were used over and over again. For example, the “producer/consumer” mechanism, whereby one or more threads are “producers” and one or more threads are “consumers”. Instead of coders figuring out how to do this properly over and over again, a group of people decided to write a book that described how to solve some of these problems. “Design Patterns: Elements of Reusable Object-Oriented Software” they called it. In the business, the authors became known as the “Gang of Four”.
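Producer/consumer is easy to sketch; here’s a minimal version in Python (the item counts and queue size are arbitrary):

import queue
import threading

q = queue.Queue(maxsize=10)   # bounded buffer shared by both sides

def producer():
    for i in range(5):
        q.put(i)              # blocks if the queue is full
    q.put(None)               # sentinel telling the consumer to stop

def consumer():
    while True:
        item = q.get()        # blocks until an item is available
        if item is None:
            break
        print("consumed", item)

threading.Thread(target=producer).start()
consumer()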

One of the patterns they described is the “Singleton”: a singleton is essentially a global object that is instantiated when needed. The idea being that the user doesn’t need to know when, or how, the underlying object is created/destroyed; they can just use it, and all parts of the code then share the same object. Isn’t that cool? It’s like global variables were suddenly being endorsed in a book, and by some clever people too!!
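If you’ve never seen one, here’s a minimal sketch of the pattern in Python (a classic lazily-instantiated singleton; the class name is just a placeholder):

class Config:
    _instance = None    # the one global instance

    @classmethod
    def instance(cls):
        # Lazily create the object on first use; every caller
        # thereafter gets the exact same object.
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

a = Config.instance()
b = Config.instance()
assert a is b    # one object, shared by the entire program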

There are cases (rare, constrained) where a global variable makes sense; it makes sense when the physical properties that the software is trying to model match a single object, e.g. a singular file on a disk or a specific camera in a network. It’s perfectly appropriate to model these objects as global, because there truly is only one of them.

Let’s consider a log mechanism. There may be several things that are logging data, but if all that data goes into just one file, then it’s OK to use a singleton for the file, but certainly not for the log abstractions. If three or four different modules are all logging to the same file, then those modules must have their own logger instances, and the various instances can then write to the same file via the singleton.

A primitive class diagram could look like this:

             Module A -> Log A 
Parent  ->                        -> Singleton File
             Module B -> Log B

When you are acutely aware of this composition, you should eventually realize that each logger instance must add some identifier when it writes to the disk. Otherwise you get a log file that looks like this:

File Open
File Open
File Write Failed
File Write Succeeded
File Close
File Close

What you want, in the file, is this:

Module A: File Open
Module B: File Open
Module B: File Write Failed
Module A: File Write Succeeded
Module B: File Close
Module A: File Close
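Here’s a minimal sketch of that arrangement in Python (the class and method names are mine, and the “file” is just a list, to keep the example self-contained):

class LogFile:
    # The singleton: one shared "file" for the whole process.
    _instance = None

    @classmethod
    def instance(cls):
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    def __init__(self):
        self.lines = []          # stand-in for an actual file on disk

    def write(self, text):
        self.lines.append(text)

class Log:
    # Per-module logger: many instances, all sharing the one file.
    def __init__(self, module_name):
        self.name = module_name

    def write(self, text):
        # The identifier is what turns "File Open" into "Module A: File Open".
        LogFile.instance().write(self.name + ": " + text)

log_a = Log("Module A")
log_b = Log("Module B")
log_a.write("File Open")
log_b.write("File Open")
print("\n".join(LogFile.instance().lines))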

This appears to solve the problem, except there’s a caveat. Say someone writes an app that creates two instances of the parent module. Since the log file is a singleton, all log data is written to the same file. This, in turn, means that the two instances of the parent will also write to the same file.

Consider this diagram:

                              Module A -> Log A
                 Parent ->               
                              Module B -> Log B
Aggregator  ->                                       -> Singleton File
                              Module A -> Log A
                 Parent ->
                              Module B -> Log B

We are now in hell.

Module A: File Open
Module B: File Open
Module B: File Write Failed
Module A: File Open
Module B: File Write Failed
Module A: File Write Succeeded
Module B: File Close
Module A: File Write Succeeded
Module A: File Close

This issue is relatively easy to fix, and it’s still valid to require that there be just one log file (it might be better to create one per parent, but that’s a matter of taste).

But what about cases where things like username, password, preferences, etc. are stored in a singleton that contains “user info”? In that case, when the aggregator sets the username, the change applies to ALL modules, regardless of where they reside in the aggregator tree. It’s therefore impossible for the aggregator to set a different username for Parent 1 and Parent 2. The aggregator, therefore, breaks.
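To make the failure concrete, here’s a sketch (hypothetical names, same singleton mechanics as before):

class UserInfo:
    # Singleton holding "user info" for the entire process.
    _instance = None

    @classmethod
    def instance(cls):
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    def __init__(self):
        self.username = None

class Parent:
    def __init__(self, username):
        # Each parent *thinks* it owns its user, but they all
        # share the same underlying singleton.
        UserInfo.instance().username = username

    def who(self):
        return UserInfo.instance().username

p1 = Parent("alice")
p2 = Parent("bob")         # silently overwrites alice
print(p1.who(), p2.who())  # prints "bob bob" – Parent 1 is now broken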

Essentially, the coder might as well have said “let’s make the username a global variable”. 99% of all coders will object when they hear that (or “goto”). But 50% of all coders remain silent when the same pattern is described using the “singleton” moniker.

The moral of the story: don’t use singletons. Not even if you think you know what you are doing. Because if you think you know what you are doing, then you almost certainly do not.


Do Managers in Software Companies Need to Code?

I think so.

The horrible truth is that there are good and bad coders, there are good and bad managers and there are easy and hard projects.

A project taken on by good coders and good managers can fail simply because the project was too complex and too intertwined with systems that the team had no control over. You could argue that the team never should have taken on the task, but that’s why you warn the customer of the risk of non-completion and bill by the hour.

When doing research on the skills needed to be a good software project manager, there seems to be an implied truth that the coders simply do what they are told, and that coding/design errors are always the manager’s fault. At the same time, you’ll find people complaining about micromanagement and about not letting the coders find their own solutions. I find these two statements at odds with one another.

Coders will sometimes do things that are just wrong, yet still “work”. How do you handle these situations? Do you, as a manager, insist that the work be done “correctly”, which the coder may consider a matter of taste rather than of correct vs. incorrect? Or do you leave the smelly code in there, and keep the peace?

If you don’t know how to code, and you’re the manager, you won’t even notice that the code is bad. You’ll be happy that it “works”. Over time, though, the cost of bad code will weigh down on productivity; the errors start piling up, and good coders leave because there is no reward for good quality and they’re fed up with refactoring shitty code. If you have great coders, you might not run into that situation, but how do you know if you have great coders if you can’t code?

Maybe you’re the best coder in the world, and you’re in a managerial position facing some smelly code. You might consider two approaches: scold the coder(s) and demand that they do it the “correct” way (which is then interpreted as micromanagement), or, if you’re exhausted from the discussions, just do the refactoring yourself on a Sunday while the kids are in the park.

In the real world, though, the best solution is for the manager to have decent coding skills, and to possess that rare ability to argue convincingly. The latter is very hard to do if you do not understand the art of coding. Furthermore, I don’t think coders are uniquely handicapped at being persuasive, certainly not when dealing with other coders (n00b managers wearing a tie are universally despised in the coding world).

Every coder is different, and acts differently depending on the time of day, week or year. Some coders have not fully matured, some are a little too ripe, and some just like to do things the way they always did (or “at my old job we…”); different approaches are needed to persuade different people.

I must confess that this is only what I have observed. The few times I have worn anything resembling a managerial hat, I have walked away universally despised and feared as some sort of “Eye of Sauron” who picks up on the smallest error with no mercy when dishing out insults. But in theory, at least, I think I know how things ought to be.

So, if you are managing software projects and interacting with coders, you need to know how to code.

Nintendo’s Marriage

Nintendo was the first of the gaming console companies to enforce strict quality and content controls on games for their platform. Perhaps they saw what happened to other manufacturers that had a more promiscuous approach: when 9 out of 10 games are terrible, people start thinking that there’s something wrong with the platform.

Apple took the same approach with the iPhone, initially banning 3rd-party apps completely and suggesting that 3rd parties create specially crafted HTML pages just for the iPhone. It did not take long before this rule was relaxed, but at least Apple kept some control of their platform by having all apps go through a (shallow) vetting procedure, and by ultimately having the ability to pull an app entirely.

In the IP video industry, the VMS companies used to demand that people selling the software were certified. The two primary reasons were that a) it produced decent revenue, and b) idiots selling your software may tarnish your reputation through no fault of your own.

Prior to IP video cameras, most installations were pretty straightforward. The challenges were in getting the right coverage, pulling the cables neatly and mounting the cameras properly. Any old electrician understood that when you connected the coax camera to “input 1”, the video from that camera would emerge on the corresponding spot on the monitor. If something happened, you’d eject the tapes, push in some new ones, and that was it.

Getting an IP video infrastructure set up properly is an entirely different ballgame. You still have to pull cables and mount cameras, but on top of that, you have to deal with a whole host of new problems. You have to keep the OS up to date, you have to keep the camera firmware up to date, you have to verify that security protocols are adhered to (no “123456” passwords), and if something happens, you have to navigate an often confusing and complex UI that offers 3 different ways to get your footage out of the system. Most of these tasks are trivial for people who are used to the quirks and understand the meaning of every term – but most people are not.

If you’re dealing with larger installations, you’re often trying to integrate the VMS with existing equipment, and sometimes you’re asked to make it fit within existing IT policies, which makes things an order of magnitude more interesting. You’re also dealing with people in positions of authority who arbitrarily demand various things (some possible, some not; some that make sense, some that do not).

As a consultant, I advise people against things I think are counter-productive, unfeasible or impossible. If they still insist on going down some rabbit hole, I will happily go there, knowing that they are paying by the hour. But not everyone is fortunate enough to make that trade.

You could say that I am a kind of prostitute; naturally, I want repeat clients, so unless the services requested are too crazy, I’ll oblige. I am not offended by any suggestion, but I reserve the right to just say no.

In many cases, though, it’s more like a marriage. And just like in a marriage, the vendor and the partner must establish and maintain trust with one another. Without trust, the marriage will not last long, or it will be a long nightmare for both parties. Trust is not limited to “not, technically, lying” (as opposed to straight-up lying); it’s also about sharing expectations, plans and ideas, and being honest about what can’t and what won’t happen.

Good marriages also seem to include some sort of equal give and take between the partners; you do the dishes, I’ll do the laundry.

And this is where marriages get tricky. If I mess up the laundry every single time, break the dishes when I try to fill the dishwasher, and cause water damage to the floors when I mop, then we need to divide the tasks so that I take on the ones I am qualified for. But what if I am not really good at any task? Or at least, not good at any relevant task? Or perhaps I am confident that I cook a mean mac and cheese, but the reality is that it is bland and mushy and gives people constipation.

In a relationship that is too lopsided, one partner will eventually get fed up and leave. And it’s hard for me, then, to gauge whether the mac and cheese was truly terrible, or whether it was just something mean and offensive the ex-wife threw in my face. I may, tragically, not learn a single thing from the endeavor.


And so you may encounter people who disable the storage drive through the Windows disk manager, and then complain about poor performance. They may not understand how networks work, and demand changes that are time-consuming but will never improve performance – and naturally, they will complain when they realize this to be true. They may consistently provide false or misleading information about behavior and version numbers, and fail (intentionally?) to provide the diagnostic logs to support their claims, and so on. They’re breaking the dishes, shrinking your favorite shirt, and causing water damage.

In those situations, there’s nothing wrong with sitting down, looking each other deep in the eyes, and agreeing to part ways, rather than staying in an abusive relationship where backstabbing and offensive slurs are the order of the day.

Nintendo and Apple carefully vetted whom they married, setting up strict requirements for those who were allowed into the walled garden. You had to prove that you were a good match, and that you wouldn’t tarnish the reputation of either of them. If you can’t find a good match, then give up; it’s always better to abstain than to settle.

Today, it is not in vogue to be such a snob. Promiscuity is all the rage. Having thousands of connections with semi-random people on social media is the norm. Getting into bed with every conceivable partner is a virtue.

And perhaps that’s why there’s so much shit out there today.