Spam mails

When I was a kid in school, the teacher said “advertising works”. Any proof? No, because if it didn’t work, they wouldn’t be doing it. OK. Now sit your ass down.

But then you see advertising that is boring, meaningless, sometimes terrible, stupid, confusing, perhaps even damaging to the brand, and you might wonder if this really works too (you know, because they wouldn’t be doing it unless…).

So – does spam work? The catch-all argument still lingers in the back of your head, quietly reminding you that if it didn’t then they wouldn’t do it.

I think it’s a bad argument.

Consider this: does sending pics of your dick to bots on Ashley Madison get you laid? I doubt it, yet many, many horny men did so, and they paid handsomely for the privilege.

There’s a lot of money to be made on suckers. Suckers are fools who are incapable of connecting the dots, and who will instead try anything once (from spam campaigns to pegging and heroin).

Compression and Quality

Lossless

Here’s a bit of uncompressed text (60 characters):

aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa = 60 bytes

Depending on your system, these 60 a’s take up 60 bytes of memory (ASCII/UTF-8) or 120 bytes (UTF-16). If you were to store the text on disk, the file size would be 60 bytes too. Sending it across the network? 60 bytes.

Now, what if I compress it? A simple compression algo would be run-length encoding (RLE) of the a’s, like so:

60,a = 4 bytes

This is just 4 bytes instead of 60. So we have compressed the signal quite a bit, but did the quality suffer? No. Once we do the decompression, the signal is a 100% copy of the original.

Here’s a slightly different signal:

aaaaaaaaaaaaaaaaaaaabaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa  = 60 bytes

Compressed:

20,a,1,b,39,a = 13 bytes

It’s safe to say that higher compression does not, by itself, lead to loss of quality. In both cases the output is exactly the same as the input, but the amount of data sent/stored is much less when you compress. And just because signal 2 compresses less well than signal 1, it would be unreasonable to say that the higher compression of signal 1 gave a less accurate result.
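To make this concrete, here’s a quick Python sketch of the RLE idea (the function names and the count,char text format are mine, chosen to match the notation above; a real codec would be cleverer about commas and digits appearing in the data):

def rle_encode(text):
    # Walk the string and emit a "count,char" pair for each run of identical characters.
    parts = []
    i = 0
    while i < len(text):
        ch = text[i]
        run = 1
        while i + run < len(text) and text[i + run] == ch:
            run += 1
        parts.append(f"{run},{ch}")
        i += run
    return ",".join(parts)

def rle_decode(encoded):
    # Reverse the process: expand each "count,char" pair back into a run.
    tokens = encoded.split(",")
    return "".join(ch * int(count) for count, ch in zip(tokens[::2], tokens[1::2]))

signal_1 = "a" * 60
signal_2 = "a" * 20 + "b" + "a" * 39

for signal in (signal_1, signal_2):
    encoded = rle_encode(signal)
    assert rle_decode(encoded) == signal   # lossless: the reconstruction is a 100% copy
    print(len(signal), "->", len(encoded), "bytes:", encoded)

Run it and you get 60 -> 4 for the first signal and 60 -> 13 for the second, exactly as above.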

Lossy

Let’s introduce a new variant of the RLE algorithm. The algorithm above looks through the entire text before compressing, but what if we add one that only looks forward 10 chars?

Method 1:

20,a,1,b,39,a = 13 bytes

Method 2:

10,a,10,a,1,b,10,a,10,a,10,a,9,a = 32 bytes

Method 2 does not compress the signal as much as Method 1, but the end result is the same. No difference in quality. So would it make sense to use Method 2 because it offers better quality? No, of course not. Would the quality improve if we stored the original signal without any compression? No, that makes no sense either.
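As a sketch (again, the names are mine), Method 2 only needs one change compared to the encoder above: cap a run at 10 characters, which mimics an encoder that never looks more than 10 chars ahead.

def rle_encode_capped(text, max_run=10):
    # Same as rle_encode, but a run is never longer than max_run characters.
    parts = []
    i = 0
    while i < len(text):
        ch = text[i]
        run = 1
        while i + run < len(text) and text[i + run] == ch and run < max_run:
            run += 1
        parts.append(f"{run},{ch}")
        i += run
    return ",".join(parts)

print(rle_encode_capped("a" * 20 + "b" + "a" * 39))   # 10,a,10,a,1,b,10,a,10,a,10,a,9,a
# rle_decode still reconstructs the original exactly, so nothing is lost; it just compresses less.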

But what if we didn’t care about that single ‘b’ in the middle of the stream? What if it was simply a mistake, or a change so small that we can live without it? In that case, we’ll just replace the ‘b’ with an ‘a’ and we’ll get this result:

Method 1:

60,a = 4 bytes

Method 2:

10,a,10,a,10,a,10,a,10,a,10,a = 29 bytes

Notice that in both cases, the compression increased, but for method 1, the increase in compression was dramatic – from 13 bytes to just 4, while method 2 only shed 3 bytes. In both cases, the reconstructed signal has the ‘b’ replaced by an ‘a’, so we have lost some information during the compression. Therefore – lossy compression.

Now, both algos give the same error, so the quality of the reproduction, in both cases, is the same. However, method 1 gives a much higher compression than method 2.

So, again, we observe that higher compression does not always lead to lower quality.
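Here is the same lossy step as a sketch, reusing the rle_encode / rle_encode_capped functions from above. The compression improves for both methods, but the quality hit (one wrong character after decoding) is identical:

signal_2 = "a" * 20 + "b" + "a" * 39

# The lossy step: deliberately throw away the detail we decided not to care about.
cleaned = signal_2.replace("b", "a")

print(len(rle_encode(signal_2)), "->", len(rle_encode(cleaned)))                # 13 -> 4
print(len(rle_encode_capped(signal_2)), "->", len(rle_encode_capped(cleaned)))  # 32 -> 29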

MPEG vs JPEG compression

Both MPEG and JPEG are lossy (although lossless variants do exist). When we look at a video clip of, say, 10 frames, JPEG looks at each frame by itself, whereas MPEG takes the content of the other 9 frames into consideration. This leads to much higher compression for MPEG, because MPEG detects similarities between frames. And as we just saw, just because one method offers higher compression than another, it does not mean that the quality is worse.

If you compare apples to apples, then yes, a 512 kbit/s MPEG stream is objectively worse quality than a 1 Mbit/s MPEG stream. But you cannot compare a 512 kbit/s MPEG stream to a 1 Mbit/s JPEG stream and state that MPEG is worse quality because it has higher compression. In fact, the opposite is likely to be true.
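A toy way to see why looking across frames buys so much: take ten fake “frames” that share a static, detailed background and only differ by a small moving feature, then compare compressing each frame on its own against compressing only the difference from the previous frame. The data and numbers are made up, and zlib stands in for the real codecs, but the effect is the same:

import os
import zlib

background = os.urandom(64 * 64)                    # static detail that never changes
frames = []
for t in range(10):
    frame = bytearray(background)
    frame[t * 64 : t * 64 + 16] = b"\xff" * 16      # the only thing that moves
    frames.append(bytes(frame))

# "JPEG-like": every frame is compressed independently.
intra_size = sum(len(zlib.compress(f)) for f in frames)

# "MPEG-like": the first frame in full, the rest as the difference from the
# previous frame. The diff is almost all zeros, so it compresses extremely well.
inter_size = len(zlib.compress(frames[0]))
for prev, cur in zip(frames, frames[1:]):
    diff = bytes(a ^ b for a, b in zip(prev, cur))
    inter_size += len(zlib.compress(diff))

print(intra_size, inter_size)   # inter-frame comes out far smaller, with no extra quality loss here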

Practical Issues with MPEG

MPEG is a pain in the ass to deal with when you are reviewing video. You want to be able to single-step frames, forward and backward; in other words, you should expect full random access to any frame, at any given time. With the newer variants of MPEG (SmartStream by Vivotek, for example), GOP lengths are now much longer, and so getting random access means decoding a hell of a lot of frames. Sony (if I recall) took the advice (that I gave to Mobotix) and made a variant where every frame references only the last keyframe. This allows access to any frame in the GOP with only the I-frame and a single P-frame (yay!).
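Back-of-the-envelope, with made-up numbers: seeking to frame 250 in a 256-frame GOP costs roughly 251 decodes in the classic scheme, versus 2 in the “every frame references the last keyframe” scheme.

def frames_to_decode_classic(target, gop_length):
    # Classic GOP: each P-frame references the previous frame, so seeking means
    # decoding from the last I-frame all the way up to the target.
    last_i = (target // gop_length) * gop_length
    return target - last_i + 1

def frames_to_decode_keyframe_referencing(target, gop_length):
    # Variant where every P-frame references only the last I-frame:
    # the I-frame plus one P-frame is always enough.
    return 1 if target % gop_length == 0 else 2

print(frames_to_decode_classic(250, 256))               # 251
print(frames_to_decode_keyframe_referencing(250, 256))  # 2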

Practical Issues with MxPEG

Like MPEG, MxPEG also looks at other frames, so it offers better compression than JPEG, but where MPEG uses motion vectors, MxPEG has an all-or-nothing approach: either you get a full macroblock, or you get nothing.

Furthermore, MxPEG does not have the notion of I-frames. It’s very similar to some of the H.264 variants used for live-streaming of games, etc. One benefit is that the bitrate does not fluctuate wildly (over the same content), because there is no giant I-frame every second, followed by microscopic P-frames. Really shitty implementations of MPEG try to compress the I-frame to fit inside the given bandwidth, which leads to useless results. I am tempted to conclude that some of the anti-H.264 sentiment stems from seeing that kind of shit (there should be a public blacklist of terrible cameras/encoders).

The problem with MxPEG is that it is not widely supported. Mobotix (or someone who loves MxPEG) has added MxPEG to one of the greatest open source decoder pools out there, so finding a decoder is not too hard (and Mobotix will offer you source code if you play your cards right). But you are not going to see native support for MxPEG on Android, on iOS, or in Chrome or Safari, etc. So you either have to do some server-side processing and provide a full MJPEG frame in those cases, or write a lot more code on the client side to support MxPEG.

MxPEG’s I-frameless concept also means that the NVR has to do a bit of juggling to ensure proper random frame access. In some cases, the NVR creates a full JPEG frame every N frames and stores that in the video DB, thus making MxPEG more like MPEG.
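A sketch of that juggling (the structures and names are made up, not Mobotix’s actual API): decode while recording, stash a full snapshot every N frames, and on seek replay only the deltas after the nearest snapshot.

SNAPSHOT_INTERVAL = 30   # store a full "keyframe" every 30 frames (made-up value)

def apply_delta(image, packet):
    # Stand-in for the real decode step: a "frame" is a dict of block -> pixels,
    # and each packet carries only the macroblocks that changed.
    image = dict(image or {})
    image.update(packet)
    return image

def build_snapshots(packets):
    snapshots, current = {}, None
    for i, packet in enumerate(packets):
        current = apply_delta(current, packet)
        if i % SNAPSHOT_INTERVAL == 0:
            snapshots[i] = current            # synthetic "I-frame" stored in the video DB
    return snapshots

def seek(packets, snapshots, target):
    start = (target // SNAPSHOT_INTERVAL) * SNAPSHOT_INTERVAL
    image = snapshots[start]
    for i in range(start + 1, target + 1):    # at most SNAPSHOT_INTERVAL - 1 decodes
        image = apply_delta(image, packets[i])
    return image

packets = [{(0, 0): i} for i in range(100)]   # 100 fake delta-only frames
snapshots = build_snapshots(packets)
print(seek(packets, snapshots, 95))           # {(0, 0): 95}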

Conclusion

Different problems call for different solutions. It is up to the skilled integrator to pick the right components based on actual knowledge, and not rely on folklore or the ramblings of a crazed sociopath. Different cameras have different advantages: some have great optics, some have analytics capabilities, some have PIR sensors, and so on. What’s great for Pete’s Sandwiches might not work well for Johnson Manufacturing. The intent of this post is not to say that X is the wrong choice, but to – hopefully – shed some light on how X is different from Y, and why.

On Management Books

I ran into a guy driving the new BMW i8. I asked how he’d become so successful. He gave me a pamphlet. The cover said “How I got rich”. I flipped through the pages. There were a lot of weird things this guy had done. He always used a certain type of shampoo, and recommended I do the same. He never drove red cars, so I should avoid them, and he took two teaspoons of sugar in his morning coffee.

I asked how this would make me successful. It just didn’t make sense to me. Perhaps doing these things would alter my perception, causing me to become more successful, I offered as an explanation.

He squinted, paused a second, and said “I did these things, and I won the lottery 2 years ago”. He pulled out a pack of cigarettes. He lit one, inhaled deeply, and with a painful expression proceeded to say “Post hoc ergo propter hoc”. He then stepped into his shiny new car and drove off.

There’s no arguing with that… it’s in LATIN, and he drove a BMW i8!!!

Arguably the most successful company* in the world was run by someone who many people say was an asshole. But without Woz (and a whole lot of circumstantial luck), I doubt that Jobs would have gotten very far. After the success of Jobs, a lot of people felt that their assholery had been validated as a management strategy. Post hoc…

Microsoft did not have a Woz (they did have Allen, though). Contrary to popular belief, Gates is not a programming genius (he’s probably good, though). Gates is a genius at making deals and ruthlessly destroying any threat to the core business. He and Ballmer were such wonderful mensches (sarcasm), and they built this massive, successful company. So, Post hoc…

I can’t find any empirical evidence that suggests that one management style is more likely to be successful than another. There are companies that are run by sociopathic assholes and are successful. There are companies run by geniuses that fail miserably.

Self-serving bias is a real thing, which is one reason I don’t read management books. If you are successful, you’ll stand on your anthill yelling “post hoc”, whereas, if you fail, it was something external.

*Apple!!!