Compression and Quality


Here’s a bit of uncompressed text (60 characters):

aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa = 60 bytes

Depending on your system, these 60 a’s take up 60 bytes of memory (UTF-8) or 120 bytes (UTF-16). If you were to store the text on disk, the file would be 60 bytes too. Sending it across the network? 60 bytes.

Now, what if I compress it? A simple compression algorithm would be run-length encoding (RLE) of the a’s, like so:

60,a = 4 bytes

This is just 4 bytes instead of 60. We have compressed the signal quite a bit, but did the quality suffer? No. Once we decompress, the signal is a 100% copy of the original.
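The round trip above can be sketched in a few lines of Python. This is a toy (hypothetical helper names, and it assumes the text never contains digits or commas), not a real codec:

```python
def rle_encode(text: str) -> str:
    """Encode runs of repeated characters as 'count,char' pairs."""
    runs = []
    i = 0
    while i < len(text):
        j = i
        while j < len(text) and text[j] == text[i]:
            j += 1
        runs.append(f"{j - i},{text[i]}")
        i = j
    return ",".join(runs)

def rle_decode(encoded: str) -> str:
    """Reverse the encoding: expand each 'count,char' pair."""
    parts = encoded.split(",")
    return "".join(ch * int(n) for n, ch in zip(parts[::2], parts[1::2]))

signal = "a" * 60
packed = rle_encode(signal)          # "60,a" -- 4 bytes instead of 60
assert rle_decode(packed) == signal  # lossless: a 100% copy of the original
```

The same encoder handles the second signal below, producing `20,a,1,b,39,a`.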

Here’s a slightly different signal:

aaaaaaaaaaaaaaaaaaaabaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa  = 60 bytes


20,a,1,b,39,a = 13 bytes

It’s safe to say that higher compression does not, by itself, lead to loss of quality. In both cases the output is exactly the same as the input, but the amount of data sent or stored is much less when you compress. And just because signal 2 compresses less well than signal 1, it would be unreasonable to say that the higher compression of signal 1 gave a less accurate result.


Let’s introduce a new variant of the RLE algorithm. The algorithm above looks through the entire text before compressing, but what if we add another one that only looks forward 10 characters?

Method 1:

20,a,1,b,39,a = 13 bytes

Method 2:

10,a,10,a,1,b,10,a,10,a,10,a,9,a = 32 bytes

Method 2 does not compress the signal as much as Method 1, but the end result is the same. No difference in quality. So would it make sense to use Method 2 because it offers better quality? No, of course not. Would the quality improve if we stored the original signal without any compression? No, that makes no sense either.
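Method 2 can be sketched by capping the run length. Again a toy, assuming the same "count,char" notation as above:

```python
def rle_encode_windowed(text: str, window: int = 10) -> str:
    """RLE, but no run may exceed `window` characters (Method 2)."""
    runs = []
    i = 0
    while i < len(text):
        j = i
        # Stop the run at a differing character OR at the window limit.
        while j < len(text) and text[j] == text[i] and j - i < window:
            j += 1
        runs.append(f"{j - i},{text[i]}")
        i = j
    return ",".join(runs)

signal = "a" * 20 + "b" + "a" * 39
print(rle_encode_windowed(signal))  # 10,a,10,a,1,b,10,a,10,a,10,a,9,a
```

Both methods decode with the same decoder, so the reconstructed signal is identical; only the compressed size differs.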

But what if we didn’t care about that singular ‘b’ in the middle of the stream? What if it was simply a mistake, or a change so small that we can live without it? In that case, we just replace the ‘b’ with an ‘a’ and get this result:

Method 1:

60,a = 4 bytes

Method 2:

10,a,10,a,10,a,10,a,10,a,10,a = 29 bytes

Notice that in both cases the compression increased, but for Method 1 the increase was dramatic (from 13 bytes to just 4), while Method 2 only shed 3 bytes. In both cases the reconstructed signal has the ‘b’ replaced by an ‘a’, so we have lost some information during compression. Hence: lossy compression.

Now, both algos introduce the same error, so the quality of the reproduction is the same in both cases. However, Method 1 gives a much higher compression than Method 2.
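The lossy step can be sketched like this (the “denoise” pre-filter is illustrative, and the encoded strings are written out by hand to keep the example short):

```python
# Lossy step: drop the stray 'b' BEFORE compressing.  The information
# loss happens in this pre-filter, not in the RLE itself.
original = "a" * 20 + "b" + "a" * 39
filtered = original.replace("b", "a")   # the information we chose to lose

method1 = "60,a"                        # full-lookahead RLE of `filtered`
method2 = ",".join(["10,a"] * 6)        # 10-char-window RLE of `filtered`

# Both methods decode `filtered` exactly, so both reconstructions differ
# from the original by the same single character:
errors = sum(1 for x, y in zip(original, filtered) if x != y)
print(len(method1), len(method2), errors)  # 4 bytes vs 29 bytes, 1 wrong char
```

Same error, very different sizes: the quality loss came from the filter, not from how aggressively each method compressed.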

So, again, we observe that higher compression does not always lead to lower quality.

MPEG vs JPEG compression

Both MPEG and JPEG are lossy (although lossless variants do exist). When we look at a video clip of, say, 10 frames, JPEG compresses each frame by itself, whereas MPEG takes the content of the other 9 frames into consideration. This leads to much higher compression for MPEG, because MPEG exploits the similarities between frames. And as we just saw, just because Method 1 offers higher compression than Method 2, it does not mean that the quality is worse.

If you compare apples to apples, then yes, a 512 kbit/s MPEG stream is objectively worse quality than a 1 Mbit/s MPEG stream. But you cannot compare a 512 kbit/s MPEG stream to a 1 Mbit/s JPEG stream and state that MPEG is worse quality because it has higher compression. In fact, the opposite is likely to be true.

Practical Issues with MPEG

MPEG is a pain in the ass to deal with when you are reviewing video. You want to be able to single-step frames, forward and backward; in other words, you expect full random access to any frame, at any time. With the new variants of MPEG (SmartStream by Vivotek, for example), GOP lengths are now much longer, so random access requires decoding a hell of a lot of frames. Sony (if I recall correctly) took the advice (that I gave to Mobotix) and made a variant where every frame references only the last keyframe. This allows access to any frame in the GOP with just the I-frame and a single P-frame (yay!).
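A back-of-the-envelope sketch of why this matters. The two reference layouts are assumed and simplified (no B-frames, fixed GOP), not any particular codec’s exact behavior:

```python
# To show frame n you must decode every frame it depends on,
# back to the nearest I-frame at the start of the GOP.

def frames_to_decode_chained(n: int, gop: int) -> int:
    """Classic P-frame chain: each P-frame references the previous frame."""
    return n % gop + 1          # the I-frame plus every P-frame up to n

def frames_to_decode_flat(n: int, gop: int) -> int:
    """Variant where every P-frame references only the last keyframe."""
    return 1 if n % gop == 0 else 2   # the I-frame, plus at most one P

# Seeking to frame 250 inside a 256-frame GOP:
print(frames_to_decode_chained(250, 256))  # 251 frames to decode
print(frames_to_decode_flat(250, 256))     # just 2
```

With long SmartStream-style GOPs the chained cost grows with the GOP length, while the keyframe-only-reference variant stays constant.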

Practical Issues with MxPEG

Like MPEG, MxPEG also looks at other frames, so it offers better compression than JPEG. But where MPEG uses motion vectors, MxPEG takes an all-or-nothing approach: either a macroblock is transmitted in full, or it is not transmitted at all.

Furthermore, MxPEG does not have the notion of I-frames. It’s very similar to some of the H.264 variants used for live-streaming of games etc. One benefit is that the bitrate does not fluctuate wildly (over the same content), because there is no giant I-frame every second, followed by microscopic P-frames. Really shitty implementations of MPEG try to compress the I-frame to fit inside the given bandwidth, which leads to useless results. I am tempted to conclude that some of the anti-H.264 sentiment stems from seeing that kind of shit (there should be a public blacklist of terrible cameras/encoders).

The problem with MxPEG is that it is not widely supported. Mobotix (or someone who loves MxPEG) has added MxPEG to one of the greatest open source decoder pools out there, so finding a decoder is not too hard (and Mobotix will offer you source code if you play your cards right). But you are not going to see native support for MxPEG on Android, on iOS, in Chrome or Safari, etc. So you either have to do some server-side processing and provide a full MJPEG frame in those cases, or write a lot more code on the client side to support MxPEG.

MxPEG’s I-frameless design also means that the NVR has to do a bit of juggling to ensure proper random frame access. In some cases, the NVR creates a full JPEG frame every N frames and stores that in the video DB, thus making MxPEG behave more like MPEG.


Different problems call for different solutions. It is up to the skilled integrator to pick the right components based on actual knowledge, and not rely on folklore or the ramblings of a crazed sociopath. Different cameras have different advantages: some have great optics, some have analytics capabilities, some have PIR sensors, and so on. What’s great for Pete’s Sandwiches might not work well for Johnson Manufacturing. The intent of this post is not to say that X is the wrong choice, but to (hopefully) shed some light on how X is different from Y, and why.


Author: prescienta

