The good thing about MJPEG is that the frames are independent. If the link between my camera and the NVR can stream 30 frames per second, but the link from the NVR to the client is only capable of 10 fps, then 2 out of every 3 frames will be dropped. I may have local clients that will do 20 fps, and a mobile client doing only 4 fps. The point is – they all get the frame rate that their bandwidth will allow.
Now, consider MPEG4 –
The camera prepares a 30 FPS GOP and sends it to the NVR; the NVR then tries to relay the GOP to the client. But if the client’s bandwidth is too low, I will have to skip some data. I can skip a whole GOP, but that causes latency (if we assume it takes 2 seconds to send a 1-second GOP, the last frame arrives with a 1-second delay), AND I would lose one second of video every other second. I think it is safe to say that such a solution would not be acceptable to anyone.
So we can choose to “throw away” some P-frames instead (we should NEVER discard I-frames!), but this leads to a new, much smaller problem. Depending on the encoder, missing P-frames cause different artifacts on the screen – these artifacts are frequently described as “ghosting”.
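A sketch of that shedding policy might look like the following. The `Frame` type, sizes, and byte-budget check are all hypothetical – real codecs have B-frames, reference lists, and so on – but it shows the one rule that matters: under pressure, drop P-frames, never I-frames:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    kind: str   # "I" or "P" (illustrative; real streams are more varied)
    size: int   # bytes

def shed(frames, budget):
    """Forward frames until the byte budget runs out, but always send I-frames.

    Dropping a P-frame only degrades the picture until the next I-frame;
    dropping an I-frame would break every frame that references it.
    """
    sent, used = [], 0
    for f in frames:
        if f.kind == "I" or used + f.size <= budget:
            sent.append(f)
            used += f.size
    return sent

# A GOP that is too big for the client's budget: the tail P-frames are shed.
gop = [Frame("I", 5000)] + [Frame("P", 1000) for _ in range(9)]
sent = shed(gop, 9000)
```

The client still decodes a coherent (if degraded) picture, and the next I-frame resets everything – which is exactly why the ghosting artifacts described above are temporary.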
The artifacts are a symptom of a bottleneck in the system; but how do we handle these bottlenecks? I prefer smooth playback, even if there are artifacts. It’s a personal preference, so I am not going to claim that this is objectively the BEST solution – it is just the one I prefer. The alternative is to “freeze frame” until we receive a new I-frame, and then continue from there. This leads to stuttering and jerky movement, but evidently a lot of people prefer that.
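For completeness, here is a toy sketch of the “freeze frame” policy – the frame markers and the `GAP` sentinel are invented for the example. After a hole in the stream, the player holds the last good picture and discards P-frames until an I-frame re-synchronizes the decoder:

```python
def freeze_policy(frames):
    """Yield (frame, display) pairs for a stream with gaps.

    display=False means "hold the last good picture": after data loss,
    P-frames are not shown until an I-frame restores a clean state.
    (Toy model: frames are strings like "I", "P", and "GAP" marks loss.)
    """
    synced = True
    for f in frames:
        if f == "GAP":
            synced = False      # reference state is now unreliable
            continue
        if f == "I":
            synced = True       # an I-frame re-synchronizes the decoder
        yield f, synced

stream = ["I", "P", "P", "GAP", "P", "P", "I", "P"]
shown = [f for f, display in freeze_policy(stream) if display]
```

The trade-off is plain in the model: nothing wrong is ever displayed, but two frames of motion are simply lost – hence the stutter.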
In a perfect system, the video would scale to match the bandwidth of the client. Hulu does this (Hulu is like a deluxe version of YouTube), but Hulu is not streaming in real time. Hulu’s programming has been transcoded offline to 3 or 4 different bandwidth levels, and Hulu can afford to buffer for 2 minutes before starting playback. That’s simply not a viable option for us.