We just overhauled the motion detection engine in Cayuga; the results were promising (better than the older 4.2 recorders, and with new cameras and a good integrator, we can run a lot more cameras on the same box). I still believe that motion detection should be done on the edge. But for those who refuse, the new recorders are a lot better.
Motion Detection on the Edge
I think server-side motion detection ought to be the last resort. (I also think that the VMS should make the choice transparent, so that if a camera has decent motion detection, the VMS will simply set up the camera to do it on the edge.)
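To make the idea concrete, here is a minimal sketch of that fallback logic in Python. The `Camera` class, its fields, and the decision rule are all invented for illustration; a real VMS would discover the camera's capabilities over ONVIF or a vendor API rather than from a hard-coded flag.

```python
from dataclasses import dataclass

# Hypothetical sketch of the "transparent choice" described above.
# Names and fields are made up for the example.

@dataclass
class Camera:
    name: str
    has_edge_detection: bool

def configure_detection(camera: Camera) -> str:
    # Prefer the camera's own detector; fall back to server-side
    # detection only when the camera cannot do it.
    if camera.has_edge_detection:
        return f"{camera.name}: motion detection on the edge"
    return f"{camera.name}: server-side motion detection (last resort)"

print(configure_detection(Camera("lobby", has_edge_detection=True)))
print(configure_detection(Camera("legacy-cam", has_edge_detection=False)))
```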
If the VMS is not able to do this transparently, then most systems will allow you to set it up manually. Usually a royal pain in the ass, but c’est la vie.
Server-side motion detection can be very taxing on the CPU. How taxing depends on the accuracy required, both in terms of the number of pixels processed and how often the detection takes place. Generally, lowering the accuracy will also lower the strain on the CPU.
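To illustrate the two knobs, here is a toy frame-differencing detector using OpenCV. The scale factor, skip interval, and thresholds are arbitrary numbers picked for the example, not values from Cayuga or any real VMS; the point is simply that both knobs trade accuracy for CPU.

```python
import cv2

# Toy server-side motion detector. CPU cost scales with the number of
# pixels compared and how often we compare them.

DETECT_SCALE = 0.25       # analyze at quarter resolution -> ~1/16 the pixels
DETECT_EVERY_N = 5        # only run detection on every 5th frame
THRESHOLD = 25            # per-pixel intensity change counted as "motion"
MIN_CHANGED_RATIO = 0.01  # fraction of pixels that must change to trigger

def has_motion(prev_gray, gray):
    diff = cv2.absdiff(prev_gray, gray)
    changed = cv2.threshold(diff, THRESHOLD, 255, cv2.THRESH_BINARY)[1]
    return cv2.countNonZero(changed) / changed.size >= MIN_CHANGED_RATIO

cap = cv2.VideoCapture("video.mp4")  # placeholder source
prev_gray = None
frame_no = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame_no += 1
    if frame_no % DETECT_EVERY_N:
        continue  # skipping frames lowers CPU load (and temporal accuracy)
    small = cv2.resize(frame, (0, 0), fx=DETECT_SCALE, fy=DETECT_SCALE)
    gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)
    if prev_gray is not None and has_motion(prev_gray, gray):
        print(f"motion around frame {frame_no}")
    prev_gray = gray
cap.release()
```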
Motion detection on the edge incurs no additional CPU load on the server – regardless of the accuracy. So if your system is running hot (due to motion detection), then consider moving the detection to the edge. Edge detection may offer more options, and perform better (or worse, so test before deployment), depending on the camera type.
If you are going to do server-side motion detection, consider whether the motion detection can happen on a secondary stream of lower resolution (and perhaps lower frame-rate too). This obviously requires more bandwidth on the network, but it might be worth the trade-off.
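As a sketch, assuming the camera exposes a low-resolution RTSP substream (the URLs below are placeholders; actual substream paths vary by vendor), detection can run on the cheap stream while the recorder keeps the full-resolution one:

```python
import cv2

# Detect on a camera's low-res substream; the VMS records the primary
# stream. Both URLs are placeholders for the example.
MAIN_STREAM = "rtsp://camera.example/stream1"  # full resolution, recorded
SUB_STREAM = "rtsp://camera.example/stream2"   # e.g. 320x240 @ 5 fps, detection only

cap = cv2.VideoCapture(SUB_STREAM)
prev_gray = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if prev_gray is not None:
        diff = cv2.absdiff(prev_gray, gray)
        moving = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)[1]
        if cv2.countNonZero(moving) > 0.01 * moving.size:
            # In a real VMS this would flag MAIN_STREAM for recording.
            print("motion detected on substream")
    prev_gray = gray
cap.release()
```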
Be very careful when “overbooking” a server. You may often be able to fit more cameras on one server by recording only when there is motion. Since the disk IO can represent a bottleneck, the assumption is that not all cameras will have motion at the same time; if only 30% of the cameras are recording at any given moment, the disk IO may not be a bottleneck. Unfortunately, a lot of motion detectors are susceptible to triggering on noise in the image. At night, when light is scarce, the camera increases the gain (unless this is turned off). This also increases the amount of noise, which often trips the motion detector, and the camera gets recorded to disk. Since most cameras experience the same loss of light at the same time, they all get recorded at once, hitting the disk IO bottleneck and wasting space on meaningless recordings. Consider whether post-recording image processing can replace AGC; a very, very unscientific test suggested that we might as well do the AGC on the client.
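The back-of-the-envelope math behind the overbooking assumption, with made-up numbers, looks like this; the failure mode is simply the activity fraction jumping from the daytime assumption to everything-at-once:

```python
# All numbers are invented for illustration.
cameras = 100
bitrate_mbps = 4        # per-camera recording bitrate
disk_budget_mbps = 200  # sustainable write throughput of the server

def write_load(active_fraction):
    # Aggregate write load when this fraction of cameras has motion.
    return cameras * active_fraction * bitrate_mbps

print(write_load(0.30))  # daytime assumption: 120 Mbps, within budget
print(write_load(1.00))  # noisy night, all cameras trigger: 400 Mbps, 2x over budget
```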
If the motion detection is used for alerting, it may be difficult or even impossible to find the right parameters. If the setup is too sensitive, you will get too many false alarms. On the other hand, making the system too insensitive may cause you to miss the event (and if you are using the same detection to trigger recording, you may lose the footage completely!). A setup that was appropriate in the winter may not be appropriate in the summer, when foliage, the azimuth of the sun, etc. may influence your sensitivity or masking settings.
For real-time alerting, I would recommend a dedicated video analytics engine, preferably installed on the camera.
Off to work…