Surveillance systems rely heavily on embedded vision to enable deployment across a wide range of markets and applications. They are used for everything from event and traffic monitoring to safety and security, ISR, and business intelligence. This diversity brings several challenges that system designers must address in their solutions. These are:
- Multi-Camera Vision – The ability to interface with multiple homogeneous or heterogeneous sensor types.
- Computer Vision Techniques – The ability to develop using high level libraries and frameworks like OpenCV and OpenVX.
- Machine Learning Techniques – The ability to use frameworks like Caffe to implement machine learning inference engines.
- Increasing Resolutions and Frame Rates – The ability to keep up with the growing volume of data that must be processed for each frame of the image.
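To make the last challenge concrete, the raw pixel throughput a system must sustain can be estimated from resolution, frame rate, and pixel depth. The sketch below uses hypothetical but typical values (4K UHD at 60 fps with 2 bytes per pixel, as in YUV 4:2:2); actual figures depend on the chosen sensor and pixel format.

```python
# Back-of-the-envelope throughput estimate for one camera stream.
# All parameter values are illustrative, not tied to a specific sensor.
width, height = 3840, 2160   # 4K UHD resolution
fps = 60                     # frames per second
bytes_per_pixel = 2          # e.g. YUV 4:2:2 packed format

bytes_per_frame = width * height * bytes_per_pixel
throughput_bps = bytes_per_frame * fps

print(f"Per frame: {bytes_per_frame / 1e6:.1f} MB")
print(f"Sustained: {throughput_bps / 1e9:.2f} GB/s")
```

At these settings a single stream approaches 1 GB/s of raw data, and a multi-camera design multiplies that figure by the number of sensors, which is one reason dedicated processing fabric is attractive.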
Depending upon the application, a surveillance system will implement algorithms such as optical flow to detect motion within the image. Stereo vision provides depth perception, while machine learning techniques are used to detect and classify objects within an image. In Figure 1, the top photo demonstrates facial detection and classification, while the bottom photo depicts an optical flow application.
Heterogeneous System-on-Chip devices like the All Programmable Zynq-7000 and the Zynq UltraScale+ MPSoC are increasingly being used for the development of surveillance applications. These devices combine high-performance Arm® cores, which form the Processing System (PS), with Programmable Logic (PL) fabric.
This tight coupling of PS and PL allows for the creation of a system that is more responsive, reconfigurable, and power-efficient than a traditional approach. Traditional CPU/GPU-based SoC approaches require the use of system memory to transfer images from one…