
        Video Stabilization for Embedded Systems

RidgeRun's Video Stabilization for Embedded Systems is a software library that aims to provide efficient video stabilization solutions for resource constrained systems. The library uses different hardware units available in the platform to ensure real-time performance on a variety of small platforms.

[Animation: video-stabilization-overview.gif]

Broadly, stabilizers may be broken down into three stages:


* Motion estimation: approximating the camera movement.

* Motion compensation: removing undesired movement.

* Image warping: transforming the image to compensate for the undesired movement.
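The three stages above compose into a simple per-frame pipeline. The sketch below is purely illustrative; the function and parameter names are hypothetical, not the library's API:

```python
# Minimal sketch of the three-stage stabilization structure.
# All names here are illustrative assumptions, not the library's API.

def stabilize_frame(frame, estimate, compensate, warp):
    motion = estimate(frame)         # stage 1: motion estimation
    correction = compensate(motion)  # stage 2: motion compensation
    return warp(frame, correction)   # stage 3: image warping

# Toy usage: a "frame" is reduced to its measured shift in pixels,
# and warping simply applies the correction to undo that shift.
result = stabilize_frame(
    frame=5,
    estimate=lambda f: f,       # camera moved 5 px
    compensate=lambda m: -m,    # cancel the whole shift
    warp=lambda f, c: f + c,    # apply correction
)
print(result)  # -> 0
```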

 

Of these, the first and last stages are the most resource-intensive. For motion estimation, we use the platform's hardware-accelerated H.264 encoder and extract its motion vectors; most modern platforms, even small ones, can encode at 30 fps. These motion vectors are then aggregated to estimate the overall camera movement.
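One common way to aggregate per-macroblock motion vectors into a single camera-motion estimate is a robust statistic such as the median, which discards blocks that belong to independently moving foreground objects. A minimal sketch, with hypothetical names (the source does not specify the aggregation method used):

```python
# Hypothetical sketch: turn per-macroblock (dx, dy) motion vectors
# extracted from an H.264 encoder into one global camera-motion estimate.
# The per-axis median is robust to foreground objects that move
# independently of the camera.

def estimate_global_motion(motion_vectors):
    xs = sorted(v[0] for v in motion_vectors)
    ys = sorted(v[1] for v in motion_vectors)
    mid = len(xs) // 2
    return xs[mid], ys[mid]

# Example: most blocks agree the camera panned right by ~3 px;
# the (-8, 5) outlier is a moving object and gets ignored.
vectors = [(3, 0), (3, 1), (2, 0), (3, 0), (-8, 5)]
print(estimate_global_motion(vectors))  # -> (3, 0)
```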

 

The estimated camera trajectory is then smoothed to eliminate undesired perturbations. The resulting motion correction is applied to the original image using OpenGL, which also runs in real time on these platforms.
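The source does not detail the smoothing filter, but a one-pole low-pass (exponential moving average) is a representative zero-lookahead choice consistent with the ultra-low-latency requirement. In this sketch (hypothetical names, one axis only), the correction passed to the warping stage is the difference between the smoothed and the raw trajectory:

```python
# Illustrative sketch, not the library's actual filter: smooth a 1-D
# camera trajectory with an exponential moving average. No future frames
# are needed, so the filter adds zero frames of latency.

def smooth_trajectory(raw, alpha=0.9):
    """Return (smoothed, corrections) for a list of camera positions."""
    smoothed, corrections = [], []
    acc = raw[0]
    for x in raw:
        acc = alpha * acc + (1 - alpha) * x  # low-pass filtered path
        smoothed.append(acc)
        corrections.append(acc - x)          # shift to apply when warping
    return smoothed, corrections

# A jittery pan: the corrections cancel frame-to-frame jitter while the
# smoothed path still follows the intentional camera motion.
raw = [0, 5, 2, 7, 4, 9]
smoothed, corrections = smooth_trajectory(raw)
```

Larger `alpha` values smooth more aggressively but let the output drift further from the true path, which in practice is bounded by how large a crop the warping stage can hide.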

 

The following image shows a general overview of the process:


[Image: video-stabilization-overview.png]

Our initial v0.1.0 pre-release includes:


* Support for x86, i.MX6*, i.MX8*, and Snapdragon* platforms (*in progress).

* HW-accelerated motion estimation via H.264 motion vectors.

* Motion smoothing using an ultra-low-latency filter.

* Image warping via OpenGL.

* GStreamer plugin for easy pipeline integration.


Stay tuned for updates and learn more in our developer's wiki.


Technical questions? Email: support@ridgerun.com

Want to know more about the project? Email: contactus@ridgerun.com

 
