By Luis G. Leon-Vega

Introducing the RidgeRun CUDA ISP Library: Accelerate your Image Processing

Updated: Apr 17


Image signal processing (ISP) is a critical component of the image capture process, responsible for transforming raw sensor data into usable images. This process involves several steps, including debayering and auto-white balancing, which can be computationally intensive and time-consuming. To address these challenges, RidgeRun has developed the CUDA Image Signal Processing (ISP) library: a CUDA-accelerated solution for image preprocessing that offloads processing to NVIDIA GPUs.



What is the CUDA Image Signal Processing Library?


The CUDA ISP library provides several features to streamline the image processing workflow. These include:

  • Debayering: converts raw Bayer pixel arrays into RGB format.

  • Binary shifting: converts high-bit-depth raw data (for example, 10-bit) into 8-bit data (see the sketch after this list).

  • White balancing: adjusts the color balance of the image.
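
To illustrate the idea behind binary shifting (this is the underlying concept, not the library's actual kernel): a 10-bit raw sample is reduced to 8 bits by discarding its two least significant bits, as in the C++ sketch below.

#include <cstdint>

// Conceptual sketch only: map a 10-bit raw sample (0-1023) to 8 bits (0-255)
// by dropping the two least significant bits.
uint8_t ShiftTo8Bit(uint16_t raw10) {
  return static_cast<uint8_t>(raw10 >> 2);  // e.g., 1023 >> 2 == 255
}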


The library also includes CUDA buffer allocators, making it easy to integrate with multimedia applications and computer vision algorithms.


One of the key benefits of the CUDA ISP library is its ease of integration. The library provides an intuitive C++ API that can be seamlessly integrated into existing workflows, and is GStreamer-friendly, making it ideal for streaming applications, computer vision, and image analysis.


When Can I Use It?


CUDA ISP can help you in the following scenarios:

  • Platforms without an Image Signal Processor: most x86 platforms lack the Image Signal Processor required to handle MIPI-CSI cameras.

  • Reducing CPU consumption: CUDA ISP offloads the processing to the GPU, freeing the CPU for other critical tasks.

  • Extensibility: after purchasing CUDA ISP, RidgeRun delivers the source code, so you can extend the library and tune the algorithms to your needs.


Ideal use cases for CUDA ISP include streaming and preprocessing for computer vision.


You can get the benefits of CUDA ISP simply by using the following zero-copy GStreamer pipeline:


v4l2src io-mode=userptr ! "video/x-bayer,bpp=10" ! cudashift ! cudadebayer ! cudaawb ! fakesink
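
For example, the same pipeline can be launched from a terminal with gst-launch-1.0. The device path and the file output below are placeholders for illustration, and they assume your camera exposes 10-bit Bayer frames on /dev/video0:

gst-launch-1.0 -e v4l2src device=/dev/video0 io-mode=userptr ! "video/x-bayer,bpp=10" ! cudashift ! cudadebayer ! cudaawb ! filesink location=frames.raw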

For your applications, CUDA ISP is also available in C++:


  • Create the acceleration backend

auto backend = rr::IBackend::Build(rr::kCUDA);
  • Create the algorithm handler from the acceleration backend

auto white_balancer = backend->CreateAlgorithm(rr::kWhiteBalancer);
  • Create the buffers compatible with the acceleration backend

auto buffer_params = std::make_shared<CudaBufferParams>();
buffer_params->type = CudaType::kUnified;

auto in_buffer = backend->CreateBuffer(size, buffer_params);
auto out_buffer = backend->CreateBuffer(size, buffer_params);
  • Adjust the algorithm parameters

auto awb_params = std::make_shared<rr::WhiteBalanceParams>();
awb_params->algorithm = rr::WhiteBalanceParams::kHistogramStretch;
awb_params->quality_factor = 0.95;
  • And run the algorithm

white_balancer->Apply(in_buffer.get(), out_buffer.get(), awb_params, false);
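
Putting those steps together, a minimal end-to-end program looks like the sketch below. The API calls are taken from the snippets above; the CUDA ISP header include, the main() wrapper, and the example buffer size are assumptions for illustration, so refer to the API Usage page in our developer wiki for the exact header names and buffer sizing.

#include <cstddef>
#include <memory>
// #include the RidgeRun CUDA ISP headers here (names omitted; see the wiki)

int main() {
  // 1. Create the acceleration backend
  auto backend = rr::IBackend::Build(rr::kCUDA);

  // 2. Create the algorithm handler
  auto white_balancer = backend->CreateAlgorithm(rr::kWhiteBalancer);

  // 3. Create backend-compatible buffers (CUDA unified memory in this sketch)
  auto buffer_params = std::make_shared<CudaBufferParams>();
  buffer_params->type = CudaType::kUnified;

  const std::size_t size = 1920 * 1080 * 3;  // placeholder: one 8-bit RGB frame
  auto in_buffer = backend->CreateBuffer(size, buffer_params);
  auto out_buffer = backend->CreateBuffer(size, buffer_params);

  // 4. Adjust the algorithm parameters
  auto awb_params = std::make_shared<rr::WhiteBalanceParams>();
  awb_params->algorithm = rr::WhiteBalanceParams::kHistogramStretch;
  awb_params->quality_factor = 0.95;

  // 5. Run the algorithm
  white_balancer->Apply(in_buffer.get(), out_buffer.get(), awb_params, false);

  return 0;  // RAII: buffers and handlers are released automatically here
}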

CUDA ISP was developed following best practices such as RAII: every structure is automatically freed after use, which eases integrating CUDA ISP into your application. You can get started with the API Usage page in our developer wiki.


Which Platforms Does CUDA ISP Support?


RidgeRun CUDA ISP is supported on:

  • x86 platforms equipped with NVIDIA GPUs*

  • NVIDIA Jetson (with JetPack 4.x and 5.x)


* Limited to GPUs that support NVIDIA NPP.


Learn more in our developer wiki.


Want to give it a try? Email support@ridgerun.com for an evaluation version!

Please submit the Contact Us form if you have any questions, or send an email to contactus@ridgerun.com.


