
Official GStreamer Plug-in for Basler Cameras

Michael Gruner

Updated: Jun 10, 2024

RidgeRun is proud to have partnered with Basler to develop the official GStreamer plug-in for Basler cameras! The project consists of a single source element that auto-detects the cameras connected to the system and lets you start capturing from one of them right out of the box. The project is fully open source and the latest release can be found on Basler's GitHub organization.




The project uses the powerful Pylon SDK to access the different camera features. We used it in combination with GObject's introspection capabilities to create an element that dynamically exposes different caps and properties according to the detected cameras. The result is a one-size-fits-all GStreamer element that fully represents the underlying device. But there's more! Having selected a device, you can configure not only the camera but also its associated stream grabber, transport layer and all the components in the HW stack*. This allows you, for example, not only to configure the gain and exposure time of a USB3 Vision camera, but also to configure the transfer size of the USB stack to really fine-tune it for a specific application.



The following sections give you a sneak peek of the new pylonsrc element in action.

* Only the stream grabber configuration is exposed at the time of this writing.


Inspecting Available Cameras via GstInspect


If you are somewhat familiar with GStreamer, you know that gst-inspect-1.0 is a tool that lets you understand what an element is capable of. It provides information regarding pads, caps, properties, signals, actions and more. In the case of Basler cameras, it was only natural to also include information about the connected cameras. The following snippet shows the output of gst-inspect-1.0 when two different cameras are connected to the system:
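To reproduce it on your own system, simply run the tool against the element; the exact listing depends on which cameras are detected:

~$ gst-inspect-1.0 pylonsrc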



Direct your attention to the top-level cam property. You'll notice how, within this entry, all the detected cameras are listed along with their properties, complemented with each property's type, default value and allowed range. Similarly, the stream property will list the properties of the available stream grabbers.


Selecting a Camera


Cameras are selected using three criteria:

  1. Device user name: a custom name the user can assign to each camera.

  2. Device serial number: the (almost always) unique serial number configured in the camera.

  3. Device index: the index of the device in the list of filtered cameras.

These properties serve as filters and may be combined to narrow the camera search. For example, the following configuration will choose the first camera in the list of cameras named "Top-Left".

~$ gst-launch-1.0 pylonsrc device-name="Top-Left" device-index=0 ! autovideosink

The order of the cameras is guaranteed to be preserved across runs, which allows for deterministic executions.
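The same selection can also be made programmatically. Here is a minimal C sketch (assuming GStreamer has already been initialized with gst_init()), combining the device-name and device-index properties shown above:

/* Select the first camera named "Top-Left", just like the pipeline above */
GstElement *pylonsrc = gst_element_factory_make ("pylonsrc", NULL);
g_object_set (pylonsrc, "device-name", "Top-Left", "device-index", 0, NULL);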


Configuring the Camera


Once the camera is selected you can configure its different features. To do so, you use a slightly lesser-known notation: the Child Proxy syntax. The selected camera is made available through the cam child object and its properties can be accessed through the cam::property=value syntax. For example, the following line will configure the test pattern in the Basler daA1600-60uc (22687677) camera (see the inspect output above):

~$ gst-launch-1.0 pylonsrc device-index=0 cam::TestPattern=ColorDiagonalSawtooth8 ! autovideosink

Note that a camera must be selected before you attempt to configure it.


Programmatically, the same configuration can be achieved along the lines of the following C sketch (the hardcoded value 12 is assumed to correspond to ColorDiagonalSawtooth8, as discussed below):
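/* Assumes GStreamer has already been initialized with gst_init() */
GstElement *pylonsrc = gst_element_factory_make ("pylonsrc", NULL);

/* Select the first detected camera */
g_object_set (pylonsrc, "device-index", 0, NULL);

/* Configure the camera (the "cam" child), not the element itself. The
 * hardcoded 12 is assumed to map to ColorDiagonalSawtooth8. */
gst_child_proxy_set (GST_CHILD_PROXY (pylonsrc), "cam::TestPattern", 12, NULL);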

Or, alternatively, a rough equivalent in Python using the PyGObject bindings:
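import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

pylonsrc = Gst.ElementFactory.make("pylonsrc", None)
pylonsrc.set_property("device-index", 0)

# Fetch the selected camera through the Child Proxy interface and set the
# feature on it; again, 12 is assumed to map to ColorDiagonalSawtooth8.
cam = pylonsrc.get_child_by_name("cam")
cam.set_property("TestPattern", 12)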

In both cases you can see that the TestPattern property is being set on the camera and not on the pylonsrc element. You'd take a similar approach to configure the stream grabber, except you would use the "stream" child.
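For instance, assuming the stream grabber of a USB camera exposes Pylon's MaxTransferSize parameter under that same name (check the gst-inspect-1.0 output of your device for the exact name), the USB transfer size could be tuned right from the pipeline:

~$ gst-launch-1.0 pylonsrc device-index=0 stream::MaxTransferSize=262144 ! autovideosink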


Note that, in the example above, we are using a hardcoded value of 12 which corresponds to the "ColorDiagonalSawtooth8" value in Pylon's GenICam enumeration. This value can be retrieved from the camera documentation, the gst-inspect-1.0 output or even programmatically by performing introspection on the "cam" GObject. For example, without going into much introspection detail, the following snippet shows how you'd query the allowed values:

GParamSpecEnum *pspec = NULL;

/* Look up the param spec that describes the camera's TestPattern property */
gst_child_proxy_lookup (GST_CHILD_PROXY (pylonsrc), "cam::TestPattern", NULL,
    (GParamSpec **) &pspec);

/* Walk the enumeration and print every allowed value */
for (guint i = 0; i < pspec->enum_class->n_values; ++i) {
  GEnumValue *v = &pspec->enum_class->values[i];
  g_print ("Found pattern: %s - %s (%d)\n", v->value_name, v->value_nick, v->value);
}

Finally, you could even set the pattern value by name by using the de-serialization capabilities:

GValue value = G_VALUE_INIT;

/* Initialize the value with the enum type taken from the param spec above */
g_value_init (&value, ((GParamSpec *) pspec)->value_type);

/* Parse the enum from its string representation and apply it to the camera */
gst_value_deserialize (&value, "ColorDiagonalSawtooth8");
gst_child_proxy_set_property (GST_CHILD_PROXY (pylonsrc), "cam::TestPattern", &value);

g_value_unset (&value);

Profiles and Other Defaults


As with plain Pylon, it is likely that you'll be using a pre-defined configuration for your camera, or even loading your own fine-tuned one. There are two ways to achieve this.


User Sets allow you to load a configuration from the device's flash memory. See Basler's docs for more information on user sets, including how to save your custom ones.

~$ gst-launch-1.0 pylonsrc device-index=0 user-set="Light Microscopy" ! autovideosink

PFS files allow you to load a configuration from a file on your hard drive. Again, refer to Basler's docs for details on how to generate these feature files.

~$ gst-launch-1.0 pylonsrc device-index=0 pfs-file=defect_detection.pfs ! autovideosink

In both cases the pre-defined configuration is loaded first. This means that you can still fine-tune the camera afterwards:

~$ gst-launch-1.0 pylonsrc device-index=0 user-set="High Gain" cam::Gamma=1.25 ! autovideosink
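Programmatically, the same defaults can be applied through the element's properties. A minimal C sketch (assuming the element has already been created and a camera selected), mirroring the pipeline above:

/* Load the stored user set, then fine-tune the camera on top of it */
g_object_set (pylonsrc, "user-set", "High Gain", NULL);
gst_child_proxy_set (GST_CHILD_PROXY (pylonsrc), "cam::Gamma", 1.25, NULL);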

All of this makes scenarios easily reproducible and deterministic.


What's Next?


There are a lot of other features that fall outside the scope of this introduction, to mention a few: camera disconnection detection, frame error handling and more. And there is still more to come! Feel free to head to the GitHub repository to try it out, report problems and request new features.


RidgeRun is immensely happy to work with industry titans such as Basler to extend the reach of machine vision to a wider variety of platforms such as NVIDIA Jetson or NXP i.MX8.


