Building powerful artificial intelligence (AI) applications can feel daunting these days, especially for people without deep technical expertise. NVIDIA's answer to this problem is the Graph Composer suite of tools: a collection of programs designed to make building and deploying AI applications as simple as possible.
At the core of the Graph Composer suite is a graph that follows the GXF specification. This approach lets users build and visualize their AI pipelines through a friendly graphical user interface (GUI) with little to no C/C++ coding. With its drag-and-drop, visual environment, the Graph Composer suite bridges the gap between complex AI frameworks and user accessibility.
Imagine being able to build and deploy advanced AI applications without diving into the details of underlying frameworks like GStreamer. The Graph Composer suite lets users focus on their main goals, streamlining the development process and enabling quick testing and experimentation.
To show how powerful the Graph Composer suite is, let's go through an example of building a graph that reads from a file, runs an inference, and shows the output on the screen.
First, you'll need an x86 host machine and a target device with NVIDIA DeepStream enabled; if your host can run DeepStream itself, a separate target may not be necessary. Installation is straightforward: NVIDIA provides a guide that walks you through installing the required Debian packages.
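As a rough sketch, and assuming the package filename matches the version you downloaded from NVIDIA (the name below is illustrative, not the exact file), the host-side installation comes down to installing the Debian package with apt:

sudo apt update
sudo apt install ./graph_composer-x.x.x_x86_64.deb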
Next, we'll make sure gxf_server is running on the target machine and configure Composer to use that server, whether it runs locally or remotely.
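For example, you can check whether the server is already up and, if not, launch it by hand; the install path below is an assumption based on the default Graph Composer prefix, and on some setups the binary may simply be on your PATH or started as a service:

pgrep -af gxf_server
/opt/nvidia/graph-composer/gxf_server &

With the server running, point Composer at the target's IP address (or localhost when everything runs on the same machine).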
The final sync step gives us access to NVIDIA's pre-built nodes, which we can add to our graph.
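The sync is done with the registry command-line tool that ships with Graph Composer; a typical invocation (the repository name may differ on your install) looks like this:

registry repo sync -n ngc-public
registry extn list

The first command pulls NVIDIA's public extensions, and the second lets you verify that they now show up in your local registry.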
Now the fun part! We can start building our graph by dragging and dropping the needed elements from the list on the left of the GUI. For our example, we'll need:
1. Source plugin component for input
2. Mux plugin component (needed for the infer plugin)
3. Video inference plugin component with the PeopleNet model
4. OSD (On-Screen Display) plugin component to draw bounding boxes
5. Render plugin component to display the output
Once we're happy with our setup, we can save the graph in YAML format, ready to deploy.
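For reference, the saved graph is a plain YAML file made up of entities, each declaring its components and their parameters. The fragment below is only an illustrative sketch of that shape; the component types and parameter names are placeholders, not the exact identifiers Composer writes out:

---
name: source
components:
- name: input
  type: <source component type>
  parameters:
    uri: file:///path/to/video.mp4
---
name: inference
components:
- name: nvinfer
  type: <video inference component type>
  parameters:
    model-config: <PeopleNet model configuration>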
To run the graph, we have two options. The first is to run it directly from Graph Composer by clicking "Run" while gxf_server is running; you should see the output displayed on your target device's screen. The second is to use the execute_graph.sh script, which gives more control, such as running locally or on a remote device:
/opt/nvidia/graph-composer/execute_graph.sh <graph-file> -d /opt/nvidia/graph-composer/config/target_x86_64.yaml
Or, to run the graph on a remote Jetson target, we can invoke:
/opt/nvidia/graph-composer/execute_graph.sh <graph-file> -d /opt/nvidia/graph-composer/config/target_aarch64.yaml -t <username@host> --target-env-vars "DISPLAY=:0"
As the graph runs, we'll see the person-counting output streaming to the terminal, showing our AI application's performance in real time.
The Graph Composer suite also lets users develop their own extensions from existing GStreamer elements. By writing a list of the desired elements and running a single command, the suite automatically generates the source code for those extensions, making development even smoother.
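The element list itself is just a plain-text file naming the GStreamer elements to wrap, one per line; the file name and contents below are only an example:

videoconvert
videoscale
capsfilter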
Do you want to know more about Graph Composer, AI application development and deployment, and RidgeRun services around it? Feel free to contact us at https://www.ridgerun.com/contact and discover how we can help you reach the next level with our embedded software services and products using NVIDIA technologies.