Configure Sample Applications
This section provides a flexible workflow for running and customizing advanced multimedia and AI sample applications using the Qualcomm® Intelligent Multimedia Product (QIMP) SDK on Ubuntu. Developers can define input/output sources, runtime targets, and model precision using JSON configuration files—enabling seamless evaluation across CPU, GPU, and DSP. With support for frameworks like TFLite, QNN, and SNPE, and integration with AI Hub, this setup is ideal for building and optimizing edge AI pipelines tailored to specific use cases.
Running advanced multimedia and AI sample applications using the QIMP SDK on Ubuntu enables developers to:
- Prototype and validate AI workloads across heterogeneous compute targets (CPU, GPU, DSP), helping teams choose the most efficient runtime for their use case.
- Customize application behavior using JSON-based configuration, allowing precise control over input/output sources, model types, and runtime parameters.
- Accelerate development and deployment by leveraging pre-integrated models from AI Hub and supported frameworks like TFLite, QNN, and SNPE.
- Benchmark performance and optimize resource usage, which is critical for embedded systems and edge devices where compute and power budgets are constrained.
- Ensure compatibility and reproducibility across Qualcomm platforms by using standardized scripts and directory structures for models, labels, and media assets.
Prerequisites
- Ubuntu OS flashed onto the device
- Terminal access with appropriate permissions
- Basic familiarity with JSON configuration files and runtime environment variables.
- Access to AI Hub for model selection and export; create an AI Hub account if you don't already have one.
- If you haven't previously installed the PPA packages, run the following commands to install them.
git clone -b ubuntu_setup --single-branch https://github.com/rubikpi-ai/rubikpi-script.git
cd rubikpi-script
./install_ppa_pkgs.sh
Use the steps below to configure the script and run the model.
1️⃣ Download and Run the Script
This script automatically fetches all required packages for running the sample applications, including:
- Models
- Labels
- Media files
cd /home/ubuntu 
curl -L -O https://raw.githubusercontent.com/quic/sample-apps-for-qualcomm-linux/refs/heads/main/download_artifacts.sh
sudo chmod +x download_artifacts.sh 
sudo ./download_artifacts.sh -v GA1.5-rel -c QCS6490
Explanation
- Use the -v flag to specify the version you want to work with (e.g., GA1.5-rel).
- Use the -c flag to specify the chipset your device uses (e.g., QCS6490).
2️⃣ Verify Model/Label/Media Files
Before launching any sample applications, make sure the required files are in place.
Check the following directories:
- Model files → /etc/models/
- Label files → /etc/labels/
- Media files → /etc/media/
These files are essential for AI sample applications to function correctly. If they’re missing, re-run the artifact download script.
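The check above can be scripted. Here is a minimal sketch that reports whether each artifact directory exists and is non-empty (the paths are those created by download_artifacts.sh):

```shell
# Check that the artifact directories are present and populated
# before launching a sample application.
for d in /etc/models /etc/labels /etc/media; do
    if [ -d "$d" ] && [ -n "$(ls -A "$d" 2>/dev/null)" ]; then
        echo "$d: OK ($(ls "$d" | wc -l) entries)"
    else
        echo "$d: missing or empty -- re-run download_artifacts.sh"
    fi
done
```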
3️⃣ Update the JSON Config File
To run sample applications with a specific functionality, you’ll need a properly configured JSON file.
What to Do
- Update the required JSON config file based on your model and configuration requirements.
- For example, edit /etc/configs/config_classification.json to match your use case:
Configuration Parameters
Update your JSON config file with the following key parameters:
| Parameter | Options |
|---|---|
| Input Source | Camera, File (filesrc), RTSP stream |
| Output Source | Waylandsink, Filesink, RTSP stream |
| Runtime | CPU, GPU, DSP |
| Precision | INT8 / INT16, W8A8 / W8A16, FP32 |
| Model Type | Select from the available models in AI Hub |
| Labels | Select the correct labels file |
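Pulled together, a minimal classification config covering these parameters might look like the sketch below. The field names and the model filename here are illustrative assumptions; check the JSON files shipped under /etc/configs/ for the exact keys your build expects.

```json
{
  "input": "camera",
  "output": "waylandsink",
  "runtime": "dsp",
  "precision": "w8a8",
  "model": "/etc/models/mobilenet_v2_quantized.tflite",
  "labels": "/etc/labels/classification.labels"
}
```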
Sample Application Configuration Matrix
| Sample App Name | Details | AI Hub Model Type | Runtime | Script to Use | 
|---|---|---|---|---|
| gst-ai-classification | Image classification | MobileNet-v2, ResNet101, GoogLeNet, MobileNet-v3-Large, ResNet18, ResNeXt50, ResNeXt101, SqueezeNet, WideResNet50, Shufflenet | CPU, GPU, DSP | Update JSON | 
| gst-ai-object-detection | Object detection | Yolox, Yolov7, Yolov8-Detection (manual export) | CPU, GPU, DSP | Export model from AI Hub; Update script for Yolox/Yolov7 – Update JSON | 
| gst-ai-pose-detection | Pose detection | hrnet_pose | CPU, GPU, DSP | TFLite works by default; update script for precision/runtime – Update JSON | 
| gst-ai-segmentation | Image segmentation | FFNet-40S, FFNet-54S, FFNet-78S | CPU, GPU, DSP | Update JSON | 
| gst-ai-superresolution | Video super-resolution | quicksrnetsmall, QuickSRNetMedium, QuickSRNetLarge, XLSR | CPU, GPU, DSP | Update JSON | 
| gst-ai-multistream-batch-inference | Multistream batch inference | YoloV8-Detection (batch 4), DeeplabV3 (batch 4) | CPU, GPU, DSP | Export model from AI Hub; Update script – Update JSON | 
| gst-ai-face-detection | Face detection | face_det_lite | CPU, GPU, DSP | Update JSON | 
| gst-ai-face-recognition | Face recognition | face_det_lite, face_attrib_net, facemap_3dmm | CPU, GPU, DSP | Face registration required; otherwise output is 'unknown face recognized' | 
| gst-ai-metadata-parser-example | Metadata parsing | Yolov8-Detection | CPU, GPU, DSP | Export model from AI Hub | 
| gst-ai-usb-camera-app | AI USB camera | Yolov8-Detection | CPU, GPU, DSP | Export model from AI Hub | 
| gst-ai-parallel-inference | Parallel inferencing | Yolov8-Detection, Deeplab, Hrnet, Inceptionv3 | CPU, GPU, DSP | Export model from AI Hub; Update JSON for other models | 
| gst-ai-daisychain-detection-classification | Daisy chain detection and classification | Inceptionv3 + YoloV8 | CPU, GPU, DSP | Export model from AI Hub; Update JSON for other models | 
| gst-ai-audio-classification | Audio classification | Inceptionv3 + YoloV8 | CPU, GPU, DSP | Export model from AI Hub; Update JSON for other models | 
| gst-ai-smartcodec-example | AI smart codec | Inceptionv3 + YoloV8 | CPU, GPU, DSP | Export model from AI Hub; Update JSON for other models | 
Use the SSH/SBC terminal to launch your sample application.
If the terminal is running as root, set the following environment variable; for the ubuntu user this is not required.
export XDG_RUNTIME_DIR=/run/user/$(id -u ubuntu)
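The root-only condition can be expressed directly in the shell. A small sketch (the extra check that the ubuntu user exists is a defensive assumption, not from the original):

```shell
# Export XDG_RUNTIME_DIR only when running as root; the regular
# ubuntu user already has it set by the login session.
if [ "$(id -u)" -eq 0 ] && id -u ubuntu >/dev/null 2>&1; then
    export XDG_RUNTIME_DIR=/run/user/$(id -u ubuntu)
fi
echo "XDG_RUNTIME_DIR=${XDG_RUNTIME_DIR:-unset}"
```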
Example
For the AI Classification sample application, open the /etc/configs/config_classification.json configuration file and update the default labels file.
Change:
"labels": "/etc/labels/classification.labels"
to:
"labels": "/etc/labels/imagenet_labels.txt"
Run the AI classification sample application.
gst-ai-classification
To display the available help options, run the following command in the SSH shell:
gst-ai-classification -h
To stop the use case, press CTRL + C.
Reference Docs
To further explore sample applications, see the Qualcomm Intelligent Multimedia SDK (IM SDK) Reference Guide.