By Avionics Team | November 20, 2024
As part of our SUAS 2025 mission planning, the Avionics Team is actively developing a modular computer vision system capable of detecting and geolocating targets in real time. This system builds on SUAS guidelines for autonomous object detection and localization, and is designed to operate entirely through software without requiring changes to UAV hardware.
Our current implementation uses a Docker-based inference pipeline integrated with Roboflow to manage multiple computer vision models in parallel. This architecture enables reliable, high-performance image processing with minimal integration overhead.
Detection Pipeline Overview:
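To give a feel for how the pipeline is driven, here is a rough sketch of fanning a single frame out to two models hosted behind the same local inference container. It assumes the Roboflow inference_sdk Python package and an inference server listening on port 9001; the model IDs and API key are placeholders rather than our actual projects.

```python
from concurrent.futures import ThreadPoolExecutor

from inference_sdk import InferenceHTTPClient

# One client pointed at the local inference container
# (started from the Roboflow inference server Docker image).
client = InferenceHTTPClient(
    api_url="http://localhost:9001",  # assumed host/port of the container
    api_key="YOUR_ROBOFLOW_API_KEY",  # placeholder
)

# Placeholder model IDs -- in practice these would be our trained projects.
MODEL_IDS = ["target-detection/1", "shape-classification/2"]

def run_model(model_id: str, image_path: str) -> dict:
    """Run a single model on one image and return its raw predictions."""
    return client.infer(image_path, model_id=model_id)

def run_all_models(image_path: str) -> dict:
    """Fan the same frame out to every model in parallel."""
    with ThreadPoolExecutor(max_workers=len(MODEL_IDS)) as pool:
        futures = {mid: pool.submit(run_model, mid, image_path) for mid in MODEL_IDS}
        return {mid: fut.result() for mid, fut in futures.items()}

if __name__ == "__main__":
    print(run_all_models("sample_frame.jpg"))
```

Because every model sits behind the same container, adding or swapping a model is a configuration change rather than a code change.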
Backend Integration:
The detection system is fully containerized using Docker and exposed through a lightweight HTTP API. This setup allows any component, such as our flight computer or ground control station (GCS), to submit images and receive detection results with simple RESTful calls.
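As an illustration of how simple a caller can be, the sketch below posts one image and reads back a list of detections. The endpoint path, payload, and response fields are assumptions made for the example, not the exact interface of our container.

```python
import base64

import requests

DETECTION_URL = "http://localhost:9001/infer"  # hypothetical endpoint for illustration

def detect(image_path: str) -> list[dict]:
    """Submit one image to the containerized detector and return its detections."""
    with open(image_path, "rb") as f:
        payload = {"image": base64.b64encode(f.read()).decode("utf-8")}
    resp = requests.post(DETECTION_URL, json=payload, timeout=10)
    resp.raise_for_status()
    # Assumed response shape: {"predictions": [{"class": ..., "confidence": ..., ...}]}
    return resp.json().get("predictions", [])

if __name__ == "__main__":
    for det in detect("frame_0001.jpg"):
        print(det)
```

Any process that can make an HTTP request, on the aircraft or on the ground, can use the detector this way without linking against vision code.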
System Integration:
To meet the real-time demands of SUAS, the GCS includes a multi-threaded client architecture, sketched below.
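The sketch is a minimal version of that pattern: a shared queue of captured frames, a small pool of worker threads that submit frames to the detection service, and a results queue feeding geolocation. The worker count and the detect() stub are assumptions for illustration, not our production GCS code.

```python
import queue
import threading

def detect(image_path: str) -> list:
    """Placeholder for the RESTful call to the detection container (see earlier sketch)."""
    return []

frame_queue: queue.Queue = queue.Queue(maxsize=32)   # frames captured from the camera feed
result_queue: queue.Queue = queue.Queue()            # detections awaiting geolocation

def detection_worker() -> None:
    """Pull frames off the queue and forward them to the detection service."""
    while True:
        image_path = frame_queue.get()
        if image_path is None:          # sentinel value used to shut a worker down
            frame_queue.task_done()
            break
        try:
            result_queue.put((image_path, detect(image_path)))
        finally:
            frame_queue.task_done()

# A small worker pool keeps several inference requests in flight at once,
# so the GCS keeps up with the camera instead of blocking on each frame.
NUM_WORKERS = 4  # assumed value; tuned against frame rate and inference latency
workers = [threading.Thread(target=detection_worker, daemon=True)
           for _ in range(NUM_WORKERS)]
for w in workers:
    w.start()

# Example usage: enqueue a few frames, then signal shutdown.
for path in ["frame_0001.jpg", "frame_0002.jpg"]:
    frame_queue.put(path)
for _ in workers:
    frame_queue.put(None)
frame_queue.join()
```

Because each worker blocks only on its own HTTP request, slow inference on one frame never stalls capture of the next.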
Why This Works for UAVs:
This design gives us high-accuracy vision capabilities with minimal impact on system complexity. By using standard APIs and containerization, the vision subsystem integrates cleanly with our existing software stack and runs efficiently on our mission computer hardware. This makes the system maintainable, scalable, and aligned with our SUAS strategy for autonomous object detection and geolocation.