Autonomous Drone SAR Platform

GuardianEye

Operator-Truth Mission Management for Drone-Assisted Search & Rescue — integrating real-time AI detection, multi-object tracking, geospatial evidence fusion, and autonomous mission orchestration.

1.75 m Cluster RMSE
39.7% FP Reduction
33.0 ms Pipeline Latency

Why GuardianEye?

Timely detection and localisation of survivors in disaster-affected areas remains a critical challenge. GuardianEye bridges the gap between fragmented UAV tools and integrated, mission-aware SAR platforms.

The Problem

Traditional ground-based search operations are constrained by terrain accessibility, visibility conditions, and physical endurance. Existing UAV-based SAR systems address individual components — such as detection or path planning — in isolation, leaving integration, evidence management, and operator coordination as afterthoughts.

This fragmentation increases cognitive load for operators and delays the critical detection-to-decision timeline, when every second counts in saving lives.

  • Layered Perception

    Raw frames processed through detection, tracking, geolocation, and fusion — each independently testable.

  • Evidence Persistence

    Frame-level detections are fused into geo-referenced clusters that persist across the mission timeline.

  • Operator-Centric Design

    Five-panel real-time console with live telemetry, detection overlays, map view, alert triage, and mission replay.

Observe — Detect — Respond

A three-stage architecture that spans the entire pipeline from frame ingestion to operator alerting, with modular, independently testable components.

Stage 01

Observe

Frame ingestion from UAV camera or mobile edge nodes via REST API. Real-time video stream processing with GPS-synchronized telemetry capture.

Stage 02

Detect

Four-stage perception pipeline: YOLOv8 detection → Hungarian tracking → Pinhole geolocation → DBSCAN evidence fusion. Each module independently testable.

Stage 03

Respond

Prioritised evidence clusters surfaced via WebSocket, FSM-governed mission state transitions, coverage engine tracking, and operator console alerting.

Core System Components

  • Detector: YOLOv8n inference with configurable confidence threshold

  • Tracker: Hungarian assignment + Kalman prediction for persistent track identities

  • Geolocator: pinhole-camera pixel-to-GPS projection

  • Evidence Fuser: DBSCAN clustering with haversine distance

  • Prioritiser: multi-factor weighted scoring engine

  • Orchestrator: async FSM-based mission lifecycle

  • Coverage Engine: grid-based area tracking per sector

  • Replay Service: timeline-indexed event playback & seek

Algorithm Design

Six core algorithms power GuardianEye's perception and decision pipeline, from frame-level detection to mission-level orchestration.

YOLOv8 Object Detection

The perception pipeline uses an Ultralytics YOLOv8n model as the primary detection engine. Given a video frame, the detector produces bounding boxes, class labels, and confidence scores.

D = {(bi, ci, si)}  —  bi = [x₁, y₁, x₂, y₂],  ci = class,  si ∈ [0, 1]

Detections below a configurable confidence threshold τ are discarded. The model runs at real-time speeds on commodity hardware.

YOLOv8n architecture — optimized for edge inference speed
Configurable confidence threshold for precision/recall tuning
Person and vehicle class detection with weighted priorities
0.6s inference time on standard hardware
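The thresholding step can be sketched as a simple filter over the detector's output tuples. This is a minimal illustration in Python; the function and variable names are invented for the example, not taken from the GuardianEye codebase.

```python
# Post-processing of detector output: drop anything below the confidence
# threshold tau. Function and variable names are illustrative.

def filter_detections(detections, tau=0.5):
    """detections: list of (box, cls, score), box = [x1, y1, x2, y2]."""
    return [(box, cls, score) for (box, cls, score) in detections if score >= tau]

raw = [
    ([120, 80, 180, 220], "person", 0.91),
    ([300, 40, 340, 90], "vehicle", 0.42),
    ([10, 10, 50, 60], "person", 0.65),
]
kept = filter_detections(raw, tau=0.5)  # drops the 0.42 vehicle box
```

Raising τ trades recall for precision, which is why the threshold is left configurable per mission.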

Hungarian Assignment + Kalman Tracking

Multi-object tracking associates detections across frames using optimal Hungarian assignment with Kalman prediction for persistent target identities.

Cij = 1 − IoU(bi, tj)  →  Optimal assignment via Munkres in O(n³)

Matches with IoU below θIoU (default 0.3) are rejected. A soft re-identification fallback uses centroid distance for occluded targets.

Optimal Hungarian (Munkres) assignment — O(n³) complexity
Kalman filter state prediction for smooth track continuation
Soft re-ID: centroid distance fallback for occluded targets
Lifecycle: Tentative → Confirmed → Lost → Retired
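The cost-matrix association above can be sketched in a few lines. Since the per-frame problem is tiny, this sketch solves the assignment exactly by brute force; a Munkres/Hungarian solver returns the same matching in O(n³). All names here are illustrative.

```python
from itertools import permutations

def iou(a, b):
    """Intersection-over-union of axis-aligned boxes [x1, y1, x2, y2]."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def assign(track_boxes, det_boxes, iou_min=0.3):
    """Optimal track-to-detection matching minimising total cost 1 - IoU.
    Brute force over permutations: exact for the handful of targets typical
    per frame; a Munkres/Hungarian solver gives the same answer in O(n^3)."""
    best, best_cost = None, float("inf")
    for perm in permutations(range(len(det_boxes)), len(track_boxes)):
        cost = sum(1 - iou(t, det_boxes[j]) for t, j in zip(track_boxes, perm))
        if cost < best_cost:
            best, best_cost = perm, cost
    if best is None:
        return []
    # gating: reject pairs whose overlap falls below the IoU threshold
    return [(i, j) for i, j in enumerate(best)
            if iou(track_boxes[i], det_boxes[j]) >= iou_min]

tracks = [[0, 0, 10, 10], [20, 20, 30, 30]]   # Kalman-predicted track boxes
dets = [[21, 21, 31, 31], [1, 1, 11, 11]]     # current-frame detections
matches = assign(tracks, dets)  # [(0, 1), (1, 0)]: identities follow targets
```

Note how the optimal assignment keeps each identity with its target even though the detection order swapped between frames.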

Pinhole Camera Geolocation

Projects pixel-space bounding-box centroids to GPS coordinates using a pinhole camera model with UAV pose data.

GSD = 2hd · tan(α/2) / W  →  Offset (Δx, Δy) rotated by heading ψd

The centroid offset from frame centre is rotated by the heading angle and converted to geographic displacement using the GSD and local latitude. Achieves sub-3m accuracy.

Pinhole camera model with terrain-aware raycasting
GSD-based pixel-to-metre conversion at UAV altitude
Heading-corrected coordinate rotation
1.79 m mean localisation error with DBSCAN fusion enabled
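The GSD projection above can be sketched as follows. This assumes flat terrain and a nadir-pointing camera (the real system adds terrain-aware raycasting), and all parameter names are illustrative.

```python
import math

EARTH_M_PER_DEG_LAT = 111_320.0  # approximate metres per degree of latitude

def pixel_to_gps(px, py, frame_w, frame_h, alt_m, hfov_deg, heading_deg, lat, lon):
    """Project a pixel to GPS via the GSD formula above. Assumes flat terrain
    and a nadir-pointing camera; parameter names are illustrative."""
    # ground sampling distance: metres covered by one pixel at this altitude
    gsd = 2 * alt_m * math.tan(math.radians(hfov_deg) / 2) / frame_w
    # pixel offset from frame centre, in metres (x right, y forward)
    dx = (px - frame_w / 2) * gsd
    dy = (frame_h / 2 - py) * gsd
    # rotate the offset by the UAV heading so it aligns with east/north
    psi = math.radians(heading_deg)
    east = dx * math.cos(psi) + dy * math.sin(psi)
    north = dy * math.cos(psi) - dx * math.sin(psi)
    # convert the metric displacement to degrees at the local latitude
    dlat = north / EARTH_M_PER_DEG_LAT
    dlon = east / (EARTH_M_PER_DEG_LAT * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon
```

A pixel at the exact frame centre maps to the UAV's own position; pixels toward the top of the frame map ahead of the aircraft along its heading.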

DBSCAN Evidence Fusion

Individual geo-estimates are fused into persistent evidence clusters using DBSCAN with a haversine-distance metric, eliminating parametric assumptions about target count.

ε = 15 m | min_samples = 2 | Distance: Haversine

Each cluster maintains a running centroid (weighted average), uncertainty radius, and cumulative metadata. This reduces false positives by 39.7% and improves cluster RMSE to 1.75 m.

Density-based clustering — no need to specify target count
Haversine distance for accurate spherical merging
Persistent clusters with weighted centroid updates
39.7% false positive reduction vs raw detections
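A minimal sketch of the fusion step using scikit-learn's DBSCAN (the library named in the tech stack). The ε and min_samples values come from the text; the centroid update is simplified to an unweighted mean, whereas the real fuser keeps a confidence-weighted running average and an uncertainty radius.

```python
import numpy as np
from sklearn.cluster import DBSCAN

EARTH_RADIUS_M = 6_371_000.0

def fuse(points_deg, eps_m=15.0, min_samples=2):
    """Cluster (lat, lon) geo-estimates with DBSCAN and a haversine metric.
    sklearn's haversine metric works in radians and treats eps as an angle,
    so the 15 m radius is divided by the Earth radius."""
    pts = np.radians(np.asarray(points_deg))
    labels = DBSCAN(eps=eps_m / EARTH_RADIUS_M, min_samples=min_samples,
                    metric="haversine").fit_predict(pts)
    clusters = {}
    for lbl, p in zip(labels, np.asarray(points_deg)):
        if lbl == -1:        # noise: an uncorroborated single sighting
            continue
        clusters.setdefault(lbl, []).append(p)
    # plain centroid per cluster; the real fuser uses a confidence-weighted
    # running average plus an uncertainty radius
    return {lbl: np.mean(ps, axis=0) for lbl, ps in clusters.items()}

# two sightings ~5 m apart fuse into one cluster; a third ~200 m away stays noise
ests = [(48.13770, 11.57540), (48.13774, 11.57542), (48.13950, 11.57540)]
fused = fuse(ests)
```

Because min_samples = 2, a lone spurious detection never becomes a cluster, which is where the false-positive reduction comes from.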

Multi-Factor Prioritisation

Evidence clusters are ranked by a composite weighted score to surface the most critical targets to operators.

S = 0.35·fconf + 0.25·fpersist + 0.20·fprox + 0.10·frecency + 0.10·fclass

Person targets receive class weight 1.0; vehicles 0.6; unknown classes default to 0.3. Persistence is capped at 5 observations, and recency uses exponential decay.

Confidence factor (35%): raw detection confidence score
Persistence factor (25%): number of corroborating observations
Proximity factor (20%): inverse distance from drone
Recency factor (10%): exponential decay with time since last observation
Class weight factor (10%): SAR-priority class lookup
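The composite score can be sketched directly from the formula above. The weights, the 5-observation cap, and the class weights are from the text; the proximity scale and recency half-life are illustrative assumptions.

```python
# Weights mirror the scoring formula above; prox_scale and half_life_s are
# illustrative assumptions, not values from the GuardianEye codebase.
WEIGHTS = {"conf": 0.35, "persist": 0.25, "prox": 0.20, "recency": 0.10, "cls": 0.10}
CLASS_WEIGHT = {"person": 1.0, "vehicle": 0.6}   # unknown classes default to 0.3

def priority(conf, observations, dist_m, age_s, cls,
             max_obs=5, prox_scale=100.0, half_life_s=60.0):
    factors = {
        "conf": conf,                                      # raw detection confidence
        "persist": min(observations, max_obs) / max_obs,   # capped at 5 observations
        "prox": 1.0 / (1.0 + dist_m / prox_scale),         # inverse distance from drone
        "recency": 0.5 ** (age_s / half_life_s),           # exponential decay
        "cls": CLASS_WEIGHT.get(cls, 0.3),                 # SAR-priority class lookup
    }
    return sum(WEIGHTS[k] * factors[k] for k in WEIGHTS)

score = priority(conf=0.92, observations=3, dist_m=40.0, age_s=10.0, cls="person")
```

Since every factor lies in [0, 1] and the weights sum to 1, the score is itself bounded in [0, 1], which keeps alert ranking stable across missions.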

Mission State Machine

The orchestrator enforces strict state transitions through a finite-state machine, ensuring safe and deterministic mission execution.

Disconnected → Precheck → Ready → Takeoff → Search → Revisit → RTL → Land

An Abort state is reachable from any airborne state. Precheck validates GPS lock, battery margin, geofence integrity, and service health. The revisit policy evaluates 5 criteria including battery margin ≥ 30%, uncertainty > 25m, and cooldown ≥ 60s.

6-condition precheck gate: heartbeat, GPS, home, geofence, estimator, battery
Abort-safe: emergency abort reachable from any airborne state
Async phase execution with deterministic state transitions
5-criteria revisit policy with urgency scoring
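The state machine can be sketched as an explicit transition table. The edge set below is inferred from the state sequence and the abort rule described above, so treat it as illustrative; the real orchestrator's graph may differ in detail.

```python
from enum import Enum, auto

class State(Enum):
    DISCONNECTED = auto()
    PRECHECK = auto()
    READY = auto()
    TAKEOFF = auto()
    SEARCH = auto()
    REVISIT = auto()
    RTL = auto()
    LAND = auto()
    ABORT = auto()

AIRBORNE = {State.TAKEOFF, State.SEARCH, State.REVISIT, State.RTL}

# Allowed edges, inferred from the state sequence above (illustrative).
TRANSITIONS = {
    State.DISCONNECTED: {State.PRECHECK},
    State.PRECHECK: {State.READY, State.DISCONNECTED},
    State.READY: {State.TAKEOFF},
    State.TAKEOFF: {State.SEARCH},
    State.SEARCH: {State.REVISIT, State.RTL},
    State.REVISIT: {State.SEARCH, State.RTL},
    State.RTL: {State.LAND},
    State.LAND: {State.DISCONNECTED},
    State.ABORT: {State.LAND},
}

def can_transition(src, dst):
    """Abort is reachable from any airborne state; everything else must
    follow an explicit edge, keeping mission execution deterministic."""
    if dst is State.ABORT:
        return src in AIRBORNE
    return dst in TRANSITIONS.get(src, set())
```

Encoding the graph as data rather than scattered conditionals makes each transition independently testable, matching the modular design the architecture section describes.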

Technology Stack

Production-grade tools chosen for reliability, performance, and real-time operation on commodity hardware.

  • API Server (FastAPI + Uvicorn): high-performance REST/WebSocket API with async support

  • Detection (YOLOv8n + PyTorch): Ultralytics real-time object detection inference

  • Tracking (Munkres + Kalman): pure-Python Hungarian tracker with no external dependencies

  • Fusion (scikit-learn DBSCAN): density-based clustering with haversine distance metric

  • Flight Control (pymavlink, MAVLink 2.0): PX4-compatible UAV flight controller interface

  • Operator UI (HTML5 / CSS3 / JS): real-time 5-panel operator console with Canvas overlays

  • Containerisation (Docker Compose): one-command reproducible evaluation environment

OTMC Monte Carlo Evaluation

50-iteration OTMC evaluation with ground-truth comparison — 40 completed iterations exercising the real mission-management path under repeatable perturbations.

0.421 Mean F1 (DBSCAN On)
39.7% False Positive Reduction
1.75 m Cluster RMSE
6 Preflight Assertions

DBSCAN Ablation Study

Metric                        DBSCAN On   DBSCAN Off   Delta     Improvement
Mean F1 Score                 0.421       0.318        +0.103    +32.4%
Cluster RMSE (m)              1.75        2.30         −0.55     −23.9%
Mean Localization Error (m)   1.79        n/a          n/a       n/a
False Positive Reduction      39.7% vs non-clustering ablation

Safety Precheck Ablation

Metric               Blockers On   Blockers Off
Missions Completed   17            23
Missions Blocked     10            0
Block Rate           37.0%         0.0%
Total Iterations     27            23

Pipeline Latency Budget — 33.0 ms

YOLO Detection        24.1 ms
Hungarian Tracking     4.0 ms
Geolocation            1.5 ms
DBSCAN Fusion          2.3 ms
DB Commit              1.1 ms

Operator Console

Five-panel surveillance-grade layout designed for maximum situational awareness during active SAR missions.

01 · Status Bar
Persistent system health indicators — connection status, GPS lock, battery level, mission mode, and clock.

02 · Navigation Sidebar
Ten-section sidebar: live feed, mission control, detections, coverage, evidence, alerts, replay, devices, logs, and settings.

03 · Centre Viewport
Live video feed with real-time AI bounding-box overlays rendered on HTML5 Canvas at 30 FPS.

04 · Intelligence Panel
Tabbed display for interactive map, telemetry graphs, priority alerts, and target tracking status.

05 · Quick-Action Bar
One-click controls: connect, camera toggle, record, AI toggle, patrol mode, RTL, and emergency stop.

System Comparison

Comparing GuardianEye against recent UAV SAR systems across detection, tracking, fusion, evaluation, and safety dimensions.

System           Detection  Tracking          Geolocation      Evidence Fusion   Mission Mgmt.  Safety       Evaluation
Lygouras et al.  CNN        None              GPS only         None              Manual         No           Field test
Mishra et al.    YOLOv5     None              None             None              None           No           Offline batch
Bejiga et al.    CNN        None              None             None              None           No           Offline batch
SORT             Any        Kalman+IoU        None             None              None           No           MOT bench
DeepSORT         Any        Kalman+Deep       None             None              None           No           MOT bench
GuardianEye      YOLOv8n    Hungarian+Kalman  Pinhole+Raycast  DBSCAN+Haversine  FSM+OTMC       6-condition  50-iter MC