Autonomous UAV Flight Navigation in Confined Spaces: A Reinforcement Learning Approach

Published in 2025 IEEE Latin American Robotics Symposium (LARS), 2025

Inspecting confined industrial infrastructure, such as ventilation shafts, is a hazardous and inefficient task for humans. Unmanned Aerial Vehicles (UAVs) offer a promising alternative, but GPS-denied environments require robust control policies to prevent collisions. Deep Reinforcement Learning (DRL) has emerged as a powerful framework for developing such policies, and this paper provides a comparative study of two leading DRL algorithms for this task: the on-policy Proximal Policy Optimization (PPO) and the off-policy Soft Actor-Critic (SAC). Training was conducted on procedurally generated duct environments in the Genesis simulation environment. A reward function was designed to guide the drone through a series of waypoints while applying a significant penalty for collisions. PPO learned a stable policy that completed all evaluation episodes without collision, producing smooth trajectories. By contrast, SAC consistently converged to a suboptimal behavior that traversed only the initial segments before failure. These results suggest that, in hazard-dense navigation, the training stability of on-policy methods can outweigh the nominal sample efficiency of off-policy algorithms. More broadly, the study provides evidence that procedurally generated, high-fidelity simulations are effective testbeds for developing and benchmarking robust navigation policies.
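The paper's exact reward formulation is not reproduced here, but a minimal sketch of the described shape (dense waypoint guidance plus a large collision penalty) might look as follows; the function name, coefficients, and the waypoint-reach threshold are all illustrative assumptions, not values from the paper:

```python
import numpy as np

def step_reward(drone_pos, waypoint_pos, collided,
                progress_scale=1.0, waypoint_bonus=10.0,
                collision_penalty=-100.0, reach_radius=0.2):
    """Hypothetical shaped reward for duct navigation (coefficients assumed):
    a dense term pulling the drone toward the current waypoint, a bonus for
    reaching it, and a dominant penalty on collision."""
    if collided:
        # Collisions are terminal hazards, so they dominate any progress signal.
        return collision_penalty
    dist = np.linalg.norm(np.asarray(waypoint_pos) - np.asarray(drone_pos))
    if dist < reach_radius:
        # Waypoint reached: sparse bonus (threshold is an assumption).
        return waypoint_bonus
    # Dense penalty proportional to remaining distance encourages progress.
    return -progress_scale * dist
```

Under a shaping like this, the collision penalty must be large relative to the accumulated progress terms, or a policy can learn to trade occasional collisions for faster traversal.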

Recommended citation: M. S. Tayar et al., "Autonomous UAV Flight Navigation in Confined Spaces: A Reinforcement Learning Approach," 2025 Latin American Robotics Symposium (LARS), Monterrey, Mexico, 2025, pp. 1-6, doi: 10.1109/LARS69345.2025.11273007.