Abstract:
"The rapid advancement of Unmanned Aerial Vehicles (UAV) can be seen across various
applications. Traditional path planning methods excel in static settings but falter in dynamic
scenarios involving multiple moving objects. Reinforcement Learning (RL) has emerged as a
promising alternative. However, existing RL algorithms struggle to handle simultaneous
interactions with multiple dynamic objects, resulting in suboptimal performance and increased
collision risks. This research addresses that critical gap by developing an RL-based path optimization algorithm capable of navigating UAVs efficiently and safely through complex,
dynamic environments with simultaneous multi-object interactions.
To overcome the challenge, this study proposes a novel RL-based framework for UAV path
optimization in dynamic environments. The research methodology encompasses a
comprehensive review of existing RL methods, extensive simulations, and iterative
prototyping. The proposed hybrid approach of existing RL techniques enhances adaptability
and performance. The development process follows the PRINCE2 Agile project management
methodology, ensuring flexibility and iterative refinement. The framework's effectiveness is
evaluated through quantitative metrics gathered from simulated real-world scenarios, focusing
on efficiency, adaptability, and safety in handling multiple dynamic objects.
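The abstract does not specify how these metrics are computed; the following is a minimal illustrative sketch of how per-episode simulation outcomes might be aggregated into efficiency and safety figures. All names here (EpisodeResult, summarize, the sample values) are hypothetical placeholders, not the paper's implementation.

```python
# Illustrative sketch (not from the paper): aggregating simulated-episode
# outcomes into headline metrics for efficiency and safety.
from dataclasses import dataclass


@dataclass
class EpisodeResult:
    path_length: float      # distance actually flown
    optimal_length: float   # baseline (e.g. straight-line) distance
    collided: bool          # hit a moving obstacle
    reached_goal: bool      # arrived at the target waypoint


def summarize(episodes: list[EpisodeResult]) -> dict[str, float]:
    """Aggregates per-episode outcomes into quantitative metrics."""
    n = len(episodes)
    # Path efficiency: ratio of baseline to flown distance (1.0 = optimal).
    efficiency = sum(
        e.optimal_length / e.path_length for e in episodes if e.path_length > 0
    ) / n
    collision_rate = sum(e.collided for e in episodes) / n
    success_rate = sum(e.reached_goal for e in episodes) / n
    return {
        "path_efficiency": efficiency,
        "collision_rate": collision_rate,
        "success_rate": success_rate,
    }


# Example: three hypothetical episodes in a dynamic environment.
results = [
    EpisodeResult(12.4, 10.0, collided=False, reached_goal=True),
    EpisodeResult(15.1, 10.0, collided=True, reached_goal=False),
    EpisodeResult(11.0, 10.0, collided=False, reached_goal=True),
]
print(summarize(results))
```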
Preliminary results indicate that the proposed RL-based framework significantly improves
UAV path planning performance in dynamic environments. Training logs show improved performance at around 14,000 steps, while the declining entropy loss between 10,000 and 18,000 steps indicates that the policy is becoming more certain in its actions. This research contributes to the field of autonomous UAV
navigation by providing a robust solution for path optimization in challenging dynamic
environments, paving the way for safer and more efficient UAV operations. "
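The entropy-loss observation suggests a policy-gradient method with an entropy term, such as PPO. As a minimal sketch of how that diagnostic might be tracked during training, assuming stable-baselines3 and a Gymnasium environment (CartPole-v1 stands in here for the paper's UAV simulator, which is not public), one could log the entropy loss against the timestep count:

```python
# Minimal sketch (not the paper's code): tracking PPO's entropy loss over
# training steps to see the policy becoming more certain in its actions.
import gymnasium as gym
from stable_baselines3 import PPO
from stable_baselines3.common.callbacks import BaseCallback


class EntropyLossLogger(BaseCallback):
    """Records PPO's entropy loss so its trend across timesteps
    (e.g. from 10,000 to 18,000 steps) can be inspected."""

    def __init__(self):
        super().__init__()
        self.history = []  # (num_timesteps, entropy_loss) pairs

    def _on_step(self) -> bool:
        # PPO records this key after each update phase.
        ent = self.logger.name_to_value.get("train/entropy_loss")
        if ent is not None:
            self.history.append((self.num_timesteps, ent))
        return True  # returning False would stop training early


# CartPole-v1 is a placeholder for the UAV dynamic environment.
env = gym.make("CartPole-v1")
model = PPO("MlpPolicy", env, verbose=0)
callback = EntropyLossLogger()
model.learn(total_timesteps=20_000, callback=callback)

# A shrinking entropy-loss magnitude means the action distribution is
# becoming more peaked, i.e. the policy is more certain in its actions.
for step, ent in callback.history[::100]:
    print(f"step={step:6d}  entropy_loss={ent:.4f}")
```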