Robust Robot Navigation against External Disturbance using Deep Reinforcement Learning

유형열, 윤민성, 박대형, and 윤성의
Korea Robotics Society Annual Conference (KRoC), 2021

Abstract: With recent advances in deep reinforcement learning (DRL) for complex robotic navigation, mobile robots have come to operate in increasingly diverse places.
Nonetheless, as robots work in a wider set of environments, including cluttered ones (e.g., cleanup robots on a dusty construction site), they can be exposed to external disturbances such as dust or stains on their sensors.
These disturbances can hinder normal operation or even cause catastrophic behaviors.
To address this problem, we propose a DRL-based navigation method robust to such disturbances: it efficiently follows the path to a goal while ensuring safety by actively sensing the areas that parts of the sensor cannot observe well due to the disturbances.
Our approach utilizes a confidence map, which represents the local regions around the robot that require additional observation to avoid collisions, and identifies where to sense next via an entropy measure.
We empirically demonstrate the influence of the disturbances on the sensor and compare an existing DRL method with our proposed method under these disturbances.
The results show that our method achieves higher performance than the existing method in the proposed scenarios.
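To illustrate the entropy-based selection described in the abstract, below is a minimal sketch, not the authors' implementation: it assumes the confidence map is a local grid of occupancy probabilities and picks the most uncertain cell (highest binary entropy) as the next region to sense. The grid representation, the `cell_entropy` and `next_sensing_cell` helpers, and the example values are all hypothetical.

```python
# Minimal sketch (assumptions, not the paper's code): choose the next region
# to sense from a per-cell confidence map by maximizing binary entropy.
import numpy as np

def cell_entropy(p):
    """Binary entropy of an occupancy probability p; highest near p = 0.5."""
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def next_sensing_cell(confidence_map):
    """Return the (row, col) index of the most uncertain cell.

    `confidence_map` is assumed to hold occupancy probabilities for the
    robot's local grid; cells near 0.5 are those the sensor observes poorly,
    e.g., because dust or stains cover part of it.
    """
    entropy = cell_entropy(confidence_map)
    return np.unravel_index(np.argmax(entropy), entropy.shape)

# Example: a 3x3 local grid; the center cell (0.5) is the least certain.
local_map = np.array([[0.9, 0.8, 0.9],
                      [0.1, 0.5, 0.9],
                      [0.1, 0.2, 0.8]])
print(next_sensing_cell(local_map))  # -> (1, 1)
```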