Abstract:
Problem: Shape-changing robots offer significant potential for navigating dynamic
environments, yet existing motion planning algorithms, designed primarily for rigid-body
robots, cannot accommodate dynamic morphology. Effective motion planning must
address real-time structural adaptation, computational efficiency, and resource constraints
while still producing effective movement strategies.
Methodology: This study introduces a Reinforcement Learning framework for motion planning in shape-changing robots. Using internal sensor data, including multi-axis acceleration and velocity, the system continuously optimizes both morphology and movement strategy. A mixed-methods approach evaluates model performance in simulated and physical environments.
Results: Experiments show the implemented framework accommodates various Reinforcement Learning algorithms and autonomously adapts both shape and motion strategy in real time, maintaining stable control across varied environments without task-specific instructions or programming. Expert evaluators rated the framework’s novelty and feasibility highly, raising only moderate scalability concerns. Overall, the project demonstrates that a resource-aware, model-free Reinforcement Learning pipeline can turn theoretical adaptability into practical performance for shape-changing robots.
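The sense–decide–adapt loop described in the abstract can be illustrated with a minimal model-free sketch. The snippet below uses tabular Q-learning over a toy discretized sensor state and combined morphology+motion actions; the environment, action names, and reward shape are hypothetical stand-ins for illustration only, not the paper's implementation:

```python
import random

# Hypothetical actions pairing a morphology change with a motion command.
ACTIONS = ["extend+forward", "extend+turn", "contract+forward", "contract+turn"]


class ToyTerrain:
    """Toy stand-in for the environment: reward is higher when the chosen
    morphology matches the (hidden) terrain roughness the sensors report."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.rough = False

    def reset(self):
        self.rough = self.rng.random() < 0.5
        # Discretized sensor-like state: (roughness flag, speed bucket).
        return (int(self.rough), 0)

    def step(self, action):
        # Contracting suits rough terrain; extending suits smooth terrain.
        matched = ("contract" in action) == self.rough
        reward = 1.0 if matched else -1.0
        self.rough = self.rng.random() < 0.5
        return (int(self.rough), 0), reward


def train(steps=2000, alpha=0.2, gamma=0.9, eps=0.1, seed=0):
    """Model-free tabular Q-learning: no explicit terrain model is given;
    the agent learns shape/motion choices purely from reward feedback."""
    rng = random.Random(seed)
    env = ToyTerrain(seed)
    q = {}
    state = env.reset()
    for _ in range(steps):
        if rng.random() < eps:  # epsilon-greedy exploration
            action = rng.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
        next_state, reward = env.step(action)
        best_next = max(q.get((next_state, a), 0.0) for a in ACTIONS)
        old = q.get((state, action), 0.0)
        q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
        state = next_state
    return q
```

After training, the greedy policy selects a contracting action in the rough-terrain state and an extending one otherwise, mirroring the adaptation-without-explicit-programming claim at toy scale.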