Journal article
IEEE Transactions on Automation Science and Engineering, 2024
APA
Liu, W., Eltouny, K. A., Tian, S., Liang, X., & Zheng, M. (2024). KG-Planner: Knowledge-Informed Graph Neural Planning for Collaborative Manipulators. IEEE Transactions on Automation Science and Engineering.
Chicago/Turabian
Liu, Wansong, Kareem A. Eltouny, Sibo Tian, Xiao Liang, and Minghui Zheng. “KG-Planner: Knowledge-Informed Graph Neural Planning for Collaborative Manipulators.” IEEE Transactions on Automation Science and Engineering (2024).
MLA
Liu, Wansong, et al. “KG-Planner: Knowledge-Informed Graph Neural Planning for Collaborative Manipulators.” IEEE Transactions on Automation Science and Engineering, 2024.
BibTeX
@article{wansong2024a,
title = {KG-Planner: Knowledge-Informed Graph Neural Planning for Collaborative Manipulators},
year = {2024},
journal = {IEEE Transactions on Automation Science and Engineering},
author = {Liu, Wansong and Eltouny, Kareem A. and Tian, Sibo and Liang, Xiao and Zheng, Minghui}
}
This paper presents a novel knowledge-informed graph neural planner (KG-Planner) to address the challenge of efficiently planning collision-free motions for robots in high-dimensional spaces, considering both static and dynamic environments involving humans. Unlike traditional motion planners, which struggle to balance efficiency and optimality, the KG-Planner takes a different approach. Instead of relying solely on a neural network or imitating the motions of an oracle planner, our KG-Planner integrates explicit physical knowledge from the workspace. The integration of knowledge has two key aspects: 1) We present an approach to design a graph that comprehensively models the workspace's compositional structure. The designed graph explicitly incorporates critical elements such as robot joints, obstacles, and their interconnections, allowing us to capture the intricate relationships among these elements. 2) We train a Graph Neural Network (GNN) that excels at generating nearly optimal robot motions. In particular, the GNN employs a layer-wise propagation rule to facilitate the exchange and update of information among workspace elements based on their connections, emphasizing the influence of these elements throughout the planning process. To validate the efficacy and efficiency of our KG-Planner, we conduct extensive experiments in both static and dynamic environments, including scenarios with and without human workers, and compare the results against existing methods, showcasing the superior performance of the KG-Planner. A short video introduction of this work is available via this link.

Note to Practitioners—This paper was motivated by the problem of human-robot collaboration in remanufacturing processes, such as disassembly, that require human operators and collaborative robots to work closely with each other.
The robots need to plan their trajectories efficiently enough to avoid collisions with humans, and the trajectories need to be short enough to reduce cycle time. Traditional motion planners usually struggle to balance efficiency and optimality, which limits the wide application of collaborative robots in remanufacturing systems, which are usually less structured than manufacturing systems. This paper proposes a new planning approach that integrates the workspace's physical information into a graph and leverages deep learning to obtain safe and near-optimal solutions quickly. Experimental studies demonstrate the advantages of this approach, including learning capability, efficiency, and optimality, making it a promising candidate for application to real remanufacturing processes.
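The workspace graph and layer-wise propagation described in the abstract can be illustrated, very roughly, with a standard GCN-style update over a toy graph. This is a minimal sketch under assumed details, not the paper's actual architecture: the node features, adjacency structure, aggregation rule, and weights below are all illustrative placeholders.

```python
import numpy as np

# Illustrative sketch (NOT the paper's implementation): one layer-wise
# propagation step over a workspace graph whose nodes represent robot
# joints and obstacles, connected by edges where they interact.

def propagate(H, A, W):
    """One propagation step: each node averages features from itself and
    its neighbours, then applies a linear map and a ReLU nonlinearity."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)  # node degrees for normalization
    H_agg = (A_hat / deg) @ H               # mean aggregation over neighbours
    return np.maximum(0.0, H_agg @ W)       # linear transform + ReLU

# Toy workspace graph: three robot-joint nodes chained in a kinematic
# sequence, plus one obstacle node connected to the nearest joint.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 3))   # per-node features (e.g. positions); made up
W = rng.normal(size=(3, 3))   # layer weights (random here, learned in training)

H1 = propagate(H, A, W)
print(H1.shape)  # updated features for every workspace node
```

Stacking several such steps lets information from obstacle nodes reach distant joint nodes, which is the intuition behind the "exchange and update of information among workspace elements" that the abstract describes.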