Geometry Matching for Multi-Embodiment Grasping

Abstract

Many existing learning-based grasping approaches concentrate on a single embodiment, generalize poorly to end-effectors with higher degrees of freedom, and cannot capture a diverse set of grasp modes. We tackle grasping with multiple embodiments by learning rich geometric representations for both objects and end-effectors using Graph Neural Networks. Our method, GeoMatch, applies supervised learning on grasping data from multiple embodiments, learning contact point likelihood maps end-to-end along with conditional autoregressive grasp predictions made keypoint by keypoint. We compare our method against baselines that support multiple embodiments and find that it performs better across three end-effectors while also producing diverse grasps. Examples, including real robot demos, can be found at geo-match.github.io.
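To make the idea concrete, below is a minimal, hypothetical PyTorch sketch of the autoregressive keypoint-by-keypoint prediction described in the abstract. It assumes precomputed GNN embeddings of the object and end-effector geometry (`obj_emb`, `eef_emb`); the class name, dimensions, and MLP heads are illustrative assumptions, not the released GeoMatch implementation.

```python
# Hypothetical sketch: autoregressive keypoint prediction conditioned on
# GNN geometry embeddings. Names and dimensions are illustrative only.
import torch
import torch.nn as nn


class AutoregressiveKeypointHead(nn.Module):
    """Predicts grasp contact keypoints one at a time, each conditioned on
    the object/end-effector embeddings and all earlier keypoints."""

    def __init__(self, embed_dim: int = 128, num_keypoints: int = 5):
        super().__init__()
        # One small MLP per keypoint; the i-th head additionally sees the
        # 3D coordinates of the i previously predicted keypoints.
        self.heads = nn.ModuleList(
            nn.Sequential(
                nn.Linear(2 * embed_dim + 3 * i, 256),
                nn.ReLU(),
                nn.Linear(256, 3),  # a 3D contact point
            )
            for i in range(num_keypoints)
        )

    def forward(self, obj_emb: torch.Tensor, eef_emb: torch.Tensor) -> torch.Tensor:
        # obj_emb, eef_emb: (batch, embed_dim) pooled graph embeddings of the
        # object and end-effector geometry (assumed computed by upstream GNNs).
        feats = torch.cat([obj_emb, eef_emb], dim=-1)
        keypoints = []
        for head in self.heads:
            inp = torch.cat([feats] + keypoints, dim=-1)
            keypoints.append(head(inp))
        return torch.stack(keypoints, dim=1)  # (batch, num_keypoints, 3)


# Example: a batch of 4 object/gripper embedding pairs -> 5 contact keypoints.
model = AutoregressiveKeypointHead()
pred = model(torch.randn(4, 128), torch.randn(4, 128))
print(pred.shape)  # torch.Size([4, 5, 3])
```

Each keypoint head conditions on the geometry embeddings plus all previously predicted contact points, which is what allows a model of this shape to produce several mutually consistent grasp modes rather than a single averaged prediction.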

Publication
Conference on Robot Learning (CoRL)

Toronto Intelligent Systems Lab Co-authors

Maria Attarian
PhD Student

My research interests include robotic manipulation, concept and action grounding, and learning from third-person demonstrations for robotic applications.

Igor Gilitschenski
Assistant Professor