Abstract
Human-robot handover is a key capability of service robots, such as those used to perform routine logistical tasks for healthcare workers. Recent algorithms have achieved tremendous advances in object-agnostic end-to-end grasping with up to six degrees of freedom (DoF); however, compiling the requisite datasets is infeasible in many situations, and many users consider the use of camera feeds invasive. This letter presents an end-to-end control system for the visual grasping of unseen objects with 6-DoF without infringing on the privacy or personal space of human counterparts. In experiments, the proposed Fed-HANet system, trained using the federated learning framework, achieved accuracy close to that of centralized non-privacy-preserving systems while outperforming baseline methods that rely on fine-tuning. We also explore the use of a depth-only method and compare its performance to a state-of-the-art method, but ultimately emphasize the importance of RGB inputs for better grasp success. The practical applicability of the proposed system in a robotic system was assessed in a user study involving 12 participants. The dataset for training and all pretrained models are available at https://arg-nctu.github.io/projects/fed-hanet.html.
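The federated training setup described in the abstract follows the general pattern of aggregating locally trained model weights on a server without sharing raw sensor data. A minimal FedAvg-style sketch of that pattern is shown below; this is an illustration under assumed simplifications (a linear model and synthetic client data), not the authors' actual Fed-HANet implementation:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training step: gradient descent on a linear
    model. The raw data (X, y) never leaves the client."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fed_avg(global_w, client_data):
    """Server step: collect locally updated weights and average them,
    weighted by each client's dataset size (FedAvg)."""
    updates, sizes = [], []
    for X, y in client_data:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, float))

# Two hypothetical clients whose private data follow y = 2x.
rng = np.random.default_rng(0)
clients = []
for _ in range(2):
    X = rng.normal(size=(20, 1))
    clients.append((X, 2 * X[:, 0]))

w = np.zeros(1)
for _ in range(10):  # communication rounds
    w = fed_avg(w, clients)
print(w)  # converges toward the shared solution [2.]
```

Only model weights cross the network in each round, which is the privacy property the letter relies on: the server never observes the clients' camera data.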
| Original language | English |
|---|---|
| Pages (from-to) | 3772-3779 |
| Number of pages | 8 |
| Journal | IEEE Robotics and Automation Letters |
| Volume | 8 |
| Issue number | 6 |
| DOIs | |
| Publication status | Published - Jun 1 2023 |
ASJC Scopus subject areas
- Control and Systems Engineering
- Biomedical Engineering
- Human-Computer Interaction
- Mechanical Engineering
- Computer Vision and Pattern Recognition
- Computer Science Applications
- Control and Optimization
- Artificial Intelligence