AR-VRM: Imitating Human Motions for Visual Robot Manipulation with Analogical Reasoning

Dejie Yang1, Zijing Zhao1, Yang Liu1,2*

1Wangxuan Institute of Computer Technology, Peking University
2State Key Laboratory of General Artificial Intelligence, Peking University

ICCV 2025

*Corresponding Author

Video Presentation


Comparison of our framework with previous methods: we learn from human actions explicitly via hand keypoints with analogical reasoning.

Abstract

Visual Robot Manipulation (VRM) aims to enable a robot to follow natural language instructions based on its states and visual observations, and therefore requires costly multi-modal data. To compensate for the scarcity of robot data, existing approaches employ vision-language pretraining on large-scale datasets. However, they either rely on web data that differs from robotic tasks, or train the model in an implicit way (e.g., predicting future frames at the pixel level), and thus show limited generalization ability when robot data is insufficient. In this paper, we propose to learn from large-scale human action video datasets in an explicit way (i.e., imitating human actions through hand keypoints), introducing Visual Robot Manipulation with Analogical Reasoning (AR-VRM). To acquire action knowledge explicitly from human action videos, we propose a keypoint Vision-Language Model (VLM) pretraining scheme that enables the VLM to learn human action knowledge and directly predict human hand keypoints. During fine-tuning on robot data, to help the robotic arm imitate the action patterns of human motions, we first retrieve human action videos that perform similar manipulation tasks and have similar historical observations, and then learn an Analogical Reasoning (AR) map between human hand keypoints and robot components. By focusing on action keypoints instead of irrelevant visual cues, our method achieves leading performance on the CALVIN benchmark and in real-world experiments. In few-shot scenarios, AR-VRM outperforms previous methods by large margins, underscoring the effectiveness of explicitly imitating human actions under data scarcity.
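To make the keypoint-VLM pretraining objective concrete, the snippet below is a minimal, hypothetical sketch of a hand-keypoint regression head trained on top of pooled vision-language features, in the spirit of the scheme described above. It assumes PyTorch; the module names, feature dimension, number of keypoints, and loss choice are illustrative assumptions, not the released AR-VRM implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F


class HandKeypointHead(nn.Module):
    """Regresses 2D human hand keypoints from pooled vision-language features."""

    def __init__(self, dim=512, num_keypoints=21):
        super().__init__()
        self.num_keypoints = num_keypoints
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim),
            nn.GELU(),
            nn.Linear(dim, num_keypoints * 2),  # (x, y) per keypoint
        )

    def forward(self, vlm_feats):
        # vlm_feats: (B, dim) pooled features of a human video clip + instruction
        return self.mlp(vlm_feats).view(-1, self.num_keypoints, 2)


def keypoint_pretraining_loss(pred_kpts, gt_kpts):
    # Smooth-L1 regression against keypoints labeled by an off-the-shelf hand detector.
    return F.smooth_l1_loss(pred_kpts, gt_kpts)


if __name__ == "__main__":
    head = HandKeypointHead()
    feats = torch.randn(4, 512)   # stand-in for VLM features of 4 human video clips
    gt = torch.rand(4, 21, 2)     # normalized ground-truth hand keypoints
    loss = keypoint_pretraining_loss(head(feats), gt)
    loss.backward()
    print(loss.item())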

Framework


The model architecture of our AR-VRM: Visual Robot Manipulation with Analogical Reasoning
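The fine-tuning stage pairs retrieval of human action videos (similar task, similar historical observations) with an Analogical Reasoning map from hand keypoints to robot components. The sketch below shows one plausible realization, assuming PyTorch: cosine-similarity retrieval over precomputed video embeddings and a cross-attention AR map. All names, tensor shapes, and the 7-dimensional action space are assumptions for illustration, not the paper's exact design.

import torch
import torch.nn as nn
import torch.nn.functional as F


def retrieve_similar_videos(query_emb, bank_embs, top_k=4):
    # Rank human action videos by cosine similarity between the current robot
    # episode embedding (task + historical observations) and the video bank.
    sims = F.cosine_similarity(query_emb.unsqueeze(0), bank_embs, dim=-1)  # (N,)
    return sims.topk(top_k).indices


class AnalogicalReasoningMap(nn.Module):
    """Cross-attention from learnable robot-component queries to retrieved
    human hand keypoint features, followed by an action head."""

    def __init__(self, dim=256, num_robot_tokens=8, num_heads=4, action_dim=7):
        super().__init__()
        self.robot_queries = nn.Parameter(torch.randn(num_robot_tokens, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.action_head = nn.Linear(dim, action_dim)  # e.g., 6-DoF delta pose + gripper

    def forward(self, hand_kpt_feats):
        # hand_kpt_feats: (B, K, dim) features of hand keypoints from retrieved videos
        B = hand_kpt_feats.size(0)
        q = self.robot_queries.unsqueeze(0).expand(B, -1, -1)
        fused, _ = self.attn(q, hand_kpt_feats, hand_kpt_feats)  # robot tokens attend to keypoints
        return self.action_head(fused.mean(dim=1))               # (B, action_dim)


if __name__ == "__main__":
    bank = F.normalize(torch.randn(100, 256), dim=-1)  # embeddings of the human video bank
    query = F.normalize(torch.randn(256), dim=-1)      # current robot task/observation embedding
    idx = retrieve_similar_videos(query, bank)
    ar_map = AnalogicalReasoningMap()
    action = ar_map(torch.randn(1, 21, 256))            # 21 hand keypoints per retrieved frame
    print(idx.tolist(), action.shape)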

Results

1. Comparison with previous methods on CALVIN benchmark.


2. Comparison with previous methods on real-world experiments.


3. Ablation studies on the different components of our AR-VRM.


4. Data-efficiency results.


5. Results on unseen language instructions.


6. Qualitative retrieval results.


BibTeX

@inproceedings{arvrm,
        title     = {AR-VRM: Imitating Human Motions for Visual Robot Manipulation with Analogical Reasoning},
        author    = {Yang, Dejie and Zhao, Zijing and Liu, Yang},
        booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
        year      = {2025},
}