OpenRobotLab
Shanghai AI Laboratory
Recent works have been exploring scaling laws in the field of Embodied AI. Given the prohibitive cost of collecting real-world data, we believe the Simulation-to-Real (Sim2Real) paradigm is a more feasible path for scaling up the learning of embodied models.
We introduce project GRUtopia (a.k.a. 桃源 in Chinese), the first simulated interactive 3D society designed for various robots. It features several advancements, including large-scale interactive scenes (GRScenes), LLM-driven NPC residents, and benchmarks for embodied agents.
We hope that this work can alleviate the scarcity of high-quality data in this field and provide a more comprehensive assessment of embodied AI research.
We have tested our code under the following environment:
We provide the installation guide here. You can install GRUtopia either locally or via Docker, and easily verify the installation.
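As a rough local-install sketch (the Isaac Sim path and the editable-install step below are assumptions; the installation guide is authoritative):

git clone https://github.com/OpenRobotLab/GRUtopia.git
cd GRUtopia
# Isaac Sim ships its own Python; the python.sh path below is a common default
# and may differ on your machine.
~/.local/share/ov/pkg/isaac_sim-*/python.sh -m pip install -e .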
Following the installation guide, you can verify the installation by running:
python ./GRUtopia/demo/h1_locomotion.py # start simulation
You can see a humanoid robot (Unitree H1) walking following a pre-defined trajectory in Isaac Sim.
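Under the hood, the demo tracks a fixed sequence of waypoints. As a framework-agnostic illustration of the idea (a hypothetical sketch, not the actual GRUtopia demo code), a pre-defined trajectory can be stored as waypoints and resampled into per-step position targets for the walking controller:

# Hypothetical sketch: resample a polyline of (x, y) waypoints into evenly
# spaced position targets that a locomotion policy could track step by step.
import numpy as np

def interpolate_trajectory(waypoints, step_size=0.05):
    waypoints = np.asarray(waypoints, dtype=float)
    targets = []
    for start, end in zip(waypoints[:-1], waypoints[1:]):
        n = max(int(np.linalg.norm(end - start) / step_size), 1)
        for t in np.linspace(0.0, 1.0, n, endpoint=False):
            targets.append(start + t * (end - start))
    targets.append(waypoints[-1])
    return np.stack(targets)

# A square loop around the scene origin; each row is the robot's next goal.
targets = interpolate_trajectory([(0, 0), (2, 0), (2, 2), (0, 2), (0, 0)])
print(targets.shape)  # (N, 2) position targets along the loop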
Following the guide, you can wander around a demo house by running:
# python ./GRUtopia/demo/h1_city.py runs a humanoid in the city block.
# Its movement appears much smaller given the large scale of the block,
# so we recommend trying h1_house.py first.
python ./GRUtopia/demo/h1_house.py # start simulation
You can control a humanoid robot to walk around in a demo house and look around from different viewpoints by changing the camera view in Isaac Sim (at the top of the UI).
You can also simply load the demo city USD file into Isaac Sim to freely sightsee the city block, using the keyboard and mouse operations supported by Omniverse.
Please refer to the guide to try the WebUI and play with NPCs. Note that there are some additional requirements, such as installing via Docker and providing API keys for the LLMs.
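For example, if the NPC backend is an OpenAI-compatible LLM, the key might be supplied through an environment variable; the variable name below is an assumption, so check the guide for the exact configuration:

export OPENAI_API_KEY=<your-api-key>  # hypothetical variable name; see the guide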
We provide detailed docs and simple tutorials for the basic usage of the different modules supported in GRUtopia. Feel free to try them out and post your suggestions!
An embodied agent is expected to actively perceive its environment, engage in dialogue to clarify ambiguous human instructions, and interact with its surroundings to complete tasks. Here, we preliminarily establish three benchmarks for evaluating the capabilities of embodied agents from different aspects: Object Loco-Navigation, Social Loco-Navigation, and Loco-Manipulation. The target objects in the instructions are subject to constraints generated by the world knowledge manager. Navigation paths, dialogues, and actions are depicted in the figure.
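To make this concrete, here is a hypothetical sketch of what a constrained episode specification for Object Loco-Navigation could look like; the field names are illustrative assumptions, not the actual benchmark schema:

# Hypothetical episode spec; field names are illustrative assumptions.
episode = {
    "task": "object_loco_navigation",
    "scene": "demo_house",
    "instruction": "Find the mug that is next to the coffee machine.",
    "target": {
        "category": "mug",
        # Constraints like these would be generated by the world knowledge manager.
        "constraints": ["next_to:coffee_machine", "in_room:kitchen"],
    },
    "success": {"max_distance_m": 1.0, "target_in_view": True},
}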
For now, please see the paper for more details of our models and benchmarks. We are actively re-organizing the code and will release it soon. Please stay tuned.
If you find our work helpful, please cite:
@inproceedings{grutopia,
title={GRUtopia: Dream General Robots in a City at Scale},
author={Wang, Hanqing and Chen, Jiahe and Huang, Wensi and Ben, Qingwei and Wang, Tai and Mi, Boyu and Huang, Tao and Zhao, Siheng and Chen, Yilun and Yang, Sizhe and Cao, Peizhou and Yu, Wenye and Ye, Zichao and Li, Jialun and Long, Junfeng and Wang, ZiRui and Wang, Huiling and Zhao, Ying and Tu, Zhongying and Qiao, Yu and Lin, Dahua and Pang, Jiangmiao},
year={2024},
booktitle={arXiv},
}
GRUtopia's simulation platform is MIT licensed. The open-sourced GRScenes are under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
We use the rsl_rl library to train the control policies for legged robots.