Habitat 2.0: Training Home Assistants to Rearrange their Habitat

June 30, 2021

Abstract

We introduce Habitat 2.0 (H2.0), a simulation platform for training virtual robots in interactive 3D environments and complex physics-enabled scenarios. We make comprehensive contributions to all levels of the embodied AI stack – data, simulation, and benchmark tasks. Specifically, we present: (i) ReplicaCAD: an artist-authored, annotated, reconfigurable 3D dataset of apartments (matching real spaces) with articulated objects (e.g. cabinets and drawers that can open/close); (ii) H2.0: a high-performance physics-enabled 3D simulator with speeds exceeding 25,000 simulation steps per second (850× real-time) on an 8-GPU node, representing 100× speed-ups over prior work; and (iii) Home Assistant Benchmark (HAB): a suite of common tasks for assistive robots (tidy the house, stock groceries, set the table) that test a range of mobile manipulation capabilities. These large-scale engineering contributions allow us to systematically compare deep reinforcement learning (RL) at scale and classical sense-plan-act (SPA) pipelines in long-horizon structured tasks, with an emphasis on generalization to new objects, receptacles, and layouts. We find that (1) flat RL policies struggle on HAB compared to hierarchical ones; (2) a hierarchy with independent skills suffers from ‘hand-off problems’; and (3) SPA pipelines are more brittle than RL policies.
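
To make the benchmark concrete, the sketch below shows how a single HAB-style rearrangement episode could be stepped through habitat-lab's gym-style Env API, with random action sampling standing in for a trained RL policy or SPA pipeline. It is a minimal sketch under assumptions: the config path is a hypothetical placeholder, and the comments describe typical sensor and metric contents rather than the exact files shipped with the Habitat 2.0 release.

    # Minimal sketch, assuming the habitat-lab API that accompanies Habitat 2.0.
    # The config path below is a placeholder, not the exact file in the HAB release.
    import habitat

    config = habitat.get_config("benchmark/rearrange/pick.yaml")  # assumed path
    env = habitat.Env(config=config)

    observations = env.reset()                 # e.g. RGB-D, proprioception, task sensors
    while not env.episode_over:
        action = env.action_space.sample()     # random stand-in for an RL/SPA controller
        observations = env.step(action)

    print(env.get_metrics())                   # per-episode success/efficiency metrics
    env.close()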

Written by

Andrew Szot

Alex Clegg

Eric Undersander

Erik Wijmans

Yili Zhao

John Turner

Noah Maestre

Mustafa Mukadam

Devendra Chaplot

Oleksandr Maksymets

Aaron Gokaslan

Vladimir Vondrus

Sameer Dharur

Franziska Meier

Wojciech Galuba

Angel Chang

Zsolt Kira

Vladlen Koltun

Jitendra Malik

Manolis Savva

Dhruv Batra

Publisher

arXiv

Research Topics

Computer Vision

Robotics

