1. [Details](#details)
1. [Data](#data)
1. [Baselines](#baselines)
1. [Acknowledgments](#acknowledgments)
1. [License](#license)
1. [References](#references)

## Motivation
While there has been significant progress in the vision and language communities thanks to recent advances in deep representations, we believe there is a growing disconnect between ‘internet AI’ and embodied AI. The focus of the former is pattern recognition in images, videos, and text on datasets typically curated from the internet. The focus of the latter is to enable action by an embodied agent in an environment (e.g. a robot). This brings to the forefront issues of active perception, long-term planning, learning from interaction, and holding a dialog grounded in an environment.
To this end, we aim to standardize the entire ‘software stack’ for training embodied agents – scanning the world and creating highly photorealistic 3D assets, developing the next generation of highly efficient and parallelizable simulators, specifying embodied AI tasks that enable us to benchmark scientific progress, and releasing modular high-level libraries to train and deploy embodied agents.

## Citing Habitat
If you use the Habitat platform in your research, please cite the following [technical report](https://arxiv.org/abs/1904.01201):
```
@article{habitat19arxiv,
  title   = {Habitat: A Platform for Embodied AI Research},
  author  = {Manolis Savva and Abhishek Kadian and Oleksandr Maksymets and Yili Zhao and Erik Wijmans and Bhavana Jain and Julian Straub and Jia Liu and Vladlen Koltun and Jitendra Malik and Devi Parikh and Dhruv Batra},
  journal = {arXiv preprint arXiv:1904.01201},
  year    = {2019}
}
```

## License
Habitat-API is MIT licensed. See the LICENSE file for details.

## References
1. [Habitat: A Platform for Embodied AI Research](https://arxiv.org/abs/1904.01201). Manolis Savva, Abhishek Kadian, Oleksandr Maksymets, Yili Zhao, Erik Wijmans, Bhavana Jain, Julian Straub, Jia Liu, Vladlen Koltun, Jitendra Malik, Devi Parikh, Dhruv Batra. Tech report, arXiv:1904.01201, 2019.