Vol. 6 (2019)

Automatic Reconstruction of Building-Scale Indoor 3D Environment with a Deep-Reinforcement-Learning-Based Mobile Robot

DOI
https://doi.org/10.31875/2409-9694.2019.06.2
Submitted
October 8, 2019
Published
October 8, 2019

Abstract

The aim of this paper is to digitize the environments in which humans live, at low cost, and to reconstruct highly accurate three-dimensional environments based on those in the real world. Such three-dimensional content can be used, for example, as virtual reality environments and as three-dimensional maps for automated driving systems.

In general, however, a three-dimensional environment must be carefully reconstructed by manually moving the sensors that scan the real environment on which it is based, so that every corner of the target area is measured; the time and cost of doing so grow as the area expands. Therefore, a system is proposed that creates three-dimensional content based on real-world large-scale buildings at low cost by automatically scanning indoor environments with a mobile robot equipped with low-cost sensors and generating 3D point clouds.
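
As a rough illustration of the point-cloud generation step (a minimal sketch under assumed parameters, not the authors' code), the following Python fragment back-projects a depth image from a low-cost RGB-D sensor into a 3D point cloud using the standard pinhole camera model; the intrinsics (fx, fy, cx, cy) and the depth scale are placeholder values.

import numpy as np

def depth_to_point_cloud(depth_image, fx, fy, cx, cy, depth_scale=0.001):
    """Convert an (H, W) depth image in raw sensor units to an (N, 3) point cloud in meters."""
    h, w = depth_image.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))     # pixel coordinates
    z = depth_image.astype(np.float32) * depth_scale   # depth in meters
    valid = z > 0                                       # drop pixels with no depth reading
    x = (u - cx) * z / fx                               # back-project along the camera x axis
    y = (v - cy) * z / fy                               # back-project along the camera y axis
    return np.stack([x[valid], y[valid], z[valid]], axis=-1)

# Example with hypothetical intrinsics for a 640x480 sensor.
depth = np.random.randint(500, 4000, size=(480, 640)).astype(np.uint16)
cloud = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)  # (N, 3)

Clouds captured from successive measurement positions would then be registered into a common frame (e.g., using the robot's estimated pose) to build up the building-scale model.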

When the robot reaches an appropriate measurement position, it collects three-dimensional data of the shapes observable from that position using a 3D sensor and a 360-degree panoramic camera. The problem of determining an appropriate measurement position is called the “next best view” problem, and it is difficult to solve in a complicated indoor environment. To deal with it, a deep reinforcement learning method is employed, which combines reinforcement learning, in which an autonomous agent learns a strategy for selecting its actions, with deep learning using neural networks. As a result, 3D point cloud data can be generated with better quality than with the conventional rule-based approach.
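
The next-best-view selection can be illustrated, very roughly, as follows (a hypothetical Python sketch, not the authors' implementation): a small neural network maps a local occupancy-grid observation to a probability distribution over a fixed set of candidate measurement positions, and the policy is improved with a simple REINFORCE-style policy gradient. The environment interface and the reward signal (e.g., the amount of newly observed volume per scan) are assumptions made for illustration.

import torch
import torch.nn as nn

class NextBestViewPolicy(nn.Module):
    """Scores a fixed set of candidate viewpoints from a local occupancy grid."""
    def __init__(self, grid_size=32, n_candidates=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * (grid_size // 4) ** 2, n_candidates),
        )

    def forward(self, occupancy_grid):
        # occupancy_grid: (batch, 1, grid_size, grid_size) local map around the robot
        return torch.distributions.Categorical(logits=self.net(occupancy_grid))

def train_episode(env, policy, optimizer, gamma=0.99):
    """One episode of a REINFORCE-style update; `env` is a hypothetical NBV environment."""
    obs, done = env.reset(), False
    log_probs, rewards = [], []
    while not done:
        dist = policy(obs)
        action = dist.sample()                         # index of the chosen candidate viewpoint
        obs, reward, done = env.step(action.item())    # reward: e.g., newly observed volume
        log_probs.append(dist.log_prob(action))
        rewards.append(reward)
    # Discounted returns, accumulated backwards over the episode
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    loss = -(torch.stack(log_probs).squeeze(-1) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return sum(rewards)

In practice a more robust on-policy algorithm such as proximal policy optimization would typically replace the plain policy-gradient update shown here, but the overall idea of mapping map observations to viewpoint choices is the same.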
