Multi-Sources 3D Reconstruction and Semantic Modeling Platform

PI: Sai-Kit Yeung
Co-PIs: Alexander Binder, Foong Shaohui (EPD), Yeo Kang Shua (ASD)
Research Areas: Computer Vision & Graphics, Machine Learning and Artificial Intelligence
Funding Source: Virtual Singapore R&D Programme
Start Date: 1 April 2017
End Date: 31 March 2019

The goal of this project is to generate three-dimensional models of the buildings and landscapes of Singapore for 3D visualization and information retrieval. The models can support useful queries such as "What is the maximum tolerable noise level of a construction site with respect to the residents in a nearby building?" and "How will heavy rain affect the people living in a certain region?"

The models consist of the shapes of the buildings, texture-mapped with color photographs of the scene, and are enriched with detailed semantic information, for example the locations of a building's front windows, roads, and pavements. They will be built from photographs taken from airplanes, laser scans (lidar), and imaging devices such as the cameras integrated in smartphones. From these data sources, subjects and structures in both outdoor environments (e.g., trees, roads) and indoor environments (e.g., balconies, furniture) will be recognized.

Novel 3D reconstruction and scene understanding algorithms will be devised to process this very large amount of data from multiple sources; two representative steps are sketched below. The project will also employ drones to capture additional lidar scans and images in regions where the initial data are insufficient. The outcomes of the project will significantly reduce the time, effort, and thus cost of creating, manipulating, and maintaining the three-dimensional models of Singapore.
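To make the reconstruction step concrete, here is a minimal sketch of one standard ingredient: turning a lidar point cloud into a triangle mesh with Poisson surface reconstruction. This is an illustration under stated assumptions, not the project's actual pipeline; the Open3D library, the input file `scan.ply`, and all parameter values are assumptions for demonstration.

```python
# Minimal sketch: mesh a lidar point cloud with Poisson surface reconstruction.
# Assumptions: Open3D is installed (pip install open3d); "scan.ply" is a
# hypothetical lidar scan; all parameter values are illustrative only.
import numpy as np
import open3d as o3d

# Load the raw lidar scan as a point cloud.
pcd = o3d.io.read_point_cloud("scan.ply")

# Remove isolated outlier points that lidar scans often contain.
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Estimate per-point normals, which Poisson reconstruction requires.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.5, max_nn=30)
)

# Fit a watertight triangle mesh to the oriented points.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9
)

# Trim poorly supported triangles (lowest 5% point density) at the boundary.
dens = np.asarray(densities)
mesh.remove_vertices_by_mask(dens < np.quantile(dens, 0.05))

o3d.io.write_triangle_mesh("building_mesh.ply", mesh)
```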
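Likewise, one common way to enrich such geometry with semantics is to project each reconstructed 3D point into a photograph that a 2D segmentation network has already labeled, and copy the pixel's class label onto the point. The sketch below shows only that projection step; the camera parameters `K`, `R`, `t`, the `label_image` class map, and the function name are hypothetical, not taken from the project.

```python
# Minimal sketch: transfer 2D semantic labels onto 3D points via a pinhole
# camera model. All inputs are hypothetical placeholders for illustration.
import numpy as np

def transfer_labels(points, K, R, t, label_image):
    """Attach a 2D semantic label to each 3D point by projection.

    points:       (N, 3) 3D points in world coordinates
    K, R, t:      camera intrinsics (3x3), rotation (3x3), translation (3,)
    label_image:  (H, W) integer class map from a 2D segmentation network
    """
    cam = R @ points.T + t[:, None]               # world -> camera frame
    z = cam[2]
    safe_z = np.where(z > 0, z, np.inf)           # points behind camera -> inf
    uv = ((K @ cam)[:2] / safe_z).round().astype(int)  # perspective projection
    h, w = label_image.shape
    valid = (z > 0) & (uv[0] >= 0) & (uv[0] < w) & (uv[1] >= 0) & (uv[1] < h)
    labels = np.full(points.shape[0], -1, dtype=int)   # -1 = not observed
    labels[valid] = label_image[uv[1, valid], uv[0, valid]]
    return labels
```

In practice such labels would be aggregated over many views (e.g., by majority vote per point) to smooth out segmentation errors in any single photograph.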