| PI: Sai-Kit Yeung | Funding Source: MOE AcRF Tier 2 |
| Co-PI: Alexander Binder, Ying He (NTU) | Start Date: 1 June 2017 |
| Research Areas: Vision & Graphics | End Date: 31 May 2020 |
Research in 3D modeling has given rise to a wide range of compelling applications. To name a few: Google Earth is making vigorous progress towards building a digital archive of the whole Earth; many CG movies, e.g. Avatar, are filmed entirely in virtual worlds with virtual creatures and characters; video games feature entire virtual cities; and 3D printing creates new possibilities to link the virtual world with the physical world. In all, the possibilities of modeling research are numerous and its impact is growing.
However, given today’s technology, it is still cumbersome and time-consuming even for professional digital artists to create detailed 3D content. Producing games and movies still requires tremendous investments of time and money. On the other hand, individual users still cannot create 3D models as easily as they can draw a picture with a pen. This proposal aims to leverage the growing volume of real-world data and user-created content publicly available on the internet for 3D modeling. This vast amount of data makes data-driven modeling approaches attractive. In particular, we are interested in exploring how data-driven optimization approaches can be devised on top of such big data, both to model 3D content for realistic graphics applications and to reconstruct 3D content from real scenes. To this end, our project has three main objectives:

1. First objective (theoretical side, computer graphics): Devising novel data-driven approaches to automatically synthesize realistic 3D models and facilitate interactive modeling tasks. Specifically, we will demonstrate such approaches in interactive scene enrichment, creative object modeling, and virtual world modeling.

2. Second objective (theoretical side, computer vision): Advancing the knowledge base of data-driven approaches in 3D computer vision. In particular, we will develop deep learning models that effectively unify 3D and 2D data. We will demonstrate our models through a variety of applications, such as automatic 3D content reconstruction using both photometric stereo and multi-view RGB data, 3D object detection and recognition, and scene understanding from real-world data.

3. Third objective (application side, vision and graphics): Creating various datasets for benchmarking and shape understanding. Such datasets will be highly valuable to the research community and complementary to the first two objectives, allowing the approaches to be refined with more data samples.

Data-driven approaches generally follow a learning-optimization paradigm: abstract relationships are learnt from real-world or human-provided example data to train a generative statistical model, which then supports queries in the subsequent modeling (optimization) process. They are promising for semantic labeling, relationship extraction, object search, and model analysis and synthesis. It is exciting to devise novel data-driven approaches and to explore their creative applications to the modeling problem.