VarCity - semantic and dynamic city modelling from images
Computer Vision Laboratory, ETH Zurich
VarCity was a multi-year research project funded by the European Research Council and awarded to ETH Professor Luc Van Gool at the Computer Vision Lab, ETH Zurich, in 2012.
Dr. Hayko Riemenschneider was appointed to lead the group of researchers on a day-to-day basis; see the people page for all names.
VarCity - The Video showcases some of the building blocks of creating and understanding an entire city from images.
Semantic cities modelled on knowledge are combined with dynamic cities containing events and traffic flows!
VarCity - The Video premiered on May 19th, 2017, and the full video has been available here since May 22nd, 2017!
For press information, questions, use of the video, collaborations, or any further contact, you can reach us at
hayko(at)vision.ee.ethz.ch
The video is available in a full version and a teaser version.
Current results in 2017!
Semantic cities modelled on knowledge are combined with dynamic cities containing events and traffic flows!
3D city models have many applications, such as urban design, navigation, real-estate ads, or movies and video games. Our project aims to produce such models from photographs both faster and in greater detail than before. As the effort often needs to be repeated to keep a city model up to date, efficient production of compact models is an absolute necessity.
We process images of real cities automatically and efficiently to create parameterized, semantic 3D models in which streets, buildings and vegetation are distinguished, and the number of floors as well as the positions and shapes of windows, doors and balconies are recognized and encoded.
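To make this concrete, here is a minimal sketch of what such a parameterized, semantic building model could encode. The class names, fields and numbers are our own illustrative assumptions, not the project's actual data format.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Tuple


class SemanticClass(Enum):
    """Coarse scene classes distinguished in the city model (illustrative)."""
    STREET = "street"
    BUILDING = "building"
    VEGETATION = "vegetation"


@dataclass
class Opening:
    """A window, door or balcony, encoded by type and 2D shape on the facade plane."""
    kind: str                      # e.g. "window", "door", "balcony"
    position: Tuple[float, float]  # lower-left corner on the facade, in metres
    size: Tuple[float, float]      # width and height, in metres


@dataclass
class Facade:
    """A parameterized facade: number of floors plus the recognized openings."""
    num_floors: int
    openings: List[Opening] = field(default_factory=list)


@dataclass
class Building:
    """A building as a set of facades, tagged with its semantic class."""
    facades: List[Facade]
    semantic_class: SemanticClass = SemanticClass.BUILDING


# Example: a three-storey facade with one door and two windows (made-up values).
facade = Facade(
    num_floors=3,
    openings=[
        Opening("door", (4.0, 0.0), (1.2, 2.2)),
        Opening("window", (1.0, 4.0), (1.0, 1.5)),
        Opening("window", (6.0, 4.0), (1.0, 1.5)),
    ],
)
building = Building(facades=[facade])
print(building.semantic_class.value, facade.num_floors, len(facade.openings))
```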
Our research further fills these static 3D city models with dynamic content by extracting special events and traffic flows from images and by generating a city-scale motion and activity model. One can virtually visit the Münsterhof in Zurich and see a video summary of recent events there, or check the traffic densities along the kids' way to school.
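As an illustration of the kind of query such a city-scale activity model could answer, the sketch below aggregates per-segment traffic densities along a route. The segment ids, data layout and numbers are invented for illustration only.

```python
from statistics import mean
from typing import Dict, List

# Hypothetical activity model: average observed traffic density per street
# segment and hour of day, as could be aggregated from image streams.
# Keys are segment ids; values map hour of day -> observations per minute.
density_by_segment: Dict[str, Dict[int, float]] = {
    "seg_munsterhof_1": {8: 3.5, 12: 1.2, 17: 4.1},
    "seg_schoolway_2": {8: 6.0, 12: 2.4, 17: 5.3},
    "seg_schoolway_3": {8: 2.1, 12: 0.8, 17: 1.9},
}


def route_density(route: List[str], hour: int) -> float:
    """Mean traffic density over the segments of a route at a given hour."""
    values = [density_by_segment[seg][hour] for seg in route if seg in density_by_segment]
    return mean(values) if values else 0.0


# Example query: density along a child's way to school at 8 am.
school_route = ["seg_schoolway_2", "seg_schoolway_3"]
print(f"8am density along route: {route_density(school_route, 8):.1f} per minute")
```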
Vision in 2012!
Semantic cities modelled on knowledge are combined with dynamic cities containing events and traffic flows!
Virtual city models are used in many game and movie productions, for example through the industry-leading spin-off Procedural (now part of ESRI), whose tools create stunning 3D urban environments from 2D data.
Currently, the production of real 3D city models comes at a high cost. Given that the modeling effort needs to be repeated regularly for updates, making city-model production more efficient is an absolute necessity. Our work creates inverse procedural models, built for existing cities: we analyze images of real cities and construct parametrized, semantic models in which we know the number of storeys, the shadows cast by new buildings, the positions of traffic signs, the vegetation, etc.
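The forward direction of such procedural modeling can be pictured as a split-grammar-style subdivision of a facade into floors and window tiles; inverse procedural modeling recovers those split parameters from images instead of fixing them. The following minimal sketch shows only the forward subdivision, and all function names and parameters are our own assumptions.

```python
from typing import Dict, List, Tuple

Region = Tuple[float, float, float, float]  # x, y, width, height in metres


def split_floors(region: Region, floors: int) -> List[Region]:
    """Split a facade region into equally sized horizontal bands (one per floor)."""
    x, y, w, h = region
    band = h / floors
    return [(x, y + i * band, w, band) for i in range(floors)]


def split_tiles(region: Region, tiles: int) -> List[Region]:
    """Split a floor region into equally sized tiles along its width."""
    x, y, w, h = region
    tile = w / tiles
    return [(x + i * tile, y, tile, h) for i in range(tiles)]


def derive_facade(facade: Region, floors: int, tiles_per_floor: int) -> Dict[str, List[Region]]:
    """Forward derivation: facade -> floors -> window tiles.

    Inverse procedural modeling would estimate `floors`, `tiles_per_floor`
    (and irregular split positions) from photographs rather than fixing them.
    """
    floor_regions = split_floors(facade, floors)
    window_tiles = [t for f in floor_regions for t in split_tiles(f, tiles_per_floor)]
    return {"floors": floor_regions, "windows": window_tiles}


# Example: a 12 m x 9 m facade with 3 floors and 4 window tiles per floor.
result = derive_facade((0.0, 0.0, 12.0, 9.0), floors=3, tiles_per_floor=4)
print(len(result["floors"]), "floors,", len(result["windows"]), "window tiles")
```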
Our research further creates dynamic, living 3D city models, which allow for deeper immersion than current city representations. We extract special events and traffic flows to generate a city-scale motion and activity model. One can virtually visit Times Square and see what was on the electronic newsreel recently, or check the traffic densities along a journey or the kids' way to school.