Speaker: Kiran Varanasi
Thursday, July 15th, 11:00am, in BC 329
Spatio-temporal modeling of dynamic 3D scenes
Recent advances in 3D sensing technologies have provided us with the means to capture the 3D shape of a scene at a fast frame-rate.
Various methods exist to produce 3D video, i.e., 3D snapshots of a dynamic scene as independent visual reconstructions.
For example, a synchronized multi-camera setup can produce a sequence of visual hull meshes by integrating silhouette information at each frame.
Each of these meshes can be readily textured using the captured images. However, these 3D reconstructions are not temporally consistent
and suffer from severe geometric and topological artifacts.
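As a rough illustration of silhouette integration (not the speaker's actual pipeline), the visual hull idea can be sketched as voxel carving under simplified orthographic views: a voxel survives only if it projects inside the silhouette in every view. All names and the toy scene below are hypothetical, chosen only to make the sketch self-contained.

```python
import numpy as np

def carve_visual_hull(silhouettes, proj_axes, n=32):
    """Orthographic voxel carving (a minimal sketch, not a full method).

    silhouettes: list of (n, n) boolean masks, one per view.
    proj_axes:   for each view, the pair of voxel-grid axes the view
                 projects onto, e.g. (0, 1) for a view along the z-axis.
    Returns an (n, n, n) boolean occupancy grid: the visual hull.
    """
    hull = np.ones((n, n, n), dtype=bool)
    idx = np.indices((n, n, n))
    for sil, (a, b) in zip(silhouettes, proj_axes):
        # A voxel is kept only if its 2D projection lies in this silhouette.
        hull &= sil[idx[a], idx[b]]
    return hull

# Toy scene: a sphere of radius 10 centered in a 32^3 grid.
n = 32
c = n // 2
i, j, k = np.indices((n, n, n))
sphere = (i - c) ** 2 + (j - c) ** 2 + (k - c) ** 2 <= 10 ** 2

# Synthetic orthographic silhouettes along the z- and x-axes.
sil_z = sphere.any(axis=2)   # image over grid axes (0, 1)
sil_x = sphere.any(axis=0)   # image over grid axes (1, 2)

hull = carve_visual_hull([sil_z, sil_x], [(0, 1), (1, 2)], n=n)

# The visual hull always over-approximates the true shape.
assert np.all(hull[sphere])
```

With only two views the hull is a coarse over-approximation of the sphere; adding more camera views carves the volume tighter, which is why multi-camera setups are used in practice.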
In this talk, I will present a few methods that we have developed during my thesis to build a coherent spatio-temporal model of a dynamic scene using such reconstructions.
We make no prior assumptions about the scene: no template skeleton, no mesh model, nor even the topology of the shape being observed.
Our approach is completely bottom-up and is thus applicable to a dynamic scene in all its generality: such a scene may comprise multiple actors, dressed in
loose clothing, interacting with each other in an arbitrary fashion. I will present our solutions to three principal sub-problems in this area: (a) deriving
temporally coherent segments in a sequence, (b) detecting and matching features between mesh reconstructions, and (c) tracking the surface of a shape with unknown and possibly varying
topology.