3D Teleconferencing

This project aims to develop a 3D teleconferencing system that virtually places two people in front of each other even though they are in different physical spaces. For this we will explore the use of two Kinect devices to capture not only the scene image but also its depth information, from two different viewpoints. This information is to be transmitted to a remote site, where the 3D scene will be shown on a TV screen (possibly 3D-capable) from an observer-controlled viewpoint. Since the goal is a face-to-face communication system, viewpoint changes should be equivalent to moving the observer's head in front of the viewed user, as happens in a physical meeting. Consequently, the permitted viewpoint movements are confined to those that keep the observed person in his/her (virtual) half-space; this simplifies the problem, as no back views need to be generated. To let the user move freely in front of the display while keeping the 3D perception of the remote user and scene, a tracker must continuously estimate the viewing position.
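To make the half-space constraint concrete, the following C++ sketch (illustrative only; Vec3, clampViewpoint and the numeric limits minZ and maxRadius are assumptions, not project code) clamps the tracked head position so that the virtual camera never crosses the display plane and stays inside the tracker's reliable working volume:

    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    // Illustrative sketch: the remote user is assumed to sit at the origin,
    // facing the +z axis, so valid viewpoints lie in the z > 0 half-space.
    struct Vec3 { float x, y, z; };

    Vec3 clampViewpoint(Vec3 head, float minZ = 0.3f, float maxRadius = 2.0f) {
        // Never let the virtual camera cross the display plane (z = 0),
        // which would request the back views we want to avoid generating.
        head.z = std::max(head.z, minZ);
        // Keep the viewpoint inside the tracker's reliable working volume.
        float r = std::sqrt(head.x * head.x + head.y * head.y + head.z * head.z);
        if (r > maxRadius) {
            float s = maxRadius / r;
            head.x *= s; head.y *= s; head.z *= s;
        }
        return head;
    }

    int main() {
        Vec3 v = clampViewpoint({0.5f, 0.2f, -0.4f}); // head behind the plane
        std::printf("clamped viewpoint: %.2f %.2f %.2f\n", v.x, v.y, v.z);
        return 0;
    }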

Work Plan:

  1. Build the teleconferencing setup.
  2. Calibration of the Kinect devices.
  3. Acquisition and registration of the 3D point clouds, and smoothing of the images acquired from the two Kinect devices (a registration sketch follows this list).
  4. Visualization of the acquired point cloud from variable viewpoints (a head-tracked rendering sketch follows this list).
  5. Implementation of the user tracking for controlling the viewpoint.
  6. Evaluation of the user perceived 3D structure and its influence on the communication.
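For steps 2 and 3, one plausible pipeline (an assumption of this sketch, not a prescription of the project) is to obtain a coarse extrinsic transform between the two Kinects from the calibration step and refine it with ICP from the Point Cloud Library; cloudA, cloudB and initialGuess are placeholder names:

    #include <pcl/point_cloud.h>
    #include <pcl/point_types.h>
    #include <pcl/registration/icp.h>

    using Cloud = pcl::PointCloud<pcl::PointXYZRGB>;

    // Refine the calibration-derived initial guess with ICP so that the
    // second Kinect's cloud is expressed in the first Kinect's frame.
    Eigen::Matrix4f registerClouds(const Cloud::Ptr& cloudA,
                                   const Cloud::Ptr& cloudB,
                                   const Eigen::Matrix4f& initialGuess)
    {
        pcl::IterativeClosestPoint<pcl::PointXYZRGB, pcl::PointXYZRGB> icp;
        icp.setInputSource(cloudB);             // cloud to be moved
        icp.setInputTarget(cloudA);             // reference frame
        icp.setMaxCorrespondenceDistance(0.05); // 5 cm gating (assumed value)
        icp.setMaximumIterations(50);

        Cloud aligned;
        icp.align(aligned, initialGuess);       // start from the calibration result
        return icp.getFinalTransformation();    // maps cloudB into cloudA's frame
    }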
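For steps 4 and 5, one possible realization (again a sketch under assumptions) is to drive the renderer's camera directly from the tracked head position. Here PCL's PCLVisualizer is used, and getTrackedHead is a hypothetical stand-in for the tracker of step 5, returning a fixed frontal viewpoint so the sketch stays self-contained:

    #include <pcl/point_types.h>
    #include <pcl/visualization/pcl_visualizer.h>

    using Cloud = pcl::PointCloud<pcl::PointXYZRGB>;

    // Hypothetical stand-in for the real head tracker of step 5: it just
    // reports a fixed frontal position (in metres) in the display's frame.
    void getTrackedHead(float& x, float& y, float& z) { x = 0.f; y = 0.f; z = 1.5f; }

    void renderLoop(const Cloud::Ptr& mergedCloud)
    {
        pcl::visualization::PCLVisualizer viewer("3D Teleconferencing");
        pcl::visualization::PointCloudColorHandlerRGBField<pcl::PointXYZRGB> rgb(mergedCloud);
        viewer.addPointCloud<pcl::PointXYZRGB>(mergedCloud, rgb, "scene");

        while (!viewer.wasStopped()) {
            float hx, hy, hz;
            getTrackedHead(hx, hy, hz);
            // Map the observer's head to the virtual camera, always looking
            // at the remote user (assumed to be near the origin).
            viewer.setCameraPosition(hx, hy, hz,     // camera position
                                     0.0, 0.0, 0.0,  // focal point
                                     0.0, 1.0, 0.0); // up direction
            viewer.spinOnce(16);                     // redraw at ~60 Hz
        }
    }

Updating the camera on every frame, rather than once per session, is what produces the motion-parallax cue whose contribution to the communication is to be assessed in step 6.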

Supervision: Prof. Paulo Menezes & MSc. Luís Almeida