Jeremy S. De Bonet : Poxels: Probabilistic Voxel Reconstruction





Poxels Overview

The poxel reconstruction algorithm is able to reconstruct accurate volumetric representations of 3D space from a limited collection of 2D or 1D observations. Furthermore, the reconstruction is robust, and can gracefully represent regions of space whose exact structure is uncertain due to limited, non-existent, or contradictory data.
The fundamental element of concern in the reconstruction algorithm is the "poxel" -- an estimate of the probability that a region of voxelized space contributes to a particular pixel in one of the observed 2D images (or to a single 1D observation). The reconstruction algorithm then determines the joint distribution of probabilities which is most consistent with the observed images.
A poxel's value is the probability that that voxel in space contributes to the observation in a particular image. This probability can vary due to two factors. First, the likelihood that a voxel contributes to an observation is proportional to its opacity. In a probabilistic sense, an (unoccluded) opaque object will always contribute to an observation of it, while a partially transparent voxel will only contribute some fraction of the time. Thus an alternative, and equally valid, interpretation of this value is that it is the probability that a ray passing from the observation through the voxel will be reflected (or absorbed, in the case of transmissive sensors) by that voxel. A second cause for the variation of this probability is measurement uncertainty. Suppose our observation were a completely white image. We can be certain that there is something white in the observed space; however, we cannot be certain of its distance. In the absence of priors, it is equally likely to lie at any distance from the camera. This state of uncertainty is indistinguishable from the situation in which the white image is generated by a whole volume of semitransparent white fog. In this work we will not distinguish between these two cases, and we thus compound uncertainty in position with transparency by modeling the probability of contribution to an observation.
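As a concrete illustration of this interpretation (and not part of the reconstruction algorithm itself), the short sketch below computes the expected contribution of each voxel along a single ray from a set of per-voxel opacities; the function name and the opacity values are assumed purely for the example.

import numpy as np

def contribution_probabilities(alphas):
    # Probability that each voxel along a ray contributes to the
    # observation, given per-voxel opacities ordered front-to-back
    # from the sensor. (Illustration only; names are hypothetical.)
    alphas = np.asarray(alphas, dtype=float)
    # Probability that the ray reaches each voxel unblocked ...
    transmittance = np.concatenate(([1.0], np.cumprod(1.0 - alphas)[:-1]))
    # ... times the probability that the voxel then reflects it.
    return transmittance * alphas

# A fully opaque voxel behind empty space contributes with certainty:
print(contribution_probabilities([0.0, 0.0, 1.0]))     # [0.  0.  1.]
# A uniform semitransparent "fog" spreads the contribution out:
print(contribution_probabilities([0.25, 0.25, 0.25]))  # [0.25  0.1875  0.140625]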
Each poxel is the probability that a single voxel in space contributes to the observation in a single image. Thus we need to consider a number of poxels proportional to the product of the size of the observations (i.e. number of images times number of image pixels) and the size of our voxelized space. Conceptually this can be thought of as a collection of poxel spaces, one for each observation.
For a set of K observation images, each n by m pixels, taken around the z-axis, a voxel space of n by n by m would be used, thus requiring on the order of Kn³m² poxels.
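To give a rough sense of scale, the back-of-the-envelope computation below counts the poxels in densely stored per-observation poxel spaces; the particular values of K, n, and m are made up for illustration.

# Hypothetical sizes, chosen only to illustrate the order of growth:
# K observation images of n x m pixels, and an n x n x m voxel space.
K, n, m = 36, 128, 96

# One poxel space per observation: for every image pixel, a probability
# for every voxel.  Stored densely this is K * (n*m) * (n*n*m) values,
# i.e. on the order of K n^3 m^2 poxels.
num_poxels = K * (n * m) * (n * n * m)
print(f"{num_poxels:.2e} poxels")   # ~6.96e+11 for these sizes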
Reconstruction of consistent poxel distributions is performed in a series of iterated steps. Each step involves simple and justifiable manipulations of the poxel values. After several iterations the process converges to a distribution which accurately reflects the constraints provided by the observations, and the uncertainty which remains.
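The specific update rules are beyond the scope of this overview. The sketch below is only meant to convey the flavor of such an iteration, assuming, purely for illustration, a step that down-weights color-inconsistent voxels along each viewing ray and then renormalizes the contribution probabilities; the function, its arguments, and its logic are hypothetical and do not reproduce the published algorithm.

import numpy as np

def iterate_poxels(poxels, voxel_colors, pixel_colors, tol=0.1):
    # Hypothetical sketch of one iteration (NOT the published update
    # rules).  For each observed pixel: down-weight voxels along its
    # ray whose current color estimate disagrees with the observed
    # pixel color, then renormalize so the contribution probabilities
    # along the ray sum to at most one (at most one surface can
    # account for the pixel).
    #
    #   poxels[pix]       : contribution probabilities of the voxels
    #                       along the ray through pixel `pix`
    #   voxel_colors[pix] : current color estimates of those voxels
    #   pixel_colors[pix] : the observed color at pixel `pix`
    for pix, p in poxels.items():
        agreement = np.exp(-np.abs(voxel_colors[pix] - pixel_colors[pix]) / tol)
        p *= agreement
        total = p.sum()
        if total > 1.0:
            p /= total
    return poxels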
Given the converged set of poxel values, a new virtual viewpoint can be synthesized by rendering each point in space using the observations (taken from different viewpoints) of that location, weighted by the estimated probability of that point contributing to an observation, and taking into account occlusions which may never have been observed directly but which are suggested by the resulting spatial reconstruction.
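A minimal sketch of this synthesis step for a single pixel of the virtual view is given below, assuming the per-voxel color estimates have already been gathered from the observations; the front-to-back compositing loop and its names are illustrative assumptions rather than the exact rendering procedure.

import numpy as np

def render_virtual_pixel(contrib_probs, color_estimates):
    # Hypothetical sketch: composite the color estimates of the voxels
    # along the virtual ray (ordered front-to-back), weighting each by
    # its estimated probability of contributing and attenuating by the
    # probability that a nearer voxel has already accounted for the ray
    # (an occlusion that may never have been observed directly).
    contrib_probs = np.asarray(contrib_probs, dtype=float)
    colors = np.asarray(color_estimates, dtype=float)
    transmittance = 1.0                 # ray not yet accounted for
    pixel = np.zeros_like(colors[0])
    for p, c in zip(contrib_probs, colors):
        pixel += transmittance * p * c
        transmittance *= (1.0 - p)
    return pixel

# E.g. two candidate surfaces along the ray, the nearer one uncertain:
print(render_virtual_pixel([0.4, 0.9], [[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]))
# -> roughly 40% red from the uncertain near surface, 54% blue behind it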


Jeremy S. De Bonet
jsd@debonet.com

Page last modified on 2006-05-27
Copyright © 1997-2024, Jeremy S. De Bonet. All rights reserved.