
The VideoOrbits head-tracker

Of course, one cannot expect a head-tracking device to be provided in every possible environment, so head-tracking is performed by the reality-mediator itself, using the VideoOrbits [10] tracking algorithm. The VideoOrbits head-tracker tracks the head from the visually observed environment, yet works without the need for high-level object recognition.

VideoOrbits builds upon the tradition of image processing [venetsanopoulos, stockham] combined with the Horn and Schunck equations [hornandschunk] and some new ideas in algebraic projective geometry and homometric imaging, using a spatiotonal model, $\tilde{p}$, that works in the neighbourhood of the identity:

\begin{displaymath}
\left(\sum_{x,y} \phi(x,y)\,\phi^T(x,y)\right) {\bf\tilde{p}}
= - \sum_{x,y} F_t\,\phi(x,y)
\qquad \mbox{(1)}
\end{displaymath}

where $\phi^T = [F_x\,(xy, x, y, 1),\; F_y\,(xy, x, y, 1),\; F,\; 1]$, $F({\bf x},t)=f(q({\bf x}))$ at time $t$, $F_x({\bf x},t) = (df/dq)(dq({\bf x})/dx)$ at time $t$, and $F_t({\bf x},t)$ is the difference of adjacent frames. This ``approximate model'' is used in the innermost loop of an iterative process, and its solution is then related to the parameters of an exact projectivity-and-gain group of transformations, so that the true group structure is preserved throughout. In this way, virtual objects inserted into the ``reality stream'' of the person wearing the glasses follow the orbit of this group of transformations, hence the name `VideoOrbits'.
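The innermost-loop step of equation (1) can be sketched as an ordinary linear least-squares solve: accumulate $\sum \phi\,\phi^T$ and $-\sum F_t\,\phi$ over all pixels, then solve for ${\bf\tilde{p}}$. The sketch below assumes two grayscale frames as NumPy arrays and uses finite differences for the spatial derivatives; the function name and those numerical choices are illustrative, not part of the original algorithm description.

```python
import numpy as np

def estimate_p_tilde(F_prev, F_next):
    """One innermost-loop step of the approximate spatiotonal model:
    solve (sum phi phi^T) p = -sum F_t phi for the parameter vector p.

    Illustrative sketch: derivatives are simple finite differences,
    and F_t is the raw difference of the two frames."""
    H, W = F_prev.shape
    y, x = np.mgrid[0:H, 0:W].astype(float)

    # spatial derivatives of the frame, and the frame difference F_t
    Fy, Fx = np.gradient(F_prev)   # gradient along rows (y), cols (x)
    Ft = F_next - F_prev

    # phi^T = [F_x (xy, x, y, 1), F_y (xy, x, y, 1), F, 1]
    phi = np.stack([Fx * x * y, Fx * x, Fx * y, Fx,
                    Fy * x * y, Fy * x, Fy * y, Fy,
                    F_prev, np.ones_like(F_prev)], axis=-1).reshape(-1, 10)

    A = phi.T @ phi                             # sum phi phi^T
    b = -(phi * Ft.reshape(-1, 1)).sum(axis=0)  # -sum F_t phi
    p_tilde, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p_tilde
```

In the full algorithm this solve would sit inside the iterative loop, with the estimate repeatedly refined and then related back to the exact projectivity-and-gain group parameters.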

A quantigraphic version of VideoOrbits is also based on the fact that the unknown nonlinearity of the camera, $f$, can be recovered from differently exposed images $f(q)$ and $f(kq)$, etc., and that these can be combined to estimate the actual quantity of light entering the imaging system:

\begin{displaymath}
\hat{q}({\bf x}) = \frac{\displaystyle\sum_i c_i\!\left(\frac{A{\bf x}+b}{c{\bf x}+1}\right) \frac{1}{k_i}\, f^{-1}\!\left(F_i\!\left(\frac{A{\bf x}+b}{c{\bf x}+1}\right)\right)}{\displaystyle\sum_i c_i\!\left(\frac{A{\bf x}+b}{c{\bf x}+1}\right)}
\end{displaymath}

where $c_i$ is the derivative of the recovered nonlinear response function of the camera, $f$, and $A$, $b$, and $c$ are the parameters of the true projective coordinate transformation of the light falling on the image sensor. This method allows the actual quantity of light entering the reality-mediator to be determined. In this way, the reality-mediator absorbs and truly quantifies the rays of light entering it. Moreover, light entering the eye due to real and virtual objects is thereby placed on an equal footing.
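The certainty-weighted combination above can be sketched for already-registered images (i.e., ignoring the projective coordinate transformation) as follows. The camera response $f(q) = q^{1/\gamma}$ is an assumed stand-in for the recovered response function, and the certainty $c$ is its derivative, as in the equation; the function name and $\gamma$ value are illustrative.

```python
import numpy as np

def estimate_q(frames, gains, gamma=2.2):
    """Certainty-weighted quantigraphic estimate of the light q from
    differently exposed, already-registered images F_i = f(k_i q).

    Illustrative sketch: assumes a known response f(q) = q**(1/gamma),
    so f^{-1}(F) = F**gamma; certainty c = df/dq weights each estimate."""
    num = np.zeros_like(frames[0], dtype=float)
    den = np.zeros_like(frames[0], dtype=float)
    for F, k in zip(frames, gains):
        q_i = (F ** gamma) / k                 # (1/k_i) f^{-1}(F_i)
        # certainty: derivative of f evaluated at k_i * q_i
        c = (1.0 / gamma) * np.maximum(k * q_i, 1e-8) ** (1.0 / gamma - 1.0)
        num += c * q_i
        den += c
    return num / den                           # weighted average estimate
```

With a noiseless, unclipped pair of exposures this recovers $q$ exactly; in practice the certainty weighting matters because each exposure is reliable only in part of its range.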

Other researchers, such as Feiner [14,15,16,17], propose augmented-reality environments with some similar characteristics, although these are tethered to a specific location, in part because of their non-vision-based tracking. Feiner's work is seminal, and of great value, in the context of location-specific augmented reality. However, an object of Humanistic Intelligence is to be able to architect a personal visual reality that does not rely on specific environmental provisions.


Steve Mann
1998-09-15