
Think of the hologram as a glass window. A viewer can look through, under, over, around, and behind the image of the object in the glass. Normally, a hologram is lit with a single light source located 45 degrees above and in front of it. If light is shone at the hologram from a different angle, the image disappears or becomes washed out. However, the Lightspace hologram allows light to shine into the plate in arbitrary ways. So, while Lightspace represents the manner in which a scene will respond to light, the lighting itself isn't specified until the image is viewed. Mann records this information and stores the images in a database for future use.
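Because light superposes linearly, a scene recorded once per light source can be "relit" after the fact by weighting and summing those recordings. The following is a minimal sketch of that idea only; the `relight` function, pixel values, and weights are invented for illustration and are not Mann's actual system.

```python
# Minimal sketch of the Lightspace idea: since light adds linearly,
# images captured under individual lights can be combined later with
# arbitrary weights to simulate lighting chosen at viewing time.
# Pixel values and weights below are invented for illustration.

def relight(basis_images, weights):
    """Combine per-light images into one image lit by a chosen mix."""
    n = len(basis_images[0])
    out = [0.0] * n
    for img, w in zip(basis_images, weights):
        for i, p in enumerate(img):
            out[i] += w * p
    return out

# Two lights recorded separately (4-pixel "images"), then mixed:
left_light  = [100, 80, 20, 10]
right_light = [10, 20, 80, 100]
print(relight([left_light, right_light], [1.0, 0.5]))  # [105.0, 90.0, 60.0, 60.0]
```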

The light source being recorded is not important; neither is its color temperature. Rather, it is what the light does to the scene that Mann studies. For example, Mann has used his Lightspace theory to record images around Boston and Cambridge, and in "Strobe Alley," the MIT lab where Harold "Doc" Edgerton performed his work on stroboscopic light from 1927 until his death in January 1990.

Edgerton spent his life perfecting the strobe light to enable photographers to capture forever one fraction of a moment in time on film. Mann became intrigued by the metaphor of time in Strobe Alley.

"In photographing the lab, I tried to capture the feeling of the time that has passed. Time has frozen those things Doc created to freeze time," Mann said. "It was a very magical place; a very enchanting environment. Seeing the cables hanging in the corner, the microflash parts stacked in the box, and everything so delicately covered in dust. Those things have been stored there for decades and most of them still work."

Harold "Doc" Edgerton's lab, Strobe Alley at MIT, was photographed by Steve Mann. The scene was particularly striking to him as a metaphor: time had captured the very equipment built to stop time on film.

These images were showcased in an exhibit called "Lightspaces at MIT, Microseconds in Years." The lab was recently renovated and the equipment moved into the Edgerton Center at MIT. One of the many accomplishments for which Edgerton is noted is the use of electronic strobes in aerial photography in the Allied campaigns over Normandy during World War II.

In 1939, the U.S. Army Air Corps commissioned him to design a light powerful enough for aerial reconnaissance photography. In 1944, he served in Italy, England, and France as a technical representative directing the use of these lights, which provided the Army with vital intelligence information about enemy troop movements. His strobes were "used in the nights immediately preceding the D-day invasion of Normandy, during the Battle of Monte Cassino, and campaigns in the Far East," according to Stopping Time: The Photographs of Harold Edgerton, by Estelle Jussim (New York, 1987).

Mann used these same lights (40 kilojoule flash lamps) with his Lightspace system to create a number of images inspired by photographs Edgerton made of the campus and surrounding areas. Even now, there are no strobes more powerful, according to Mann.

For one image, he set up his equipment on the tallest building he could find in Cambridge to capture an image of Boston just across the Charles River. Edgerton had photographed Cambridge from Boston in the same manner. Mann set up his camera on one corner of the building and the lamps on another corner in order to get a longer baseline and avoid back-scatter from the fog along the river.

In addition to Edgerton's strobes, Mann used FFT17-30 flash lamps made by GE-Mazda. They were housed in large, chrome-plated, four-foot reflectors in order to deliver a usable quantity of light 1.5 to 2 miles across the river. He used a large number of capacitors that had also been sitting, untouched, in the lab's corridor. To trigger the setup, Mann used the FlashWizard radio slave from LPA Design. He was amazed to find that a FlashWizard used in one of the buildings on campus was affected by neither the physical presence of the equipment in the building nor the electronic noise that equipment generated.

Equipment to Expand

After trying it, he found that the FlashWizard fit nicely into his concept of network connectivity. In that vein, Mann is working toward interconnectivity over the World Wide Web (WWW). He envisions Internet surfers wearing a virtual reality-style head-mounted display (HMD) and viewing scenes from each other's perspectives.

"That kind of thing involves communication and getting things to look crisp rather than blurry," Mann said. "And, with motion, that involves strobe, which involves synchronization; that is where the FlashWizard fits in."

The general framework in which the FlashWizard was created appeals to Mann. It was not designed for a specific purpose; rather, it is simply a radio triggering device that can activate anything able to receive a radio signal. This generality lets users build it into their own ideas.

Steve Mann demonstrates some of his "Smart Clothing." While it is an awkward configuration now, he hopes future technology will help streamline it for a more comfortable fit.

Mann's Smart Clothing has light-measuring and light-producing instruments built into it. It can be used to illuminate a scene in any particular pattern -- much like a wearable pixel board.

If a rectangular light box were needed, the user would simply define the rectangular area among the hundreds of flashtubes on his shirt; a 10x10-inch grid, for example, would consist of 100 tiny flashtubes or camera lenses. "It is very clumsy and awkward right now," he said. "Things are hanging everywhere in a waist bag full of clutter and wires sticking out. However, as technology improves, we could have clothing that images the world around us and evaluates the lightspace."
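Selecting such a rectangular region within a larger grid can be sketched as follows. The grid dimensions and the row/column addressing scheme here are assumptions for illustration, not details of Mann's hardware.

```python
# Hypothetical sketch: addressing a rectangular "light box" within a
# wearable grid of flashtubes. Grid size and addressing scheme are
# assumptions for illustration only.

def select_rectangle(rows, cols, top, left, height, width):
    """Return the (row, col) addresses of flashtubes inside a rectangle,
    clipped to the grid boundaries."""
    return [(r, c)
            for r in range(top, min(top + height, rows))
            for c in range(left, min(left + width, cols))]

# A 10x10 region selected from a larger grid addresses 100 tubes at once.
tubes = select_rectangle(rows=30, cols=40, top=5, left=10, height=10, width=10)
print(len(tubes))  # 100
```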

Mann is currently experimenting with several camera setups to attain this level of imaging. However, they are still too large and clumsy for practical use. He is also building the NetCam for use on the WWW, which he hopes will allow Internet users to view the holographic function. The NetCam consists of two cameras mounted in an HMD. Eventually, this will become a lattice of cameras that will give a better sense of presence, of actually being on the scene.

Lightspace is a multi-dimensional representation of the subject. Those dimensions include the position of the sensor on the X, Y, and Z axes (that is, 3-D); the azimuth and elevation of the sensor (making it 5-D); and the wavelength of the sensor and the time at which the sensor is evaluated (for a total of 7-D). The source has the same seven dimensions, so together they make 14 dimensions. Therefore, the Lightspace of a scene is a real-valued function of 14 real variables.
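Written out explicitly, with symbols chosen here for illustration (unprimed coordinates for the sensor, primed for the source), the dimension count above becomes:

```latex
% Lightspace as a real-valued function of 14 real variables:
% sensor position (x, y, z), direction (azimuth \theta, elevation \phi),
% wavelength \lambda, and time t, plus the same seven quantities
% (primed) for the source. Notation chosen here for illustration.
L\bigl(x, y, z, \theta, \phi, \lambda, t;\;
       x', y', z', \theta', \phi', \lambda', t'\bigr) \in \mathbb{R}
```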

Each of these areas of study will eventually lead Mann to a Ph.D. in the discipline of video understanding. It is currently possible to perform keyword searches for text in a database, but there is no convenient way to search through video.

"People have hours of unlogged home video. One of the things we are interested in is getting the computer to search through the video automatically. That is a hard task," Mann said. One aspect of video that is detectable is a scene change. It is possible to advance to the next scene on the video with the push of a button. Other methods are also being explored.
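Scene-change detection, the one detectable property mentioned above, is often illustrated with simple frame differencing: a cut shows up as a large jump between consecutive frames. The sketch below is a common textbook approach, not necessarily the method Mann's group used, and the threshold and toy frames are invented.

```python
# Minimal sketch of scene-change (cut) detection by frame differencing.
# A "frame" here is a flat list of pixel intensities; a real system
# would work on decoded video frames. Threshold is illustrative.

def scene_changes(frames, threshold=30.0):
    """Flag frame indices where the mean absolute pixel difference
    from the previous frame exceeds a threshold (a likely cut)."""
    changes = []
    for i in range(1, len(frames)):
        prev, cur = frames[i - 1], frames[i]
        diff = sum(abs(a - b) for a, b in zip(prev, cur)) / len(cur)
        if diff > threshold:
            changes.append(i)
    return changes

# Toy example: three "frames" of four pixels; the jump at frame 2
# registers as a scene change.
frames = [[10, 10, 10, 10], [12, 11, 10, 9], [200, 200, 200, 200]]
print(scene_changes(frames))  # [2]
```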

As a child, Mann was intrigued by the way lights from a passing car would sweep around a dark room, "the way light and shadows play on things." During his studies at MIT, Mann said, "I feel that I have gained a better understanding and appreciation of light -- a love of light, you might say."


© 1995 PHOTO>Electronic Imaging