October 27, 2009 –
In the 2002 futuristic movie “Minority Report,” officer John Anderton, played by Tom Cruise, uses a series of hand gestures to manipulate multiple images on large circular screens at police headquarters.
Now Aditi Majumder, an associate professor of computer science, is using a recently awarded five-year, $632,000 NSF CAREER grant to develop a similar system, one she envisions as an integral component of the collaborative workspace of the future.
The project – Ubiquitous Displays via a Distributed Framework – has such a wide range of potential applications that it was selected for funding under the American Recovery and Reinvestment Act, more commonly known as the stimulus program.
The proposed displays will consist of multiple projector-camera units known as Plug-and-Play Projectors (PPPs). Each PPP is “smart”: it can sense its surroundings through its camera, and it can compute and communicate wirelessly through embedded chips.
The integrated displays will not only present information but will actively participate, interacting with human users, data sets and other objects in the room. The built-in cameras and embedded communication and computation capabilities allow the units to “know” the locations of neighboring units, transfer data among themselves and project seamless images onto any surface.
The ubiquitous display uses a completely distributed paradigm. Rather than relying on a central server to connect the PPPs, the system makes each unit self-sufficient, automatically connecting with other units and re-calibrating as needed.
Because each unit contains all necessary software and is an independent entity, the system will be completely scalable, self-assembling and exceptionally user-friendly. In other words, it doesn’t matter if the user needs 5 PPPs or 100; the system will adapt, reconfigure and self-calibrate in the same way. Better yet, if one unit fails, it can be removed without disrupting the whole display.
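To make the distributed design concrete, here is a minimal sketch of how one such self-assembling unit might behave. Everything in it – the PPPUnit class, the discovery port, the recalibrate placeholder – is a hypothetical illustration of peer discovery over UDP broadcast, not the project's actual software.

```python
# Hypothetical sketch of a self-assembling projector-camera unit.
# Each unit announces itself, tracks its neighbors, and recalibrates
# whenever a unit joins or fails; no central server is involved.
import socket
import threading
import time

DISCOVERY_PORT = 50007     # assumed port for peer-discovery broadcasts
HEARTBEAT_INTERVAL = 2.0   # seconds between "I am here" announcements
PEER_TIMEOUT = 6.0         # drop a peer after this many silent seconds


class PPPUnit:
    def __init__(self, unit_id: str):
        self.unit_id = unit_id
        self.peers = {}        # peer id -> last time we heard from it
        self.lock = threading.Lock()

    def announce_loop(self):
        """Periodically broadcast our presence to nearby units."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        while True:
            sock.sendto(self.unit_id.encode(), ("<broadcast>", DISCOVERY_PORT))
            time.sleep(HEARTBEAT_INTERVAL)

    def listen_loop(self):
        """Track neighbors; recalibrate whenever a new unit appears."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", DISCOVERY_PORT))
        while True:
            data, _ = sock.recvfrom(1024)
            peer_id = data.decode()
            if peer_id == self.unit_id:
                continue
            with self.lock:
                is_new = peer_id not in self.peers
                self.peers[peer_id] = time.time()
            if is_new:
                self.recalibrate()

    def reap_loop(self):
        """Remove failed units; the rest of the display keeps working."""
        while True:
            time.sleep(PEER_TIMEOUT)
            now = time.time()
            with self.lock:
                dead = [p for p, t in self.peers.items()
                        if now - t > PEER_TIMEOUT]
                for p in dead:
                    del self.peers[p]
            if dead:
                self.recalibrate()

    def recalibrate(self):
        """Placeholder: a real unit would observe its neighbors' projected
        test patterns with its camera and warp/blend its own output."""
        with self.lock:
            print(f"{self.unit_id}: recalibrating with {sorted(self.peers)}")

    def run(self):
        for loop in (self.announce_loop, self.listen_loop, self.reap_loop):
            threading.Thread(target=loop, daemon=True).start()


if __name__ == "__main__":
    PPPUnit("ppp-1").run()
    time.sleep(30)  # let the unit discover any neighbors on the network
```

Because every unit runs the same loops, adding a hundredth projector is no different from adding the second, and a failed unit simply times out of its neighbors' peer lists.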
Majumder foresees the units being available in the not-too-distant future at consumer electronics stores. She believes they will be inexpensive and easily installed, just like today’s home theater systems.
“People have been working in human-computer interaction, large-scale data generation, high-performance networking, rendering and resource management,” she says. “What hasn’t really been worked on is how we can connect the gaps between our displays and our input devices. So we are developing the active display, which has the ability to sense everything around it and interact with its environment.”
Ubiquitous displays hold the promise of effortless multi-user interfaces. The system is being designed to respond to the most natural form of human interaction – hand and body gestures – the interpretation of which can be flexibly adapted from one application to another. A particular gesture will produce a specific response for scientists examining brain cells and a different one for artists creating a digital image.
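As a rough illustration of that adaptability, the sketch below maps one shared gesture vocabulary to different actions per application. The gesture names, application names and bindings are invented for the example; the actual system would recognize gestures from camera input rather than strings.

```python
# Hypothetical sketch: the same gesture triggers different actions
# depending on which application is active.
from typing import Callable, Dict

GESTURE_BINDINGS: Dict[str, Dict[str, Callable[[], None]]] = {
    "neuroscience": {
        "pinch": lambda: print("zoom into the selected brain-cell region"),
        "sweep": lambda: print("scroll through the image stack"),
    },
    "digital_art": {
        "pinch": lambda: print("scale the selected brush stroke"),
        "sweep": lambda: print("smear pigment across the canvas"),
    },
}


def handle_gesture(application: str, gesture: str) -> None:
    """Dispatch a recognized gesture to whatever the active app binds it to."""
    action = GESTURE_BINDINGS[application].get(gesture)
    if action is None:
        print(f"{application!r} has no binding for {gesture!r}")
    else:
        action()


handle_gesture("neuroscience", "pinch")  # zooms into the brain-cell region
handle_gesture("digital_art", "pinch")   # scales the brush stroke instead
```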
Furthermore, the approach will allow multiple users to work in different ways on individual sections of the data without disrupting the work of their collaborators. For example, if several users are annotating a large map, one may instruct the system to zoom in on his or her section while a co-located collaborator maintains a broader view or examines a different region.
The self-calibration and gesture-based interaction capabilities will allow users to drag and pull displays from one location to another or increase their size and resolution with just commonplace gestures. “If I am collaborating with somebody and I need additional projectors to increase the number of available pixels, I’ll just grab and move them like this,” Majumder says with a sweeping hand movement. “They will join the display and automatically calibrate so everything is seamless.”
Early response from industry has been positive. Disney, Canon and Epson are among the companies expressing interest in the technology. “We’re moving in the right direction – where the right people in the right industries are interested,” she says.
“This is about completely new kinds of displays, and completely new kinds of workspaces. Nobody really has explored this distributed paradigm of active displays that unifies both calibration and interaction.”
Note: The project is already earning recognition: Majumder and her student Behzad Sajadi won a runner-up best paper award at the IEEE Visualization 2009 conference in Atlantic City last week for “Markerless View-Independent Registration of Multiple Distorted Projectors on Extruded Surfaces Using an Uncalibrated Camera.” In addition, Majumder received the 2009 Annual Faculty Research Incentive award from the School of Information and Computer Sciences for her work with ubiquitous displays.