Microsoft Develops Stereo-3D Display with Haptic Feedback Technology

From X-bit Labs: Microsoft Research has developed a stereoscopic 3D monitor with haptic technology, which simulates the sense of touch through tactile feedback mechanisms. If the prototype becomes a commercial product, it could transform personal computers, the way people interact with one another and with objects, and many industries.

But how can a user touch and feel objects inside the virtual world? Can a flat touchscreen convey depth, weight, movement, and shape? Yes, say scientists in the natural interaction research group at Microsoft Research Redmond. Mike Sinclair, Michel Pahud, and Hrvoje Benko mounted a multitouch, stereo-vision 3D monitor on a robot arm to study how the kinesthetic haptic sense, which relates to motion rather than tactile touch, can augment touchscreen interactions.

The result is the Actuated 3D Display with haptic feedback, a project built around a haptic device that provides 3D physical simulation with force feedback. The system pairs a touchscreen with a robot arm engineered for instant, sensitive responsiveness and smooth forward and backward movement, along with applications that support multitouch interactions, force sensing, 3D visualization, and depth movement. By moving a finger across the screen, the user can interact with on-screen 3D objects and feel different force responses that correspond to the physical simulation.
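The idea of force responses that track a physical simulation can be illustrated with a standard one-dimensional haptic-rendering model: a spring law along the Z-axis, where stiffness alone distinguishes a soft virtual object from a rigid one. The sketch below is illustrative only; `render_force`, its parameters, and the spring model are assumptions for the example, not Microsoft's actual control code.

```python
def render_force(finger_z: float, surface_z: float, stiffness: float) -> float:
    """Return the resistive force the robot arm should apply along the Z-axis.

    finger_z  -- current finger depth (smaller = pushed further into the screen)
    surface_z -- depth at which the virtual object's surface sits
    stiffness -- spring constant modeling the object's rigidity
    """
    penetration = surface_z - finger_z   # how far the finger is "inside" the object
    if penetration <= 0.0:
        return 0.0                       # finger in free space: no resistance
    return stiffness * penetration       # Hooke's law: firmer objects push back harder


# A soft object (low stiffness) yields more easily than a rigid one
# at the same penetration depth.
soft = render_force(finger_z=0.8, surface_z=1.0, stiffness=50.0)
hard = render_force(finger_z=0.8, surface_z=1.0, stiffness=500.0)
```

In a real device this function would run inside a high-rate control loop driving the arm's motor, but even this toy version captures how varying one scalar (stiffness) lets a single axis of motion convey different materials.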

“I had been interested in the notion of putting a robot behind something you could touch. Originally, I had wanted a robot arm with many degrees of freedom. But complexity, costs, and safety issues narrowed down the options to one dimension of movement. At that point, I was sure that others must have already looked into this scenario, but after looking at the literature, it turned out no one had done this,” said Mr. Sinclair.

It also turned out that being limited to a robot armature with one dimension of movement – the Z-axis of the applications – provided valuable insights into how much or how little data humans need to detect the shape and type of object being touched.
