LightSpace and the interactive room. The future according to Microsoft

At Microsoft Research, work on technologies for recognizing and interacting with the environment has been going on for a long time. Kinect is the paradigmatic example, but not the only one. LightSpace is another project in this field from the Redmond research division. Combining elements of surface recognition with augmented reality, and relying on depth cameras and projectors, LightSpace makes every surface in the room, and even the free space between them, interactive.

The goal of LightSpace is to allow interaction in the everyday environment, using different elements as projection surfaces. The system displays graphics on tables or walls, letting users manipulate the projected content with their hands and gestures. Although only a table and a wall appear in the demos, LightSpace can recognize many more surfaces and use them as interactive displays.

Interacting with any surface

The system not only recognizes multiple hands, as a multi-touch screen would, but also adds new body-based interactions that allow transitions between surfaces. For example, we can move content from one surface to another simply by placing a hand on the object in question and touching the destination surface. In this way, LightSpace simulates content traveling through our bodies from one part of the room to another.
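The article does not spell out the implementation, but one plausible way to detect such a body-mediated transfer is connected-component analysis over the depth data: if the same physical blob of moving pixels (the user) touches both surfaces at once, the content follows. A minimal sketch in Python, with all inputs and names hypothetical:

```python
import numpy as np
from scipy import ndimage

def transfer_connects(moving_mask, source_mask, dest_mask):
    """Check whether a single connected blob of moving depth pixels
    (e.g. a user's body and arm) touches both the source surface
    and the destination surface at the same time."""
    labels, _ = ndimage.label(moving_mask)            # connected components
    at_source = set(np.unique(labels[source_mask])) - {0}
    at_dest = set(np.unique(labels[dest_mask])) - {0}
    return bool(at_source & at_dest)                  # same blob spans both
```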

In addition to moving content from one surface to another, the system lets you pick it up and hold it in your hands as if it were a physical object. To do this, it projects a red ball onto the hand, representing the content in question. The user can carry it across the room like this and even hand it to other users. To return it to a surface, you simply bring it close enough, and the content is rendered on that surface again.
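That final hand-off could be as simple as a proximity test between the tracked hand and each known surface. A rough sketch, assuming surfaces are stored as axis-aligned boxes and noting that the 0.15 m threshold is an invented value:

```python
import numpy as np

SNAP_DISTANCE = 0.15  # meters; this threshold is an assumption

def try_drop(hand_pos, surfaces):
    """Return the surface that held content should land on, if the
    hand carrying it is close enough; otherwise None."""
    for surface in surfaces:  # each surface as {"min": ..., "max": ...}
        closest = np.clip(hand_pos, surface["min"], surface["max"])
        if np.linalg.norm(hand_pos - closest) < SNAP_DISTANCE:
            return surface
    return None
```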

The system's depth detection also makes it possible to generate representations beyond the surfaces themselves. Thus, as shown in the demo, we can navigate a menu's options by raising or lowering a hand in the air: the system detects the hand's height and switches between the options accordingly. Like the rest, this is just one example of what could be accomplished with LightSpace.
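Mapping that height to an option is essentially a quantization step. A minimal sketch, where the spacing and option count are made-up parameters:

```python
def menu_option(hand_height, base_height, item_height=0.05, n_items=5):
    """Map the hand's height above the menu's base to an option index,
    so raising or lowering the hand scrolls through the options."""
    index = int((hand_height - base_height) / item_height)
    return max(0, min(n_items - 1, index))  # clamp to a valid option
```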

How does it work?

To make the room interactive in this way, the system uses multiple depth cameras and projectors. These are calibrated to detect the real position of the objects and surfaces in the room, allowing graphics to be rendered on them. To do this, each camera records the depth at which every object sits, distinguishing the room's static objects from moving ones such as the users themselves. Each pixel is converted to a real-world coordinate.
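That last step is the standard pinhole-camera back-projection. A sketch of the conversion, assuming the intrinsics (fx, fy, cx, cy) and the 4x4 camera-to-world pose come out of the calibration step:

```python
import numpy as np

def pixel_to_world(u, v, depth, fx, fy, cx, cy, cam_to_world):
    """Back-project a depth pixel (u, v) into camera space with the
    pinhole model, then move it into room coordinates using the
    camera's calibrated 4x4 pose matrix."""
    x = (u - cx) * depth / fx              # camera-space X
    y = (v - cy) * depth / fy              # camera-space Y
    point = np.array([x, y, depth, 1.0])   # homogeneous coordinates
    return (cam_to_world @ point)[:3]      # world-space X, Y, Z
```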

LightSpace uses this data to build a 3D mesh of the elements in the room, detecting the surfaces onto which content can be projected. This model of the room is then used to recognize users' interactions with the environment: the same cameras detect the users' movements, distinguishing their contours and locating the position of their hands accurately.
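The article does not say how LightSpace picks out those projection surfaces; one common technique for finding flat regions in a depth-derived point cloud is RANSAC plane fitting, sketched here:

```python
import numpy as np

def fit_plane_ransac(points, iters=200, tol=0.01, seed=0):
    """Find a dominant flat surface (e.g. a tabletop) in an N x 3
    point cloud by sampling point triplets and counting inliers."""
    rng = np.random.default_rng(seed)
    best_count, best_plane = 0, None
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        length = np.linalg.norm(normal)
        if length < 1e-9:
            continue                           # degenerate (collinear) sample
        normal /= length
        dist = np.abs((points - p0) @ normal)  # point-to-plane distances
        count = int((dist < tol).sum())
        if count > best_count:
            best_count, best_plane = count, (p0, normal)
    return best_plane                          # (point on plane, unit normal)
```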

The system thus interprets the user's gestures to carry out each of the implemented actions. We have already seen some of them: working with content on any surface, transferring it from one surface to another, or carrying it around the room as if it were a real object. But the system could implement other instructions, allowing more actions on any surface.
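Extensibility of that kind usually comes down to a dispatch table: recognized gestures map to handlers, and a new interaction is just one more entry. A hypothetical sketch (none of these gesture or handler names come from the article):

```python
def move_between_surfaces(event): print("move", event)
def pick_up_from_surface(event): print("pick up", event)
def drop_onto_surface(event): print("drop", event)

# Gesture-to-action table: registering a new pair adds an interaction
# without touching the recognition pipeline itself.
ACTIONS = {
    "touch_two_surfaces": move_between_surfaces,
    "swipe_off_surface": pick_up_from_surface,
    "approach_surface": drop_onto_surface,
}

def handle_gesture(name, event):
    handler = ACTIONS.get(name)
    if handler is not None:
        handler(event)
```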

This is where everyone's imagination can be put to work. The goal is to turn any surface into an interactive screen. Without the need for additional monitors or motion sensors, LightSpace transforms any room into a new space for work or play: something straight out of the future.

In Xataka Windows | The future according to Microsoft
More information | Microsoft Research
