OmniTouch
Table of contents:
- How OmniTouch works: recognizing keystrokes
- How OmniTouch works: projecting the image
- Precise technology with many possibilities
One of the great revolutions in computing has been touch screens. They brought a new way of interacting with the computer, either with a special pointer or with your finger. In 2011, Microsoft went further with the introduction of OmniTouch, a project that made any surface touchable. The basic idea is to mount a camera-and-projector device on the shoulder, which projects the screen and reads the user's keystrokes. The possibilities are endless, allowing us to turn our hand, the wall, a sheet of paper or almost any other surface into a touch screen.
How OmniTouch works: recognizing keystrokes
The most important part of the OmniTouch project is tracking the position and depth of the fingers, to know where the user is touching. To do this, a depth-sensitive PrimeSense camera was used in the prototype. Unlike a normal camera, which measures colors, the PrimeSense camera measures the distance from each point in the image to the camera lens. Its 1 mm accuracy and 20 cm minimum range are its main advantages over the Kinect camera, which was originally used in the project.
To locate fingers, OmniTouch first captures the depth map (A). Then, the slope map is calculated: how the depth changes from each pixel to the next.
In (B) you can see that map translated into colors: red means the depth decreases in the positive direction of the X or Y axis (upwards or to the right), blue means the depth decreases in the negative direction (downwards or to the left), and purple means there is hardly any change in depth.
With this map, the software looks for vertical cylindrical sections: a surface that approaches the camera, stays level, and then moves away, which is exactly the profile a finger has if you scan it from one side to the other. On the color map, that means looking for a red section, then a purple section, then a blue section, all on the same vertical axis.
Possible candidates are then filtered by height, to discard anything that cannot be a finger (for example, a cylinder only 2 millimeters tall cannot be a finger, so it is discarded). In figure (C) you can see all the finger sections identified.
Once this is done, all the vertical sections are joined together to form the finger (figure D). Fingers that are too short are discarded, and, assuming the user is right-handed, the leftmost part of the finger is taken to be the tip. And voila, we now know where the user is pointing.
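The slice search described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function name, the gradient threshold, and the width limits are all assumptions, and the input is assumed to be a 2-D depth map in millimeters.

```python
import numpy as np

def find_finger_slices(depth, grad_thresh=2.0, min_width=4, max_width=30):
    """Scan each row of the depth map for an approach/flat/recede pattern,
    i.e. the red-purple-blue sequence on the slope map. Thresholds are
    illustrative, not the values used by OmniTouch."""
    slices = []
    dx = np.diff(depth, axis=1)  # depth change along each row (the slope map)
    for r in range(depth.shape[0]):
        # -1: surface approaching the camera, 0: flat, +1: receding
        sign = np.where(dx[r] < -grad_thresh, -1,
                        np.where(dx[r] > grad_thresh, 1, 0))
        c = 0
        while c < len(sign):
            if sign[c] == -1:           # left edge of a candidate finger
                start = c
                while c < len(sign) and sign[c] == -1:
                    c += 1
                while c < len(sign) and sign[c] == 0:   # flat top
                    c += 1
                if c < len(sign) and sign[c] == 1:      # right edge
                    while c < len(sign) and sign[c] == 1:
                        c += 1
                    width = c - start
                    if min_width <= width <= max_width:  # height/width filter
                        slices.append((r, start, c))
            else:
                c += 1
    return slices
```

Adjacent slices would then be merged vertically into whole fingers, as the article describes.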
Now, how do we know whether the finger is touching the surface? The technique is called flood fill, but it may be more familiar if I tell you that it works like the paint bucket tool in Paint.
The technique is simple: locate the midpoint of the finger and begin to fill pixels upwards, to the left, and to the right, with a tolerance of 13 millimeters. That is, a pixel is filled only if the difference between its depth and that of the finger's midpoint is less than 13 millimeters.
This way, if your finger is not touching anything, only the pixels belonging to the finger are filled. If it is touching the hand, many more are filled. In the image you can see what happens when the finger is in the air (left) or touching the hand (right). When the number of filled pixels exceeds a certain threshold, the software sends a tap or click at the corresponding place.
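The flood-fill test can be sketched as a breadth-first fill that expands only up, left, and right from the fingertip. The 13 mm tolerance comes from the article; the function names and the click threshold are hypothetical.

```python
from collections import deque
import numpy as np

def flood_count(depth, seed, tol=13.0):
    """Count pixels reachable from the finger midpoint whose depth is
    within `tol` mm of the seed, expanding only up, left, and right."""
    h, w = depth.shape
    seed_depth = depth[seed]
    seen = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r, c - 1), (r, c + 1)):  # up, left, right
            if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in seen
                    and abs(depth[nr, nc] - seed_depth) < tol):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return len(seen)

def is_touching(depth, seed, tol=13.0, click_threshold=2000):
    # Hypothetical threshold: when the fill spills past the finger onto
    # the surface, the pixel count jumps, which we read as a touch.
    return flood_count(depth, seed, tol) > click_threshold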
How OmniTouch works: projecting the image
Although finger recognition is the central part, we cannot forget that OmniTouch also has to project an image onto any surface. The depth camera is used for this too. All surfaces in the image are detected using a connected-components algorithm, which very efficiently finds groups of interconnected points in the image.
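A connected-components pass over a depth map can be sketched as follows. This is a generic labeling sketch, not OmniTouch's implementation; the depth-similarity tolerance is an assumption, and a real system would use an optimized routine (e.g. `scipy.ndimage.label` or OpenCV's `connectedComponents`).

```python
from collections import deque
import numpy as np

def label_surfaces(depth, tol=10.0):
    """Group neighbouring pixels of similar depth into connected
    components, so each label corresponds to one candidate surface.
    The tolerance `tol` (mm) is illustrative."""
    h, w = depth.shape
    labels = np.zeros((h, w), dtype=int)  # 0 means "not yet labeled"
    count = 0
    for r in range(h):
        for c in range(w):
            if labels[r, c] == 0:
                count += 1
                labels[r, c] = count
                queue = deque([(r, c)])
                while queue:
                    y, x = queue.popleft()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and labels[ny, nx] == 0
                                and abs(depth[ny, nx] - depth[y, x]) < tol):
                            labels[ny, nx] = count
                            queue.append((ny, nx))
    return labels, count
```

Components smaller than a hand would then be discarded, as the article describes next.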
Once the surfaces smaller than a hand have been discarded, a center or reference point is fixed for projecting the image. This point helps detect the orientation of the surface and therefore allows an image to be created that does not look distorted.
The next difficulty comes when detecting the size of the surface. Since the edges of surfaces cannot be recognized reliably enough, OmniTouch uses the mean and standard deviation of the component's points to classify it into one of five classes: hand, arm, notebook, wall, and table. Each class has a preset size and a center for the image.
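One way to picture this classification step is a nearest-neighbour match of the component's statistics against per-class prototypes. The prototype values and the distance rule below are entirely hypothetical; the article only says that the mean and standard deviation of the component's points are used to pick one of the five classes.

```python
import math

# Hypothetical (mean depth mm, depth std mm) prototypes; the real
# values are not given in the article.
PROTOTYPES = {
    "hand":     (350.0,  8.0),
    "arm":      (400.0, 15.0),
    "notebook": (500.0,  5.0),
    "table":    (700.0, 20.0),
    "wall":    (1200.0,  3.0),
}

def classify_surface(mean_depth, depth_std):
    """Return the class whose prototype is closest to the component's
    (mean, std) statistics. A sketch, not the paper's exact rule."""
    return min(PROTOTYPES,
               key=lambda k: math.hypot(PROTOTYPES[k][0] - mean_depth,
                                        PROTOTYPES[k][1] - depth_std))
```

Each class then supplies the preset projection size and center mentioned above.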
With all this data, the software generates the image to be projected, warping it so that it appears correctly on the surface. It then passes the image to the projector, which displays it on whatever surface it is.
Precise technology with many possibilities
Tests used to measure the accuracy of OmniTouch.
In testing, OmniTouch proved to be a very precise technology: 96.5% accuracy in recognizing a click, a very good figure, and even more so considering that it is a prototype. Regarding the size of the interface, with buttons 2 centimeters in diameter, 95% of keystrokes were recognized.
This minimum size is necessary for an interface projected on the hand. On surfaces further away, such as a table or a wall, it could be reduced to 15 millimeters, roughly the size recommended for a button on a conventional touch screen.
As for the possibilities, they are endless. With the prototype, a painting interface was created: you drew on the wall and chose the colors on your left hand. It was also used as a highlighter.
But the most interesting thing is what they mention at the end of the document: the possibilities that OmniTouch opens up when we stop considering two-dimensional surfaces, taking advantage of the shapes of the body to change how we interact with the computer.
OmniTouch is a truly exciting project, both in its technique and in its possibilities. We will talk about it again soon in the special The future according to Microsoft.
In Xataka Windows | The future according to Microsoft
More information | OmniTouch