Intel’s RealSense is Kinect on steroids

This year’s Intel Developer Forum started with music. But there was something odd here — the two musicians on stage had no instruments and were literally playing in the air. And yet, the result was good music. The magic behind this was Intel’s RealSense technology, which tracked the hand movements of the musicians, passed them on to a computer and then whipped up a virtual concert. Showing off a practical use of your technology certainly beats raving about it through a few PowerPoint slides.

Intel's Euclid RealSense developer kit: Intel has just made it easier for developers building RealSense software

And when Intel CEO Brian Krzanich took the stage, he noted that it was not just technology that was changing, but Intel itself. Among other things, the company is currently focused on redefining the experience of computing. And it seems that if Intel has its way, in the future we will not be computing in virtual worlds or even augmented ones. Instead, we will be mixing them up.

Merged reality

Intel calls it merged reality. The company showed off a prototype headgear with a key advantage over the likes of the Oculus Rift and the HTC Vive — there were no wires snaking from it to a computer. And though you had the headgear strapped over your eyes, you could still see things around you and move about freely without tripping. Intel had embedded its RealSense cameras into the headgear, and the result is merged reality, where objects from the real world are overlaid on the virtual one. Imagine working on a computer with a massive 200-inch display, your fingers tapping away on a keyboard, while in the background you see the rings of Saturn as your spaceship floats by… sounds amazing? Well, the only real bit here is your hands, which have been transplanted into this virtual world. Everything else, even the keyboard, has been conjured up by a computer and piped to your VR headgear. Intel hopes its chips will power both the headgear and the computer in this merged reality experience.

Heart to heart

While merged reality is just a year or two away — Intel expects hardware from partners to start showing up on store shelves later in 2017 — fast forwarding a decade or two brings us to the Star Trek style of computing. Why type when you can simply talk to an AI-powered entity? By then, the computer itself would not be something trapped inside a box, but most probably spread across a server farm on another continent, and you would be chatting with a disembodied voice. Cloud computing already lets you talk to your phone, but the interaction is rather limited — nowhere close to the version shown in Spike Jonze's critically acclaimed Her, where "computers compose music, carry on seamless conversations with humans, organise emails instantaneously, and even fall in love". Maybe even have heartbreaks that lead to a futuristic version of computer breakdown, where she/he/it sulks in a virtual corner and refuses to work.

Emotive computing

Now here is an intriguing idea from Rana el Kaliouby, Chief Strategy and Science Officer at Affectiva, an MIT spin-off working on emotion recognition technology. Imagine a web of smart devices around you, each embedded with a chip that relies on various sensors to track your emotional state. For example, the mirror at home could use optical tracking to observe your eyes and work out how you are feeling. This information is then shared with your computers and other devices, which adjust their responses to you. If you are not in the mood, the computer could skip the cheery "Hello there! Lovely morning, isn't it!" routine, and might instead go for a sombre "How may I help you today?". This adjustment continues as your mood changes — imagine everything from the background wallpaper to the colour scheme to the screen brightness dynamically responding to your current emotions. As el Kaliouby told Wired, "In this mood-aware Internet of things, the emotion chip — always activated with our permission — will make our technology interactions more genuine and human."
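
To make the flow concrete, here is a minimal sketch of how a device might pick its greeting, assuming a hypothetical emotion chip that publishes a mood label and a confidence score. None of this is Affectiva's actual SDK; the names are made up for illustration.

```python
# Hypothetical mood-aware greeting, not Affectiva's actual API.
# Assumes an "emotion chip" shares a reading such as mood="sad", confidence=0.8.

from dataclasses import dataclass


@dataclass
class EmotionReading:
    mood: str          # e.g. "happy", "neutral", "sad"
    confidence: float  # 0.0 to 1.0

GREETINGS = {
    "happy": "Hello there! Lovely morning, isn't it!",
    "neutral": "Good morning. What would you like to do?",
    "sad": "How may I help you today?",
}


def choose_greeting(reading: EmotionReading) -> str:
    """Pick a greeting that matches the user's detected mood.

    Falls back to the neutral greeting when the sensor is unsure,
    so a shaky reading never produces a jarringly wrong tone.
    """
    if reading.confidence < 0.6:
        return GREETINGS["neutral"]
    return GREETINGS.get(reading.mood, GREETINGS["neutral"])


# Example: the bathroom mirror saw tired eyes this morning.
print(choose_greeting(EmotionReading(mood="sad", confidence=0.8)))
# -> "How may I help you today?"
```

The interesting part of the pitch is that the reading is shared across every device you own, so the mirror, the phone and the PC would all respond in the same tone.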

Perceptual computing

Coming back to ground reality, we stumble upon perceptual computing, which builds upon existing hardware — no fancy headgear required here. The computer and you rely on a combination of facial expressions, voice and hand gestures to communicate — like "two friends chatting in a cafe". You can see early but limited examples of it in the Xbox Kinect, where cameras track your hand gestures and movements, or in the Leap Motion tracker, a tiny device that plugs into existing PCs and Macs and enables rudimentary gesture control. Intel has already committed $100 million (Dh368 million) to the perceptual computing project and has been holding contests "challenging developers to create innovative application prototypes" using such natural human interfaces.
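
As a rough illustration of the plumbing behind such interfaces, here is a hypothetical sketch of mapping recognised hand gestures to application commands. It is not tied to the real Kinect or Leap Motion SDKs; the gesture names and commands are assumptions for the example.

```python
# Hypothetical gesture-to-command mapping, illustrating the perceptual
# computing idea rather than any real Kinect or Leap Motion API.

from enum import Enum


class Gesture(Enum):
    SWIPE_LEFT = "swipe_left"
    SWIPE_RIGHT = "swipe_right"
    OPEN_PALM = "open_palm"
    PINCH = "pinch"

# Each recognised gesture maps to an action the application understands.
COMMANDS = {
    Gesture.SWIPE_LEFT: "previous_slide",
    Gesture.SWIPE_RIGHT: "next_slide",
    Gesture.OPEN_PALM: "pause",
    Gesture.PINCH: "zoom_in",
}


def handle_gesture(gesture: Gesture) -> str:
    """Translate a recognised hand gesture into an application command."""
    return COMMANDS.get(gesture, "ignore")


# A frame from a (hypothetical) hand tracker recognised a right swipe:
print(handle_gesture(Gesture.SWIPE_RIGHT))  # -> "next_slide"
```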

Computing on the go

Another idea that is here and now — well, almost — is to bake desktop-grade computing into the smartphone. Sure, you can already do a few things on a mobile, but you cannot run software built for PCs — everything from your accounting program to image editing tools like Photoshop. While mobile apps exist for some of these, they are not as powerful as the desktop versions. And this is precisely the gap Microsoft is hoping to fill with Continuum: plug the phone into a big TV, attach a keyboard and mouse (or use the phone as a touchpad), and voila, you have a desktop PC in front of you. At least in theory — Continuum is still a work in progress, and it currently supports Microsoft's Office suite plus a growing list of apps across categories. And oh, Candy Crush is already on that list.