You might have heard from the tech news outlets about the Leap Motion. If you haven’t yet, you will soon. It’s a black box about half the size of an average mouse (computer, not actual) that sits on your desk and tracks the 3D movement of your hand (actual, not computer). Essentially, it’s like an Xbox Kinect, but instead of tracking limb movements it tracks fingers. The devices are being released in mid-July, but the here.com team was lucky enough to get into the developer preview program so that we could integrate the Leap with here.com.
(A quick aside: it’s pretty cool that we get to play with cool toys like this as our actual job. You should totally think about joining our team.)
3D and the Web
Enough Talk, show me!
If you’ve got a Leap, have read this far, and just want the demo, go to here.com/leap. If you haven’t got one yet, this video should explain what we did:
Note: WebGL isn’t supported across all browsers yet. If you don’t have a WebGL-enabled browser, there’s still the rest of here.com to explore.
WebGL applications are already quite CPU-intensive. Once we started moving our 3D globe around quickly enough to keep up with the hand-tracking data provided by the Leap, we kept the CPU pretty busy. The experience is still nice and smooth, though, with 3D maps rendered at 60FPS.
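One common way to keep rendering smooth when a sensor emits data faster than you can draw is to hold on to only the most recent frame and consume it once per render tick. This is an illustrative sketch of that pattern, not the actual here.com code; the function names are assumptions.

```javascript
// Illustrative sketch: decouple the Leap's frame rate from the render
// loop by keeping only the latest frame. Older frames are simply
// overwritten, so the renderer never works through a stale backlog.
var latestFrame = null;

function onLeapFrame(frame) {   // called at the device's frame rate
  latestFrame = frame;          // overwrite; we only care about "now"
}

function renderTick(render) {   // called once per requestAnimationFrame
  if (latestFrame !== null) {
    render(latestFrame);        // draw using only the newest hand data
    latestFrame = null;         // don't redraw the same data next tick
  }
}
```

In practice you would wire `onLeapFrame` into the Leap's frame callback and call `renderTick` from your animation loop, so the 3D scene redraws at most once per display refresh no matter how fast tracking data arrives.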
Gestures are hard
There are numerous native libraries available to help process gestures but translating this to an entirely web-based application is not easy. Gesture detection can be computationally heavy and, as mentioned above, we’re already pushing the computer quite hard. We prototyped with the $1 gesture library but in the end, removed gesture recognition from our demo.
No established patterns
The biggest thing we discovered, and one that might not be obvious at first, was not strictly technical. There are very few examples of 3D interactions. This is a new paradigm of computer interfaces, and we struggled to find something that was familiar enough to be intuitive but precise enough not to feel unforgiving. Most of our early attempts involved introducing an arbitrary constraint to the ‘gesture space’. The most successful was probably the ‘virtual screen’, which registers movements as ‘active’ if they are within a small z-distance of the device. In essence, it reduces the space in which user interactions are detected to an invisible screen parallel to the Leap. By doing this, you can fall back to typical touch gestures – pan, pinch-zoom – with the benefit of also having a hover state when the user’s fingers are detectable but not ‘touching’ the virtual screen.
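The virtual-screen idea boils down to classifying each fingertip by its z-coordinate. Here is a minimal sketch of that classification; the plane position, hover range, and function names are assumptions for illustration, not the actual here.com implementation.

```javascript
// Sketch of the 'virtual screen': an invisible plane parallel to the
// Leap. The Leap reports positions in millimetres with the z axis
// running toward the user, so z = 0 is a reasonable place for the plane.
var TOUCH_PLANE_Z = 0;   // where the invisible screen sits (assumed)
var HOVER_RANGE = 80;    // fingers within 80mm behind it count as hovering

function classify(tipPosition) {
  var z = tipPosition[2];                      // positions are [x, y, z]
  if (z <= TOUCH_PLANE_Z) return 'touching';   // pushed through the plane
  if (z <= TOUCH_PLANE_Z + HOVER_RANGE) return 'hovering';
  return 'inactive';                           // too far back to count
}
```

Inside a Leap frame callback you would run each pointable's tip position through `classify`, routing 'touching' movements to pan and pinch-zoom handlers and using 'hovering' to drive a cursor or hover state.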
The most successful interaction is the one we eventually ended up going with – direct control. To take full advantage of the hand-tracking, we decided to simply map the movements of the user’s hand to the position and angle of the airplane. This not only simplified things but also had the effect of being intuitive. The user quickly learns that what they do is represented on screen.
On a technical level, this was much easier to accomplish than gesture-based control: we take the hand-position information coming from the Leap and pass it directly through to the position of the 3D camera in the WebGL world.
What do you think? Is this the future of computer interaction or just a great way to explore HERE 3D?