Brave New World

Experiment · Written by Andre Levi

We've learned a lot since writing this article in 2015. If you're interested in more on design for VR, check out The Evolution of App Design.

Way back in 2015, out of sheer whimsy, we decided to allocate several weeks to tinkering with the Oculus Rift and Leap Motion. This post summarizes our findings during that time.

The goal was to build a cohesive VR interface that played nicely with the hardware’s current limitations.

It turned out that those limitations were considerable. Some of them made our original ideas impossible — for example, we weren’t able to use WebVR+Three.js for easy distribution as we had hoped. Others made us radically rethink what sorts of interactions were possible, and which were plausible. Those constraints helped us build interactions that were fun, unique, and comfortable. But more importantly, they helped us rethink fundamental interactions in an environment far removed from most of our day-to-day work.

Initial research

We got things started by downloading the most popular Oculus Rift and Leap Motion demos and giving them a try. Half of the demos did not work—they either crashed immediately or were impossible to use. Given some of the difficulties with Unity’s compiler, it was understandable why so many apps were broken.

We played through the VR demos with a purpose: find interfaces with novel interactions. Two apps in particular offered what we were looking for.

Inspiration

Hauhet. VRARlab.

In Hauhet, players solve puzzles using their eyes and hands. Players select blocks by looking at them, and then move them based upon changes in hand position. The end result feels fluid and intuitive. Blocks don’t track the player’s hand directly — instead, the player triggers discrete motions. The decision to manipulate blocks with large-scale gestures was brilliant, as it greatly reduced the game’s dependency on precise hand-tracking.

Leap Garden. Ksiva.

LeapGarden features a straightforward button and slider interface. The slider works surprisingly well, and the buttons do what you’d expect. The menu is permanently positioned to the left of the starting field of vision, meaning that users have to turn their heads whenever they want to access the menu. This leaves the default field of vision uncluttered, at the cost of frequent head-turning. The trade-off seems reasonable, considering that users won’t be navigating the menu too often.

Riffing off existing research

While trying out existing demos, we also read Leap Motion’s excellent articles on NUI (Natural User Interface) design. One of the Thomas Street designers, Ronald, put together a concise summary of the key points.

Leap Motion UI summary. Ronald Viernes.

With ideas brewing, we decided that it was time to start making things.

Adventures in WebVR

We initially targeted Chrome and Firefox’s experimental WebVR builds. The prospect of distributing VR apps on the web, simply by pointing people to a URL, was too good to ignore.

The first version of our Planet Editor ran in the browser with WebVR.

We experimented with voice commands via Chrome’s built-in Web Speech API. However, despite being so prevalent in science fiction, voice commands in this context suffered from I’m-talking-to-my-computer-and-it-feels-awkward syndrome. People in the office were also getting annoyed by the constant screams of, “Rotate... ROTATE!!!” In the end, voice commands were vetoed.
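For context, here is a minimal sketch of that kind of voice-command hook using Chrome’s webkitSpeechRecognition. The command words and the handler functions are illustrative, not our actual set:

```javascript
// Minimal voice-command sketch using Chrome's Web Speech API.
// The command words and the handlers below are illustrative only.
var recognition = new webkitSpeechRecognition();
recognition.continuous = true;      // keep listening across phrases
recognition.interimResults = false; // act only on final transcripts

recognition.onresult = function (event) {
  var result = event.results[event.results.length - 1];
  var transcript = result[0].transcript.trim().toLowerCase();

  if (transcript.indexOf('rotate') !== -1) {
    rotatePlanet(); // hypothetical handler wired to the 3D scene
  } else if (transcript.indexOf('reset') !== -1) {
    resetView();    // hypothetical handler
  }
};

recognition.start();
```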

Development for WebVR ended up being slow and error-prone. We used Three.js, along with the official Leap Motion JavaScript library and an open-source Oculus–WebVR adapter. The trio worked together, but we spent a lot of time dealing with low-level concerns on all fronts.
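To give a sense of the glue code involved, here is a rough sketch of the Three.js and LeapJS wiring, with the Oculus/WebVR rendering adapter and the virtual hand meshes omitted; it is an approximation, not our exact code:

```javascript
// Basic Three.js scene plus a LeapJS tracking loop.
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
var renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

var latestFrame = null;

// LeapJS streams tracking frames; stash the most recent one for the render loop.
Leap.loop(function (frame) {
  latestFrame = frame;
});

function animate() {
  requestAnimationFrame(animate);

  if (latestFrame && latestFrame.hands.length > 0) {
    // palmPosition is [x, y, z] in millimeters, relative to the Leap controller.
    var palm = latestFrame.hands[0].palmPosition;
    // ...drive the virtual hands and interface from the tracking data here.
  }

  renderer.render(scene, camera);
}
animate();
```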

The browsers also imposed a 60 fps cap that could be worked around, but not in a way that was convenient for users. The cap resulted in occasional judder when moving the head, and an overall inferior experience compared to the 75 fps achievable in desktop apps.

Because of these challenges, we decided to abandon WebVR and adopt desktop as our platform.

Evolution of the project

When we first sat down and discussed the project, we envisioned an interface very similar to what you’d see in non-VR games: a 2D HUD overlay.

Planet Editor mock-up. Ronald Viernes.

However, as the project progressed and we got a better feel for the tools, we gravitated towards an interface that would exist in the same 3D space as the game. This change improved cohesion between the interface and game objects.

The first implementation

Our initial implementation was heading down the path of multi-level menu navigation. However, the more time we spent with buttons, the more we began to dislike them.

It was difficult to press buttons accurately with the Leap hands. Our virtual hands would contort sporadically, even when we kept our real hands completely still. Actions requiring precise movements—pressing a button with a single finger—took extreme determination and patience. Similarly, it was difficult to press a button precisely when it was grouped near others. Finally, the lack of perceived physical contact made even successful presses unsatisfying. As inhabitants of the real, physical world, we take the tactile qualities of buttons for granted.

Technical challenges aside, the idea of controlling a VR interface with buttons felt like we were importing the most boring controls from real life. We went back to the whiteboard, aiming for a concept more deserving of being called "futuristic".

The improved interface

The new, improved interface was a grid of selectable items in 3D space. A user would navigate columns and rows via left-and-right and forwards-and-backwards hand movements. By rotating the hand, users could activate items, which would either spawn objects or change a planet’s surface.
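A rough sketch of that mapping, assuming LeapJS hand data; the grid dimensions, working range, and roll threshold below are illustrative values, not our exact tuning:

```javascript
// Map the right hand's palm position to a grid cell, and its roll to activation.
// GRID_COLS, GRID_ROWS, the working range, and the roll threshold are illustrative.
var GRID_COLS = 4;
var GRID_ROWS = 3;

function gridSelection(hand) {
  // palmPosition is [x, y, z] in millimeters; x runs left/right, z forwards/backwards.
  var x = hand.palmPosition[0];
  var z = hand.palmPosition[2];

  // Normalize a comfortable working range (roughly -100..100 mm) to 0..1.
  var u = Math.min(Math.max((x + 100) / 200, 0), 1);
  var v = Math.min(Math.max((z + 100) / 200, 0), 1);

  return {
    col: Math.min(Math.floor(u * GRID_COLS), GRID_COLS - 1),
    row: Math.min(Math.floor(v * GRID_ROWS), GRID_ROWS - 1),
    // roll() is the hand's rotation around its forward axis, in radians.
    activated: Math.abs(hand.roll()) > Math.PI / 3
  };
}

Leap.loop(function (frame) {
  for (var i = 0; i < frame.hands.length; i++) {
    var hand = frame.hands[i];
    if (hand.type === 'right') {
      var selection = gridSelection(hand);
      // Highlight the cell at (selection.row, selection.col);
      // trigger its action when selection.activated becomes true.
    }
  }
});
```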

Inspired by Hauhet, our interface allowed users to target a planet simply by looking at it. This mechanic, coupled with the target planet’s selection affordance, made the interaction feel natural.
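Gaze targeting like this can be approximated in Three.js by raycasting from the center of the camera’s view. A minimal sketch, where the planets array and the highlight() affordance are assumptions:

```javascript
// Pick whichever planet sits at the center of the user's gaze.
// `planets` is an assumed array of THREE.Mesh objects; highlight() is an
// assumed selection affordance.
var raycaster = new THREE.Raycaster();
var screenCenter = new THREE.Vector2(0, 0); // center of the viewport in NDC

function gazeTarget(camera, planets) {
  raycaster.setFromCamera(screenCenter, camera);
  var hits = raycaster.intersectObjects(planets);
  return hits.length > 0 ? hits[0].object : null;
}

// Inside the render loop:
// var target = gazeTarget(camera, planets);
// if (target) { highlight(target); }
```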

We also learned a lot watching a handful of testers. Testers would discover the interface by accident, and then slowly learn the controls through trial-and-error. To speed up the learning process, we added a title card with visual cues.

Our thoughts on the final UI

Overall, we felt good about the interaction design for the interface. It made use of hand tracking in a novel way, albeit at the cost of a small learning curve.

The first barrier was that—unbeknownst to the user—only the right hand controls the interface. It seems like a silly point, but most users raised both hands up immediately when starting the demo. Second was learning that the right hand’s position—relative to the Leap’s sensor range—was mapped to the interface’s rows and columns.

The Leap provides pretty consistent data for hand rotation under normal circumstances. However, when the Leap was head-mounted on the Oculus, rotation tracking got a little spotty. White walls and glossy screens were also problematic for the Leap's infrared sensors.

New users (understandably) had a difficult time anticipating the Leap's tracking limitations. Their movements were often expressive, which the sensor's limited range struggled to capture. Extending beyond the ideal range caused their virtual hands to disappear or flap wildly.

Further reading

If you're curious about VR and looking for a better starting point, we wrote a very short series designed to help you get in the right mindset. And, as always, if you're interested in help with a VR experience (game, tool, product, etc), give us a shout.