Full body tracking in a virtual world

My Perception Neuron mocap system finally arrived last week. I ordered it after SIGGRAPH 2014 last year. Shipping was a bit delayed, and the price has risen dramatically since the Kickstarter (the configuration I paid $340 for now sells for over $1,200 on their website), but it seems to be a very slick setup so far.

Ever since I received my Oculus DK2 in the mail, I've been intrigued by the idea of a "holodeck" simulator: full body integration in a virtual world. I've already played around with the Xbox Kinect and Kinect 2. They're actually quite good at providing a decent skeleton to work with, but they're glitchy and the usable movement range is limited. I was hoping that Perception Neuron would help solve some of those issues.

So far it's been working well. Tracking is quite precise and has an amazing range: as long as you're within Wi-Fi range, the system will track you. You can walk around your entire house and be tracked the entire time.

The software ships with Unity integration, so that seemed to be the logical next step. After a morning's work, I managed to get a basic scene working in Unity with both Perception Neuron and Oculus VR support.
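
If you're curious what the integration is actually consuming: Axis Neuron (the suit's desktop software) can broadcast its motion data over a local TCP socket, and the Unity package wraps that stream for you. Here's a minimal sketch of reading the raw stream directly; the port number and the one-frame-per-line text format are assumptions based on my own Axis output settings, so treat it as illustrative rather than gospel:

    using System.IO;
    using System.Net.Sockets;
    using System.Threading;
    using UnityEngine;

    // Minimal sketch: read the raw motion stream that Axis Neuron broadcasts
    // over TCP. The shipped Unity integration does all of this (plus the
    // skeleton mapping) for you -- this is just to show what's on the wire.
    public class NeuronStreamReader : MonoBehaviour
    {
        public string host = "127.0.0.1";
        public int port = 7001;            // assumption: my Axis output port

        private Thread readerThread;
        private volatile string latestFrame;

        void Start()
        {
            readerThread = new Thread(ReadLoop) { IsBackground = true };
            readerThread.Start();
        }

        void ReadLoop()
        {
            using (var client = new TcpClient(host, port))
            using (var reader = new StreamReader(client.GetStream()))
            {
                string line;
                while ((line = reader.ReadLine()) != null)
                    latestFrame = line;    // one frame of joint channel data
            }
        }

        void Update()
        {
            if (latestFrame == null) return;
            // Parse the floats here and apply them to your rig's bone
            // transforms; the official package handles that mapping.
        }
    }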

Here's some video of basic Perception Neuron integration with Unity (that's my friend Mark acting as the guinea pig):

It's very promising considering that it makes something resembling mocap possible inside a home office. There's definitely drift, and I find that the system needs to be re-zeroed occasionally. Also, as you can see in the video, there are some registration issues with the hands and the knees, but I expect I could fix those if I knew more about setting up mocap rigs. Re-calibrating before that particular run might also have given better registration.
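
In the meantime, a cheap software-side workaround goes a long way. Something along these lines (a sketch, not the actual Neuron calibration; the root/hips parenting is an assumption about how your rig is set up) lets you snap the avatar back over the play-space origin whenever drift accumulates:

    using UnityEngine;

    // Sketch of a software "re-zero": attach to the mocap skeleton's root
    // object, point "hips" at the hip joint, and press Z to cancel whatever
    // horizontal drift has accumulated. This doesn't fix the underlying
    // sensor drift -- Axis Neuron has its own calibration poses for that.
    public class DriftZero : MonoBehaviour
    {
        public Transform hips;                  // hip joint of the skeleton
        private Vector3 offset = Vector3.zero;  // accumulated drift on X/Z

        void Update()
        {
            if (Input.GetKeyDown(KeyCode.Z))
            {
                Vector3 p = hips.position;
                offset += new Vector3(p.x, 0f, p.z);
                transform.position = -offset;   // snap hips back over origin
            }
        }
    }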

Oculus Integration

Next up: integration with Oculus VR. I've been using my Leap Motion controller with my DK2 for some time now, and it's been interesting, but also a bit hit or miss. I have one of the original models, so perhaps the resolution isn't what the current hardware offers, but it certainly suffers from occasional glitches. In contrast, the Perception Neuron system seems rock solid (outside of the need to zero it occasionally). That said, I only bought the 21-neuron kit, so I can track either both hands or the full body, but not both at the same time. That means losing finger movement if I want to walk around. I'm OK with that for now, because being able to look down and see my arms, torso, and legs all there and moving correctly is an amazing thing in a virtual world.

Here's a test of the DK2 and Perception Neuron used together:

You can't see it in the video, but there's definitely some occasional judder, and the data appears to stream at a higher rate inside the Perception Neuron software than what comes through to Unity, so there's room for future improvement. Note that this is with the DK2's positional tracking disabled: all head position information is coming from Perception Neuron. It works fairly well. It's not quite as smooth as the DK2's native head tracking, but it's good enough.
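
The setup for that is simple: turn off positional tracking in the Oculus integration and let the mocap skeleton place the camera each frame. A minimal sketch, assuming the Oculus camera rig and the Neuron-driven head joint are wired up in the inspector:

    using UnityEngine;

    // Sketch: drive the VR camera's position from the mocap head joint
    // instead of the DK2's optical positional tracking. Orientation is left
    // to the HMD's own (much higher-rate) rotational tracking. Assumes
    // positional tracking is disabled in the Oculus integration (the
    // DK2-era OVRManager has a usePositionTracking flag for this).
    public class MocapHeadFollower : MonoBehaviour
    {
        public Transform headBone;   // head joint of the Neuron skeleton
        public Transform cameraRig;  // root of the Oculus camera prefab

        void LateUpdate()
        {
            // Only position comes from the suit; the HMD keeps rotation.
            cameraRig.position = headBone.position;
        }
    }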

Next I'd like to try a test with more movement: navigating a world by physically walking through it. The DK2's HDMI/USB cable will be the limiting factor there. I may have to run it all through my laptop in a backpack so that I'm untethered, but I'm not sure my MacBook Pro will be up to the challenge.

What does this mean for virtual storytelling?

The discussion of how to tell stories and craft experiences in a virtual world has accelerated since the release of the original Oculus development kit. The possibilities seem nearly limitless at times:

  • Do you craft experiences for a viewer who's seated, standing, or moving in the real world?
  • Do you synchronize player movement with real world movement, or give them a navigation device such as a joystick?
  • For scripted storytelling, do you limit the useful storytelling space to that which is in front of the viewer, or do you fully immerse the viewer and allow events to happen all around them?

We've all seen Star Trek and its fictional depiction of the "holodeck". It sits firmly in the camp of "moving in the real world, moving in the virtual world, events happening all around you". On the other end of the spectrum, we're seeing recorded experiences from companies like Jaunt, which are non-interactive, are meant to be watched while seated, and typically keep the action in front of the viewer. While it's excellent stuff, it's a much more passive experience in comparison.

The bulk of VR demos and games so far have fallen into a middle ground: typically meant to be used while seated, with a controller for navigation. SteamVR seems to be trying to change that by creating a "room-scale VR experience". I think that's fantastic, and it's exactly what I'm aiming for with the Perception Neuron integration. It increases your sense of presence, since real-world movement translates directly to the virtual world and your vestibular system stays completely synchronized with your visual system.

With the addition of legs, arms, and a body, the sense of presence increases further. You can really believe that you're inside the world, because you can look down, see yourself, and see your limb movement accurately represented.

In summary, I think this is an exciting development for VR, and I'm going to spend quite a bit of time over the next few weeks figuring out how to take best advantage of this full-body setup.