NASA JPL creates a more immersive way to control a space robot with the Oculus Rift and the Kinect 2
December 24, 2013
NASA's Jet Propulsion Laboratory has been on the hunt for a more natural way to maneuver robots in space for some time now, resulting in cool experiments like using a Leap Motion to remotely control a Mars rover and using an Oculus Rift plus a Virtuix Omni to take a virtual tour of the Red Planet. It therefore made sense for the folks at JPL to sign up for the latest Kinect for Windows developer program in order to get their hands on the newer and more precise Kinect 2 (which, incidentally, is not available as a standalone unit separate from the Xbox One) to see if it would offer yet another robotics solution.
They received their dev kit in late November, and after a few days of tinkering, were able to hook up an Oculus Rift to the Kinect 2 in order to manipulate an off-the-shelf robotic arm. According to our interview with a group of JPL engineers, the combination of the Oculus's head-mounted display and the Kinect's motion sensors has resulted in "the most immersive interface" JPL has built to date. Join us after the break to see a video of this in action and find out just why one of them has called this build nothing short of revolutionary.
Of course, JPL took part in the first Kinect developer program as well. The lab built a series of applications and eventually worked with Microsoft to release a game in which you were tasked with landing Curiosity safely on Mars. The second Kinect, however, offers far more precision and accuracy than the first. "It allowed us to track open and closed states, and the rotation of the wrist," says Human Interfaces Engineer Victor Luo. "With all of these new tracking points and rotational degrees of freedom, we were able to better manipulate the arm."
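To make Luo's description concrete, here's a minimal sketch of how hand states and wrist orientation from a Kinect-style skeleton stream might be mapped onto a gripper and end effector. This is purely illustrative, not JPL's software: the kinect2 and arm objects, and their methods, are hypothetical stand-ins for a Kinect SDK wrapper and a robot-arm control API.

```python
# Illustrative sketch only -- not JPL's actual code. The `kinect2` and
# `arm` objects are hypothetical stand-ins for a Kinect SDK wrapper and
# a robot-arm control API.
import time

def track_and_command(kinect2, arm):
    """Poll the sensor and forward the operator's hand pose to the arm."""
    while True:
        body = kinect2.latest_tracked_body()
        if body is None:
            continue  # no skeleton in view this frame

        # Kinect 2 reports discrete hand states; mapping open vs. closed
        # onto the gripper is the kind of tracking Luo describes.
        if body.right_hand_state == "closed":
            arm.close_gripper()
        elif body.right_hand_state == "open":
            arm.open_gripper()

        # Kinect 2 also exposes per-joint orientations, so wrist rotation
        # can drive the end effector's orientation directly.
        wrist_rot = body.joint_orientation("WristRight")  # quaternion
        hand_pos = body.joint_position("HandRight")       # meters, camera space
        arm.set_end_effector(position=hand_pos, orientation=wrist_rot)

        time.sleep(1 / 30)  # body frames arrive at roughly 30 Hz
```

The new rotational degrees of freedom are what matter here: the first Kinect could only supply joint positions, so a loop like this one couldn't have commanded wrist orientation at all.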
Alex Menzies, also a Human Interfaces Engineer, calls this combination of a head-mounted display and the Kinect motion sensor nothing short of revolutionary. "We're able, for the first time with a consumer-grade sensor, to control the entire orientation and rotation of a robotic limb. Plus, we're able to really immerse someone in the environment so that it feels like an extension of your own body -- you're able to look at the scene from a human-like perspective with full stereo vision. All the visual input is properly mapped to where your limbs are in the real world." This, he says, is very different from just watching yourself on a screen, because there it's very difficult to map your own body movements. "It feels very natural and immersive. You have a much better awareness of where objects are in the world."
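The "properly mapped" part Menzies mentions comes down to coordinate frames: tracked joints live in the sensor's camera space and have to be transformed into the space the headset renders, with each eye offset for stereo. Here's a minimal sketch of that idea, assuming a calibrated rigid transform between the two frames; the matrix and interpupillary distance below are placeholder values, not anything from JPL's setup.

```python
# Minimal coordinate-mapping sketch; the calibration matrix is a
# placeholder (identity), not a real measured value.
from dataclasses import dataclass
import numpy as np

# 4x4 rigid transform from the Kinect camera frame to the frame the
# headset renders in, obtained by calibration.
KINECT_TO_WORLD = np.eye(4)

@dataclass
class HeadPose:
    position: np.ndarray  # (3,) meters, world frame
    rotation: np.ndarray  # (3, 3) rotation matrix, world frame

def joint_in_world(joint_xyz):
    """Map a tracked joint position from camera space to render space."""
    p = np.append(np.asarray(joint_xyz, dtype=float), 1.0)  # homogeneous
    return (KINECT_TO_WORLD @ p)[:3]

def stereo_eye_positions(head: HeadPose, ipd=0.064):
    """Offset each eye by half the interpupillary distance along the
    head's local x-axis to get the two viewpoints for stereo rendering."""
    half = head.rotation @ np.array([ipd / 2.0, 0.0, 0.0])
    return head.position - half, head.position + half
```

Get that transform right and your rendered hand sits exactly where your real hand is, which is what makes the arm feel like an extension of your body rather than a puppet on a monitor.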
As you might imagine, latency is a very real concern, as most of these robots sit on the other side of a long communications delay. Jeff Norris, Mission Operations Innovation Lead for JPL, says a setup like this is therefore mostly used to indicate goals, which the robots then seek out on their own. Luo and Menzies point out, however, that as you can see in the video, a ghosted arm indicates where your hand is at any moment, while a solid-colored one shows where the robot currently is, so the latency itself is made visible on screen. "It feels pretty natural because the ghosted hand moves immediately, and you see that the robot is catching up to your position," Menzies says. "You're commanding it a little bit ahead, but it doesn't feel laggy."
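The ghosting trick is easy to picture as a bit of state-keeping: the commanded pose is drawn instantly as a translucent "ghost," while the solid model only updates once delayed telemetry confirms the robot has moved. Here's a toy sketch of that idea; the class and the delay value are invented for illustration.

```python
# Toy model of the ghost-vs.-solid display described above; everything
# here is invented for illustration, not taken from JPL's system.
import collections
import time

class GhostedArmView:
    def __init__(self, link_delay_s=2.0):
        self.link_delay_s = link_delay_s     # assumed round-trip delay
        self.pending = collections.deque()   # (send_time, pose) in flight
        self.ghost_pose = None               # drawn instantly, translucent
        self.robot_pose = None               # drawn solid, lags behind

    def command(self, pose):
        """Operator moves: the ghost jumps immediately, the command ships."""
        self.ghost_pose = pose
        self.pending.append((time.time(), pose))

    def poll_telemetry(self):
        """Promote commands to confirmed robot poses once the delay has
        elapsed, simulating the robot catching up to the operator."""
        now = time.time()
        while self.pending and now - self.pending[0][0] >= self.link_delay_s:
            _, self.robot_pose = self.pending.popleft()
```

Because the ghost responds instantly, the operator never waits on the round trip, which matches Menzies' description of commanding the robot "a little bit ahead" without it feeling laggy.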
"We're building partnerships with commercial companies that make devices that maybe first and foremost weren't built for space exploration," says Luo. "Doing so helps us get a whole lot more done for space exploration than if we were starting everything from scratch. It also means we could build systems that could be available to the general public. Imagine how inspirational it would be for a seven-year-old to control a space robot with the tools he's already familiar with!"
Of course, the end goal is not just to control a robot arm, but space robots in general. As the video demonstration shows, JPL hopes to bring the same technology to machines like Robonaut 2, which is currently deployed aboard the ISS. "We want to eventually extend this work to controlling robots like Robonaut 2," Luo says. "There are tasks that are too boring, too menial or even too dangerous for an astronaut to do, but fundamentally we still want to be in control of the robot... If we can make it more efficient for us to control them, we can get more done in less time."