Controlling VR with Your Mind

The startup Neurable thinks its brain-computer interface will be fast and accurate enough for playing games in VR.
March 22, 2017
A prototype version of Neurable's brain-computer interface technology includes a number of dry electrodes placed on the scalp.

Virtual reality is still so new that the best way for us to interact within it is not yet clear. One startup wants you to use your head, literally: it’s tracking brain waves and using the result to control VR video games.

Boston-based startup Neurable is focused on deciphering brain activity to determine a person’s intention, particularly in virtual and augmented reality. The company uses dry electrodes to record brain activity via electroencephalography (EEG); then software analyzes the signal and determines the action that should occur.

“You don’t really have to do anything,” says cofounder and CEO Ramses Alcaide, who developed the technology as a graduate student at the University of Michigan. “It’s a subconscious response, which is really cool.”

Neurable, which raised $2 million in venture funding late last year, is still in the early stages: its demo hardware looks like a bunch of electrodes attached to straps that span a user’s head, worn along with an HTC Vive virtual-reality headset. Unlike the headset, Neurable’s contraption is wireless—it sends data to a computer via Bluetooth. The startup expects to offer software tools for game development later this year, and it isn’t planning to build its own hardware; rather, Neurable hopes companies will be making headsets with sensors to support its technology in the next several years.

Success may be a long shot. No method of interaction has come close to supplanting the physical devices we typically use to control digital experiences: handheld controllers, mice, keyboards, touch screens. And brain-computer interfaces in particular can be clunky, slow, and prone to errors. But virtual and augmented reality are still at such an early stage that the ways we use them aren’t yet entrenched, and they differ vastly from other technologies.

In an early demo of a VR game, actions—such as picking an item of food off a table and throwing it at a goblin—are controlled by analyzing brain activity to decipher intent.

Tracking brain activity via EEG and looking for the particular signal that occurs when a user tries to select something is nothing new. But the company says it has figured out how to reduce noise and make use of those signals more quickly than has been done in the past.

Alcaide showed me a few videos of how this may look with an HTC Vive headset and a bunch of dry electrodes dotting a user’s head. In an adaptation of the game Skyrim, he used brain activity to choose one of four spells, “charged” one with a button on a handheld controller, and again used brain activity to throw it at a foe.

For now, at least, it takes about five minutes of training before you can use the system, though Alcaide says that doesn’t need to be repeated for each application you want to use it with. Basically, Neurable records your brain activity while showing you different objects in virtual reality, tracking the changes detected from one object to the next in order to determine what you are trying to do.
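Selection schemes like the one described above are commonly built on event-related potentials such as the P300, a small positive deflection that appears roughly 300 milliseconds after a stimulus the user is attending to. The sketch below is purely illustrative, not Neurable’s actual pipeline: it simulates noisy flash-locked EEG epochs for four candidate objects, averages the epochs for each, and picks the object whose averaged response is strongest in a post-stimulus window. All numbers (sampling rate, flash counts, signal shape) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

FS = 250            # sampling rate in Hz (assumed)
EPOCH = FS          # 1-second epoch after each stimulus flash
N_FLASHES = 20      # flashes per candidate object (assumed)
# Rough window where a P300-like response would land (250-500 ms post-stimulus)
P300_START, P300_END = int(0.25 * FS), int(0.5 * FS)

def simulate_epochs(is_target):
    """Simulate single-trial EEG epochs: background noise, plus a small
    positive deflection near 300 ms when the user is attending to the object."""
    epochs = rng.normal(0.0, 5.0, size=(N_FLASHES, EPOCH))  # noise in microvolts
    if is_target:
        t = np.arange(EPOCH) / FS
        # Gaussian bump centered at 300 ms, ~4 microvolts peak
        epochs += 4.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
    return epochs

def score(epochs):
    """Average across flashes to suppress noise, then take the mean
    amplitude inside the P300 window."""
    return epochs.mean(axis=0)[P300_START:P300_END].mean()

# Four candidate objects in the scene; the user is attending to object 2.
objects = [simulate_epochs(is_target=(i == 2)) for i in range(4)]
scores = [score(e) for e in objects]
selected = int(np.argmax(scores))
print("selected object:", selected)
```

Averaging over many flashes is what makes this reliable and also what makes it slow, which is exactly the latency trade-off the researchers below describe: fewer flashes per decision means faster responses but noisier averages.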

Jaime Pineda, a professor at UC San Diego who heads the school’s Cognitive Neuroscience Lab, is interested in the idea of being able to use a brain-computer interface in virtual reality, but he says it’s very difficult to quickly capture the sort of event-related potential Neurable is tracking and extract it. This process can take several seconds, he says.

“In games you need to be able to do that really fast or else people just won’t be interested,” he adds.

Neurable won’t say how accurate the technology is today, but Alcaide says that a previous version was about 85 percent accurate at processing brain activity in real time and 99 percent accurate when doing it with a one-second delay. The technology has since improved, he says.
