NextMind: This Brain-Computer Interface Lets You Control VR Games with Brain Signals
The French startup NextMind is developing a brain-computer interface that translates signals from the user’s visual cortex into digital commands. The dev kit for the brain-computer interface will be unveiled at CES 2020 and will cost just $399. The first demos of the product are very impressive.
The NextMind device lets a user input commands into AR/VR headsets and computers using only their visual attention. The startup eventually wants users to issue these commands through visual imagination alone. The dev kit for the brain-computer interface will first be shipped to a few select partners and developers. After the early access period concludes, the startup plans to ship a second limited run of dev kits in the second quarter of 2020.
NextMind is just one of many startups currently working on noninvasive neural interfaces that leverage machine learning algorithms. Facebook acquired CTRL-Labs in September last year; that company is developing an electromyography wristband that translates musculoneural signals into digital commands machines can interpret. Like CTRL-Labs, NextMind is working on a noninvasive device: a small, 60-gram disc worn at the back of the user’s head, over the visual cortex, that functions as an electroencephalogram (EEG) device. The goal is real-time interaction driven directly by the brain.
Digital Neurosynchrony
NextMind calls its approach Digital Neurosynchrony. The device requires the user to actively look at a salient object, because doing so activates the wearer’s visual cortex. Every object the wearer perceives triggers a specific response in the visual cortex, and that neural response registers as a distinctive fluctuation in the electroencephalogram (EEG). The visual cortex receives input from the eyes and amplifies the firing of neurons for the objects or features the person is intentionally looking at.
When the wearer focuses on a particular object or feature, their attention encodes the intention to perform the associated action, and NextMind’s device decodes that intention. As the user looks around and concentrates on the object, the corresponding neural signal is amplified, so the device can tell which object or piece of visual content the user wants to move or activate.
This creates a neurosynchrony, a resonance, between the wearer’s brain and the object. The more the user focuses their attention on the object, the stronger the resonance grows and the more confident the machine learning decoding becomes, giving the device a very good indication of which object the user wants to move. The NextMind device therefore measures intent, going beyond what eye trackers can do.
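NextMind has not published the details of its decoder, but the resonance-based selection described above can be illustrated with a simple template-matching scheme: assume each selectable object is rendered with a known flicker pattern, band-pass filter the visual-cortex EEG, and pick the object whose pattern correlates best with the recorded signal. The Python sketch below is purely illustrative; the function names, sampling rate and flicker frequencies are assumptions, not NextMind’s actual pipeline.

import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(eeg, low=1.0, high=30.0, fs=250.0, order=4):
    """Band-pass filter one EEG channel (hypothetical visual-cortex electrode)."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg)

def decode_focus(eeg_window, templates, fs=250.0):
    """Return the index of the on-screen object whose stimulation template
    correlates best with the filtered EEG window, plus the correlation
    score as a crude 'resonance' confidence value."""
    filtered = bandpass(eeg_window, fs=fs)
    scores = [np.corrcoef(filtered, t)[0, 1] for t in templates]
    best = int(np.argmax(scores))
    return best, scores[best]

# Hypothetical usage: three selectable objects, each tagged with a
# known visual stimulation pattern sampled at the EEG rate.
fs, n = 250.0, 500
templates = [np.sin(2 * np.pi * f * np.arange(n) / fs) for f in (8, 10, 12)]
eeg_window = templates[1] + 0.5 * np.random.randn(n)  # noisy stand-in signal
obj, confidence = decode_focus(eeg_window, templates)
print(f"focused object: {obj}, confidence: {confidence:.2f}")

In a real system, a score like this could double as the “resonance” measure that grows as attention is sustained on one object.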
The startup describes its product as the “first real brain-computer interface”.
The device works only when the eyes are open, although a future version will work even with the eyes closed by decoding visual imagination. The company is working on two parallel tracks; the visual intent functionality is just the first.
NextMind’s second track will be capable of decoding visual imagination. According to company CEO Sid Kouider, the visual cortex handles both the input a person receives from the eyes and the output of a person’s memories, dreams and imagination. The neurons that give rise to visual consciousness in the visual cortex are the same ones that process information coming from the outside world.
Two tracks
The two tracks that NextMind is pursuing will be available on the same device, with the different tasks handled by different software and algorithms.
Because the underlying artificial intelligence learns continuously, the device becomes more precise the more frequently someone uses it.
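The article does not say how this learning is implemented. One plausible, purely hypothetical sketch is an online classifier that is updated every time the user confirms a selection, so the per-user model keeps improving with use; all names and the feature format below are assumptions.

import numpy as np
from sklearn.linear_model import SGDClassifier

# Hypothetical illustration of "the more you use it, the better it gets":
# a per-user decoder refined online with each confirmed selection.
classes = np.array([0, 1, 2])             # selectable objects on screen
decoder = SGDClassifier(loss="log_loss")  # simple incremental classifier

def on_confirmed_selection(eeg_features, selected_object):
    """After the user confirms a selection, treat the EEG features that
    produced it as a labeled example and update the model in place."""
    decoder.partial_fit(eeg_features.reshape(1, -1),
                        [selected_object], classes=classes)

# Simulated usage: 200 interactions, each yielding a feature vector.
rng = np.random.default_rng(0)
for _ in range(200):
    label = int(rng.integers(3))
    features = rng.normal(loc=label, size=16)  # stand-in EEG features
    on_confirmed_selection(features, label)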
The device
The NextMind device uses eight electrodes, made from a highly sensitive material, to measure brain activity. The startup settled on eight as the lowest number it could use without risking the loss of useful signal. The device weighs just 60 grams.
The electrodes are not especially comfortable to wear, but the company is already working on a smaller design that could use fewer of them. NextMind had to develop the ultra-sensitive electrode material itself, which it considers a real breakthrough. According to Kouider, it works like conventional EEG but improves the signal-to-noise ratio fourfold compared to clinical EEG and requires no conductive gel. Each electrode carries a microchip that directly processes the analog signal, allowing the device to capture more data for the machine learning algorithms.
The electrodes in the NextMind device are shaped like a comb, so they can easily pass through the hair to reach the scalp for a good signal. The device should begin working immediately, but Kouider says the first time using it is like using a mouse for the first time: the user has to gradually learn to sense when their brain is in action.
Early Demos of the NextMind Device are Convincing
A VentureBeat tester tried out the device through several demos. The first step was calibrating the device: the tester had to concentrate on a recurring pattern of three green lines forming something close to a triangle on a screen. This calibration and training session generated a neural profile, and the green triangle subsequently appeared on various objects on the screen.
Within a few minutes, the device had generated approximately 1 to 10 megabytes of data representing the tester’s neural profile. To produce a good model for the demos, the tester was asked to sit still and refrain from talking.
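As a rough illustration of what such a calibration step might produce, the hypothetical sketch below fits a simple per-user classifier from labeled calibration epochs and saves it as a “neural profile.” The feature layout, model choice and file format are assumptions, not details NextMind has disclosed.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
import joblib

def calibrate(epochs, labels, profile_path="neural_profile.joblib"):
    """Fit a per-user decoder from calibration epochs.

    epochs: array of shape (n_trials, n_features), e.g. flattened,
            filtered windows from the eight visual-cortex electrodes.
    labels: which on-screen target the user was attending each trial.
    """
    model = LinearDiscriminantAnalysis()
    model.fit(epochs, labels)
    joblib.dump(model, profile_path)  # the saved "neural profile"
    return model

# Hypothetical usage with simulated calibration data.
rng = np.random.default_rng(1)
labels = rng.integers(0, 3, size=60)              # 60 calibration trials
epochs = rng.normal(size=(60, 8 * 64)) + labels[:, None]
model = calibrate(epochs, labels)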
In the first demo, the tester was able to control a TV simply by focusing on green triangles in various parts of a custom TV user interface, performing tasks such as playing, pausing, changing channels, and muting and unmuting the sound.
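Once a decoder reports which triangle the user is attending and how strongly, wiring that output to UI actions is straightforward. A minimal hypothetical dispatch layer might look like the following, with a confidence threshold so that a passing glance does not trigger a command; the names and threshold are invented for illustration.

# Hypothetical mapping from decoded focus targets to TV commands.
ACTIONS = {
    "play_triangle": lambda: print("play"),
    "pause_triangle": lambda: print("pause"),
    "mute_triangle": lambda: print("toggle mute"),
    "next_triangle": lambda: print("next channel"),
}

def dispatch(target, confidence, threshold=0.8):
    """Trigger the UI action only once sustained attention is decoded."""
    if confidence >= threshold and target in ACTIONS:
        ACTIONS[target]()

dispatch("pause_triangle", confidence=0.91)  # fires
dispatch("mute_triangle", confidence=0.40)   # ignored: glance too brief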
In the second demo, the tester controlled a hopping game using the same principle. In the third demo, they played a modified version of the NES classic Duck Hunt and shot ducks using their visual focus alone. The demos aren’t perfect, but they demonstrate that the technology works.
After some further training, the VentureBeat tester tried out a virtual reality demo and was able to blow up alien brains merely by using visual focus.
Searching for the Killer App
The biggest challenge for NextMind’s brain-computer interface so far is on the hardware side. The startup is, however, already working on smaller and more precise versions.
NextMind will send the developer kit for the technology to select developers and partners this month. After the early access period is over, a second tranche of hardware will ship to developers in the second quarter of 2020. You can join the waiting list here.
With the developer kit, NextMind is pursuing two aims. The first is to collect more data to improve the brainwave AI. The second is to test new applications: manufacturers of self-driving cars could, for example, install electrodes in the car seats, letting users activate the vehicle’s comfort functions with brain signals alone.
NextMind’s device could also serve as a brain-computer interface for augmented reality glasses, supplementing existing controls such as eye-tracking, speech and gestures. Facebook purchased the startup CTRL-Labs for this very purpose: its bracelet intercepts the electrical control signals the brain sends toward the hand and uses artificial intelligence to translate them into computer commands.
Founded by the neuroscientist Sid Kouider, NextMind has a team of 15 that includes professionals from diverse fields, including machine learning, software, game development and hardware.
Read more on VentureBeat.com.