Brain-Computer Interfaces

In a brain-computer interface (BCI), neural signals recorded from the brain are fed into a decoding algorithm that translates these signals into outputs. This allows people with physical disabilities to control a variety of devices, such as communication software, games, upper-limb and lower-limb exoskeletons, mobile robots, and wheelchairs. The prosthetic device can then send feedback to the user either via normal sensory pathways (screens, sounds) or directly through brain stimulation, thereby establishing a closed control loop.
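To make this loop concrete, here is a minimal Python sketch of one sense-decode-act-feedback cycle. Every argument (sensor, extract_features, decoder, device, feedback) is a hypothetical placeholder standing in for real hardware and software, not an existing BCI API; the band-power features sketched further below would be one candidate for extract_features.

    def run_bci_loop(sensor, extract_features, decoder, device, feedback):
        # Closed-loop BCI sketch; every argument is a hypothetical interface.
        while True:
            window = sensor.read_window()        # e.g., one second of EEG samples
            features = extract_features(window)  # e.g., band power per channel
            command = decoder.predict(features)  # brain pattern -> mental command
            device.execute(command)              # move a wheelchair, type a letter...
            feedback.present(device.state())     # close the loop via screen, sound,
                                                 # or direct brain stimulation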

BCI technology offers a natural way to augment human capabilities by providing a new interaction link with the outside world. This makes it enormously important for patients with severe neuromuscular disabilities, but it also opens up intriguing new possibilities in human-machine interaction for able-bodied people.

Brain signals can be recorded in different ways. Implanting microelectrodes lets us access the activity of individual neurons, while placing external electrodes on the scalp (a technique known as electroencephalography, or EEG) allows us to observe the synchronous activity of millions of neurons. Each technique has advantages and disadvantages. For instance, EEG does not provide information about the small neural populations that may encode details of a user’s intended actions, but it does allow for the monitoring of complex motor and decision-making processes that involve large brain networks.

The most important task of a BCI is to distinguish between different patterns of brain activity, each of which might be associated with a particular intention or mental command. Adaptation is therefore a key characteristic of brain-computer interfaces: on the one hand, users must learn to modulate their brainwaves so as to generate distinct brain patterns; on the other, machine learning techniques must identify the individual brain patterns that reflect the tasks the user is trying to execute. In other words, a BCI is a two-learner system that must engage in a process of mutual adaptation.
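On the machine side, this boils down to a classification problem: map features of a recorded window of brain activity to one of a few mental commands. Below is a minimal Python sketch using band-power features and a linear discriminant classifier, one common choice among many; the sampling rate, frequency bands, and the synthetic arrays standing in for real EEG trials are all illustrative assumptions.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def band_power(windows, fs, lo, hi):
        # Mean spectral power per channel in the [lo, hi] Hz band.
        # windows has shape (n_trials, n_channels, n_samples).
        freqs = np.fft.rfftfreq(windows.shape[-1], d=1.0 / fs)
        power = np.abs(np.fft.rfft(windows, axis=-1)) ** 2
        band = (freqs >= lo) & (freqs <= hi)
        return power[..., band].mean(axis=-1)

    fs = 256                                     # assumed sampling rate (Hz)
    rng = np.random.default_rng(0)
    windows = rng.standard_normal((200, 8, fs))  # synthetic stand-in for EEG trials
    labels = rng.integers(0, 2, size=200)        # two mental commands

    # Mu (8-12 Hz) and beta (13-30 Hz) power are classic motor-imagery features.
    X = np.hstack([band_power(windows, fs, 8, 12),
                   band_power(windows, fs, 13, 30)])

    clf = LinearDiscriminantAnalysis().fit(X, labels)
    print("training accuracy:", clf.score(X, labels))

In a deployed system, a decoder trained this way would run online inside the loop sketched earlier, while the user simultaneously learns to produce more separable patterns: the mutual adaptation described above.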

Most media coverage and popular accounts focus almost exclusively on the machine learning side of BCI training, but recent research has shown that a mutual learning approach grounded symmetrically in all three learning pillars (the machine, the subject, and the application) is essential if users with severe physical disabilities are to control their BCI devices over long periods of time and in real-world conditions.

In addition to ‘motor substitution’, where a BCI bypasses a central nervous system injury to control a neuroprosthesis, BCI technology can also facilitate ‘motor rehabilitation’, particularly after stroke. While most rehabilitation paradigms require the patient to retain some degree of mobility after a stroke, a brain-computer interface can assist stroke patients with complete paralysis.

BCIs do this by promoting neuroplasticity: the brain’s ability to re-encode functions lost, in this case to stroke, in other brain areas. A recent study revealed that a BCI coupled with an electrical stimulation technique, which delivers small currents to the paralyzed arm or hand to help it move, provides chronic stroke survivors with significant, clinically relevant, and lasting motor recovery: strong evidence of neuroplasticity.

Current BCI technology, EEG-based systems in particular, gives patients the power to operate relatively simple devices. No doubt, this represents an important achievement for motor-disabled people. Creating robust and natural brain-computer interactions to control more complex devices, however, remains a major challenge, as does providing disabled people with the benefits of existing BCI technology outside laboratory conditions.

As the BCI field enters a more mature phase of development, the time is increasingly ripe to design new interaction modalities for able-bodied people. The idea, though, is not to control a device with your mind, but to teach a device to predict your actions or decode your cognitive state. This kind of technology would allow an intelligent device to assist its user in a much more personalized way, adapting to the unique moods and modes of the individual.

One example of this emerging research is connecting your car to your brain, so that it can better anticipate the actions you take (or fail to take). But such advanced systems will require better and more transparent recording technology. This could be invasive: safe biophysical interfaces, ultra-low-power and wireless, sitting inside the human skull. Or it could be non-invasive, such as dry electrodes that require no gel and can be integrated into everyday helmets, or skin sensors that remain operational for months.

In whichever direction BCI technology develops, it will undoubtedly raise deep societal and ethical questions. Some are predictable, as with most new technologies. How can we avoid a health divide, with the rich gaining the benefits of BCIs while the poor are left behind? How can we guarantee the privacy of brain data? How can we prevent malicious actors from interfering with BCIs and attempting to manipulate the target patterns that BCIs are trained to decode?

But there are also new and potentially dangerous questions. Hypothetically, a ‘hacked’ BCI device could achieve a deeper level of manipulation by externally generating in BCI users the very brain patterns that control their BCI-operated devices, whether an arm, a car, or something more critical like a safe or a security system. How would users know whether they are voluntarily engaging in the interaction or having the brain patterns implanted? There will be no simple answer, but adopting a truly mutual learning approach that makes users conscious of how they acquire BCI skills will certainly be part of it.

Figure 1: Brain-controlled wheelchair. Users drive it by voluntarily modulating their brain signals (EEG in this case). A BCI decodes the individual patterns of brain activity associated with different mental commands. These commands are transformed into reliable and safe actions for the wheelchair by incorporating information about the environment (e.g., obstacles perceived by the robot’s sensors) and about the robot itself (its position and velocities) to better estimate the user’s intent, or even to override mental commands in critical situations. This wheelchair illustrates the future of intelligent brain-controlled devices that, like our spinal cord and musculoskeletal system, work in tandem with motor commands decoded from the user’s brain cortex. This relieves users of the need to continuously deliver all the necessary low-level control parameters, reducing their cognitive workload.
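The blending of mental commands with robot sensing described in the caption can be sketched as a simple shared controller. The thresholds and single-obstacle distance model below are illustrative assumptions, not the actual wheelchair’s control law.

    def shared_control(turn_cmd, confidence, obstacle_dist_m,
                       min_confidence=0.6, stop_dist_m=0.5):
        # Returns (linear velocity, angular velocity) for the wheelchair.
        if obstacle_dist_m < stop_dist_m:
            return 0.0, 0.0              # safety override: stop, whatever the command
        if confidence < min_confidence:
            return 0.2, 0.0              # uncertain decode: creep straight ahead
        speed = min(1.0, obstacle_dist_m / 2.0)  # slow down as obstacles get closer
        return speed, turn_cmd           # trust the decoded steering command

    # A confident 'turn left' decode with a clear path ahead:
    print(shared_control(turn_cmd=-0.5, confidence=0.9, obstacle_dist_m=3.0))

The design point is the one the caption makes: the user supplies high-level intent, while low-level safety and smoothness come from the machine, reducing cognitive workload.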
