The art of mind reading is much more useful than the party tricks might have you believe. Being able to read the mind of a severely physically disabled person might be the only way to enable them to communicate with us. A machine that reads brain activity would give everyone a chance to express themselves and control their lives.


The computer wizards who specialise in Brain-Computer Interfaces had these aims in mind as they developed the Hex-o-Spell system. Their computer doesn't just read a single name or word; it allows a user to communicate any message by thought alone. It's a brain-reading, machine-learning, text-input system that lets users type with their minds.

Simple brain-computer interfaces (BCIs) have been demonstrated for several decades. Using sensors placed on the head, often with gel to improve contact, the electrical impulses produced by large groups of neurons in different regions of the brain can be measured. The broad pattern, or frequency, of these electroencephalogram (EEG) signals is then used as the input to the computer.

The problem is that our brains are hugely complex organs, each with its own individual design. It is possible for people to develop normally with just half a brain, although their brains look very different to ours, and as Benjamin Blankertz explains, "also in normally developed humans, functions are differently located in the brain. And due to the foldings of the cortex, small changes in location may cause strong differences in the EEG." When doing something as predictable as moving a finger or a foot, different people produce very different patterns of brain activity. Even worse, brains throw out new and different patterns of signals all the time, so the pattern of brain activity changes when you test the same person repeatedly. Even the equipment used to read the signals is unreliable: the gel between sensor and head dries out, or sensors are placed incorrectly or slip, so at different times the sensors may misread the brain and report completely different results from the same brain activity.
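To make the "broad pattern or frequency" idea concrete, here is a minimal sketch of how an EEG channel's frequency content can be turned into a number a computer can use. The signal, sampling rate and frequency bands are all invented for the example; real BCI pipelines are far more sophisticated.

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Power of a signal within a frequency band, estimated via the FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= low) & (freqs <= high)
    return spectrum[mask].sum()

# Synthetic one-second "recording": a strong 10 Hz rhythm plus noise.
fs = 256                                   # samples per second (assumed)
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(fs)

alpha = band_power(eeg, fs, 8, 12)         # alpha band (8-12 Hz)
beta = band_power(eeg, fs, 18, 30)         # beta band (18-30 Hz)
print(alpha > beta)                        # the 10 Hz rhythm dominates here
```

Numbers like these, computed per band and per sensor, are the kind of raw input an EEG-driven system works from.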

To overcome these problems, the standard approaches relied on the adaptability of our brains: users had to learn how to relax and make their brains produce more predictable signals. For example, the more a person falls into a state resembling meditation, the more regular and slow the frequency of the EEG waves becomes. This method can be successful in some cases, but it requires lengthy training periods for users and so can be very limited.
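The "relax to control" approach can be sketched as a one-bit switch: if slow, regular alpha-band activity dominates the recording, the system reads it as a command. The signals, bands and threshold below are invented purely for illustration.

```python
import numpy as np

def alpha_ratio(signal, fs):
    """Fraction of spectral power (excluding DC) in the 8-12 Hz alpha band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    return power[(freqs >= 8) & (freqs <= 12)].sum() / power[1:].sum()

fs = 256
t = np.arange(2 * fs) / fs                 # two seconds of synthetic signal
rng = np.random.default_rng(1)
relaxed = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(len(t))
busy = rng.standard_normal(len(t))         # no dominant rhythm

THRESHOLD = 0.5                            # invented cut-off for the example
print(alpha_ratio(relaxed, fs) > THRESHOLD)   # relaxed state -> command
print(alpha_ratio(busy, fs) > THRESHOLD)      # busy state -> no command
```

The catch, as the article notes, is that producing the "relaxed" signal on demand is a skill the user must spend a long time learning.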

What was needed was a way to pick out the activity in the precise regions of the brain that were relevant, and somehow overcome all of the variability inherent in EEG signals.

The problem is rather like listening to the muffled sound of a huge orchestra through a thick wall and trying to pick out the melody from a single violin, when the orchestra change their seats and play a slightly different tune every time you listen. The traditional approach was to train the whole orchestra to play more or less the same note. The solution used in Hex-o-Spell was to train the listener to pick out that elusive violin. To make this work, the field of brain-computer interaction had to combine forces with machine learning.
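The "train the listener" idea amounts to learning, from example recordings, which parts of the signal actually carry the user's intention. A minimal sketch with synthetic data and a simple least-squares linear classifier (the article doesn't specify the Berlin group's actual algorithm): only one of eight noisy channels carries the class-dependent signal, and the learner finds it.

```python
import numpy as np

rng = np.random.default_rng(42)

# 200 synthetic "trials", each with 8 noisy channels (the orchestra).
# Only channel 3 (our "violin") carries the class-dependent signal.
n_trials, n_channels, informative = 200, 8, 3
labels = rng.integers(0, 2, n_trials)            # two imagined actions
X = rng.standard_normal((n_trials, n_channels))
X[:, informative] += 2.0 * labels                # hidden class-dependent shift

# Least-squares linear classifier: train the machine, not the user.
A = np.hstack([X, np.ones((n_trials, 1))])       # add a bias column
w, *_ = np.linalg.lstsq(A, 2.0 * labels - 1.0, rcond=None)
predictions = (A @ w > 0).astype(int)

accuracy = (predictions == labels).mean()
largest = int(np.argmax(np.abs(w[:n_channels])))
print(accuracy)    # well above chance on this synthetic data
print(largest)     # the biggest learned weight sits on the "violin" channel
```

The learned weights play the role of the trained listener: they amplify the informative channel and suppress the rest, with no training burden on the user.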

The Berlin BCI group was the first to achieve this new fusion of sciences. They have become the world leaders in the area, organising three machine learning competitions supported by PASCAL (the European-funded network of scientists who specialise in pattern analysis, statistical modelling and machine learning). But the latest success has come from a PASCAL "pump-priming" project, in collaboration with Dr John Williamson from Glasgow University, to create a system that works without users needing to be trained. Hex-o-Spell comprises three elements: EEG measurement of brain activity, machine learning to interpret that activity, and an intelligent hexagonal grid of the alphabet that uses a language model to simplify the selection of letters.
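The article doesn't spell out Hex-o-Spell's exact grid layout or language model, but the idea of using letter likelihood to speed up selection can be sketched in a toy form: order the alphabet by (approximate) English letter frequency, split it across six hexagons, and select a letter in two steps. The layout below is invented for illustration.

```python
# Toy sketch of language-model-assisted letter selection on a hexagonal grid.
# The ordering is the approximate English frequency mnemonic "etaoin shrdlu...";
# a real system would re-rank letters using the text typed so far.
FREQ_ORDER = "etaoinshrdlcumwfgypbvkjxqz_"   # '_' stands for space

def hexagon_layout():
    """Deal the frequency-ordered letters across 6 hexagons, so the
    commonest letters sit in the first slot of each hexagon."""
    letters = list(FREQ_ORDER)
    return [letters[i::6] for i in range(6)]

def select(target, cells):
    """Two-step selection: pick a hexagon, then a letter inside it."""
    for i, cell in enumerate(cells):
        if target in cell:
            return (i, cell.index(target))
    raise ValueError(target)

cells = hexagon_layout()
print(select("e", cells))   # commonest letter: first hexagon, first slot
print(select("z", cells))   # rare letter: buried deeper in the grid
```

The point of the real language model is the same: make the letters you are most likely to want next the cheapest ones to reach, so fewer brain-signal "clicks" are needed per word.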