Vol. 14, no. 4, July/August 2012, p. 104
Published by the IEEE Computer Society
ABSTRACT
Is it possible to use computers to "read" the mind or simulate the brain? Here the author considers current research and possible scenarios.
Last year, the website of Britain's Daily Mail newspaper became the world's most-visited English-language news source. Although the Mail's website owes its popularity to a menu rich in celebrities, crime, and royals, it offers readers something that my stuffier hometown newspaper, The Washington Post, lacks: a top-level section devoted to science.
Granted, the Mail's science coverage tends toward the sensational, but it does encompass superluminal neutrinos, the Higgs boson, and other weighty topics. The story that led the science section on 1 February 2012 was both sensational and important, as you can tell from the headline:
Mind-boggling! Science creates computer that can decode your thoughts and put them into words.
The story's origin lies in an article published in PLoS Biology by Brian Pasley of the University of California, Berkeley, and his collaborators.1 Fifteen patients who suffered from either epilepsy or brain cancer agreed to let Pasley's team attach an array of electrodes to their brains while their skulls were opened for surgery. The electrodes recorded signals from neurons in the auditory cortex, the part of the brain that interprets spoken language.
Before the patients underwent surgery, they listened to single words and whole sentences. Pasley and his collaborators correlated the electrical recordings with the words' acoustic spectra. A machine-learning algorithm then derived a mapping that could reproduce an acoustic spectrum from a neural recording.
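To make the idea concrete, the following Python sketch shows the kind of regularized linear mapping such a decoder could use. It is not Pasley's actual model: the electrode count, the number of frequency bins, and the synthetic data are assumptions made purely for illustration.

# Minimal sketch of reconstructing an acoustic spectrum from neural recordings.
# NOT Pasley et al.'s model; the sizes and data below are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Assumed problem sizes: 2,000 time bins, 64 electrodes, 32 spectral bins.
n_samples, n_electrodes, n_freq_bins = 2000, 64, 32
X = rng.standard_normal((n_samples, n_electrodes))        # neural features per time bin
W_true = rng.standard_normal((n_electrodes, n_freq_bins))
Y = X @ W_true + 0.1 * rng.standard_normal((n_samples, n_freq_bins))  # toy "spectrogram"

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.25, random_state=0)

model = Ridge(alpha=1.0)   # regularized linear map: neural activity -> acoustic spectrum
model.fit(X_train, Y_train)
Y_pred = model.predict(X_test)

# Score the reconstruction: correlation between predicted and actual spectra,
# averaged over frequency bins.
corr = np.mean([np.corrcoef(Y_pred[:, k], Y_test[:, k])[0, 1] for k in range(n_freq_bins)])
print(f"mean reconstruction correlation: {corr:.2f}")

A real decoder would operate on time-lagged, preprocessed electrode signals rather than independent random samples, but the structure of the fit is the same: learn a matrix that turns neural activity into a spectrum, then judge it by how well its predictions correlate with held-out data.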
Predicting what someone hears based on his or her brain activity is impressive, but it hardly qualifies as mind reading. However, it turns out that the auditory cortex is also responsible for encoding speech. When Pasley's team asked each patient to think of words without uttering them, the algorithm accurately predicted what those unspoken words were. In that sense, the algorithm really did read the patients' minds.
Pasley's algorithm occupies one front in a broad campaign to understand how the human brain works. On another front, biophysicists are developing ways to map the topography of the brain's interconnected neurons. Given that the human brain contains on the order of 10^11 neurons, each of which is connected to up to 1,000 other neurons, assembling a complete neuronal map could turn out to be infeasible—and perhaps unnecessary.
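The scale of that claim is easy to check with back-of-the-envelope arithmetic. The short Python sketch below uses the figures quoted above plus an assumed eight bytes of storage per connection, a number chosen only to make the estimate concrete.

# Rough scale of a complete map of the brain's neuronal connections,
# using the figures quoted above. The 8 bytes per connection is an
# assumption made only to turn the edge count into a storage estimate.
neurons = 1e11                    # ~10^11 neurons
connections_per_neuron = 1e3      # up to ~1,000 connections each
total_connections = neurons * connections_per_neuron    # ~10^14 edges

bytes_per_connection = 8          # assumed: one 64-bit identifier per edge
storage_petabytes = total_connections * bytes_per_connection / 1e15

print(f"connections: {total_connections:.0e}")          # ~1e+14
print(f"naive storage: {storage_petabytes:.1f} PB")     # ~0.8 PB, before any annotation

Even this naive accounting runs to nearly a petabyte before any anatomical or physiological detail is attached to the connections, which gives a sense of why a complete map might prove infeasible.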
A detailed map of a single, characteristic neighborhood of the brain might yield enough information to identify the physical features that underlie thought and memory. But knowledge of those features alone might fall short of demonstrating that someone understands the brain. If that turns out to be the case, then a convincing demonstration might entail building a simulated brain.
The anatomy and physiology of such a brain wouldn't necessarily resemble those of our own. Indeed, the first prototype could turn out to consist of a building-sized stack of optical tables where pulsed beams of light—the information-carrying signals—bounce off mirrors and pass through prisms. Provided that the simulated brain's topology and interconnections are described using the same mathematical equations that apply to a human brain, such a demonstration would be valid.
And if that fantasy becomes a reality, simulation would have attained a new and higher status in science. Rather than providing a way to calculate a theory's predictions so that they can be validated, the simulation would itself be the validation.

Reference

1. B.N. Pasley et al., "Reconstructing Speech from Human Auditory Cortex," PLoS Biology, vol. 10, no. 1, 2012.

Charles Day is the Web editor at Physics Today. Echo, his seven-month-old Airedale terrier, understands "sit," "down," and other commands but doesn't always obey them.