Neural Noise Shows the Uncertainty of Our Memories

In the moment between reading a phone number and punching it into your phone, you may find that the digits have mysteriously gone astray — even if you’ve seared the first ones into your memory, the last ones may still blur unaccountably. Was the 6 before the 8 or after it? Are you sure?

Maintaining such scraps of information long enough to act on them draws on an ability called visual working memory. For years, scientists have debated whether working memory has space for only a few items at a time or if it just has limited room for detail: Perhaps our mind’s capacity is spread across either a few crystal-clear recollections or a multitude of more dubious fragments.

The uncertainty in working memory may be linked to a surprising way that the brain monitors and uses ambiguity, according to a recent paper in Neuron from neuroscience researchers at New York University. Using machine learning to analyze the brain scans of people engaged in a memory task, they found that signals encoded an estimate of what people thought they saw — and the statistical distribution of noise in the signals encoded the uncertainty of memory. The uncertainty of your perceptions may be part of what your brain is representing in its recollections. And this sense of uncertainties may help the brain make better decisions about how to use its memories.

The findings suggest that “the brain is using that noise,” said Clayton Curtis, a professor of psychology and neuroscience at NYU and an author of the new paper.

The work adds to a growing body of evidence that, even if humans don’t seem adept at understanding statistics in their daily lives, the brain routinely interprets its sensory impressions of the world, both current and recalled, in terms of probabilities. The insight offers a new way of understanding how much value we assign to our perceptions of an uncertain world.

Predictions Based on the Past

Neurons in the visual system fire in response to specific sights, like an angled line, a particular pattern, or even cars or faces, sending off a flare to the rest of the nervous system. But by themselves, the individual neurons are noisy sources of information, so “it’s unlikely that single neurons are the currency the brain is using to infer what it is it sees,” Curtis said.

To Clayton Curtis, a professor of psychology and neuroscience at New York University, recent analyses suggest that the brain uses the noise in its neuroelectric signals to represent uncertainty about the encoded perceptions and memories. Courtesy of Clayton Curtis

More likely, the brain is combining information from populations of neurons. It’s important, then, to understand how it does so. It might, for instance, be averaging information from the cells: If some neurons fire most strongly at the sight of a 45-degree angle and others at 90 degrees, then the brain might weight and average their inputs to represent a 60-degree angle in the eyes’ field of view. Or perhaps the brain has a winner-take-all approach, with the most strongly firing neurons taken as the indicators of what’s perceived.
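The two readout schemes described above can be sketched in a few lines. This is a toy illustration, not the paper's method: the preferred angles and firing rates are made-up values chosen to reproduce the 45-degree/90-degree example in the text.

```python
import numpy as np

# Hypothetical population of orientation-tuned neurons: each neuron
# fires most strongly at its preferred angle (illustrative values only).
preferred_angles = np.array([45.0, 45.0, 90.0])  # degrees
firing_rates = np.array([10.0, 10.0, 10.0])      # spikes per second

# Weighted-average readout: each neuron "votes" for its preferred
# angle in proportion to how strongly it fires.
average_estimate = np.sum(firing_rates * preferred_angles) / np.sum(firing_rates)

# Winner-take-all readout: the single most active neuron decides
# (ties resolve to the first neuron).
winner_estimate = preferred_angles[np.argmax(firing_rates)]

print(average_estimate)  # 60.0
print(winner_estimate)   # 45.0
```

With two neurons preferring 45 degrees and one preferring 90, the weighted average lands at 60 degrees, while winner-take-all simply reports 45 — the same inputs, two very different readouts.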

“But there is a new way of thinking about it, influenced by Bayesian theory,” Curtis said.

Bayesian theory — named for its developer, the 18th-century mathematician Thomas Bayes, but independently discovered and popularized later by Pierre-Simon Laplace — incorporates uncertainty into its approach to probability. Bayesian inference addresses how confidently one can expect an outcome to occur given what is known of the circumstances. As applied to vision, that approach could mean the brain makes sense of neural signals by constructing a likelihood function: Based on data from previous experiences, what are the most likely sights to have generated a given firing pattern?
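A likelihood-function readout of that kind can be sketched concretely. The code below is a minimal, generic illustration assuming Poisson-spiking neurons with Gaussian tuning curves — standard textbook assumptions, not the actual model from the Neuron paper — and a flat prior over stimulus angles.

```python
import numpy as np

angles = np.linspace(0.0, 180.0, 181)            # candidate orientations (degrees)
preferred = np.array([30.0, 60.0, 90.0, 120.0])  # neurons' preferred angles
width = 20.0                                     # tuning-curve width

def expected_rates(theta):
    """Mean firing rate of each neuron for a stimulus at angle theta."""
    return 10.0 * np.exp(-0.5 * ((theta - preferred) / width) ** 2)

observed_spikes = np.array([4, 9, 6, 1])  # one noisy population response

# Log-likelihood of each candidate angle: log P(spikes | angle)
# under independent Poisson noise (constant terms dropped).
log_like = np.array([
    np.sum(observed_spikes * np.log(expected_rates(a)) - expected_rates(a))
    for a in angles
])

# With a flat prior, the posterior is the normalized likelihood.
posterior = np.exp(log_like - log_like.max())
posterior /= posterior.sum()

best_angle = angles[np.argmax(posterior)]
# The posterior's spread, not just its peak, conveys the uncertainty.
spread = np.sqrt(np.sum(posterior * (angles - best_angle) ** 2))
```

The key Bayesian idea is that the readout is a full distribution over possible stimuli, not a single number: the peak gives the best guess, and the spread reports how confident that guess deserves to be.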

Wei Ji Ma, a professor of neuroscience and psychology at NYU, provided some of the first concrete evidence that populations of neurons can perform optimal Bayesian inference calculations. Courtesy of Wei Ji Ma

Laplace recognized that conditional probabilities are the most accurate way to talk about any observation, and in 1867 the physician and physicist Hermann von Helmholtz connected them to the calculations that our brains might make during perception. Yet few neuroscientists gave much attention to these ideas until the 1990s and early 2000s, when researchers began finding that people did something like probabilistic inference in behavioral experiments, and Bayesian methods began to prove useful in some models of perception and motor control.

“People started talking about the brain as being Bayesian,” said Wei Ji Ma, a professor of neuroscience and psychology at NYU and another of the new Neuron paper’s authors.
