Who Decides in the Brain? How Decision-Making Processes Are Influenced by Neurons

Jan. 15, 2013 — Tübingen neuroscientists have shown how decision-making processes are influenced by neurons.


Whether in society or nature, decisions are often the result of complex interactions between many factors. Because of this, it is usually difficult to determine how much weight each factor carries in the final decision. Neuroscientists face a similar problem, since decisions made by the brain always involve many neurons. Researchers at the University of Tübingen and the Max Planck Institute for Biological Cybernetics, working in collaboration within the framework of the Bernstein Network and led by CIN professor Matthias Bethge, have now shown how the weight of individual neurons in the decision-making process can be reconstructed despite interdependencies between the neurons.

When we see a person on the other side of the street who looks like an old friend, the informational input enters the brain via many sensory neurons. But which of these neurons are crucial in passing on the relevant information to higher brain areas, which will decide who the person is and whether to wave and say ‘hello’? A research group led by Matthias Bethge has now developed an equation that allows them to calculate to what degree a given individual sensory neuron is involved in the decision process.

To approach this question, researchers have so far considered the information about the final decision that an individual sensory neuron carries. Just as an individual is considered suspicious if he or she is found to have insider information about a crime, those sensory neurons whose activity contains information about the eventual decision are presumed to have played a role in reaching the final decision. The problem with this approach is that neurons — much like people — are constantly communicating with each other. A neuron that is not itself involved in the decision may simply have received this information from a neighboring neuron and “joined in” the conversation; in that case, it is actually the neighboring cell that sends the crucial signal on to the higher decision areas of the brain.
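This confound can be made concrete with a toy simulation (an illustrative sketch, not the study’s actual model). Below, neuron A drives a binary decision while neuron B is merely correlated with A and is never read out. Measuring how well each neuron’s activity predicts the decision (a “choice probability,” computed here as the area under the ROC curve) makes the bystander neuron B look decision-related too:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 20000

# Neuron A actually drives the decision; neuron B merely "joins in":
# its activity is correlated with A's but is never read out downstream.
a = rng.normal(0.0, 1.0, n_trials)                   # neuron A's trial-by-trial activity
b = 0.8 * a + 0.6 * rng.normal(0.0, 1.0, n_trials)   # neuron B, correlated with A

decision = (a > 0).astype(int)  # the downstream area reads out only neuron A

def choice_probability(activity, choice):
    """Area under the ROC curve: P(activity on a 'choice 1' trial
    exceeds activity on a 'choice 0' trial), over pairs of trials."""
    hi = activity[choice == 1][:2000]  # subsample pairs for speed
    lo = activity[choice == 0][:2000]
    return (hi[:, None] > lo[None, :]).mean()

cp_a = choice_probability(a, decision)
cp_b = choice_probability(b, decision)
print(f"CP of decision-driving neuron A: {cp_a:.2f}")
print(f"CP of bystander neuron B:        {cp_b:.2f}")
```

Both neurons show a choice probability well above chance (0.5), even though only neuron A contributes to the decision — which is why a formula that also accounts for the correlations between neurons is needed to assign weights correctly.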

The new formula developed by the scientists addresses this by accounting not just for the information in the activity of any one neuron but also for the communication that takes place between neurons. This formula will now be used to determine whether only a few neurons that carry a lot of information are involved in the brain’s decision process, or whether the information contained in very many neurons gets combined. In particular, it will be possible to address the more fundamental question: In which decisions does the brain use information in an optimal way, and for which decisions is its processing suboptimal?


Story Source:

The above story is reprinted from materials provided by Universitaet Tübingen.

Note: Materials may be edited for content and length. For further information, please contact the source cited above.


Journal Reference:

  1. Ralf M Haefner, Sebastian Gerwinn, Jakob H Macke, Matthias Bethge. Inferring decoding strategies from choice probabilities in the presence of correlated variability. Nature Neuroscience, 2013; DOI: 10.1038/nn.3309

Citation:

Universitaet Tübingen (2013, January 15). Who decides in the brain? How decision-making processes are influenced by neurons. ScienceDaily. Retrieved January 20, 2013, from http://www.sciencedaily.com/releases/2013/01/130115124347.htm

Scientists Read Monkeys’ Inner Thoughts: Brain Activity Decoded While Monkeys Avoid Obstacle to Touch Target

ScienceDaily (July 19, 2012) — By decoding brain activity, scientists were able to “see” that two monkeys were planning to approach the same reaching task differently — even before they moved a muscle.


Anyone who has looked at the jagged recording of the electrical activity of a single neuron in the brain must have wondered how any useful information could be extracted from such a frazzled signal.

But over the past 30 years, researchers have discovered that clear information can be obtained by decoding the activity of large populations of neurons.

Now, scientists at Washington University in St. Louis, who were decoding brain activity while monkeys reached around an obstacle to touch a target, have come up with two remarkable results.

Their first result was one they had designed their experiment to achieve: they demonstrated that multiple parameters can be embedded in the firing rate of a single neuron and that certain types of parameters are encoded only if they are needed to solve the task at hand.

Their second result, however, was a complete surprise. They discovered that the population vectors could reveal different planning strategies, allowing the scientists, in effect, to read the monkeys’ minds.

By chance, the two monkeys chosen for the study had completely different cognitive styles. One, the scientists said, was a hyperactive type, who kept jumping the gun, and the other was a smooth operator, who waited for the entire setup to be revealed before planning his next move. The difference is clearly visible in their decoded brain activity.

The study was published in the July 19th advance online edition of the journal Science.

All in the task

The standard task for studying voluntary motor control is the “center-out task,” in which a monkey or other subject must move its hand from a central location to targets placed on a circle surrounding the starting position.

To plan the movement, says Daniel Moran, PhD, associate professor of biomedical engineering in the School of Engineering & Applied Science and of neurobiology in the School of Medicine at Washington University in St. Louis, the monkey needs three pieces of information: current hand and target position and the velocity vector that the hand will follow.

In other words, the monkey needs to know where his hand is, what direction it is headed and where he eventually wants it to go.

A variation of the center-out task with multiple starting positions allows the neural coding for position to be separated from the neural coding for velocity.

By themselves, however, the straight-path, unimpeded reaches in this task don’t allow the neural coding for velocity to be distinguished from the neural coding for target position, because the two parameters are always correlated: the initial velocity of the hand and the target always point in the same direction.

To solve this problem and isolate target position from movement direction, doctoral student Thomas Pearce designed a novel obstacle-avoidance task to be done in addition to the center-out task.

Crucially, in one-third of the obstacle-avoidance trials, either no obstacle appeared or the obstacle didn’t block the monkey’s path. In either case, the monkey could move directly to the target once he got the “go” cue.

The population vector corresponding to target position showed up during the third hold of the novel task, but only if there was an obstacle. If an obstacle appeared and the monkey had to move its hand in a curved trajectory to reach the target, the population vector lengthened and pointed at the target. If no obstacle appeared and the monkey could move directly to the target, the population vector was insignificant.
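A population vector of the classic kind (a weighted sum of each neuron’s preferred direction; this is a standard textbook sketch, not the paper’s exact analysis) can be illustrated with simulated cosine-tuned neurons. Each neuron fires most when the planned movement matches its preferred direction, and summing the preferred-direction unit vectors weighted by firing rate recovers the planned direction:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated cosine-tuned motor neurons: each has a preferred direction and
# fires at a rate that varies with the cosine of the angle between that
# direction and the planned movement direction.
n_neurons = 200
preferred = rng.uniform(0, 2 * np.pi, n_neurons)  # preferred directions (rad)
baseline, gain = 20.0, 15.0                       # firing rate parameters (spikes/s)

planned = np.deg2rad(135.0)  # the direction the monkey intends to move

rates = baseline + gain * np.cos(planned - preferred)
rates = rng.poisson(rates)   # noisy spike counts on a single trial

# Population vector: sum of preferred-direction unit vectors,
# weighted by each neuron's rate relative to its baseline.
weights = rates - baseline
pv = np.array([np.sum(weights * np.cos(preferred)),
               np.sum(weights * np.sin(preferred))])
decoded = np.rad2deg(np.arctan2(pv[1], pv[0])) % 360
print(f"planned 135.0 deg, decoded {decoded:.1f} deg")
```

In the study, it is the length and direction of such population vectors during the hold periods that reveal whether, and toward where, a reach is being planned.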

In other words, the monkeys were encoding the position of the target only when it did not lie along a direct path from the starting position and they had to keep its position “in mind” as they initially moved in the “wrong” direction.

“It’s all,” Moran says, “in the design of the task.”

And then some magic happens

Pearce’s initial approach to analyzing the data from the experiment was the standard one of combining the data from the two monkeys to get a cleaner picture.

“It wasn’t working,” Pearce says, “and I was frustrated because I couldn’t figure out why the data looked so inconsistent. So I separated the data by monkey, and then I could see, wow, they’re very different. They’re approaching this task differently and that’s kind of cool.”

The difference between the monkeys’ styles showed up during the second hold. At this point in the task, the target was visible, but the obstacle had not yet appeared.

The hyperactive monkey, called monkey H, couldn’t wait. His population vector during that hold showed that he was poised for a direct reach to the target. When the obstacle was then revealed, the population vector shortened and rotated to the direction he would need to move to avoid the obstacle.

The smooth operator, monkey G, in the meantime, idled through the second hold, waiting patiently for the obstacle to appear. Only when it was revealed did he begin to plan the direction he would move to avoid the obstacle.

Because he didn’t have to correct course, monkey G’s strategy was faster, so what advantage was it to monkey H to jump the gun? In the minority of trials where no obstacle appeared, monkey H approached the target more accurately than monkey G. Maybe monkey H is just cognitively adapted to a Whac-A-Mole world. And monkey G, when caught without a plan, was at a disadvantage.

Working with the monkeys, the scientists had been aware that they had very different personalities, but they had no idea this difference would show up in their neural recordings.

“That’s what makes this really interesting,” Moran says.

Link:

https://news.wustl.edu/news/Pages/24043.aspx

Journal Reference:

  1. Thomas M. Pearce and Daniel W. Moran. Strategy-Dependent Encoding of Planned Arm Movements in the Dorsal Premotor Cortex. Science, 2012; DOI: 10.1126/science.1220642

Citation:

Washington University in St. Louis (2012, July 19). Scientists read monkeys’ inner thoughts: Brain activity decoded while monkeys avoid obstacle to touch target. ScienceDaily. Retrieved July 21, 2012, from http://www.sciencedaily.com/releases/2012/07/120719141804.htm