Why Does a Vivid Memory ‘Feel So Real?’

ScienceDaily (July 23, 2012) — Neuroscientists have found strong evidence that vivid memory and directly experiencing the real moment can trigger similar brain activation patterns.

The study, led by Baycrest’s Rotman Research Institute (RRI), in collaboration with the University of Texas at Dallas, is one of the most ambitious and complex yet for elucidating the brain’s ability to evoke a memory by reactivating the parts of the brain that were engaged during the original perceptual experience. Researchers found that vivid memory and real perceptual experience share “striking” similarities at the neural level, although they are not “pixel-perfect” brain pattern replications.

The study appears online this month in the Journal of Cognitive Neuroscience, ahead of print publication.

“When we mentally replay an episode we’ve experienced, it can feel like we are transported back in time and re-living that moment again,” said Dr. Brad Buchsbaum, lead investigator and scientist with Baycrest’s RRI. “Our study has confirmed that complex, multi-featured memory involves a partial reinstatement of the whole pattern of brain activity that is evoked during initial perception of the experience. This helps to explain why vivid memory can feel so real.”

But vivid memory rarely fools us into believing we are in the real, external world — and that in itself offers a very powerful clue that the two cognitive operations don’t work exactly the same way in the brain, he explained.

In the study, Dr. Buchsbaum’s team used functional magnetic resonance imaging (fMRI), a powerful brain scanning technology that constructs computerized images of brain areas that are active when a person is performing a specific cognitive task. Twenty healthy adults (aged 18 to 36) were scanned while they watched 12 video clips, each nine seconds long, sourced from YouTube.com and Vimeo.com. The clips contained a diversity of content — such as music, faces, human emotion, animals, and outdoor scenery. Participants were instructed to pay close attention to each of the videos (which were repeated 27 times) and informed they would be tested on the content of the videos after the scan.

A subset of nine participants from the original group was then selected to complete intensive, structured memory training over several weeks, which required repeatedly practicing the mental replay of the videos they had watched in the first session. After the training, this group was scanned again as they mentally replayed each video clip. To trigger memory for a particular clip, participants were trained to associate a distinct symbolic cue with each one. Following each mental replay, participants pushed a button indicating, on a scale of 1 to 4 (1 = poor memory, 4 = excellent memory), how well they thought they had recalled that clip.

Dr. Buchsbaum’s team found “clear evidence” that patterns of distributed brain activation during vivid memory mimicked the patterns evoked during sensory perception when the videos were viewed — with a correspondence of 91% after a principal components analysis of all the fMRI imaging data.
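To make the idea of “pattern similarity” concrete, the sketch below is a toy illustration only — not the study’s actual pipeline or data. It treats each condition (viewing vs. recall) as a vector of voxel activation values and quantifies their resemblance with a Pearson correlation; all numbers are invented for illustration.

```python
# Toy illustration (not the study's analysis): measuring how closely a
# "recall" activation pattern mimics a "perception" pattern by correlating
# the two voxel-wise activation vectors. All values below are made up.
import math

def pattern_similarity(perception, recall):
    """Pearson correlation between two equal-length activation vectors."""
    n = len(perception)
    mp = sum(perception) / n
    mr = sum(recall) / n
    cov = sum((p - mp) * (r - mr) for p, r in zip(perception, recall))
    sp = math.sqrt(sum((p - mp) ** 2 for p in perception))
    sr = math.sqrt(sum((r - mr) ** 2 for r in recall))
    return cov / (sp * sr)

# Hypothetical activation values for six voxels during viewing vs. recall.
viewing = [0.8, 1.2, 0.3, 2.1, 1.7, 0.5]
recall  = [0.7, 1.0, 0.4, 1.9, 1.5, 0.6]

print(round(pattern_similarity(viewing, recall), 2))  # → 0.99
```

A correlation near 1 would indicate that recall reinstates much of the spatial pattern seen during perception, while the fact that it is not exactly 1 echoes the article’s point that the reinstatement is partial, not “pixel-perfect.”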

The so-called “hot spots,” or areas of greatest pattern similarity, occurred in sensory and motor association areas of the cerebral cortex — a region that plays a key role in memory, attention, perceptual awareness, thought, language and consciousness.

Dr. Buchsbaum suggested the imaging analysis used in his study could potentially add to the current battery of memory assessment tools available to clinicians. Brain activation patterns from fMRI data could offer an objective way of quantifying whether a patient’s self-report of their memory as “being good or vivid” is accurate or not.

The study was funded with grants from the Canadian Institutes of Health Research and the Natural Sciences and Engineering Research Council of Canada.



Journal Reference:

  1. Bradley R. Buchsbaum, Sabrina Lemire-Rodger, Candice Fang, Hervé Abdi. The Neural Basis of Vivid Memory Is Patterned on Perception. Journal of Cognitive Neuroscience, 2012; DOI: 10.1162/jocn_a_00253


Baycrest Centre for Geriatric Care (2012, July 23). Why does a vivid memory ‘feel so real?’. ScienceDaily. Retrieved July 25, 2012, from http://www.sciencedaily.com/releases/2012/07/120723134745.htm

Infants’ Recognition of Speech More Sophisticated Than Previously Known

ScienceDaily (July 17, 2012) — The ability of infants to recognize speech is more sophisticated than previously known, researchers in New York University’s Department of Psychology have found. Their study, which appears in the journal Developmental Psychology, showed that infants, as early as nine months old, could make distinctions between speech and non-speech sounds in both humans and animals.

“Our results show that infant speech perception is resilient and flexible,” explained Athena Vouloumanos, an assistant professor at NYU and the study’s lead author. “This means that our recognition of speech is more refined at an earlier age than we’d thought.”

It is well-known that adults’ speech perception is fine-tuned — they can detect speech among a range of ambiguous sounds. But much less is known about the capability of infants to make similar assessments. Understanding when these abilities emerge would shed new light on how early in life we develop the ability to recognize speech.

To gauge the ability to perceive speech at an early age, the researchers examined the responses of infants, approximately nine months old, to recorded human and parrot speech and non-speech sounds. Human (an adult female voice) and parrot speech sounds included the words “truck,” “treat,” “dinner,” and “two.” The adult non-speech sounds were whistles and a clearing of the throat, while the parrot non-speech sounds were squawks and chirps. The recorded parrot speech sounds were those of Alex, an African Gray parrot that had the ability to talk and reason and whose behaviors were studied by psychology researcher Irene Pepperberg.

Since infants cannot verbally communicate their recognition of speech, the researchers employed a method commonly used to measure this process: infants look longer at what they find interesting or unusual. Under this method, looking longer at a visual paired with a sound may be interpreted as a reflection of recognition. In this study, sounds were paired with a series of visuals: a checkerboard-like image, adult female faces, and a cup.

The results showed that infants listened longer to human speech than to human non-speech sounds regardless of the visual stimulus, revealing an ability to recognize human speech independent of the context.

Their findings on non-human speech were more nuanced. When paired with human-face visuals or human artifacts like cups, the infants listened longer to parrot speech than to parrot non-speech sounds, such that their preference for parrot speech was similar to their preference for human speech sounds. However, this did not occur in the presence of other visual stimuli. In other words, infants were able to distinguish animal speech from non-speech, but only in some contexts.

“Parrot speech is unlike human speech, so the results show infants have the ability to detect different types of speech, even if they need visual cues to assist in this process,” explained Vouloumanos.

The study’s other co-author was Hanna Gelfand, an undergraduate at NYU’s College of Arts and Science at the time of the study and currently a graduate student in the San Diego State University/University of California, San Diego Joint Doctoral Program in Language and Communicative Disorders.


Journal Reference:

  1. Athena Vouloumanos, Hanna M. Gelfand. Infant Perception of Atypical Speech Signals. Developmental Psychology, 2012; DOI: 10.1037/a0029055


New York University (2012, July 17). Infants’ recognition of speech more sophisticated than previously known. ScienceDaily. Retrieved July 18, 2012, from http://www.sciencedaily.com/releases/2012/07/120717100050.htm

The Eyes Don’t Have It: New Research Into Lying and Eye Movements

ScienceDaily (July 11, 2012) — Widely held beliefs about Neuro-Linguistic Programming and lying are unfounded.

Proponents of Neuro-Linguistic Programming (NLP) have long claimed that it is possible to tell whether a person is lying from their eye movements.  Research published July 11 in the journal PLoS ONE reveals that this claim is unfounded, with the authors calling on the public and organisations to abandon this approach to lie detection.

For decades many NLP practitioners have claimed that when a person looks up to their right they are likely to be lying, whilst a glance up to their left is indicative of telling the truth.

Professor Richard Wiseman (University of Hertfordshire, UK) and Dr Caroline Watt (University of Edinburgh, UK) tested this idea by filming volunteers as they either lied or told the truth, and then carefully coded their eye movements.  In a second study another group of participants was asked to watch the films and attempt to detect the lies on the basis of the volunteers’ eye movements.

“The results of the first study revealed no relationship between lying and eye movements, and the second showed that telling people about the claims made by NLP practitioners did not improve their lie detection skills,” noted Wiseman.

A final study involved moving out of the laboratory and was conducted in collaboration with Dr Leanne ten Brinke and Professor Stephen Porter from the University of British Columbia, Canada.  The team analysed films of liars and truth tellers from high profile press conferences in which people were appealing for missing relatives or claimed to have been the victim of a crime.

“Our previous research with these films suggests that there are significant differences in the behaviour of liars and truth tellers,” noted Dr Leanne ten Brinke. “However, the alleged tell-tale pattern of eye movements failed to emerge.”

“A large percentage of the public believes that certain eye movements are a sign of lying, and this idea is even taught in organisational training courses. Our research provides no support for the idea and so suggests that it is time to abandon this approach to detecting deceit,” remarked Watt.


Journal Reference:

  1. Richard Wiseman, Caroline Watt, Leanne ten Brinke, Stephen Porter, Sara-Louise Couper, Calum Rankin. The Eyes Don’t Have It: Lie Detection and Neuro-Linguistic Programming. PLoS ONE, 2012; 7 (7): e40259 DOI: 10.1371/journal.pone.0040259