Kathleen Gustafson, a research associate professor in the Department of Neurology at the University of Kansas Medical Center’s Hoglund Brain Imaging Center (right), with a mother-to-be in the fetal biomagnetometer.
Credit: KU News Service photo

A month before they are born, fetuses carried by American mothers-to-be can distinguish between speech in English and speech in Japanese.

Using non-invasive sensing technology at the University of Kansas Medical Center for the first time for this purpose, a group of researchers from KU’s Department of Linguistics has demonstrated this in-utero language discrimination. Their study, published in the journal NeuroReport, has implications for fetal research in other fields, the lead author says.

“Research suggests that human language development may start really early — a few days after birth,” said Utako Minai, associate professor of linguistics and the team leader on the study. “Babies a few days old have been shown to be sensitive to the rhythmic differences between languages. Previous studies have demonstrated this by measuring changes in babies’ behavior; for example, by measuring whether babies change the rate of sucking on a pacifier when the speech changes from one language to a different language with different rhythmic properties,” Minai said.

“This early discrimination led us to wonder when children’s sensitivity to the rhythmic properties of language emerges, including whether it may in fact emerge before birth,” Minai said. “Fetuses can hear things, including speech, in the womb. It’s muffled, like the adults talking in a ‘Peanuts’ cartoon, but the rhythm of the language should be preserved and available for the fetus to hear, even though the speech is muffled.”


Minai said there was already a study suggesting that fetuses could discriminate between different types of language based on rhythmic patterns, but none using the more accurate instrument available at the Hoglund Brain Imaging Center at KU Medical Center: a fetal biomagnetometer, which records a magnetocardiogram (MCG).

“The previous study used ultrasound to see whether fetuses recognized changes in language by measuring changes in fetal heart rate,” Minai explained. “The speech sounds that were presented to the fetus in the two different languages were spoken by two different people in that study. They found that the fetuses were sensitive to the change in speech sounds, but it was not clear if the fetuses were sensitive to the differences in language or the differences in speaker, so we wanted to control for that factor by having the speech sounds in the two languages spoken by the same person.”

Two dozen women, averaging roughly eight months pregnant, were examined using the MCG.

Kathleen Gustafson, a research associate professor in the Department of Neurology at the medical center’s Hoglund Brain Imaging Center, was part of the investigator team.

“We have one of two dedicated fetal biomagnetometers in the United States,” Gustafson said. “It fits over the maternal abdomen and detects tiny magnetic fields that surround electrical currents from the maternal and fetal bodies.”

That includes, Gustafson said, heartbeats, breathing and other body movements.

“The biomagnetometer is more sensitive than ultrasound to the beat-to-beat changes in heart rate,” she said. “Obviously, the heart doesn’t hear, so if the baby responds to the language change by altering heart rate, the response would be directed by the brain.”

Which is exactly what the recent study found.
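To make the “beat-to-beat” idea concrete: once individual heartbeats have been located in the biomagnetometer trace, each interval between consecutive beats yields one instantaneous heart-rate sample. The Python sketch below is a minimal illustration under that assumption; the function name and the simulated numbers are ours, not the study’s.

```python
import numpy as np

def beat_to_beat_heart_rate(beat_times_s):
    """Convert beat timestamps (in seconds) to instantaneous heart rate (bpm).

    Each interval between consecutive beats gives one heart-rate sample,
    which is what "beat-to-beat" sensitivity refers to.
    """
    rr_intervals = np.diff(beat_times_s)  # seconds between successive beats
    return 60.0 / rr_intervals            # beats per minute

# Illustrative only: a fetal heart near 140 bpm with slight variability.
rng = np.random.default_rng(0)
beat_times = np.cumsum(rng.normal(60.0 / 140.0, 0.005, size=50))
print(beat_to_beat_heart_rate(beat_times)[:5])
```

Because every pair of beats produces a sample, this kind of measurement can register small, rapid heart-rate changes that a coarser averaged reading would smooth away.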

“The fetal brain is developing rapidly and forming networks,” Gustafson said. “The intrauterine environment is a noisy place. The fetus is exposed to maternal gut sounds, her heartbeats and voice, as well as external sounds. Without exposure to sound, the auditory cortex wouldn’t get enough stimulation to develop properly. This study gives evidence that some of that development is linked to language.”

Minai had a bilingual speaker make two recordings, one each in English and Japanese, to be played in succession to the fetus. English and Japanese are argued to be rhythmically distinct: English speech has a dynamic rhythmic structure resembling Morse code signals, while Japanese has a more regular-paced rhythmic structure.
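The rhythmic contrast Minai describes is often quantified in linguistics with the normalized Pairwise Variability Index (nPVI), which scores how strongly successive speech intervals alternate between long and short. The study itself does not report such a metric; the sketch below, with made-up durations, merely illustrates the contrast between a Morse-code-like rhythm and a regular-paced one.

```python
import numpy as np

def npvi(durations_ms):
    """Normalized Pairwise Variability Index over a sequence of durations.

    Higher values mean stronger long/short alternation (English-like rhythm);
    lower values mean more even timing (Japanese-like rhythm).
    """
    d = np.asarray(durations_ms, dtype=float)
    pairwise = np.abs(np.diff(d)) / ((d[:-1] + d[1:]) / 2.0)
    return 100.0 * pairwise.mean()

# Made-up vowel durations (ms), purely illustrative:
english_like = [60, 150, 55, 170, 65, 160]   # alternating long and short
japanese_like = [90, 95, 88, 92, 91, 94]     # nearly even

print(npvi(english_like))    # high: uneven, Morse-code-like rhythm
print(npvi(japanese_like))   # low: regular-paced rhythm
```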

Sure enough, fetal heart rates changed when the fetuses heard the unfamiliar, rhythmically distinct language (Japanese) after a passage of English speech, while their heart rates did not change when they were presented with a second passage of English instead of a passage in Japanese.
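One straightforward way to test such a result is a paired comparison: for each fetus, the heart-rate change after the language switch (English to Japanese) is compared with the change after the control switch (English to English). The study’s actual statistical model is not described in this article, and the numbers below are invented; this is only a sketch of the logic.

```python
import numpy as np
from scipy import stats

# Invented per-fetus heart-rate changes in bpm, one value per condition;
# these are not the study's data.
delta_language_switch = np.array([2.3, 1.8, 2.9, 1.1, 2.5, 1.6, 2.0, 2.7])
delta_control_switch  = np.array([0.3, -0.2, 0.5, 0.1, -0.4, 0.2, 0.0, 0.4])

# Paired t-test: each fetus serves as its own control.
t_stat, p_value = stats.ttest_rel(delta_language_switch, delta_control_switch)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```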

“The results came out nicely, with strong statistical support,” Minai said. “These results suggest that language development may indeed start in utero. Fetuses are tuning their ears to the language they are going to acquire even before they are born, based on the speech signals available to them in utero. Pre-natal sensitivity to the rhythmic properties of language may provide children with one of the very first building blocks in acquiring language.”

“We think it is an extremely exciting finding for basic science research on language. We can also see the potential for this finding to apply to other fields,” Minai said.


Story Source: Materials provided by the University of Kansas. Note: Content may be edited for style and length.

Journal Reference:
Utako Minai, Kathleen Gustafson, Robert Fiorentino, Allard Jongman, Joan Sereno. Fetal rhythm-based language discrimination: a biomagnetometry study. NeuroReport, 2017; 28(10): 561. DOI: 10.1097/WNR.0000000000000794