When we sign, we build phrases with neural mechanisms similar to those we use when we speak

Signed and spoken languages differ in significant ways, yet the underlying neural processes we use to build complex expressions are quite similar in both, a team of researchers has found.

“This research shows for the first time that despite obvious physical differences in how signed and spoken languages are produced and comprehended, the neural timing and localization of the planning of phrases is comparable between American Sign Language and English,” explains lead author Esti Blanco-Elorrieta, a doctoral student in New York University’s Department of Psychology and NYU Abu Dhabi Institute.

The research is reported in the latest issue of the journal Scientific Reports.

“Although there are many reasons to believe that signed and spoken languages should be neurobiologically quite similar, evidence of overlapping computations at this level of detail is still a striking demonstration of the fundamental core of human language,” adds senior author Liina Pylkkanen, a professor in New York University’s Department of Linguistics and Department of Psychology.

The study also included Itamar Kastner, an NYU doctoral student at the time of the study and now at Berlin’s Humboldt University, and Karen Emmorey, a professor at San Diego State University and a leading expert on sign language, who adds, “We can only discover what is universal to all human languages by studying sign languages.”

Past research has shown that structurally, signed and spoken languages are fundamentally similar. However, less clear is whether the same circuitry in the brain underlies the construction of complex linguistic structures in sign and speech.

To address this question, the scientists studied the production of two-word phrases in both American Sign Language (ASL) and spoken English, testing deaf ASL signers residing in and around New York and hearing English speakers living in Abu Dhabi.

Signers and speakers viewed the same pictures and named them with semantically identical expressions. To gauge the participants' neural activity during this task, the researchers used magnetoencephalography (MEG), a technique that maps neural activity by recording the magnetic fields generated by the brain's electrical currents.

Despite the different articulators involved (the vocal tract for speech, the hands for sign), phrase building engaged the same brain regions, the left anterior temporal and ventromedial cortices, with similar timing in both signers and speakers.

The researchers point out that this neurobiological similarity between sign and speech therefore goes beyond broad anatomical overlap to a specific computation: the same brain regions are engaged at the same time to combine words or signs into more complex expressions.

The research was supported by grants from the National Science Foundation (BCS-1221723, to LP) and the National Institutes of Health (R01-DC010997, to KE), by the NYUAD Institute, and by a La Caixa Foundation Fellowship for Post-Graduate Studies (to EBE).