Lexical processing in sign language: A visual mismatch negativity study

Qinli DENG, Feng GU, Shelley Xiuli TONG

Research output: Contribution to journal › Article › peer-review


Abstract

Event-related potential studies of spoken and written language show automatic access to auditory and visual words, as indexed by the mismatch negativity (MMN) or visual MMN (vMMN). The present study examined whether the same automatic lexical processing occurs in a visual-gestural language, i.e., Hong Kong Sign Language (HKSL). Using a classic visual oddball paradigm, deaf signers and hearing non-signers were presented with a sequence of static images representing HKSL lexical signs and non-signs. Compared with hearing non-signers, deaf signers exhibited an enhanced vMMN elicited by the lexical signs at around 230 ms, and a larger P1–N170 complex evoked by both lexical sign and non-sign standards over the parieto-occipital area in the early time window between 65 ms and 170 ms. These findings indicate that deaf signers process lexical signs implicitly and that neural response differences between deaf signers and hearing non-signers emerge at an early stage of sign processing. Copyright © 2020 Elsevier Ltd. All rights reserved.

Original language: English
Article number: 107629
Journal: Neuropsychologia
Volume: 148
Early online date: Oct 2020
DOIs: https://doi.org/10.1016/j.neuropsychologia.2020.107629
Publication status: Published - Nov 2020

Citation

Deng, Q., Gu, F., & Tong, S. X. (2020). Lexical processing in sign language: A visual mismatch negativity study. Neuropsychologia, 148, Article 107629. https://doi.org/10.1016/j.neuropsychologia.2020.107629

Keywords

  • Visual mismatch negativity (vMMN)
  • Deaf signers
  • Lexical processing
  • Hong Kong Sign Language (HKSL)
