In:
Proceedings of the National Academy of Sciences, Vol. 119, No. 32 (2022-08-09)
Abstract:
Understanding spoken language requires transforming ambiguous acoustic streams into a hierarchy of representations, from phonemes to meaning. It has been suggested that the brain uses prediction to guide the interpretation of incoming input. However, the role of prediction in language processing remains disputed, with disagreement about both the ubiquity and representational nature of predictions. Here, we address both issues by analyzing brain recordings of participants listening to audiobooks, and using a deep neural network (GPT-2) to precisely quantify contextual predictions. First, we establish that brain responses to words are modulated by ubiquitous predictions. Next, we disentangle model-based predictions into distinct dimensions, revealing dissociable neural signatures of predictions about syntactic category (parts of speech), phonemes, and semantics. Finally, we show that high-level (word) predictions inform low-level (phoneme) predictions, supporting hierarchical predictive processing. Together, these results underscore the ubiquity of prediction in language processing, showing that the brain spontaneously predicts upcoming language at multiple levels of abstraction.
Type of Medium:
Online Resource
ISSN:
0027-8424, 1091-6490
DOI:
10.1073/pnas.2201968119
Language:
English
Publisher:
Proceedings of the National Academy of Sciences
Publication Date:
2022
ZDB ID:
209104-5, 1461794-8
SSG:
11, 12