link for the meeting: https://bit.ly/3qnDpxU
An influential model in speech and language processing (Giraud & Poeppel 2012) posits that a cascade of embedded neural oscillations allows us to process the hierarchy of linguistic units (phonemes, syllables, words, etc.) in parallel as we listen to speech unfold over time. The model is based on the observation that neural oscillations operate in frequency bands matching the time scales of linguistic units that are highly relevant for speech processing (e.g. gamma oscillations > 35 Hz correspond to (sub)phonemic units, theta oscillations 4-8 Hz to syllables, etc.). A large body of research now supports this model empirically in non-human animals and in human adults. However, the developmental origins of the model remain little known. This talk explores how the embedded oscillatory hierarchy emerges on the basis of prenatal and postnatal experience with speech, testing newborn and 6-month-old infants’ electrophysiological responses to speech in the native language as well as in unfamiliar languages.