Speech perception in both normally hearing and deaf individuals involves an integrative process combining auditory and lip-reading information. During my PhD I studied this integrative process through Cued Speech perception. Cued Speech (CS) is a manual system originally designed to help deaf individuals perceive speech through the visual modality alone, but without the ambiguity inherent in lip-reading (Cornett, 1967). In CS, each syllable is uttered together with a complementary gesture called a manual cue. In its French version, vowels are coded with five different hand placements near the face, and consonants are coded with eight handshapes. Within this system, labial and manual information each remain ambiguous when taken as standalone input sources. Perceivers therefore have to combine both types of information in order to form a single coherent percept.
In the first part of my thesis I studied the integration of manual and labial information by deaf as well as normally hearing individuals. In the second part, since many congenitally deaf children are nowadays fitted with a cochlear implant (CI), I examined how audio-visual (AV) integration is affected by the presence of manual cues, and on which type of information (auditory, labial, or manual) CS receivers primarily rely.
Recently I joined the GIPSA-Lab (Grenoble, France) as a post-doctoral researcher. In my new project I study the perceptuo-motor link in speech processing through Cued Speech perception. Because I am convinced that it is important to bring CS users and researchers together, I collaborate frequently with the ALPC and LPC Belgique associations.