Multisensory Processing of Human Speech Measured with msec and mm Resolution
Clinical Science R&D
January 2020 -
View full abstract and other project information on NIH RePORTER. Excerpt:
Face-to-face communication is the most important form of human interaction. When conversing, we receive auditory information from the talker's voice and visual information from the talker's face. Combining these two sources of information is difficult, as they arrive quickly (about 5 syllables per second) and the correspondence between the vocal sounds and the mouth movements made by the talker is complex. We propose to study the neural mechanisms that underlie multisensory (auditory and visual)...