Speech clinical trials at UCSF
3 in progress, 0 open to eligible people
Functional Organization of the Superior Temporal Gyrus for Speech Perception
Accepting new patients by invitation only
The basic mechanisms underlying comprehension of spoken language are still largely unknown. Over the past decade, the study team has gained new insights into how the human brain extracts the most fundamental linguistic elements (consonants and vowels) from a complex and highly variable acoustic signal. However, the next set of questions pertains to the sequencing of those auditory elements and how they are integrated with other features, such as the amplitude envelope of speech. Further investigation of the cortical representation of speech sounds can likely shed light on these fundamental questions. Previous research has implicated the superior temporal cortex in the processing of speech sounds, but little is known about how these sounds are linked together into the perceptual experience of words and continuous speech. The overall goal is to determine how the brain extracts linguistic elements from a complex acoustic speech signal, toward better understanding and remediating human language disorders.
San Francisco, California
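For readers curious about the signal itself: the "amplitude envelope of speech" mentioned in the description above is the slow fluctuation in acoustic energy that tracks syllables. Below is a minimal sketch of one common way to compute it, assuming a mono WAV recording; the file name, filter order, and 10 Hz cutoff are illustrative choices, not details of this study's methods.

import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, filtfilt, hilbert

# Load a mono speech recording and normalize it to [-1, 1].
rate, audio = wavfile.read("speech.wav")   # "speech.wav" is a placeholder
audio = audio.astype(np.float64)
audio /= np.max(np.abs(audio))

# The magnitude of the analytic signal (via the Hilbert transform) gives
# the instantaneous amplitude of the waveform.
envelope = np.abs(hilbert(audio))

# Low-pass filtering at ~10 Hz keeps only the slow, syllable-rate energy
# fluctuations that constitute the amplitude envelope.
b, a = butter(4, 10 / (rate / 2), btype="low")
envelope = filtfilt(b, a, envelope)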
Neural Mechanisms for Stopping Ongoing Speech Production
Accepting new patients by invitation only
Speech and communication disorders often result in aberrant control of the timing of speech production, such as stopping at points where speech should continue. During normal speech, the ability to stop when necessary is important for maintaining turn-taking in a smooth conversation. Existing studies have largely investigated the neural circuits that support the preparation and generation of speech sounds: activity in prefrontal and premotor cortical areas is believed to support high-level speech control, while activity in the ventral part of the sensorimotor cortex controls articulator (e.g., lip, jaw, tongue) movements. However, little is known about the neural mechanism controlling a sudden, voluntary stop of speech. The traditional view attributes this to a disengagement of motor signals, while recent evidence suggests there may be an inhibitory control mechanism. This gap in knowledge limits our understanding of disorders such as stuttering and aphasia, in which deficits in speech timing control are among the common symptoms. The overall goal of this study is to determine how the brain controls the stopping of ongoing speech production, to deepen our understanding of speech and communication in normal and impaired conditions.
San Francisco, California
Neural Coding of Speech Across Human Languages
Accepting new patients by invitation only
The overall goal of this study is to reveal the fundamental neural mechanisms that underlie comprehension across human spoken languages. An understanding of how speech is coded in the brain has significant implications for the development of new diagnostic and rehabilitative strategies for language disorders (e.g., aphasia, dyslexia, autism). The basic mechanisms underlying comprehension of spoken language are unknown. Researchers are only beginning to understand how the human brain extracts the most fundamental linguistic elements (consonants and vowels) from a complex and highly variable acoustic signal. Traditional theories have posited a 'universal' phonetic inventory shared by all humans, but this has been challenged by newer theories holding that each language has its own unique and specialized code. An investigation of the cortical representation of speech sounds across languages can likely shed light on this fundamental question. Previous research has implicated the superior temporal cortex in the processing of speech sounds, but most of this work has been carried out in English. Recording neural activity directly from the cortical surface in individuals with different language experience is a promising approach, since it provides both high spatial and high temporal resolution. This study will examine the mechanisms of phonetic encoding using neurophysiological recordings obtained during neurosurgical procedures. High-density electrode arrays, advanced signal processing, and direct electrocortical stimulation will be used to unravel both local and population-level encoding of speech sounds in the lateral temporal cortex. The study will also examine the neural encoding of speech in patients who are monolingual or bilingual in Mandarin, Spanish, or English, the most widely spoken languages worldwide, which feature important contrastive differences in pitch, formants, and temporal envelope. A cross-linguistic approach is critical for a true understanding of language, and it also broadens diversity and inclusion in the neuroscience of language.
San Francisco, California
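As a technical aside: a measure widely used in electrocorticography (ECoG) research of this kind is the high-gamma (roughly 70-150 Hz) analytic amplitude of each electrode's signal. The sketch below shows that extraction step on synthetic data; the band edges, sampling rate, and channel count are assumptions for illustration, not a description of this study's actual pipeline.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rate = 400                          # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / rate)      # 10 s of data
ecog = np.random.randn(16, t.size)  # 16 synthetic electrode channels

# Band-pass each channel to the high-gamma range, then take the magnitude
# of the analytic signal as a time-resolved estimate of band amplitude.
b, a = butter(4, [70 / (rate / 2), 150 / (rate / 2)], btype="band")
high_gamma = np.abs(hilbert(filtfilt(b, a, ecog, axis=1), axis=1))

# Z-score each channel so activity is expressed relative to its own baseline.
high_gamma -= high_gamma.mean(axis=1, keepdims=True)
high_gamma /= high_gamma.std(axis=1, keepdims=True)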
Our lead scientists for Speech research studies include Edward F. Chang, MD, and Lingyun Zhao, PhD.