The Neural Coding of Speech Across Human Languages
The overall goal of this study is to reveal the fundamental neural mechanisms that underlie comprehension across human spoken languages. An understanding of how speech is coded in the brain has significant implications for the development of new diagnostic and rehabilitative strategies for language disorders (e.g., aphasia, dyslexia, and autism). The basic mechanisms underlying comprehension of spoken language remain unknown: researchers are only beginning to understand how the human brain extracts the most fundamental linguistic elements (consonants and vowels) from a complex and highly variable acoustic signal. Traditional theories have posited a 'universal' phonetic inventory shared by all humans, but newer theories challenge this view, proposing instead that each language has its own unique and specialized code. An investigation of the cortical representation of speech sounds across languages can shed light on this fundamental question. Previous research has implicated the superior temporal cortex in the processing of speech sounds, but nearly all of this work has been carried out in English. Recording neural activity directly from the cortical surface in individuals with different language experience is a promising approach, since it provides both high spatial and high temporal resolution. This study will examine the mechanisms of phonetic encoding using neurophysiological recordings obtained during neurosurgical procedures. High-density electrode arrays, advanced signal processing, and direct electrocortical stimulation will be used to unravel both local and population-level encoding of speech sounds in the lateral temporal cortex. The study will also examine the neural encoding of speech in patients who are monolingual or bilingual speakers of Mandarin, Spanish, and English, three of the most widely spoken languages worldwide, which feature important contrastive differences in pitch, formants, and temporal envelope.
A cross-linguistic approach is critical for a true understanding of language, and it also advances diversity and inclusion in the neuroscience of language.
Experimental approaches with significantly greater spatial and temporal resolution are necessary to directly resolve the contrastive encoding of speech sounds at both the local and population level. This study proposes an innovative methodological approach using customized intracranial high-density electrode arrays to record neural activity directly from nonprimary auditory cortex in patients undergoing clinical neurosurgical procedures (acute intraoperative and chronic extraoperative). This approach overcomes obstacles in traditional neuroimaging by offering high signal-to-noise recordings, unprecedented spatiotemporal resolution, and a large number of simultaneously recorded cortical sites in awake, behaving subjects. The research team will leverage the diversity of languages spoken by patients treated at the large-volume epilepsy and tumor brain-mapping programs at the University of California, San Francisco. They will examine cortical responses to speech stimuli (natural speech corpora and control tokens) in Spanish, Mandarin, and English speakers (monolingual and bilingual), focusing on encoding models in three fundamental domains of acoustic-phonetic cues that are present in all languages: pitch, formants, and amplitude envelope. The aims of this study are to determine how pitch cues support lexical tone processing in Mandarin (Aim 1), how vowels are cortically represented in Spanish and English (Aim 2), and how the speech amplitude envelope is encoded in Spanish and English (Aim 3). Together, these aims will elucidate mechanistic principles of speech encoding in the human auditory cortex and clarify what is shared and what differs across human spoken languages. Abnormalities in these fundamental processes have been implicated in a host of communication disorders, including dyslexia, developmental language disorder, central hearing loss, and aphasia.
These results should substantially inform current theories of speech processing and will therefore have significant implications for understanding and remediating human communication disorders across different languages.
Epilepsy, Brain Tumor (Brain Neoplasms), Speech, Bilingualism, Electrocorticography (ECoG) recording during speech tasks
You can join if…
Open to people ages 18-70
- Participants with epilepsy or brain tumors at UCSF undergoing surgical electrode implantation for seizure localization or for speech and language mapping and
- Participants with electrodes implanted in at least two regions of interest who are willing and able to cooperate with study tasks.
You CAN'T join if...
- Participants who lack capacity or decline to provide informed consent,
- Participants who have significant cerebral lesions, or
- Participants with cognitive deficits that preclude reliable completion of study tasks.
- University of California, San Francisco
accepting new patients
San Francisco California 94143 United States
Lead Scientist at UCSF
- Edward F Chang, MD
Professor, Neurological Surgery. Authored (or co-authored) 268 research publications.
If you do not hear from the study team, please call 888-689-8273 and tell them you’re interested in study number NCT05014841.