The ability to perceive spoken language is a faculty that many of us take for granted, yet one we rely on almost every waking moment of every day. Speech perception happens so efficiently and seems so effortless that we rarely give much thought to the complex processes involved. I am interested in the sensory, cognitive, and neurobiological mechanisms that underlie the perception of speech. The elegance and complexity of such a system is an absolute marvel to me, and the goal of my research program is to better understand how it works. Speech perception draws on a number of different neural and cognitive systems: from the basic sensory mechanisms required to transduce and encode sound itself, to the higher-level cognitive mechanisms required to turn it into a complex mental representation. My research program investigates three fundamental questions: What is the relationship between the sensory encoding of speech and the percept it evokes (more succinctly, how do we get from sensation to perception)? To what degree are these mechanisms specific to speech and language, as opposed to domain-general perceptual abilities? And how malleable are these mechanisms, and how can they be enhanced? As a cognitive neuroscientist, I examine these questions using techniques from multiple disciplines, including cognitive psychology, behavioral neuroscience, linguistics, and the speech and hearing sciences. Each informs my work and has provided me with a repertoire of skills that have been valuable both in pursuing a productive research program and in engaging, advising, and training student researchers in psychology, neuroscience, and linguistics.
Perceptual Learning of Degraded Auditory Signals
A tremendous challenge for our perceptual systems is to create a stable and reliable percept from a highly variable input signal. Even when perceiving speech in our native language, we must cope with talker variability (accent, dialect, speech pathologies), environmental variability (competing talkers, background noise, variability in room and environmental acoustics), and signal variability (fast or slow speaking rate, signal loss, spectral filtering and degradation). The ability of our perceptual systems to withstand such variability relies on rapid perceptual learning. I am interested in understanding how lower-level sensory mechanisms interact with higher-level cognitive mechanisms during perceptual learning, and in the cognitive and perceptual skills that develop as a result. To this end, one line of my research focuses on the perceptual learning of degraded speech by normal-hearing (NH) listeners presented with acoustic simulations of a cochlear implant.
An exciting (and perhaps unique) aspect of my research program is its applicability to clinical practice, namely to cochlear implant (CI) users themselves. Although perceptual learning is critical to the development of accurate perceptual abilities following cochlear implantation, most adult CI users do not undergo any formal training or rehabilitation. As a result, there is significant variability across CI users in their proficiency with the device, and therefore in their psychological experience of and satisfaction with their implant. Over the past four years, we have developed and are currently testing a novel, multi-day, targeted training paradigm for new CI users. This paradigm targets both higher-level (contextual, syntactic) and lower-level (word identification, phoneme discrimination) linguistic aspects of speech, in addition to paralinguistic aspects of speech (talker identification) and domain-general auditory abilities (environmental sound identification). My goal for this training paradigm is to help new CI users develop a core set of cognitive and perceptual abilities that enhances performance in real-world listening situations, such as listening to speech in noise, a notoriously difficult condition for CI users.
Audiovisual Speech Perception
An additional line of research, which I have developed at St. Olaf, focuses on the influence of visual speech information on speech perception. Vision can have a significant impact on speech perception, but the mechanisms by which it does so are not well understood. This research examines the level at which visual information becomes integrated (pre- or post-linguistically) and whether visual speech information is a natural part of speech perception or a supplementary, second-order addition used when auditory information is incomplete or conflicting. Understanding how visual information is integrated will inform us about the higher-level cognitive mechanisms involved in speech perception. Rather than focusing exclusively on audition, however, this line of inquiry examines multisensory integration to determine when and where in processing the auditory and visual streams merge.
I am also keenly interested in applying a variety of psychophysiological techniques to investigate the neural bases of speech perception. I have experience using fNIRS, EEG, and ERP methods both in my laboratory courses and in my own empirical research, and I have developed additional lines of research using eye-tracking and pupillometry.