“Does the Brain Flicker? Evidence for time-division multiplexing in the brain, and implications for cognition”

BME Seminar

Friday, February 10, 2017
ITE 336 at Storrs & Videoconferenced to UCHC CG-079B
12:00-12:50 pm

Presented By: Dr. Jennifer Groh, Professor of Neurobiology, Psychology & Neuroscience, Duke University

Abstract: Most of what we know about brain representations comes from experiments in which only a single stimulus is presented at a time. However, it is not at all clear how to extend the findings from the single-stimulus case to the natural situation with multiple stimuli at once, particularly when each stimulus can activate an overlapping population of neurons. One possible solution to this problem is time-division multiplexing, a telecommunications strategy in which different signals are interleaved across time. We have developed a statistical method for assessing whether neural responses exhibit such interleaved signals (a collaboration with Surya Tokdar). I’ll present evidence from two very different cases: how two sounds are represented in the primate inferior colliculus (Groh lab) and how two visual face stimuli are represented in IT face patches (a collaboration with Winrich Freiwald).
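As a loose illustration of the telecommunications concept the abstract borrows (not the authors' statistical method, and all names here are invented for the sketch), time-division multiplexing lets two signals share one channel by alternating time slots:

```python
# Toy time-division multiplexing (TDM): two signals share one channel
# by alternating slots. Purely illustrative; not from the talk.

def tdm_interleave(signal_a, signal_b, slot_len=1):
    """Interleave two equal-length signals, alternating slots of
    slot_len samples on a single shared channel."""
    channel = []
    for i in range(0, len(signal_a), slot_len):
        channel.extend(signal_a[i:i + slot_len])
        channel.extend(signal_b[i:i + slot_len])
    return channel

def tdm_demultiplex(channel, slot_len=1):
    """Recover the two original signals from the shared channel."""
    a, b = [], []
    for i in range(0, len(channel), 2 * slot_len):
        a.extend(channel[i:i + slot_len])
        b.extend(channel[i + slot_len:i + 2 * slot_len])
    return a, b

sound_1 = [1, 1, 1, 1]   # stand-in for responses driven by stimulus A
sound_2 = [9, 9, 9, 9]   # stand-in for responses driven by stimulus B
channel = tdm_interleave(sound_1, sound_2)
# channel == [1, 9, 1, 9, 1, 9, 1, 9]: each stimulus "owns" alternating slots
assert tdm_demultiplex(channel) == (sound_1, sound_2)
```

The analogy in the talk is that an overlapping neural population might likewise alternate between representing one stimulus and the other across time, rather than averaging them together.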

Biography: Jennifer Groh is a Professor of Neurobiology and Psychology & Neuroscience at Duke University. She is the author of the book Making Space: How the Brain Knows Where Things Are (Harvard University Press, 2014), and runs a Coursera course entitled “The Brain and Space”. Her research spans the computational and experimental domains, and concerns the neural algorithms that underlie spatial processing within and across the visual, auditory, and oculomotor systems. For example, her research has demonstrated interactions between visual/eye-movement and auditory signals throughout the auditory pathway, from the eardrum to the inferior colliculus to auditory cortex, interactions hypothesized to support the computation of auditory space in a visual/eye-centered reference frame. For more information, see her website www.duke.edu/~jmgroh or follow her on Twitter @jmgrohneuro.