AI/Computational Modeling Meet & Speak: Titles and Abstracts

Introduction: Watch the video

 

1. Gerry Altmann
Director, IBACS; Psychological Sciences

Talk Title: Understanding events during language comprehension: What Recurrent Neural Networks can teach us about human understanding
Abstract: I shall describe how an RNN trained on Wikipedia can learn dependencies across sentences that allow it to anticipate how events ‘unfold’ across time, and how the individual objects in those events change through time. As a model of how the human mind encodes events, the work suggests that certain optimizations that ‘fall out’ of its dynamical systems architecture map onto some of the engineered solutions to computational tractability that pervade much of modern AI. Watch the video 
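
As a concrete picture of the basic mechanics, the sketch below trains a tiny recurrent network in PyTorch to predict the next token in short, invented "event" sequences; even at this scale, the hidden state must carry information about the object forward in order to anticipate its changed state. This is a minimal illustration only, not the Wikipedia-trained model from the talk, and the corpus, layer sizes, and training settings are all made up.

    # Minimal next-token RNN on toy "event" sequences (illustration only).
    import torch
    import torch.nn as nn

    corpus = [
        "chop onion -> onion chopped",
        "freeze juice -> juice frozen",
        "break vase -> vase broken",
        "cook egg -> egg cooked",
    ]
    vocab = sorted({w for s in corpus for w in s.split()})
    idx = {w: i for i, w in enumerate(vocab)}

    class TinyRNN(nn.Module):
        def __init__(self, v, d=32):
            super().__init__()
            self.emb = nn.Embedding(v, d)
            self.rnn = nn.RNN(d, d, batch_first=True)
            self.out = nn.Linear(d, v)
        def forward(self, x):
            h, _ = self.rnn(self.emb(x))
            return self.out(h)

    model = TinyRNN(len(vocab))
    opt = torch.optim.Adam(model.parameters(), lr=0.01)
    data = [torch.tensor([idx[w] for w in s.split()]) for s in corpus]

    for epoch in range(300):                       # next-token prediction
        for seq in data:
            logits = model(seq[:-1].unsqueeze(0))  # inputs: all but the last token
            loss = nn.functional.cross_entropy(logits[0], seq[1:])
            opt.zero_grad()
            loss.backward()
            opt.step()

    # The trained network anticipates how a familiar event "unfolds".
    prefix = torch.tensor([[idx["chop"], idx["onion"], idx["->"], idx["onion"]]])
    pred = model(prefix)[0, -1].argmax().item()
    print("predicted next token:", vocab[pred])    # should be "chopped"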

 


2. Whit Tabor
Psychological Sciences

Talk Title: Dynamical systems theory insight into the relationship between networks and languages
Abstract: There is much interest in neural mechanisms for higher cognitive processing, especially natural language behavior. While neural learning models have achieved some remarkable successes, these mechanisms have a black-box character: we know how to build them, but we have a hard time expressing their theoretical implications. Dynamical systems theory offers helpful insight via an extension of symbolic dynamics. I will illustrate how this works for certain discrete dynamical systems. Two open questions are (1) how to do this with (quasi-)oscillatory systems and (2) how the approach relates to deep learning. Watch the video
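
As a concrete, entirely generic example of symbolic dynamics (a textbook choice, not one of the systems treated in the talk), the sketch below iterates the doubling map and reads each state through a two-cell partition, turning a continuous trajectory into a string of symbols.

    # Symbolic dynamics in miniature: a chaotic map plus a partition yields strings.
    def doubling_map(x):
        """x_{n+1} = 2 * x_n mod 1, a classic chaotic map on [0, 1)."""
        return (2.0 * x) % 1.0

    def symbolize(x):
        """Binary partition of the state space: 'L' on [0, 0.5), 'R' on [0.5, 1)."""
        return "L" if x < 0.5 else "R"

    x = 0.2137                      # arbitrary initial condition
    symbols = []
    for _ in range(20):
        symbols.append(symbolize(x))
        x = doubling_map(x)

    # For this map the symbol sequence is the binary expansion of the initial
    # condition, so the continuous dynamics are mirrored exactly by a shift on
    # strings: a small instance of the network/language bridge discussed above.
    print("".join(symbols))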

 


3. Jim Magnuson
Psychological Sciences

Talk Title: Bridging the gaps between automatic speech recognition and human speech recognition
Abstract: Cognitive models of human speech recognition (HSR) have used neural networks since the 1980s. However, to keep the models simple enough to guide theories, numerous simplifying assumptions have been made; for most models, these include using abstract phonetic descriptions of speech rather than actual speech signals, and fixed rather than learned weights. In the meantime, deep learning approaches have enabled robust real-world automatic speech recognition (ASR) for billions of smartphone users, but those models provide little insight into human cognition or neurobiology because of their complexity. Our team is working on models that gradually bridge the gaps between ASR and models of HSR. I will describe our first model, EARSHOT, a two-layer recurrent network using long short-term memory (LSTM) nodes that learns to map speech input to semantic vectors; it achieves high accuracy on 1000 words produced by 10 talkers while successfully simulating key aspects of HSR. Intriguingly, its hidden nodes develop phonetic sensitivity resembling that observed in the human superior temporal gyrus. Watch the video
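
For readers who want a picture of the kind of architecture being described, the sketch below wires up an LSTM layer that maps spectral frames of speech to a per-frame semantic vector, in PyTorch. The layer sizes, random inputs, and loss are assumptions for illustration; this is not the actual EARSHOT implementation.

    # Rough EARSHOT-like sketch: speech frames in, semantic pattern out.
    import torch
    import torch.nn as nn

    class EarshotLikeModel(nn.Module):
        def __init__(self, n_spectral=256, n_hidden=512, n_semantic=300):
            super().__init__()
            self.lstm = nn.LSTM(n_spectral, n_hidden, batch_first=True)  # hidden layer
            self.out = nn.Linear(n_hidden, n_semantic)                   # output layer
        def forward(self, frames):                  # frames: (batch, time, n_spectral)
            hidden, _ = self.lstm(frames)
            return torch.sigmoid(self.out(hidden))  # (batch, time, n_semantic)

    model = EarshotLikeModel()
    speech = torch.randn(8, 100, 256)               # placeholder spectral frames
    targets = (torch.rand(8, 100, 300) < 0.1).float()   # placeholder sparse semantics

    # Training would minimize the mismatch between the network's output and the
    # spoken word's semantic vector at every time step.
    loss = nn.functional.binary_cross_entropy(model(speech), targets)
    loss.backward()
    print(float(loss))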

 


4. Jay Rueckl
Psychological Sciences

Talk Title: Modeling Individual Differences in Reading
Abstract: The triangle model is a connectionist model of reading and reading acquisition. In an ongoing project we have been modeling individual differences in reading acquisition by tracking the joint effects of variation in a number of the triangle model’s key control parameters. Our results are consistent with a dimensional view of developmental dyslexia, which holds that reading disability represents the lower tail of a normal distribution of reading ability. Further, these results reveal that variation in certain parameters is associated with identifiable behavioral differences in the pattern of impairment, suggesting a pathway for linking the model’s control parameters to the risk and protective factors related to reading outcomes. Watch the video
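
A toy caricature of this strategy (not the triangle model itself) looks like the sketch below: each simulated learner gets its own values of a couple of "control parameters", learns a small made-up spelling-to-sound mapping, and contributes one point to the resulting distribution of reading accuracy, whose lower tail can then be examined. Every parameter and data choice here is invented for illustration.

    # Individual differences as variation in control parameters (toy version).
    import numpy as np

    rng = np.random.default_rng(0)
    n_words, n_input, n_output = 30, 20, 12
    X = rng.integers(0, 2, (n_words, n_input)).astype(float)    # toy orthography
    Y = rng.integers(0, 2, (n_words, n_output)).astype(float)   # toy phonology

    def simulate_learner(learning_rate, weight_noise, epochs=100):
        W = np.zeros((n_input, n_output))
        for _ in range(epochs):
            pred = 1.0 / (1.0 + np.exp(-(X @ W)))               # sigmoid outputs
            W += learning_rate * X.T @ (Y - pred)               # delta-rule learning
            W += rng.normal(0.0, weight_noise, W.shape)         # intrinsic noise
        return np.mean(((X @ W) > 0) == (Y > 0.5))              # fraction of bits correct

    # Control parameters vary across 200 simulated learners.
    rates = rng.lognormal(np.log(0.01), 0.5, 200)
    noises = rng.lognormal(np.log(0.01), 0.5, 200)
    accuracy = np.array([simulate_learner(r, s) for r, s in zip(rates, noises)])
    print("mean accuracy:", round(float(accuracy.mean()), 3),
          "| 5th percentile:", round(float(np.percentile(accuracy, 5)), 3))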

 


5. Ed Large
Psychological Sciences

Talk Title: Dynamic Responses to Syncopated Rhythms Reveal the Neural Origins of Pulse Perception
Abstract: Syncopation in music can be described as a kind of surprise effect. We feel it when a note does not occur on the beat, where it is expected, but instead falls between beats, where it is more surprising. Here, we consider an extreme example of syncopated rhythms: rhythms that have no energy in their modulation spectrum at the frequency people experience as the main beat, or pulse. We call these “missing pulse rhythms.” How does the brain generate the pulse frequency in such extreme cases? Our hypothesis is that this frequency could emerge from the interaction of a rhythmic stimulus with oscillatory neural networks in spatially distributed brain regions. We test the model’s predictions by asking whether we can 1) observe such oscillations in EEG recordings, and 2) localize their neural origin. We measured steady-state evoked potentials (SS-EPs) using 256-channel, high-density EEG, and we localized participants’ neural responses using their individual MRI scans. The answer to the first question is yes: we can observe SS-EPs at the missing pulse frequency in the EEG. Next, we identified the neural loci of pulse perception by exploiting the special properties of missing pulse rhythms: because the pulse frequency is absent from the stimulus, we were able to distinguish the brain areas that generate the pulse frequency from those that mainly reflect stimulus frequencies. Our results are consistent with a model in which pulse emerges from the interaction of a complex rhythmic stimulus with oscillatory activity in motor planning regions of the brain, and they show how the model can be refined by pointing to the actual brain areas where such activity can be observed. Watch the video
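
To make the stimulus construction concrete, the toy example below lays a rhythm out on a 16-slot cycle with onsets only between the beat positions, chosen so that the discrete Fourier spectrum has exactly zero energy at the four-beats-per-cycle pulse frequency. The pattern is invented for illustration and is not one of the stimuli used in the study.

    # A made-up "missing pulse" pattern: no spectral energy at the pulse frequency.
    import numpy as np

    slots_per_cycle = 16            # beat positions would be slots 0, 4, 8, 12
    pulse_bin = 4                   # four beats per cycle -> DFT bin 4

    rhythm = np.zeros(slots_per_cycle)
    rhythm[[1, 3, 5, 11]] = 1.0     # onsets fall only between beats (syncopated)

    spectrum = np.abs(np.fft.fft(rhythm))
    print("energy at the pulse frequency:", round(float(spectrum[pulse_bin]), 6))
    print("energy at DFT bins 1-7:", np.round(spectrum[1:8], 2))

    # The stimulus itself carries nothing at the pulse frequency, so any neural
    # response observed there has to be generated by the brain, which is what the
    # oscillatory network model predicts.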

 


6. Ian Stevenson
Psychological Sciences

Talk Title: Detecting and modeling synapses from spiking activity
Abstract: Our work focuses primarily on statistical analysis of large-scale spike recordings from animal models. Here I’ll describe one branch of this work that aims to reconstruct which neurons are connected to which, and then how synaptic strengths change over time. I’ll discuss some of the statistical challenges in this work, the approaches we’ve been using, and a bit about where the work is heading. Watch the video
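
One standard way to pose the detection problem (not necessarily the exact models used in the talk) is a generalized linear model that regresses one neuron's spiking on the lagged spike history of another; a strong, consistent weight at a particular lag is evidence for a synaptic connection. The simulation below is a minimal sketch with invented firing rates, delay, and coupling strength.

    # Detecting a simulated synapse with a lagged logistic regression (GLM).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    T, true_lag, coupling = 20000, 3, 0.35       # time bins, synaptic delay, strength

    pre = rng.binomial(1, 0.05, T)               # presynaptic spike train (5% rate)
    p_post = 0.02 + coupling * np.roll(pre, true_lag)   # excitation true_lag bins later
    post = rng.binomial(1, np.clip(p_post, 0, 1))       # postsynaptic spike train

    # Design matrix: presynaptic spike history over the previous 10 bins.
    max_lag = 10
    X = np.column_stack([np.roll(pre, k) for k in range(1, max_lag + 1)])[max_lag:]
    y = post[max_lag:]

    glm = LogisticRegression(max_iter=1000).fit(X, y)
    print(np.round(glm.coef_[0], 2))             # largest weight should sit at index true_lag - 1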

 


7. Monty Escabi
Biomedical Engineering

Talk Title: From neurons to machines: combining auditory neurophysiology and a hierarchical spiking neural network to understand natural sound recognition in noise
Abstract: I will describe a hierarchical spiking neural network that we have developed and optimized for speech recognition in noise. The network achieves high accuracy, within 10% of human performance, on a digit identification task. Furthermore, the network predicts neural transformations that we and others have observed between the auditory periphery and the auditory cortex. Using the spiking network, I will show how sequential nonlinear transformations denoise the speech signal and enhance recognition performance in noise. Watch the video
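
As a sense of the building blocks involved, the sketch below simulates a single layer of leaky integrate-and-fire (LIF) neurons driven by an input current; a hierarchical network stacks such layers, with one layer's spikes (through learned weights) becoming the drive to the next. The parameters here are illustrative only, and this is not the speech-recognition network described in the talk.

    # One layer of leaky integrate-and-fire neurons (illustrative parameters).
    import numpy as np

    def lif_layer(input_current, dt=1e-3, tau=0.02, v_thresh=1.0, v_reset=0.0):
        """Simulate a layer of LIF neurons; input_current has shape (time, neurons)."""
        n_steps, n_neurons = input_current.shape
        v = np.zeros(n_neurons)
        spikes = np.zeros_like(input_current)
        for t in range(n_steps):
            v += dt / tau * (-v + input_current[t])   # leaky integration (Euler step)
            fired = v >= v_thresh
            spikes[t, fired] = 1.0
            v[fired] = v_reset                        # reset after each spike
        return spikes

    rng = np.random.default_rng(1)
    drive = rng.uniform(0.0, 2.5, size=(500, 8))      # noisy drive to 8 neurons
    out_spikes = lif_layer(drive)
    print("spike counts per neuron:", out_spikes.sum(axis=0).astype(int))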

 


8. Sabato Santaniello
Biomedical Engineering

Talk Title: Biophysically-principled modeling for brain disorders and neuromodulation
Abstract: Abnormal neural oscillations in the brain are a hallmark of several neurological disorders. These oscillations reflect a shift in the collective dynamics of widespread neural circuits, but the cellular mechanisms underlying this shift are rarely captured by neural recordings. In this talk, we show that combining biophysically principled models of neuronal networks with network control tools can help explain how these oscillations form and can assist in designing effective neuromodulation protocols for the treatment of neurological disorders. A case study involving essential tremor and cerebellar transcranial stimulation will be presented. Watch the video
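
The toy simulation below uses a deliberately over-simplified phase-oscillator (Kuramoto) model rather than the biophysically detailed networks from the talk, but it shows the basic phenomenon: once coupling within a circuit is strong enough, an abnormal collective oscillation emerges, and that is the regime a stimulation protocol then tries to steer. All parameters are invented.

    # Collective oscillation emerging from coupling (toy phase-oscillator model).
    import numpy as np

    def synchrony(coupling, n=100, dt=1e-3, steps=5000, seed=0):
        rng = np.random.default_rng(seed)
        omega = 2 * np.pi * rng.normal(5.0, 0.5, n)      # ~5 Hz natural frequencies
        theta = rng.uniform(0, 2 * np.pi, n)
        for _ in range(steps):
            mean_field = np.mean(np.exp(1j * theta))
            # Each unit is pulled toward the population phase (Kuramoto update).
            theta += dt * (omega + coupling * np.abs(mean_field)
                           * np.sin(np.angle(mean_field) - theta))
        return np.abs(np.mean(np.exp(1j * theta)))       # order parameter in [0, 1]

    print("weak coupling, synchrony r =", round(float(synchrony(1.0)), 2))
    print("strong coupling, synchrony r =", round(float(synchrony(10.0)), 2))
    # A stimulation term (e.g. an extra A * sin(stim_phase - theta) drive) can be
    # added to the same equation to explore how external input reshapes r.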

 


9. Jinbo Bi
Associate Head, Computer Science & Engineering

Talk Title: Machine Learning: empowering complex and big data analysis in health care
Abstract: Challenges in medical informatics motivate and demand the development of novel machine learning principles, models, and algorithms. In this talk, we will touch upon two new ML methods: spatiotemporal modeling to deal with the multi-dimensional nature of brain signals and multi-view cluster analysis to help identify disease subtypes. Watch the video
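
To make the multi-view idea concrete, here is a naive baseline on synthetic data: build a similarity matrix from each "view" of the same patients, average the matrices, and run spectral clustering on the combined affinity. This is a generic illustration with invented data, not the method developed by the speaker's group.

    # Naive multi-view clustering: average per-view affinities, then cluster.
    import numpy as np
    from sklearn.cluster import SpectralClustering
    from sklearn.metrics.pairwise import rbf_kernel

    rng = np.random.default_rng(0)
    labels_true = np.repeat([0, 1], 50)            # two hidden disease subtypes

    # Two noisy views of the same 100 patients (e.g. clinical and imaging features).
    view1 = rng.normal(labels_true[:, None] * 1.5, 1.0, size=(100, 10))
    view2 = rng.normal(labels_true[:, None] * 1.5, 1.0, size=(100, 20))

    affinity = (rbf_kernel(view1) + rbf_kernel(view2)) / 2.0
    pred = SpectralClustering(n_clusters=2, affinity="precomputed",
                              random_state=0).fit_predict(affinity)
    agreement = max(np.mean(pred == labels_true), np.mean(pred != labels_true))
    print("agreement with the true subtypes:", round(float(agreement), 2))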

 


10. Derek Aguiar
Computer Science & Engineering

Talk Title: Bayesian machine learning in large models with applications in genomics
Abstract: I will briefly describe Bayesian machine learning, large (nonparametric) models, and the challenges of scaling inference to modern genome-scale data. Using robust statistical models of genetic variation, I will demonstrate how we can capture clusters of genetic sequences across a wide range of ancestral times and accurately predict gene expression. Watch the video
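
As a glimpse of what "nonparametric" means here, the sketch below draws mixture weights from a Dirichlet process by stick-breaking: the number of clusters is not fixed in advance but is effectively determined by the data and a concentration parameter. It is a generic textbook construction with made-up settings, not the genomic models from the talk.

    # Stick-breaking sketch of a Dirichlet process prior.
    import numpy as np

    rng = np.random.default_rng(0)
    alpha = 2.0                        # concentration: larger -> more clusters
    n_components = 50                  # truncation level for this sketch

    # Break off beta-distributed fractions of the remaining stick.
    betas = rng.beta(1.0, alpha, n_components)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
    weights = betas * remaining        # mixture weights (sum to almost 1)

    assignments = rng.choice(n_components, size=500, p=weights / weights.sum())
    print("clusters actually used by 500 samples:", len(np.unique(assignments)))
    # In a full DP mixture each cluster also carries parameters (for example a
    # haplotype or an expression profile), and inference works backwards from data.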

 


11. Caiwen Ding
Computer Science & Engineering

Talk Title: Accelerating Deep Neural Networks using Block-Circulant Matrices
Abstract: Deep learning, or deep neural networks (DNNs), has delivered impressive results on many challenging problems such as visual recognition, machine translation, and drug discovery. Despite the advantage of improved overall accuracy, the deep-layered structure and large model sizes increase computational complexity and memory requirements. To achieve better performance and energy efficiency, we build efficient machine learning and deep neural network systems using advanced algorithms and optimization techniques. Watch the video
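
The core trick behind block-circulant weight matrices is easy to show in a few lines: a circulant block is fully described by a single vector, and multiplying by it reduces to FFTs, cutting both storage (one vector per block instead of one value per weight) and computation (O(n log n) instead of O(n^2)). The block size and values below are arbitrary; this is the generic technique, not the speaker's specific implementation.

    # A circulant block times a vector, computed densely and via the FFT.
    import numpy as np

    n = 8
    rng = np.random.default_rng(0)
    c = rng.normal(size=n)             # the single vector that defines the block
    x = rng.normal(size=n)             # the input slice this block multiplies

    # Explicit circulant matrix: column j is c shifted down by j positions.
    C = np.stack([np.roll(c, j) for j in range(n)], axis=1)
    dense_result = C @ x

    # FFT shortcut: multiplication by a circulant matrix is circular convolution.
    fft_result = np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

    print(np.allclose(dense_result, fft_result))   # True: same product, far cheaper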

 


12. Ranjan Srivastava
Head of Chemical & Biomolecular Engineering

Talk Title: Evolutionary Algorithms for Solving Inverse Function Problems
Abstract: Inverse function problems are those in which one wishes to identify the inputs that produce a desired output. Evolution, by its very nature, is an inverse function problem solver: genotypes that produce desired phenotypes are selected for. Our efforts involve applying evolutionary algorithms to solve the inverse function problems that arise in engineering. I will discuss a case study from our biochemical engineering work. Watch the video
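
A bare-bones version of the idea is sketched below: truncation selection plus Gaussian mutation searches for inputs whose forward-model output matches a desired target. The forward model, population size, and mutation scale are all invented for illustration and have nothing to do with the biochemical case study mentioned in the talk.

    # A minimal evolutionary algorithm for a toy inverse function problem.
    import numpy as np

    rng = np.random.default_rng(0)

    def forward_model(x):
        """A made-up process mapping three inputs to two outputs."""
        return np.array([x[0] * x[1] + np.sin(x[2]), x[0] - x[2] ** 2])

    target = np.array([2.0, -1.5])                 # the output we want to produce

    def fitness(x):
        return -np.sum((forward_model(x) - target) ** 2)

    population = rng.normal(0.0, 1.0, size=(60, 3))
    for generation in range(200):
        scores = np.array([fitness(ind) for ind in population])
        parents = population[np.argsort(scores)[-15:]]      # select the 15 fittest
        children = np.repeat(parents, 4, axis=0)            # each parent gets 4 offspring
        children += rng.normal(0.0, 0.1, children.shape)    # mutate
        population = children

    best = max(population, key=fitness)
    print("best inputs found:", np.round(best, 3))
    print("output they produce:", np.round(forward_model(best), 3))   # close to target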

 

Closing remarks – computational tools for explaining and creating behavior: Watch the video