We apply machine learning and Bayesian optimization techniques to estimate neural responses to high-dimensional visual stimuli in real time. This approach adaptively selects the next stimulus to test, potentially reducing the number of trials needed by orders of magnitude. In collaboration with the Savier, Burgess, and Naumann labs, we use our streaming software platform, improv (preprint, GitHub), to run adaptive experiments in which we gain insight into the current brain state in real time and use this information to dynamically adjust an experiment while data collection is ongoing. By bridging the gap between simplistic and complex stimulus spaces, these methods could provide new insights into how sensory stimuli are represented in the brains of behaving animals.
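As a toy illustration of the adaptive idea (not the actual improv pipeline), one round of Gaussian-process Bayesian optimization can score candidate stimuli with an upper-confidence-bound rule and propose the most informative one next. All names and parameters below are hypothetical:

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    """Squared-exponential kernel between rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

def gp_posterior(X, y, Xstar, noise=1e-2, length_scale=1.0):
    """GP posterior mean and variance of the response at candidate stimuli Xstar."""
    K = rbf_kernel(X, X, length_scale) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xstar, length_scale)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(rbf_kernel(Xstar, Xstar, length_scale)) - (v ** 2).sum(0)
    return mu, np.maximum(var, 0.0)

def next_stimulus(X_seen, y_seen, candidates, beta=2.0):
    """Pick the candidate maximizing the UCB acquisition mu + beta * sigma."""
    mu, var = gp_posterior(X_seen, y_seen, candidates)
    return candidates[np.argmax(mu + beta * np.sqrt(var))]
```

Because the acquisition trades off predicted response (exploitation) against posterior uncertainty (exploration), each stimulus presentation is chosen to be maximally informative rather than drawn from a fixed grid.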
We investigate techniques for efficient modeling and computation of latent spaces in neural and behavioral data.
We hypothesize that the relationship between neural circuitry and behavior is mediated through low-dimensional dynamical patterns embedded in the neural activity of the brain.
The adaptive latent project aims to extract and analyze these patterns, or latent variables, as they unfold, using streaming machine learning algorithms such as Bubblewrap.
Constructing these latents in real time will also allow us to model and design stimulations that we can use to test causal hypotheses about the latent variables we discover, both in in-house experiments and in collaboration with experimentalists.
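To give a flavor of streaming latent extraction (Bubblewrap itself is considerably more sophisticated), Oja's rule estimates the top principal component of neural activity in a single pass with constant memory, updating the latent direction as each sample arrives. This is a minimal illustrative stand-in, not the project's method:

```python
import numpy as np

def oja_streaming_pc(stream, dim, lr=0.01):
    """Estimate the top principal component of a data stream with Oja's rule.

    One pass over the data, O(dim) memory: suitable for streaming settings
    where samples arrive faster than batch PCA could be recomputed.
    """
    w = np.ones(dim) / np.sqrt(dim)      # initial guess for the component
    for x in stream:
        y = w @ x                        # 1-D latent: projection of new sample
        w += lr * y * (x - y * w)        # Oja update pulls w toward the top PC
        w /= np.linalg.norm(w)           # renormalize for numerical stability
    return w
```

The per-sample update is what makes real-time use possible: the latent estimate is always current, so a stimulation policy could be conditioned on it while the recording is still in progress.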
In collaboration with the Kaczorowski Lab, we also develop probabilistic multimodal models to uncover latent factors in complex datasets, which is crucial for research in neuroscience and genetics. Using variational autoencoders (VAEs), we integrate various data types, such as transcriptomics and behavioral data, to explore interactions between different factors. We aim to capture hidden structure by modelling long-tail distributions and introducing hyper-priors to emulate an infinite mixture of Gaussians.
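For readers unfamiliar with VAEs, the two standard ingredients are the reparameterization trick and a closed-form KL term against the prior; the lab's hyper-prior and mixture constructions replace the simple standard-normal prior sketched here. A minimal numpy version of the Gaussian ELBO pieces:

```python
import numpy as np

def gaussian_kl(mu, logvar):
    """KL( N(mu, exp(logvar)) || N(0, I) ), closed form, summed over latent dims."""
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar, axis=-1)

def reparameterize(mu, logvar, rng):
    """Sample z = mu + sigma * eps so gradients can flow through mu and logvar."""
    eps = rng.standard_normal(np.shape(mu))
    return mu + np.exp(0.5 * logvar) * eps

def elbo(x, x_recon, mu, logvar):
    """Monte-Carlo ELBO: Gaussian reconstruction term minus the KL penalty."""
    recon = -0.5 * np.sum((x - x_recon) ** 2, axis=-1)  # unit-variance decoder
    return recon - gaussian_kl(mu, logvar)
```

Swapping the N(0, I) prior in `gaussian_kl` for a mixture with a hyper-prior over its components is what lets the latent space adapt its effective number of clusters to the data.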
We develop models for real-time analysis of signals for brain-computer interfaces that incorporate neural stimulations.
In collaboration with the Chestek Lab, we use DeepLabCut, a markerless, deep-learning-based tracking tool, to automatically extract body positions and calculate joint positions of non-human primates.
We aim to integrate this system with existing brain-computer interface paradigms and to automatically optimize high-dimensional stimulation patterns in real time.
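Once keypoints are tracked, joint kinematics follow from simple geometry. A minimal sketch (the helper name and coordinate convention are illustrative, not DeepLabCut's API) of recovering the angle at a joint from three tracked keypoints:

```python
import numpy as np

def joint_angle(p_proximal, p_joint, p_distal):
    """Angle (radians) at p_joint formed by the two adjacent tracked keypoints."""
    u = np.asarray(p_proximal, float) - np.asarray(p_joint, float)
    v = np.asarray(p_distal, float) - np.asarray(p_joint, float)
    cos = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos, -1.0, 1.0))  # clip guards against roundoff
```

Computing such quantities frame by frame from the tracker's output is what would allow a stimulation optimizer to close the loop on movement rather than on raw pixel coordinates.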
We also examine latent representations of neural data that are stable across sequential time points, and design new alignment methods for these representations across days to create latent spaces that are stable across both short and long timescales. By training and running a decoding model using these aligned representations, we aim to improve the long-term stability of BCI decoders, thus reducing or eliminating the need for frequent recalibration.
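One classical building block for cross-day alignment (a baseline, not necessarily the lab's method) is orthogonal Procrustes: find the rotation mapping one day's latent trajectories onto a reference day's so that a fixed decoder keeps working. A numpy sketch with hypothetical variable names:

```python
import numpy as np

def procrustes_align(L_new_day, L_reference):
    """Orthogonal map R minimizing || L_new_day @ R - L_reference ||_F.

    Standard orthogonal Procrustes solution via SVD of the cross-covariance.
    Both inputs are (timepoints x latent_dim) latent trajectories.
    """
    U, _, Vt = np.linalg.svd(L_new_day.T @ L_reference)
    R = U @ Vt                      # closest orthogonal transform
    return L_new_day @ R, R
```

Restricting R to be orthogonal preserves the geometry of the latent space, so the alignment can correct day-to-day drift without distorting the dynamics the decoder relies on.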
We developed a new method for approximating dynamics as a probability flow between discrete tiles on a low-dimensional manifold. The model trains quickly, retains predictive performance many time steps into the future, and is fast enough to serve as a component of closed-loop causal experiments in neuroscience. Our recent preprint on this work can be found here.
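A deliberately simplified caricature of the probability-flow idea (the actual model is richer): once trajectories are discretized into tiles, dynamics reduce to a row-stochastic transition matrix, and multi-step prediction is repeated matrix-vector multiplication, which is why inference stays fast enough for closed-loop use.

```python
import numpy as np

def transition_matrix(tile_seq, n_tiles, alpha=1.0):
    """Row-stochastic transition matrix from a sequence of visited tile indices.

    alpha is a Dirichlet pseudo-count so unseen transitions keep small mass.
    """
    counts = np.full((n_tiles, n_tiles), alpha)
    for i, j in zip(tile_seq[:-1], tile_seq[1:]):
        counts[i, j] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def predict(T, p0, steps):
    """Propagate a probability distribution over tiles forward `steps` steps."""
    p = np.asarray(p0, float)
    for _ in range(steps):
        p = p @ T                    # one step of probability flow
    return p
```

Each prediction step is a single O(n_tiles^2) multiply, so forecasting many steps ahead remains cheap even inside an experiment's control loop.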
How can we infer connectivity between large systems of neurons in vivo? Using stimulations of small ensembles and a statistical method called group testing, we show in our recent paper that this is now feasible even in networks of up to 10,000 neurons.
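The efficiency comes from pooling: stimulating an ensemble tests many candidate connections at once, and any pool that evokes no response rules out every neuron in it. The simplest group-testing decoder with this property is COMP, sketched below as an illustration (the paper's statistical machinery goes further):

```python
import numpy as np

def comp_decode(pools, outcomes, n_neurons):
    """COMP group-testing decoder.

    pools    : list of arrays of neuron indices stimulated together
    outcomes : list of bools, True if the pool evoked a downstream response
    Returns indices of neurons still consistent with being connected:
    a neuron is ruled out if it appeared in any non-responding pool.
    """
    candidate = np.ones(n_neurons, bool)
    for pool, hit in zip(pools, outcomes):
        if not hit:
            candidate[np.asarray(pool)] = False   # exonerate the whole pool
    return np.flatnonzero(candidate)
```

Because each negative pool eliminates many neurons simultaneously, the number of stimulation trials needed grows far more slowly than testing neurons one at a time, which is what makes networks of thousands of neurons tractable.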