We develop new models of low-dimensional neural and behavioral data through manifold construction and adaptation during diverse tasks or stimulus conditions.
While many latent space models are fit after a recording is over, we instead focus on the real-time domain to learn ongoing latent dynamics as data are acquired, and to learn response maps for how these dynamics are perturbed under arbitrary stimuli.
We also develop new constrained optimization methods to determine the best high-dimensional stimulation patterns for driving neural dynamics in latent spaces.
Figure: (Left) We simulate neural responses to photostimulation events marked by red lines. (Right) These stimulation patterns drive latent neural dynamics along the first latent dimension.
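As a toy illustration of the constrained-optimization step, the sketch below chooses a stimulation pattern that maximally drives a fitted linear latent model along its first latent dimension, subject to nonnegative per-channel power and a total power budget. The model matrices, dimensions, and budget here are illustrative placeholders, not values from our experiments.

```python
# Hypothetical sketch: pick a photostimulation pattern u that pushes latent
# dynamics x[t+1] = A @ x[t] + B @ u[t] along the first latent dimension,
# under a nonnegativity constraint and a total power budget.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_latent, n_stim = 5, 20                      # latent dims, stim channels
A = 0.9 * np.eye(n_latent)                    # placeholder latent dynamics
B = rng.standard_normal((n_latent, n_stim))   # placeholder input map
target = np.zeros(n_latent)
target[0] = 1.0                               # drive the first latent dim
budget = 1.0                                  # total optical power budget

def neg_drive(u):
    return -target @ (B @ u)                  # maximize projection on target

res = minimize(
    neg_drive,
    x0=np.full(n_stim, budget / n_stim),      # feasible starting pattern
    method="SLSQP",
    bounds=[(0.0, None)] * n_stim,            # nonnegative laser power
    constraints=[{"type": "ineq", "fun": lambda u: budget - u.sum()}],
)
u_opt = res.x
```

The linear-dynamics assumption keeps the problem convex; a nonlinear latent model would require a more general solver but the same constraint structure.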
We work on multimodal integration and probabilistic models to uncover latent factors across neural, genetic, and behavioral datasets.
In collaboration with the Kaczorowski Lab, we are developing new variational autoencoder architectures to identify novel factors underlying cognitive resilience in Alzheimer’s disease.
We also develop trajectory modeling techniques to propose novel interventions (genetic, environmental) based on structured latent spaces.
Figure: (Left) Distributions of data projected along a phenotypic axis in our learned latent space show the spectrum of cognitive resilience. (Right) Cognitive metrics (CFM) can be continuously generated along a path from susceptible to resilient (solid) or vice versa (dashed).
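The traversal from susceptible to resilient can be sketched as interpolating between group centroids along the phenotypic axis and decoding each intermediate point into a cognitive metric. Here both the centroids and the decoder are stand-ins (a random linear map in place of the trained VAE decoder).

```python
# Minimal sketch of latent-space traversal along a phenotypic axis.
# All quantities are synthetic placeholders for illustration.
import numpy as np

rng = np.random.default_rng(1)
latent_dim = 8
z_susceptible = rng.standard_normal((100, latent_dim)).mean(axis=0)
z_resilient = z_susceptible + 2.0        # placeholder group centroids

W = rng.standard_normal(latent_dim)      # stand-in for the VAE decoder head

def decode_metric(z):
    return W @ z                         # hypothetical cognitive-metric readout

alphas = np.linspace(0.0, 1.0, 11)       # path: susceptible -> resilient
path = [(1 - a) * z_susceptible + a * z_resilient for a in alphas]
metrics = np.array([decode_metric(z) for z in path])
```

Because both the path and the stand-in decoder are linear, the generated metric varies smoothly along the traversal; a trained nonlinear decoder would trace a curved but still continuous trajectory.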
We develop models for real-time analysis of signals for brain-computer interfaces that incorporate neural stimulations.
In collaboration with the Chestek Lab, we use DeepLabCut, a markerless, deep-learning-based tracking tool, to automatically extract keypoint positions and compute joint angles of non-human primates.
We aim to integrate this system with existing brain-computer interface paradigms and to automatically optimize high-dimensional stimulation patterns in real time.
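Once keypoints are tracked, computing a joint angle reduces to the angle between two segments. A minimal sketch (with made-up coordinates, not DeepLabCut's actual output format):

```python
# Compute a joint angle from three tracked keypoints, e.g. the wrist angle
# formed by the forearm and hand segments. Coordinates are illustrative.
import numpy as np

def joint_angle(a, b, c):
    """Angle (degrees) at point b, between segments b->a and b->c."""
    v1 = np.asarray(a, float) - np.asarray(b, float)
    v2 = np.asarray(c, float) - np.asarray(b, float)
    cos = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

elbow, wrist, knuckle = [0, 0, 0], [1, 0, 0], [1, 1, 0]
angle = joint_angle(elbow, wrist, knuckle)  # → 90.0
```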
We also examine latent representations of neural data that are stable across sequential time points, and design new alignment methods for these representations across days to create latent spaces that are stable across both short and long timescales.
By training and running decoding models on these aligned datasets, we aim to improve the long-term stability of BCI decoders, reducing or eliminating the need for frequent recalibration.
Figure: (Top) Experimental setup with multiple camera views. Processing speeds for inferring finger and wrist angles are faster than image acquisition. The map of reachable poses observed from user-selected stimulation patterns agrees with our simulated model. (Bottom) Diagram of workflow for finding and aligning latent representations of neural data for BCI decoding.
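One standard way to align latent spaces across days is an orthogonal Procrustes rotation. The sketch below illustrates the idea on synthetic latents whose day-to-day change is a pure rotation; this is a simplification, since real drift also involves offsets and changes in variance.

```python
# Align day-1 latents back into the day-0 latent space with an orthogonal
# Procrustes rotation. Data are synthetic for illustration.
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(2)
latents_day0 = rng.standard_normal((500, 10))        # reference-day latents
R_true, _ = np.linalg.qr(rng.standard_normal((10, 10)))
latents_day1 = latents_day0 @ R_true                 # same activity, rotated basis

# Find the rotation mapping day-1 latents onto the day-0 space.
R, _ = orthogonal_procrustes(latents_day1, latents_day0)
aligned = latents_day1 @ R
```

A decoder trained on `latents_day0` can then be applied to `aligned` without retraining, which is the core idea behind stabilizing decoders across days.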
We apply machine learning and Bayesian optimization techniques to estimate neural responses to high-dimensional visual stimuli in real time.
This approach adaptively selects the next stimulus to test, potentially speeding up characterization exponentially compared to exhaustive search. In collaboration with the Savier, Burgess, and Naumann labs, we use our streaming software platform, improv (preprint, GitHub), to run adaptive experiments in which we gain insight into the current brain state in real time and dynamically adjust an experiment while data collection is ongoing.
By bridging the gap between simplistic and complex stimulus spaces, these methods could provide new insights into how sensory stimuli are represented in the brains of behaving animals.
Figure: (Top) Example 2D slices from a 4-dimensional tuning curve of a V1 neuron. (Bottom) Processing speeds for image processing, tuning curve analysis, and optimization for our real-time system.
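A minimal closed-loop sketch of the idea: fit a Gaussian-process model to the responses collected so far, then pick the next stimulus with an upper-confidence-bound rule. The 1D stimulus space, fixed kernel hyperparameters, and simulated tuning curve are illustrative stand-ins for the high-dimensional stimuli and models used in practice.

```python
# Toy Bayesian-optimization loop for adaptive stimulus selection.
import numpy as np

def tuning(s):
    """Simulated ground-truth neural response (unknown to the optimizer)."""
    return np.exp(-((s - 0.7) ** 2) / 0.02)

def gp_posterior(X, y, Xs, ell=0.1, noise=1e-4):
    """GP posterior mean/std on grid Xs, RBF kernel with fixed length scale."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(X, Xs)
    mean = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mean, np.sqrt(np.clip(var, 0.0, None))

grid = np.linspace(0.0, 1.0, 101)
tried = [0.1, 0.5, 0.9]                          # stimuli shown so far
responses = [tuning(s) for s in tried]

for _ in range(10):                              # adaptive closed loop
    mean, std = gp_posterior(np.array(tried), np.array(responses), grid)
    nxt = float(grid[np.argmax(mean + 2.0 * std)])   # UCB acquisition
    tried.append(nxt)
    responses.append(tuning(nxt))

best = tried[int(np.argmax(responses))]
```

The acquisition rule trades off exploiting stimuli predicted to drive the neuron strongly against exploring regions of stimulus space where the model is uncertain.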
We developed a new method for approximating dynamics as a probability flow between discrete tiles on a low-dimensional manifold. The model trains quickly, retains predictive performance many time steps into the future, and is fast enough to serve as a component of closed-loop causal experiments in neuroscience. Our recent preprint on this work can be found here.
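In spirit, the tiled probability-flow model resembles the following toy: discretize a low-dimensional trajectory into tiles, count tile-to-tile transitions into a Markov matrix, and push a probability distribution forward in time. The noisy oscillation and tile count below are illustrative, not our actual model; see the preprint for the real formulation.

```python
# Toy tiled probability-flow model on a 1D latent trajectory.
import numpy as np

rng = np.random.default_rng(3)
traj = np.sin(np.linspace(0, 20 * np.pi, 5000)) + 0.05 * rng.standard_normal(5000)

n_tiles = 12
edges = np.linspace(traj.min(), traj.max(), n_tiles + 1)
tiles = np.clip(np.digitize(traj, edges) - 1, 0, n_tiles - 1)

# Count tile-to-tile transitions and row-normalize into probabilities.
T = np.zeros((n_tiles, n_tiles))
for a, b in zip(tiles[:-1], tiles[1:]):
    T[a, b] += 1
row_sums = T.sum(axis=1, keepdims=True)
T = np.divide(T, row_sums, out=np.zeros_like(T), where=row_sums > 0)

# Propagate a delta distribution on the starting tile 50 steps forward.
p = np.zeros(n_tiles)
p[tiles[0]] = 1.0
for _ in range(50):
    p = p @ T
```

Because prediction is a matrix-vector product, rolling the model forward is cheap, which is what makes this class of model viable inside a closed loop.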
How can we measure connectivity between large systems of neurons in vivo? Using stimulations of small ensembles and a statistical method called group testing, we show in our recent paper that this is now feasible even in networks of up to 10,000 neurons.
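The decoding step can be illustrated with the simplest group-testing rule (COMP) under a noiseless OR-response model: a candidate presynaptic neuron is ruled out if it appeared in any stimulated ensemble that evoked no response. The network size, ensemble size, and sparsity below are toy values; the method in the paper handles noise and much larger networks.

```python
# Toy COMP group-testing decode of the inputs to one recorded neuron.
import numpy as np

rng = np.random.default_rng(4)
n_neurons, n_tests, ensemble_size = 1000, 400, 30
true_inputs = np.zeros(n_neurons, bool)
true_inputs[rng.choice(n_neurons, size=5, replace=False)] = True  # 5 true inputs

ruled_out = np.zeros(n_neurons, bool)
for _ in range(n_tests):
    ensemble = rng.choice(n_neurons, size=ensemble_size, replace=False)
    responded = true_inputs[ensemble].any()   # noiseless OR response model
    if not responded:
        ruled_out[ensemble] = True            # COMP: clear all non-responders

estimated_inputs = ~ruled_out
```

With sparse connectivity, a few hundred ensemble stimulations narrow a thousand candidates down to roughly the true input set, which is the source of the method's scaling advantage over pairwise testing.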