Research Interests

 Broadly speaking, my work focuses on extracting meaning from complex, high-dimensional biophysiological data. My Ph.D. work was in Computational Neuroscience, modeling the dynamics of the neuronal network of the nematode C. elegans. Specifically, I analyzed the low-dimensional structure of the network's nonlinear dynamics, drawing on techniques from dynamical systems theory, control theory, and machine learning.
 In my postdoctoral work at PNRI, my interests have expanded to a broader variety of complex, high-dimensional, heterogeneous biological datasets. My work in this domain has two main fronts: (i) theoretical work that develops and extends information-theoretic techniques for data analysis; (ii) applied data science that addresses specific challenges in bioinformatics.
Modeling and Understanding the C. elegans Connectome
 I study the structure and dynamics of complex, high-dimensional physical systems, particularly networked dynamical systems. Computational Neuroscience has emerged as an exciting domain for studying how large, high-dimensional physical networks robustly generate low-dimensional outputs in response to varied inputs. Much of my work has focused on modeling the nematode C. elegans, for which the connections between all of its neurons have been mapped.
 My work asks: why is the connectome wired the way it is? What, if anything, does it encode? I show that, through computational modeling (even when parameters are largely unknown) and by developing novel mathematical tools (drawing from fields such as Dynamical Systems Theory, Control Theory, and Machine Learning), we can make strides toward answering these questions, both for the C. elegans connectome and for nonlinear dynamical networks more broadly.

"Low-dimensional functionality of complex network dynamics: Neurosensory integration in the Caenorhabditis elegans connectome",
by Kunert, Shlizerman and Kutz. Physical Review E (2014).
Abstract: We develop a biophysical model of neurosensory integration in the model organism Caenorhabditis elegans. Building on experimental findings on the neuron conductances and their resolved connectome, we posit the first full dynamic model of the neural voltage excitations that allows for a characterization of network structures which link input stimuli to neural proxies of behavioral responses. Full connectome simulations of neural responses to prescribed inputs show that robust, low-dimensional bifurcation structures drive neural voltage activity modes. Comparison of these modes with experimental studies allows us to link these network structures to behavioral responses. Thus the underlying bifurcation structures discovered, i.e., induced Hopf bifurcations, are critical in explaining behavioral responses such as swimming and crawling.
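For context, the model here (and in the papers below) belongs to the class of single-compartment membrane-dynamics models; a schematic of that class (with the precise conductances, reversal potentials, and synaptic activation kinetics left to the paper itself) is

\[ C\,\dot{V}_i = -G^{c}(V_i - E_{cell}) - \sum_j G^{g}_{ij}(V_i - V_j) - \sum_j G^{s}_{ij}\, s_j(t)\,(V_i - E_j) + I^{ext}_i(t), \]

where the first sum runs over gap-junction (electrical) connections, the second over chemical synapses with sigmoidal presynaptic activations s_j(t), and the coupling weights G^{g}_{ij} and G^{s}_{ij} are read directly off the mapped connectome.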
"Functionality and Robustness of Injured Connectomic Dynamics in C. elegans: Linking Behavioral Deficits to Neural Circuit Damage",
by Kunert, Maia and Kutz. PLOS Computational Biology (2017).
Abstract: Using a model for the dynamics of the full somatic nervous system of the nematode C. elegans, we address how biological network architectures and their functionality are degraded in the presence of focal axonal swellings (FAS) arising from neurodegenerative disease and/or traumatic brain injury. Using biophysically measured FAS distributions and swelling sizes, we are able to simulate the effects of injuries on the neural dynamics of C. elegans, showing how damaging the network degrades its low-dimensional dynamical responses. We visualize these injured neural dynamics by mapping them onto the worm’s low-dimensional postures, i.e. eigenworm modes. We show that a diversity of functional deficits arise from the same level of injury on a connectomic network. Functional deficits are quantified using a statistical shape analysis, a Procrustes analysis, for deformations of the limit cycles that characterize key behaviors such as forward crawling. This Procrustes metric carries information on the functional outcome of injuries in the model. Furthermore, we apply classification trees to relate injury structure to the behavioral outcome. This makes testable predictions for the structure of an injury given a defined functional deficit. More critically, this study demonstrates the potential role of computational simulation studies in understanding how neuronal networks process biological signals, and how this processing is impacted by network injury.
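To make the shape-analysis step concrete, here is a minimal sketch (my own illustration, not the paper's code) using SciPy's ordinary Procrustes analysis: it optimally translates, scales, and rotates two trajectories onto each other and reports the residual disparity, which serves as the deformation metric. The two limit cycles below are synthetic placeholders.

import numpy as np
from scipy.spatial import procrustes

# A reference ("healthy") limit cycle and a deformed ("injured") cycle,
# each sampled at the same phases as an (n_samples x 2) trajectory in a
# low-dimensional mode space.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
healthy = np.column_stack([np.cos(theta), np.sin(theta)])
injured = np.column_stack([1.3 * np.cos(theta), 0.6 * np.sin(theta) + 0.1])

# procrustes() standardizes both shapes, finds the best alignment, and returns
# the residual sum of squared differences; larger disparity means larger deformation.
_, _, disparity = procrustes(healthy, injured)
print(f"Procrustes disparity: {disparity:.3f}")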
Spatiotemporal Feedback and Network Structure Drive and Encode Caenorhabditis elegans Locomotion,
by Kunert, Proctor, Brunton and Kutz. PLOS Computational Biology (2017).
Abstract: Using a computational model of the Caenorhabditis elegans connectome dynamics, we show that proprioceptive feedback is necessary for sustained dynamic responses to external input. This is consistent with the lack of biophysical evidence for a central pattern generator, and recent experimental evidence that proprioception drives locomotion. The low-dimensional functional response of the Caenorhabditis elegans network of neurons to proprioception-like feedback is optimized by input of specific spatial wavelengths which correspond to the spatial scale of real body shape dynamics. Furthermore, we find that the motor subcircuit of the network is responsible for regulating this response, in agreement with experimental expectations. To explore how the connectomic dynamics produces the observed two-mode, oscillatory limit cycle behavior from a static fixed point, we probe the fixed point’s low-dimensional structure using Dynamic Mode Decomposition. This reveals that the nonlinear network dynamics encode six clusters of dynamic modes, with timescales spanning three orders of magnitude. Two of these six dynamic mode clusters correspond to previously-discovered behavioral modes related to locomotion. These dynamic modes and their timescales are encoded by the network’s degree distribution and specific connectivity. This suggests that behavioral dynamics are partially encoded within the connectome itself, the connectivity of which facilitates proprioceptive control.
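Dynamic Mode Decomposition recurs throughout my work, so a minimal sketch of the standard exact-DMD algorithm may be useful here (generic snapshot data, not the code or parameters used in the paper): X and Xprime are matrices whose columns are successive system states.

import numpy as np

def dmd(X, Xprime, r):
    """Exact DMD: fit a rank-r linear operator A with Xprime ~= A @ X and
    return its eigenvalues (timescales/oscillations) and spatial modes."""
    # Rank-r truncated SVD of the first snapshot matrix
    U, S, Vh = np.linalg.svd(X, full_matrices=False)
    Ur, Sr, Vr = U[:, :r], S[:r], Vh[:r, :].conj().T

    # Projection of A onto the leading POD modes
    Atilde = Ur.conj().T @ Xprime @ Vr / Sr

    # Eigenvalues of Atilde are the DMD eigenvalues; lifting the eigenvectors
    # back to the full state space gives the spatial DMD modes.
    eigvals, W = np.linalg.eig(Atilde)
    Phi = Xprime @ Vr @ np.diag(1.0 / Sr) @ W
    return eigvals, Phi

# Illustrative usage on a random trajectory matrix (n states x m snapshots):
data = np.random.randn(100, 400)
eigvals, modes = dmd(data[:, :-1], data[:, 1:], r=10)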
Complexity and Vulnerability Analysis of the C. elegans Gap Junction Connectome
by Kunert-Graf, Sakhanenko and Galas. Entropy (2017).
Abstract: We apply a network complexity measure to the gap junction network of the somatic nervous system of C. elegans and find that it possesses a much higher complexity than we might expect from its degree distribution alone. This “excess” complexity is seen to be caused by a relatively small set of connections involving command interneurons. We describe a method which progressively deletes these “complexity-causing” connections, and find that when these are eliminated, the network becomes significantly less complex than a random network. Furthermore, this result implicates the previously-identified set of neurons from the synaptic network’s “rich club” as the structural components encoding the network’s excess complexity. This study and our method thus support a view of the gap junction Connectome as consisting of a rather low-complexity network component whose symmetry is broken by the unique connectivities of singularly important rich club neurons, sharply increasing the complexity of the network.
Multistability and Long-Timescale Transients Encoded by Network Structure in a Model of C. elegans Connectome Dynamics
by Kunert, Shlizerman, Walker and Kutz. Frontiers in Computational Neuroscience (2017).
Abstract: The neural dynamics of the nematode Caenorhabditis elegans are experimentally low-dimensional and may be understood as long-timescale transitions between multiple low-dimensional attractors. Previous modeling work has found that dynamic models of the worm's full neuronal network are capable of generating reasonable dynamic responses to certain inputs, even when all neurons are treated as identical save for their connectivity. This study investigates such a model of C. elegans neuronal dynamics, finding that a wide variety of multistable responses are generated in response to varied inputs. Specifically, we generate bifurcation diagrams for all possible single-neuron inputs, showing the existence of fixed points and limit cycles for different input regimes. The nature of the dynamical response is seen to vary according to the type of neuron receiving input; for example, input into sensory neurons is more likely to drive a bifurcation in the system than input into motor neurons. As a specific example we consider compound input into the neuron pairs PLM and ASK, discovering bistability of a limit cycle and a fixed point. The transient timescales in approaching each of these states are much longer than any intrinsic timescales of the system. This suggests consistency of our model with the characterization of dynamics in neural systems as long-timescale transitions between discrete, low-dimensional attractors corresponding to behavioral states.
The control structure of the nematode Caenorhabditis elegans: neuro-sensory integration and proprioceptive feedback
by Fieseler, Kunert-Graf and Kutz. Preprint. Accepted by Journal of Biomechanics (2018).
Abstract: We develop a biophysically realistic model of the nematode C. elegans that includes: (i) its muscle structure and activation, (ii) key connectomic activation circuitry, and (iii) a weighted and time-dynamic proprioception. In combination, we show that these model components can reproduce the complex waveforms exhibited in C. elegans locomotive behaviors, such as omega turns. We show that weighted, time-dependent synaptic dynamics are necessary for this complex behavior, ultimately revealing key functions that must be executed at the connectomic level. Such dynamics are biologically plausible due to the presence of many neuromodulators which have recently been experimentally implicated in complex behaviors such as omega turns. This is the first integrated neuromechanical model to reveal a mechanism capable of generating the complex waveforms observed in the behavior of C. elegans, thus providing a mathematical framework for understanding how control decisions must be executed at the connectome level in order to produce the full repertoire of observed behaviors.
Characterizing Multivariable Dependency with Information Theory
 The analysis of large, complex, heterogeneous datasets must confront the following problems: (i) detecting dependence (or independence) among the variables in a dataset; (ii) quantifying the statistical significance of how strongly a set of variables is or is not interdependent; (iii) characterizing the functional form through which variables depend on each other. Building on previous work by Sakhanenko and Galas, we are developing techniques that address these questions through novel applications of Information Theory.
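As a minimal, concrete illustration of points (i) and (ii) (a sketch of my own, not the published tools): plug-in entropy estimates over discrete data yield pairwise mutual information and higher-order interaction information, and a purely synergistic dependence (e.g. an XOR relationship) that is invisible to any pairwise measure is picked up by the three-variable term. Statistical significance for point (ii) can then be assessed, for example, by permutation tests against column-shuffled data.

import numpy as np
from collections import Counter

def entropy(*cols):
    """Plug-in Shannon entropy (in bits) of the joint distribution of the given columns."""
    counts = np.array(list(Counter(zip(*cols)).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def mutual_information(x, y):
    # I(X;Y) = H(X) + H(Y) - H(X,Y)
    return entropy(x) + entropy(y) - entropy(x, y)

def interaction_information(x, y, z):
    # Co-information sign convention: negative values indicate synergy.
    return (entropy(x) + entropy(y) + entropy(z)
            - entropy(x, y) - entropy(x, z) - entropy(y, z)
            + entropy(x, y, z))

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 10000)
y = rng.integers(0, 2, 10000)
z = x ^ y                                # z depends on (x, y) jointly, on neither alone
print(mutual_information(x, z))          # ~0 bits: a pairwise test misses the dependence
print(interaction_information(x, y, z))  # ~ -1 bit: the three-way dependence is detected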
Expansion of the Kullback-Leibler Divergence, and a New Class of Information Metrics
by Galas, Dewey, Kunert-Graf and Sakhanenko. Axioms (2017).
Abstract: Inferring and comparing complex, multivariable probability density functions is fundamental to problems in several fields, including probabilistic learning, network theory, and data analysis. Classification and prediction are the two faces of this class of problem. This study takes an approach that simplifies many aspects of these problems by presenting a structured series expansion of the Kullback-Leibler divergence—a function central to information theory—and devising a distance metric based on this divergence. Using the Möbius inversion duality between multivariable entropies and multivariable interaction information, we express the divergence as an additive series in the number of interacting variables, which provides a restricted and simplified set of distributions to use as approximations and with which to model data. Truncations of this series yield approximations based on the number of interacting variables. The first few terms of the expansion-truncation are illustrated and shown to lead naturally to familiar approximations, including the well-known Kirkwood superposition approximation. Truncation can also induce a simple relation between the multi-information and the interaction information. A measure of distance between distributions, based on Kullback-Leibler divergence, is then described and shown to be a true metric if properly restricted. The expansion is shown to generate a hierarchy of metrics and connects this work to information geometry formalisms. An example of the application of these metrics to a graph comparison problem is given that shows that the formalism can be applied to a wide range of network problems and provides a general approach for systematic approximations in numbers of interactions or connections, as well as a related quantitative metric.
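For concreteness, the two central objects named above are, for a model distribution q and three variables (the paper's general expansion in the number of interacting variables is not reproduced here):

\[ D_{KL}(p\,\|\,q) = \sum_{x} p(x)\,\log\frac{p(x)}{q(x)}, \qquad \hat{p}_{K}(x_1,x_2,x_3) \propto \frac{p(x_1,x_2)\,p(x_1,x_3)\,p(x_2,x_3)}{p(x_1)\,p(x_2)\,p(x_3)}, \]

the latter being the Kirkwood superposition approximation recovered by a low-order truncation of the expansion (the proportionality absorbs the normalization constant).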
The information content of discrete functions and their application to genetic data analysis
by Sakhanenko, Kunert-Graf and Galas. Journal of Computational Biology (2017).
 Summary: This paper investigates the problem of inferring the functional form through which variables depend on each other. We show that quantifying the information contained in discrete functions maps them onto a low-dimensional function space. Functions may be classified according to their location in this space. We can similarly map a dataset onto this space and use this to constrain the set of possible functions relating the variables within it. We apply this to typical models in genetics, showing that some common genetic models may be easily confused due to their proximity in this information space.
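A toy version of the idea (an illustrative construction, not the paper's exact coordinates): a discrete function z = f(x, y) with uniformly distributed inputs can be placed in a small information space such as (I(X;Z), I(Y;Z), I(X,Y;Z)), where, for example, XOR and OR land at clearly separated points.

import itertools
import numpy as np

def information_coordinates(f, alphabet=(0, 1)):
    """Map a discrete function z = f(x, y) with uniform inputs onto the
    coordinates (I(X;Z), I(Y;Z), I(X,Y;Z)), in bits."""
    rows = [(a, b, f(a, b)) for a, b in itertools.product(alphabet, repeat=2)]
    x, y, z = map(np.array, zip(*rows))

    def H(*cols):
        # Shannon entropy of the (uniformly weighted) joint distribution of cols
        _, counts = np.unique(np.stack(cols, axis=1), axis=0, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    return (H(x) + H(z) - H(x, z),        # I(X;Z)
            H(y) + H(z) - H(y, z),        # I(Y;Z)
            H(x, y) + H(z) - H(x, y, z))  # I(X,Y;Z)

# XOR hides all information from single inputs; OR leaks partial information through each.
print(information_coordinates(lambda a, b: a ^ b))  # ~ (0.00, 0.00, 1.00)
print(information_coordinates(lambda a, b: a | b))  # ~ (0.31, 0.31, 0.81)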
Applications in Bioinformatics
 I spearhead multiple collaborations that bring novel data analysis approaches to specific problems in biomedical data. I am particularly interested in techniques for characterizing the dynamics of biological systems.
Extracting Reproducible Time-Resolved Resting State Networks using Dynamic Mode Decomposition
by Kunert-Graf, Eschenburg, Galas, Kutz, Rane and Brunton. Submitted.
Summary: Resting state networks (RSNs) extracted from functional magnetic resonance imaging (fMRI) scans are believed to reflect the intrinsic organization and network structure of brain regions. Most traditional methods for computing RSNs typically assume these functional networks are static throughout the duration of a scan lasting 5-15 minutes. However, they are known to vary on timescales ranging from seconds to years; in addition, the dynamic properties of RSNs are affected in a wide variety of neurological disorders. Recently, there has been a proliferation of methods for characterizing RSN dynamics, yet it remains a challenge to extract reproducible time-resolved networks. In this paper, we develop a novel method based on dynamic mode decomposition (DMD) to extract networks from short windows of noisy, high-dimensional fMRI data, allowing RSNs from single scans to be resolved robustly at a temporal resolution of seconds. We demonstrate this method on data from 120 individuals from the Human Connectome Project and show that unsupervised clustering of DMD modes discovers RSNs at both the group (gDMD) and the single subject (sDMD) levels. The gDMD modes closely resemble canonical RSNs. Compared to established methods, sDMD modes capture individualized RSN structure that both better resembles the population RSN and better captures subject-level variation. We further leverage this time-resolved sDMD analysis to infer occupancy and transitions among RSNs with high reproducibility. This automated DMD-based method is a powerful tool to characterize spatial and temporal structures of RSNs in individual subjects.
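A minimal sketch of the pipeline described above (sliding windows, exact DMD on each window as in the earlier sketch, then unsupervised clustering of the resulting spatial modes); the window length, rank, and cluster count here are illustrative placeholders rather than the published settings, and the random matrix merely stands in for a parcellated scan.

import numpy as np
from sklearn.cluster import KMeans

def windowed_dmd_modes(data, window, step, rank):
    """Collect DMD spatial modes from sliding windows of a (regions x time) matrix."""
    modes = []
    for start in range(0, data.shape[1] - window, step):
        X = data[:, start:start + window - 1]       # snapshots t
        Y = data[:, start + 1:start + window]       # snapshots t+1
        U, S, Vh = np.linalg.svd(X, full_matrices=False)
        U, S, V = U[:, :rank], S[:rank], Vh[:rank, :].conj().T
        Atilde = U.conj().T @ Y @ V / S             # reduced linear operator
        _, W = np.linalg.eig(Atilde)
        Phi = Y @ V @ np.diag(1.0 / S) @ W          # DMD modes for this window
        modes.append(np.abs(Phi).T)                 # keep the spatial magnitude maps
    return np.vstack(modes)

scan = np.random.randn(200, 1200)                   # 200 regions x 1200 time points (placeholder)
modes = windowed_dmd_modes(scan, window=60, step=30, rank=8)
labels = KMeans(n_clusters=10, n_init=10).fit_predict(modes)  # candidate RSN clusters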