Causally consistent abstractions of time-series data

Abstract

Understanding complex dynamical systems, particularly in neuroscience, is challenging due to the high dimensionality and intricacy of the available data. While acknowledging the importance of establishing causal relationships, this dissertation contends that a purely causal understanding may not suffice, since it can be too complex to be interpretable. We therefore identify the need for causally consistent abstractions. To address this, we present a mathematical framework outlining the key assumptions under which causally consistent high-level models can be derived directly from observational data. We then introduce BunDLe-Net, an architecture that learns high-level models directly from neuronal and behavioural time-series data. We demonstrate the efficacy of our architecture across various modalities of neuroscience data, where it consistently produces interpretable results that not only align with existing knowledge but also reveal novel insights about the data. Additionally, this thesis introduces a toolbox implementing BunDLe-Net. Finally, we discuss the future research avenues our work opens up across scientific domains such as causality, data science and neuroscience.

Authors
  • Kumar, Akshey
Shortfacts
Category
Thesis (PhD)
Divisions
Neuroinformatics
Subjects
Artificial Intelligence
Date
2023