Networks of neurons, whether biological or artificial, are called recurrent if their connections are distributed and contain feedback loops. Such networks can perform remarkably complex computations, as evidenced by their ubiquity throughout the brain and their ever-increasing use in machine learning. They are, however, notoriously hard to control, and their dynamics are generally poorly understood, especially in the presence of external forcing. This is because recurrent networks are typically chaotic systems, meaning they have rich and sensitive dynamics leading to variable responses to inputs. How does this chaos manifest in the neural code of the brain? How might we tame sensitivity to exploit complexity when training artificial recurrent networks for machine learning? Understanding how the dynamics of large driven networks shape their capacity to encode and process information presents a sizeable challenge.
In this talk, I will discuss the use of Random Dynamical Systems Theory as a framework to study information processing in high-dimensional, signal-driven networks. I will present an overview of recent results linking chaotic attractors to entropy production, dimensionality, and input discrimination of dynamical observables. I will outline the insights this theory provides into how cortex performs complex computations using sparsely connected inhibitory and excitatory neurons, as well as implications for gradient-based optimization methods for artificial networks.