Neuro-Vesicles: Neuromodulation Should Be a Dynamical System, Not a Tensor Decoration
By: Zilin Li, Weiwei Xu, Vicki Kane
Potential Business Impact:
Makes AI learn and remember like a brain.
We introduce Neuro-Vesicles, a framework that augments conventional neural networks with a missing computational layer: a dynamical population of mobile, discrete vesicles that live alongside the network rather than inside its tensors. Each vesicle is a self-contained object v = (c, κ, ℓ, τ, s) carrying a vector payload, a type label, a location on the graph G = (V, E), a remaining lifetime, and an optional internal state. Vesicles are emitted in response to activity, errors, or meta-signals; migrate along learned transition kernels; probabilistically dock at nodes; locally modify activations, parameters, learning rules, or external memory through content-dependent release operators; and finally decay or are absorbed. This event-based interaction layer reshapes neuromodulation: instead of applying the same conditioning tensors on every forward pass, modulation emerges from the stochastic evolution of a vesicle population that can accumulate, disperse, trigger cascades, carve transient pathways, and write structured traces into topological memory. Dense, short-lived vesicles approximate familiar tensor mechanisms such as FiLM, hypernetworks, or attention; sparse, long-lived vesicles resemble a small set of mobile agents that intervene only at rare but decisive moments. We give a complete mathematical specification of the framework, including emission, migration, docking, release, decay, and their coupling to learning; a continuous density relaxation that yields differentiable reaction-diffusion dynamics on the graph; and a reinforcement-learning view in which vesicle control is treated as a policy optimized for downstream performance. We also outline how the same formalism extends to spiking networks and neuromorphic hardware such as the Darwin3 chip, enabling programmable neuromodulation on large-scale brain-inspired computers.
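To make the lifecycle concrete, here is a minimal Python sketch of the discrete loop the abstract describes: emission at active nodes, migration along a row-stochastic transition kernel on G, probabilistic docking with a content-dependent release that perturbs node activations, and decay via a remaining-lifetime counter. This is not the paper's implementation; the names (`Vesicle`, `emit`, `step`) and the fixed `dock_prob`/`gain` parameters are illustrative assumptions.

```python
import numpy as np
from dataclasses import dataclass
from typing import Optional

@dataclass
class Vesicle:
    payload: np.ndarray                 # c: vector payload carried by the vesicle
    vtype: int                          # kappa: discrete type label
    node: int                           # l: current node on the graph G = (V, E)
    ttl: int                            # tau: remaining lifetime in steps
    state: Optional[np.ndarray] = None  # s: optional internal state

def emit(activations, threshold):
    """Emit one vesicle at each node whose activation norm exceeds a threshold."""
    return [Vesicle(payload=a.copy(), vtype=0, node=i, ttl=5)
            for i, a in enumerate(activations)
            if np.linalg.norm(a) > threshold]

def step(vesicles, P, activations, dock_prob=0.2, gain=0.1, rng=None):
    """One event-based update: migrate -> maybe dock and release -> decay."""
    rng = np.random.default_rng() if rng is None else rng
    survivors = []
    for v in vesicles:
        # Migration: sample the next node from row v.node of the kernel P.
        v.node = rng.choice(P.shape[1], p=P[v.node])
        # Docking: with probability dock_prob, release the payload into the
        # node's activation (an additive release operator) and absorb the vesicle.
        if rng.random() < dock_prob:
            activations[v.node] += gain * v.payload
            continue
        # Decay: vesicles vanish when their remaining lifetime hits zero.
        v.ttl -= 1
        if v.ttl > 0:
            survivors.append(v)
    return survivors

# Toy run: a 6-node ring graph with a uniform two-neighbor transition kernel.
rng = np.random.default_rng(0)
n, d = 6, 4
P = np.zeros((n, n))
for i in range(n):
    P[i, (i - 1) % n] = P[i, (i + 1) % n] = 0.5
activations = rng.normal(size=(n, d))
vesicles = emit(activations, threshold=1.5)
for _ in range(10):
    vesicles = step(vesicles, P, activations, rng=rng)
```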
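The continuous density relaxation also admits a compact sketch. Assuming (our reading of the abstract's reaction-diffusion phrasing, not the paper's stated equations) that each vesicle type is relaxed to a per-node density ρ that diffuses via the graph Laplacian L, decays at a fixed rate, and is replenished by an emission field, one explicit Euler step looks like the following; `density_step` and its `diff`/`decay` parameters are hypothetical names.

```python
import numpy as np

def graph_laplacian(A):
    """Combinatorial Laplacian L = D - A of an undirected adjacency matrix A."""
    return np.diag(A.sum(axis=1)) - A

def density_step(rho, L, emission, diff=0.1, decay=0.05, dt=0.1):
    """Euler step of d(rho)/dt = -diff * L @ rho - decay * rho + emission.

    rho: (n, k) per-node, per-type vesicle densities. Every operation here is
    smooth, so gradients can flow through a whole rollout under autograd.
    """
    return rho + dt * (-diff * (L @ rho) - decay * rho + emission)

# Toy run on the same 6-node ring: a point source at node 0 spreads and decays.
n, k = 6, 2
A = np.zeros((n, n))
for i in range(n):
    A[i, (i - 1) % n] = A[i, (i + 1) % n] = 1.0
L = graph_laplacian(A)
rho = np.zeros((n, k))
emission = np.zeros((n, k))
emission[0, 0] = 1.0
for _ in range(50):
    rho = density_step(rho, L, emission)
```

In this relaxed view, dense short-lived vesicles blur into a smooth, differentiable modulation field (the FiLM-like regime the abstract mentions), while the discrete simulation above captures the sparse, agent-like regime.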
Similar Papers
Data-Efficient Neural Training with Dynamic Connectomes
Neurons and Cognition
Shows how computer brains learn by watching their activity.
Learning Chaotic Dynamics with Neuromorphic Network Dynamics
Disordered Systems and Neural Networks
Makes computers learn by mimicking brain circuits.
Improving the adaptive and continuous learning capabilities of artificial neural networks: Lessons from multi-neuromodulatory dynamics
Neurons and Cognition
Teaches computers to learn like brains.