Day 6 - Computing with Spikes: Mihai Petrovici, Nicolas Brunel, Elisabetta Chicca
From exploring how the brain might learn by adding noise, to synaptic plasticity, to building insect-inspired robots that navigate using spikes.
"Noise Is All You Need?" - Learning in the brain without copying weights
One of the most fundamental problems in brain-inspired computing is how to move information backward through a network to adjust synaptic weights, something backpropagation does effortlessly but is considered biologically implausible. Why? Because “weight transport”, copying the forward weights into a backward pathway, is something biology doesn’t do: biological neurons do not come with a reverse gear. But what if we don’t need to copy weights at all?
When we train a neural network, we rely on the transpose of the forward weights during backpropagation. But the brain doesn’t seem to use symmetric weights. That brings us to a key idea: could noise allow learning to occur without weight transport at all? This may sound “wild”, but ideas like this underpin theoretical models like Hopfield networks and Boltzmann machines, and take inspiration from neuromorphic chips, too.
Measuring synaptic weights: A hard problem
Let’s say you want to know the strength of a synapse. In biology, you might:
• Look at the post-synaptic current,
• Use microscopy to observe synapse size (a proxy),
• Or infer the weight by seeing how the output changes with varying input.
In neuromorphic hardware, it’s similar:
• Synaptic weights may be stored digitally, but their effect is analog.
• Reading them still requires looking at the somatic response, the “output” of the neuron.
• Access to the crossbar array (where weights are stored) is often limited, noisy, or indirect.
So in both brains and chips, we often can’t directly read or copy weights. Electrophysiological techniques like patch-clamp recordings can infer synaptic strength, but they are far from practical for system-level analysis, so in both biology and hardware, direct access to weights remains difficult.
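As a toy illustration of the last bullet above (my own sketch, not from the talk): if a neuron were roughly linear, a hidden synaptic weight could be estimated purely from observed input/output pairs, without ever reading the weight itself.

```python
# Toy illustration (my own sketch, not from the talk): estimating a hidden
# synaptic weight from input/output observations of a roughly linear neuron.
import numpy as np

rng = np.random.default_rng(0)
w_true = 0.7                                   # the weight we cannot read directly
x = rng.normal(size=1000)                      # presynaptic drive
y = w_true * x + 0.1 * rng.normal(size=1000)   # noisy somatic response

w_hat = np.dot(x, y) / np.dot(x, x)            # least-squares estimate from I/O data alone
print(f"estimated weight: {w_hat:.3f}  (true value: {w_true})")
```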
Transposing with noise
Here’s the idea from Kevin Max and colleagues in Mihai's group [1]: imagine two neurons connected both forward and backward. Instead of copying the forward weight W backward, you can:
1. Add independent Gaussian noise ξi to each neuron (e.g., drawn from N(0,1)). For simplicity, assume linear activations (the same idea carries over to non-linear ones).
2. High-pass filter the synaptic signals.
3. Let the system observe how noise propagates across the connection.
To estimate gradients without copying the forward weights into a backward path, the method exploits how correlated noise propagates through the network.
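Here is a minimal sketch of that idea (my own simplification with linear units and made-up sizes, not the exact algorithm of [1]): noise injected into the hidden units propagates forward through W, and correlating each unit's own noise with the resulting output fluctuations yields, on average, the transpose of W, which can be accumulated into backward weights without ever reading W.

```python
# Minimal sketch (my own toy version, not the exact algorithm of [1]):
# estimating the transpose of unknown forward weights W from injected noise,
# assuming linear units. All sizes and constants here are made up.
import numpy as np

rng = np.random.default_rng(1)
n_hidden, n_out = 20, 10
W = rng.normal(size=(n_out, n_hidden))       # forward weights, not directly readable
B = np.zeros((n_hidden, n_out))              # backward weights, learned from noise
eta, sigma, steps = 0.01, 1.0, 5000

for _ in range(steps):
    xi = sigma * rng.normal(size=n_hidden)   # independent noise injected into hidden units
    delta_out = W @ xi                       # the noise-driven (high-frequency) part of the output
    # Local, Hebbian-like update: correlate each hidden unit's own noise with
    # the output fluctuation it caused. In expectation this is proportional to W.T.
    B += eta * np.outer(xi, delta_out)

B_hat = B / (eta * steps * sigma**2)         # undo the accumulated scale factor
print("relative alignment error:",
      np.linalg.norm(B_hat - W.T) / np.linalg.norm(W.T))
```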
How to picture both mechanisms?
Because the forward and backward synapses respond differently depending on W and its transpose, the statistical structure of the activity changes, and learning can exploit this: useful signals can be recovered from the noisy process. The strategy has been tested on benchmarks like CIFAR-10 and works across layers. If you can’t copy the weight, let the system infer it from how noise transforms as it flows through the network.
Enter spikes and STDP
What if your neurons now spike? Then you get interesting time-based dynamics, explored in detail by Timo Gierlich, whose PhD work focuses on spike timing and cortical microcircuits [2]:
• With noise in the membrane potential, your output spike train becomes jittered, not by amplitude, but by timing.
• If weights are symmetric (Wij = Wji), the distribution of pre-post spike intervals is also symmetric.
• If weights are asymmetric, this distribution becomes skewed.
And that’s where STDP comes in: combining Hebbian and anti-Hebbian rules that exploit these distributions reduces the asymmetry, effectively "symmetrizing" the weights over time. Noise in spike timing encodes useful asymmetries, and STDP can learn from the shape of that noise to achieve functional balance. This is learning without copying, driven by timing, noise, and the natural dynamics of the system.
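A toy sketch of this effect (my own simplification with arbitrary parameters, not Gierlich's actual model): two noisy leaky integrate-and-fire neurons are reciprocally coupled with unequal weights, which skews the distribution of pre/post spike intervals; an anti-Hebbian pair-based update driven by those intervals pushes the weights back toward symmetry.

```python
# Toy sketch (my own simplification, not the model from [2]): reciprocally
# coupled noisy LIF neurons with asymmetric weights. An anti-Hebbian
# pair-based rule driven by pre/post spike intervals shrinks the asymmetry.
import numpy as np

rng = np.random.default_rng(2)
dt, steps = 0.1, 500_000                   # time step (ms) and simulation length
tau_m, v_th, v_reset = 20.0, 1.0, 0.0      # membrane time constant, threshold, reset
w = np.array([[0.0, 0.6],                  # w[i, j]: synaptic weight from neuron j onto neuron i
              [0.2, 0.0]])                 # deliberately asymmetric
eta, tau_stdp = 2e-3, 20.0                 # learning rate and STDP time constant
print("initial asymmetry:", abs(w[0, 1] - w[1, 0]))

v = np.zeros(2)
last_spike = np.full(2, -np.inf)           # most recent spike time of each neuron

for step in range(steps):
    t = step * dt
    v += -v * dt / tau_m + 0.06 * dt + 0.05 * np.sqrt(dt) * rng.normal(size=2)
    for i in np.where(v >= v_th)[0]:       # neurons that spike in this time step
        v[i] = v_reset
        j = 1 - i
        v[j] += w[j, i]                    # synaptic kick onto the other neuron
        # Anti-Hebbian pair update: neuron j fired before neuron i, so depress
        # the j -> i synapse and potentiate i -> j, weighted by the interval.
        interval = t - last_spike[j]
        if np.isfinite(interval):
            kernel = np.exp(-interval / tau_stdp)
            w[i, j] -= eta * kernel
            w[j, i] += eta * kernel
        last_spike[i] = t

print("final asymmetry:", abs(w[0, 1] - w[1, 0]))   # should shrink relative to the start
```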
Neurons can use noise as a carrier for hidden information. It’s a bit weird, but nature often is. And in noisy systems like analog chips or the brain, noise comes for free, with no extra energy required. Just make sure you are adding high-frequency noise, and you’ve got a "communication channel".
Is Noise All You Need?
If we can’t transport weights directly, maybe we don’t need to. Maybe we just need the right kind of noise and smart rules to learn from how it flows. Noise is not just tolerated. Not just filtered. But harnessed in order to learn, balance, and build networks that work without backpropagation, without copying, and maybe, just maybe... a little more like the brain. “In these cases, yes, noise is all you need,” Mihai concluded.
REFERENCES
[1] Max, Kevin, et al. "Learning efficient backprojections across cortical hierarchies in real time." Nature Machine Intelligence 6.6 (2024): 619-630.
The next speaker was Nicolas Brunel, who started the discussion by writing down two fundamentally different types of neural dynamics that the brain uses for different purposes:
1. Persistent activity
This is what we see during working memory tasks, when an animal (or human) needs to hold onto information in the short term (e.g., a delayed response task where an animal waits a few seconds before making a decision). Here, the same neurons stay active, maintaining a representation over time.
2. Sequential activity
In contrast, sequential dynamics are observed when animals perform structured behaviors over time, like generating a motor sequence (think birdsong). Here, different neurons become active at different times, creating a “moving” pattern of activation through the network (e.g., the classical synfire chain). This is not just a memory, but rather a “neuronal choreography”.
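A minimal rate-based sketch contrasting the two regimes (my own toy illustration with arbitrary parameters, not a model from the talk): a brief pulse either travels along a feedforward chain of groups, producing sequential activity, or is held by a single recurrently connected group, producing persistent activity.

```python
# Toy rate-based sketch (my own illustration, arbitrary parameters): sequential
# activity in a feedforward chain vs. persistent activity in a recurrent group.
import numpy as np

n_groups, tau, dt, steps = 5, 10.0, 1.0, 100
r_chain = np.zeros(n_groups)       # rates of groups in a feedforward (synfire-like) chain
r_recur = 0.0                      # rate of a single recurrently connected group
w_ff, w_rec = 2.0, 2.0             # feedforward and recurrent gains

for t in range(steps):
    pulse = 1.0 if t == 0 else 0.0                           # brief input at t = 0 only
    drive = np.concatenate(([pulse], w_ff * r_chain[:-1]))   # each group driven by the previous one
    r_chain += dt / tau * (-r_chain + np.tanh(drive))
    r_recur += dt / tau * (-r_recur + np.tanh(w_rec * r_recur + pulse))
    if t % 20 == 0:
        # The bump of activity moves along the chain, while the recurrent
        # group settles at a self-sustained, elevated rate.
        print(f"t={t:3d}  chain={np.round(r_chain, 2)}  persistent={r_recur:.2f}")
```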
What shapes these dynamics? (spoiler: Synaptic Plasticity)
To generate these dynamics, the brain relies on plasticity rules: ways synapses strengthen or weaken based on neural activity. A classic example is STDP (Spike-Timing-Dependent Plasticity), which adjusts synaptic strength based on the precise relative timing of pre- and post-synaptic spikes.
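The standard pair-based form of STDP (the textbook version, not a rule specific to Brunel's talk) can be written as a simple window function of the spike time difference Δt = t_post − t_pre:

```python
# Standard pair-based STDP window (textbook form): the weight change depends
# on the time difference between post- and pre-synaptic spikes, dt = t_post - t_pre.
import numpy as np

def stdp_window(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for a single pre/post spike pair (parameter values illustrative)."""
    dt_ms = np.asarray(dt_ms, dtype=float)
    potentiation = a_plus * np.exp(-dt_ms / tau_plus)    # pre before post (dt > 0)
    depression = -a_minus * np.exp(dt_ms / tau_minus)    # post before pre (dt < 0)
    return np.where(dt_ms > 0, potentiation, depression)

# Causal pairings potentiate, anti-causal pairings depress:
print(stdp_window([-40.0, -10.0, 10.0, 40.0]))
```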
But as Brunel points out, spike timing alone doesn’t explain everything.
We also need to consider:
• Firing rate effects (not just spike coincidences)
• Calcium-based models of plasticity, which depend on intracellular Ca levels in the post-synaptic neuron
• Experimental data showing multiple peaks and a wide spread in synaptic efficacy curves
These all hint at more complex rules than simple Hebbian STDP... Biology is complex, but that only makes it more fascinating to try to understand.
Building to understand: Event-based intelligence in Robotics
Last but not least, Elisabetta Chicca.
Elisabetta Chicca’s work sits at the intersection of understanding biology and engineering intelligent systems. The focus of her group is on using event-based sensing and spiking neural computation to solve real-world robotic problems inspired by nature.
• Interdisciplinary exposure: through group projects and joint supervision, students gain experience outside their home lab’s domain.
• Annual retreats: all students come together at four major events over the program’s life span. These are not just for presentations, but also to form teams, launch collaborative projects, and build a strong interdisciplinary research community.
A large part of the discussion centered around benchmarking: how do we evaluate progress in neuromorphic engineering? Are current benchmarks really serving the field, and how can we do better?
An important theme was networking beyond academia, not just for funding, but to stay in touch with practical challenges and evolving benchmarks. Yulia Sandamirskaya added that neuromorphic researchers must be able to articulate the value of their work, to be able to say not only what they built, but why it matters: Is it faster? More efficient? More robust under noise or energy constraints? And critically: who benefits?