Day 6 - Computing with Spikes: Mihai Petrovici, Nicolas Brunel, Elisabetta Chicca


Today ranged from exploring how the brain might learn with the help of noise, to synaptic plasticity, to insect-inspired robots that navigate using spikes.

"Noise Is All You Need?" - Learning in the brain without copying weights

One of the most fundamental problems in brain-inspired computing is how to move information backward through a network to adjust synaptic weights, something backpropagation does effortlessly, but in a way that is considered biologically implausible. Why? Because “weight transport” (copying the forward weights backward) is something that biology doesn’t do, and biological neurons do not come with a reverse gear. But what if we don’t need to copy weights at all?

When we train a neural network, we rely on the transpose of the forward weights during backpropagation. But the brain doesn’t seem to use symmetric weights. That brings us to a key idea: could noise allow learning to occur without weight transport at all? This may sound “wild”, but ideas like this underpin theoretical models like Hopfield networks and Boltzmann machines, and take inspiration from neuromorphic chips, too.

Measuring synaptic weights: A hard problem

Let’s say you want to know the strength of a synapse. In biology, you might:

Look at the post-synaptic current,

Use microscopy to observe synapse size (a proxy),

Or infer the weight by seeing how the output changes with varying input.

In neuromorphic hardware, it’s similar:

Synaptic weights may be stored digitally, but their effect is analog.

Reading them still requires looking at the somatic response, the “output” of the neuron.

Access to the crossbar array (where weights are stored) is often limited, noisy, or indirect.

So in both brains and chips, we often can’t directly read or copy weights. Electrophysiological techniques like patch-clamp recordings can infer synaptic strength, but they are far from practical for system-level analysis; in biology and hardware alike, direct access to weights remains difficult.

Transposing with noise

Here’s the idea from Kevin Max and colleagues in Mihai's group [1]: imagine two neurons connected both forward and backward. Instead of copying the forward weight W backward, you can:

1. Add independent Gaussian noise ξi to each neuron (e.g., drawn from N(0,1)). Here, we also assume the activation is linear (the same idea applies if it is non-linear).

2. High-pass filter the synaptic signals.

3. Let the system observe how noise propagates across the connection. 

In this way, the method uses correlated noise propagation to estimate gradients without ever copying the forward weights into the backward pathway.
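
To get a feel for why noise is enough, here is a minimal toy sketch in Python (not the actual algorithm of [1]): with unit-variance, independent noise injected at a hidden layer, the average correlation between that noise and the fluctuations it causes downstream converges to the transpose of the forward weights, without ever reading them. Layer sizes and the number of samples are illustrative assumptions.

```python
# Toy sketch (not the exact algorithm of Max et al. [1]): estimate the transpose
# of a forward weight matrix purely from how injected noise propagates forward.
import numpy as np

rng = np.random.default_rng(0)
n_hidden, n_out = 5, 3
W = rng.normal(size=(n_out, n_hidden))   # forward weights (hidden -> output)

B = np.zeros((n_hidden, n_out))          # backward weights, learned from noise alone
n_samples = 20000
for _ in range(n_samples):
    xi = rng.normal(size=n_hidden)       # independent noise injected at the hidden layer
    out_fluct = W @ xi                   # fluctuation this noise causes downstream
    # E[xi (W xi)^T] = W^T for unit-variance independent noise, so averaging the
    # correlation between local noise and downstream fluctuations recovers W^T.
    B += np.outer(xi, out_fluct) / n_samples

print(np.abs(B - W.T).max())             # small residual: B ≈ W^T without reading W
```

In a real network the noise rides on top of the actual activity, which is why step 2 above high-pass filters the synaptic signals: the fast noise fluctuations can be separated from the slower signal before correlating.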



How to picture both mechanisms?

[Figure: backward (B) and forward (W) weights]


Because the forward and backward synapses respond differently depending on W and its transpose, the statistical structure of the output changes, and learning can exploit this. You can even recover useful signals from this noisy process: the strategy has been tested on benchmarks like CIFAR-10 and works across layers. If you can’t copy the weight, let the system infer it from how noise transforms as it flows through the network.

Enter spikes and STDP

What if your neurons now spike? Then you get interesting time-based dynamics, explored in detail by Timo Gierlich, whose PhD work focuses on spike timing and cortical microcircuits [2]:

With noise in the membrane potential, your output spike train becomes jittered, not by amplitude, but by timing.

If weights are symmetric (Wij = Wji), the distribution of pre-post spike intervals is also symmetric.

If weights are asymmetric, this distribution becomes skewed.


And that’s where STDP comes in: Hebbian and anti-Hebbian rules can learn from these distributions and reduce the asymmetry, effectively “symmetrizing” the weights over time. Noise in spike timing encodes useful asymmetries, and STDP can learn from the shape of that noise to achieve functional balance. This is learning without copying, driven by timing, noise, and the natural dynamics of the system.
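
As a caricature of that idea (a heavily simplified toy, not the model of [2]): suppose the pre-post spike-time difference between two reciprocally connected neurons is Gaussian with a mean that tracks the weight asymmetry; an antisymmetric STDP kernel then nudges the backward weight until the distribution becomes symmetric. All numbers below are made up for illustration.

```python
# Toy illustration, not the mechanism of [2]: assume the pre-post spike-time
# difference dt is Gaussian with mean (W_fwd - W_bwd) plus timing jitter,
# and let an antisymmetric STDP kernel symmetrize the weights.
import numpy as np

rng = np.random.default_rng(1)
W_fwd, W_bwd = 0.8, 0.2            # start asymmetric (illustrative values)
eta, jitter = 0.005, 1.0           # learning rate and timing noise (assumed)

def stdp(dt):
    # Antisymmetric kernel: potentiate when pre leads post, depress otherwise.
    return np.sign(dt) * np.exp(-abs(dt))

for _ in range(20000):
    dt = (W_fwd - W_bwd) + jitter * rng.normal()   # assumed statistical model
    W_bwd += eta * stdp(dt)        # a skewed dt distribution gives a net drift

print(round(W_bwd, 2))             # drifts toward W_fwd (0.8), up to residual noise
```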

Neurons can use noise as a carrier for hidden information. It’s a bit weird, but nature often is. And in noisy systems like analog chips or the brain, noise is free, and no extra energy is required. Just make sure you are adding high-frequency noise, and you’ve got a "communication channel".

Is Noise All You Need?

If we can’t transport weights directly, maybe we don’t need to. Maybe we just need the right kind of noise and smart rules to learn from how it flows. Noise is not just tolerated. Not just filtered. But harnessed in order to learn, balance, and build networks that work without backpropagation, without copying, and maybe, just maybe... a little more like the brain. “In these cases, yes, noise is all you need,” Mihai concluded.

REFERENCES

[1] Max, Kevin, et al. "Learning efficient backprojections across cortical hierarchies in real time." Nature Machine Intelligence 6.6 (2024): 619-630.

[2] Gierlich, Timo, et al. "Weight transport through spike timing for robust local gradients." arXiv preprint arXiv:2503.02642 (2025).




The next speaker was Nicolas Brunel, and he started the discussion by writing down two fundamentally different types of neural dynamics that the brain uses for different purposes:

1. Persistent activity

This is what we see during working memory tasks, when an animal (or human) needs to hold onto information in the short term (e.g., a delayed-response task where an animal waits a few seconds before making a decision). Here, the same neurons stay active, maintaining a representation over time.


2. Sequential activity

In contrast, sequential dynamics are observed when animals perform structured behaviors over time, like generating a motor sequence (think birdsong). Here, different neurons become active at different times, creating a “moving” pattern of activation through the network (e.g., the classical synfire chain). This is not just a memory, but rather a “neuronal choreography”.
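
A schematic way to see the difference between the two regimes (a toy rate model, not anything presented in the talk; all weights and sizes are illustrative): strong recurrent self-excitation holds activity in one group, while purely feedforward coupling passes it along a chain, synfire-style.

```python
# Toy rate model contrasting persistent and sequential activity.
import numpy as np

def run(w_self, w_next, n_groups=5, T=6):
    """Each group receives w_self from itself and w_next from the previous group."""
    rate = np.zeros((T, n_groups))
    rate[0, 0] = 1.0                                      # transient cue to the first group
    for t in range(1, T):
        prev = np.concatenate(([0.0], rate[t - 1, :-1]))  # input from group i-1
        rate[t] = np.clip(w_self * rate[t - 1] + w_next * prev, 0, 1)
    return rate

# Persistent activity: strong recurrence keeps the same group active over time.
print(run(w_self=1.0, w_next=0.0).round(1))
# Sequential activity: feedforward coupling moves activity along the chain.
print(run(w_self=0.0, w_next=1.0).round(1))
```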



What shapes these dynamics? (spoiler: Synaptic Plasticity)


To generate these dynamics, the brain relies on plasticity rules, ways in which synapses strengthen or weaken based on neural activity, e.g., STDP (Spike-Timing-Dependent Plasticity), which adjusts synaptic strength based on the precise timing of spikes between pre- and post-synaptic neurons.

But as Brunel points out, spike timing alone doesn’t explain everything. 

We also need to consider:

Firing rate effects (not just spike coincidences)

Calcium-based models of plasticity, which depend on intracellular Ca levels in the post-synaptic neuron

Experimental data showing multiple peaks and a wide spread in synaptic efficacy curves

These all hint at more complex rules than simple Hebbian STDP... Biology is complex, but that only makes it more fascinating to try to understand.
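
To make the calcium-based idea concrete, here is a minimal sketch of a calcium-threshold plasticity rule in the spirit of such models (not a fitted model; every parameter below is an illustrative assumption): pre- and post-synaptic spikes each bump a calcium trace, and the weight drifts up or down depending on which thresholds that trace crosses.

```python
# Minimal calcium-threshold plasticity sketch; all parameters are assumptions.
theta_d, theta_p = 1.0, 1.5        # depression / potentiation calcium thresholds
gamma_d, gamma_p = 0.3, 2.0        # depression / potentiation rates
tau_ca, tau_w = 20.0, 100.0        # calcium and weight time constants (ms)
c_pre, c_post = 0.8, 1.2           # calcium jump per pre / post spike
dt = 0.1                           # integration step (ms)

ca, w = 0.0, 0.5
pre_steps, post_steps = {100}, {200}   # pre spike at 10 ms, post spike at 20 ms

for step in range(2000):               # simulate 200 ms
    if step in pre_steps:  ca += c_pre     # presynaptic spike raises calcium
    if step in post_steps: ca += c_post    # postsynaptic spike raises it further
    ca -= dt / tau_ca * ca                 # calcium decays between spikes
    # Weight moves up while calcium exceeds the potentiation threshold and
    # down while it only exceeds the depression threshold.
    dw = gamma_p * (ca > theta_p) - gamma_d * (ca > theta_d)
    w = min(max(w + dt / tau_w * dw, 0.0), 1.0)

print(round(w, 3))                     # > 0.5: this pre-before-post pairing potentiates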




Building to understand: Event-based intelligence in Robotics

Last but not least, Elisabetta Chicca.

Elisabetta Chicca’s work sits at the intersection of understanding biology and engineering intelligent systems. The focus of her group is on using event-based sensing and spiking neural computation to solve real-world robotic problems inspired by nature.



Nature often solves problems under extreme constraints, e.g., few neurons, sparse data, tiny brains, yet yields efficient and (sometimes) elegant solutions. Insects, for example, perform robust navigation and obstacle avoidance using sparse motion detection, time-based cues, minimal energy, and structure.
These systems aren’t just clever, they’re optimal under limitations, and that’s a lesson for engineering. Elisabetta started by defining two aims: to understand biology, and, as an engineer, to solve a problem and find a solution.

Her group focuses on the Time Difference Encoder (TDE), a bio-inspired circuit that captures the temporal difference between spikes. Implemented in CMOS mixed-signal circuits with LIF neurons and synaptic gating, it can detect, for example, motion flow in vision (a 2-pixel silicon retina), frequency differences in audition, tactile spatial patterns, and more. The key mechanism: a spike on one input sets a gate, and the second spike then triggers an EPSC whose amplitude is modulated by how much of that gate remains, so the system responds to the timing difference dt rather than the spike count. This enables sparse but highly informative sampling. Using a small set of TDE units and LIF neurons, they built robots capable of estimating motion flow, performing obstacle avoidance, detecting interaural time differences for sound localisation, and reacting with head-turning behaviours.
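
A minimal sketch of that gating mechanism (an idealized model with illustrative time constants, not the actual CMOS circuit): the first spike charges a facilitation variable that decays, the second spike triggers an EPSC scaled by whatever is left of it, so shorter time differences produce larger responses.

```python
# Idealized Time Difference Encoder sketch; time constants are illustrative.
def tde_response(dt_steps, tau_fac=100, tau_epsc=50, n_steps=600):
    """Peak EPSC when the trigger spike arrives dt_steps after the
    facilitation spike (0.1 ms per step; smaller dt -> larger response)."""
    fac, epsc, peak = 0.0, 0.0, 0.0
    for step in range(n_steps):
        if step == 0:
            fac = 1.0                 # facilitation spike opens the gate
        if step == dt_steps:
            epsc += fac               # trigger spike: EPSC scaled by remaining gate
        fac  -= fac / tau_fac         # gate decays exponentially
        epsc -= epsc / tau_epsc       # EPSC decays exponentially
        peak = max(peak, epsc)
    return peak

# Smaller time differences give larger responses: a spike-based velocity cue.
for dt_ms in (2, 5, 20):
    print(dt_ms, "ms ->", round(tde_response(dt_steps=dt_ms * 10), 3))
```

In the hardware implementation this current drives an LIF neuron, so the time difference ends up encoded in its output spikes rather than in a continuous value.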

This approach reminded some people of the philosophy of Braitenberg Vehicles, simple sensor-motor systems that exhibit life-like complex behavior through direct connections but minimal processing. The book "Vehicles: Experiments in Synthetic Psychology" by Valentino Braitenberg was mentioned during the talk as a good, inspiring read, and also served as a reminder that even very simple architectures can produce emergent "intelligent" behavior when embedded in the right context. 

After some questions from the audience, the discussion circled back to a central theme: simulation or emulation? We often simulate “brains”, but do we really understand them until we build systems that behave similarly? Elisabetta ended with a powerful point, a quote by Richard Feynman: “What I cannot build, I do not understand.”

Her point was clear: real understanding comes not just from modeling, but from building systems that behave like the ones we study, whether in silicon or in nature...





After the morning talk sessions, there were again several workgroup gatherings, and people continued collaborating and developing their project ideas. Before lunch, there was a discussion group in the lecture room on the ELEVATE Marie-Curie Doctoral Network, with several PIs from CCW25 who are part of it sharing insights. The group reflected on the challenges and opportunities of interdisciplinary PhD training. Next-generation neuromorphic research lives at the crossroads of neuroscience, hardware, algorithms, robotics, and machine learning. The ELEVATE program wants to address this by structuring training around a few core principles:
Mobility: PhD students must change physical environments during their training, moving between industry and academia, or across institutions.
Interdisciplinary exposure: Through group projects and joint supervision, students gain experience outside their home lab’s domain.
Annual retreats: All students come together at four major events over the program’s life span. These are not just for presentations, but also to form teams, launch collaborative projects, and build a strong interdisciplinary research community.

A large part of the discussion centered on benchmarking: how do we evaluate progress in neuromorphic engineering, are current benchmarks really serving the field, and how can we do better?

Interaction with Industry
An important theme was networking beyond academia, not just for funding, but to stay in touch with practical challenges and evolving benchmarks. Yulia Sandamirskaya added that neuromorphic researchers must be able to articulate the value of their work, saying not only what they built, but why it matters: Is it faster? More efficient? More robust under noise or energy constraints? And critically: who benefits?

Neuromorphic for what?
There was also a discussion of societal and ethical implications. If neuromorphic computing is to succeed, it might need more than just technical validation:
Energy efficiency: a green alternative to traditional AI?
Real-time performance: crucial for autonomous systems?
Resilience: operation in noisy, uncertain environments?

But there are risks, especially if such technologies are applied in defense or surveillance contexts. The neuromorphic community must stay conscious of how its work and knowledge might be used.

Fundamental vs. Applied Research
A "tension" emerged during the discussions :). Should neuromorphic engineering be application-driven (e.g. real-time robotics, energy-efficient AI)? Or is its value also in addressing fundamental theoretical questions: about learning, understanding brains, and computation?
“We don’t need to prove anything to do basic research,” someone noted, and rightly so. Yet when industry engagement is on the table, researchers have to communicate the benefits clearly: not necessarily commercial products, but hard numbers and performance gains that can serve as "benchmarks" for the field. At the end of the day, science must remain grounded enough to be tested, shared, and built upon step by step, and that means we need more than ideas: we need evidence. It is about demonstrating value, whether in energy efficiency, speed, robustness, scalability, or other gains... These quantifiable metrics don’t diminish fundamental research; they can strengthen its credibility and open doors for collaboration, funding, and real-world impact.

However, one should remember that curiosity-driven science can be the bedrock of innovation. It’s what leads us to ask the unexpected questions, explore new mechanisms, and lay the groundwork for discoveries that may not have immediate ground-breaking applications in the here and now, but eventually have the power to transform entire fields and perhaps even push us to think in new, unconventional ways. :)

Here are a few snapshots from the discussion group, which drew a large and active audience:



Later in the afternoon, there was another discussion group on "Discovering Strategies for Robust Computation in Mixed-signal Hardware and Biology". In this session, participants explored the core challenges (and limitations) of working with noisy, imprecise, or constrained physical systems, in neuromorphic hardware as well as in biological models. Some of the questions that arose: how do we build systems that compute reliably even under uncertainty, and how much inspiration should we actually take from the brain? Are we staying true to biological principles? Are we drifting away from biological relevance? The conversation centered on identifying strategies to overcome these issues, sharing ideas for practical solutions, and discussing how robustness can be engineered or evolved into such systems.




This Saturday evening was a special one, as we gathered for our social dinner at a lovely restaurant. It was a relaxed and enjoyable time with great food, lively conversations, and the chance to connect more deeply and get to know each other beyond the talks and sessions.

Social Dinner @"Le Pinnette" Restaurant
A big congratulations and heartfelt thanks to the organizing team for their incredible effort in creating such a memorable experience :)


That's it for today.
Sunday is a free day, and on Monday, it will be officially the 2nd week of CCW25.
See you in the next one!



Photos from our hike on Sunday:)


