Day 10 - BMIs with neuromorphic substrates: Francesca Santoro, Elisa Donati, Jean-Jacques Slotine


Approaching Capocaccia by speedboats 🌞

On Day 10, we explored how neuromorphic systems can interface with the brain and help us understand learning itself. Francesca Santoro showed how organic electronics can mimic synaptic behavior and physically integrate with biological networks, opening the door to biocompatible, and perhaps one day intelligent, implants. Elisa Donati demonstrated that neuromorphic twins, real-time spiking replicas of brain areas, can help restore lost neural communication and even replicate human sensory adaptation. Jean-Jacques Slotine connected neuroscience and deep learning through contraction theory, revealing how high-dimensional systems, whether the brain or deep artificial neural networks, can achieve stability, generalization, and attention through their geometry.

Francesca Santoro: Engineering Organic Neuromorphics

What if the materials we use to interface with the brain were as soft, adaptive, and chemically dynamic as the brain itself?

Francesca's work in organic bioelectronics proposes exactly that. She started by introducing the term Neurohybrids and by setting the stage with a key observation about the biological brain: its function is deeply shaped by its topology (connectivity), mechanics (structure and material properties), and plasticity (adaptability). Unlike traditional artificial neural networks (ANNs), which are largely implemented in software or rigid CMOS hardware, biological systems operate in soft, aqueous, ion-rich environments. So, how do we bridge that gap?

She introduced organic mixed ionic-electronic conductors (OMIECs): materials that conduct both electrons and ions, operate at low voltages, and remain functional in liquid, biocompatible environments. Unlike rigid CMOS, these materials are flexible, stretchable, and even biodegradable, making them uniquely suited for long-term integration with brain tissue. A familiar example of organic electronics is the OLED, already common in flexible screens, which operates on similar organic principles.


The OECT: An Organic Synapse?

At the heart of her talk was the organic electrochemical transistor (OECT), a soft, ion-driven device capable of mimicking short-term plasticity (STP) and, under certain conditions, long-term plasticity (LTP). These devices can be tuned to “remember” activity patterns by leveraging mechanisms such as dopamine-triggered oxidation, much like biological synapses!

When a train of voltage pulses is applied, ions diffuse into the polymer, modifying its conductivity.

This behavior can be reversible (for short-term plasticity, STP) or irreversible (for long-term plasticity, LTP), depending on timing, amplitude, and chemistry.

These devices are “slow” (on the order of milliseconds or seconds), but they match biological time constants, making them excellent candidates for bio-inspired computing.

“Sometimes,” Santoro noted, “these are bad transistors, but that’s exactly what makes them good synapses.” Their imperfections, like residual memory from ion diffusion, mimic how biological systems forget and adapt. This convergence of bio-mimicry and electronics is more than just an engineering trick. It opens a door to two-way communication between neurons and machines, toward a future of biocompatible implants, closed-loop neuromodulation, and hybrid systems.

How Do They Work? A Quick Analogy

Imagine applying a small voltage pulse (like an action potential). Ions from an electrolyte move into the organic channel. This changes the channel’s conductivity, like neurotransmitters modulating a real synapse.

If the next input comes quickly, residual ions affect the response → this is STP.

If the stimulus is strong enough, it triggers molecular changes (like oxidation via dopamine), and the device remembers the event → this is LTP.

Santoro showed how using dopamine (a real neurotransmitter) allows these devices to encode memory chemically. The number of protons released during dopamine oxidation modulates the device’s conductivity, emulating how real synapses strengthen with more neurotransmitter activity.
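To make the analogy concrete, here is a minimal toy model in Python (a sketch with assumed time constants and thresholds, not Santoro's actual device physics) of how a pulse train could drive both a decaying, reversible conductance change (STP) and a permanent one once a chemical threshold is crossed (LTP):

```python
import numpy as np

# Toy OECT-like conductance model (illustrative only; all parameters are assumed).
dt = 1e-3            # time step: 1 ms
tau_stp = 0.05       # reversible ion-diffusion decay time constant (50 ms, assumed)
ltp_threshold = 0.5  # "chemical" threshold above which changes become permanent

def simulate(pulse_times, amplitude, t_end=1.0):
    t = np.arange(0.0, t_end, dt)
    g_stp = np.zeros_like(t)   # reversible (short-term) conductance change
    g_ltp = np.zeros_like(t)   # irreversible (long-term) conductance change
    pulse_idx = set(int(round(p / dt)) for p in pulse_times)
    for i in range(1, len(t)):
        # ions diffuse back out -> the reversible component decays (STP)
        g_stp[i] = g_stp[i - 1] * (1 - dt / tau_stp)
        if i in pulse_idx:
            g_stp[i] += amplitude
        # strong transients get partially "locked in", crudely mimicking
        # dopamine-oxidation-like chemistry (LTP)
        g_ltp[i] = g_ltp[i - 1] + 0.01 * max(0.0, g_stp[i] - ltp_threshold)
    return t, g_stp + g_ltp

# Closely spaced pulses summate (STP); a strong train leaves a residual offset (LTP).
t, g = simulate(pulse_times=[0.10, 0.12, 0.14, 0.16], amplitude=0.25)
print(f"conductance just after the train: {g[int(0.2 / dt)]:.3f}")
print(f"conductance 0.5 s later (residual, LTP-like): {g[int(0.7 / dt)]:.3f}")
```

With weak or widely spaced pulses the conductance relaxes back to baseline (pure STP); only the dense, strong train crosses the threshold and leaves a lasting trace.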

The Big Vision: Biohybrid Systems

The long-term vision is clear: create materials that don’t just connect to neurons, but communicate with them. She discussed several key research directions:

Creating closed-loop neuromorphic circuits that adapt based on feedback.

Embedding miniaturized organic systems into implants, possibly for drug delivery.

Designing 3D, biomimetic architectures that resemble real neurons, not just functionally, but structurally (!), so that real neurons can form synaptic-like contacts with devices. She emphasized that these materials can be made to look and feel like biological tissue, increasing integration and functionality. This includes surface textures that encourage neurons to grow and form structured, functional interfaces with the artificial system. “We don’t have to be limited to 2D,” she noted. “We can design materials that neurons recognize, that feel like home.”

Some Takeaways 

OMIECs and OECTs are enabling new types of neuromorphic devices that are soft, low-power, and bio-friendly.

These devices mimic synaptic plasticity via ion dynamics, both reversible (STP) and irreversible (LTP).

Real neurochemicals like dopamine can be used to encode memory in hardware.

The future of bioelectronics involves not just reading from the brain, but embedding systems that interact, adapt, and even grow with brain tissue.

This talk set the tone for the rest of the day. Neuromorphic substrates aren’t just about mimicking the brain in silicon; they are about making brain-machine communication possible in entirely new ways :)



Elisa Donati: Building a Neuromorphic Twin

What if we could build a digital twin of the brain, not a simulation, but a living, spiking network that runs in real time and restores lost function? 

In her talk, Elisa introduced the concept of the "Neuromorphic Twin", an innovative framework that goes beyond conventional modeling towards embodied neural restoration.

What is a Neuromorphic Twin?

Most of us have heard of digital twins in engineering; these are exact digital replicas of real-world systems used to monitor, simulate, and optimize performance. But when we move into neuroscience, the idea becomes more profound. Can we create a real-time, hardware-embedded twin of a neural circuit that mimics its function, dynamics, and behavior? In this work, the idea is brought to life with neuromorphic hardware, specifically FPGAs running Hodgkin-Huxley neuron models. The twin is designed to replicate the behavior of a specific cortical region in rats, the Rostral Forelimb Area (RFA), which is involved in motor control. After a stroke, this area’s communication with the sensorimotor cortex (S1) is disrupted. The goal is to use the neuromorphic twin to restore that communication and behavior.

From Recording to Remapping: How?

1. Record from RFA before or after stroke.

2. Extract neural statistics: mean firing rate, burst duration, activity patterns.

3. Create a neuromorphic twin: a spiking network on an FPGA mimicking the RFA’s behavior.

4. Use PCA to extract a latent space, a reduced-dimensional map of RFA activity over time.

5. Stimulate S1 based on the twin’s dynamics, remapping activity to restore movement (a rough sketch of steps 2, 4, and 5 follows below).
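As an illustration of steps 2, 4, and 5 (not the team's actual code; the synthetic data, bin sizes, and the use of scikit-learn are all my assumptions), the latent-space extraction might look something like this:

```python
import numpy as np
from sklearn.decomposition import PCA

# Assumed input: binned spike counts from the recorded RFA channels
# (synthetic Poisson data here, just to fix the shapes).
rng = np.random.default_rng(0)
n_channels, duration, bin_size = 32, 60.0, 0.05        # 60 s recording, 50 ms bins
spike_counts = rng.poisson(1.0, size=(int(duration / bin_size), n_channels))

# Step 2: extract simple neural statistics per channel.
mean_rate = spike_counts.mean(axis=0) / bin_size        # spikes/s
print("mean firing rates (Hz):", np.round(mean_rate[:5], 2))

# Step 4: PCA to get a low-dimensional latent trajectory of RFA activity
# (the talk mentioned that often ~3 components are enough to guide stimulation).
pca = PCA(n_components=3)
latent = pca.fit_transform(spike_counts)
print("explained variance:", np.round(pca.explained_variance_ratio_, 3))

# Step 5, conceptually: map the latent trajectory to S1 stimulation parameters,
# e.g. scaling a (hypothetical) stimulation amplitude with the first component.
stim_amplitude = np.interp(latent[:, 0],
                           (latent[:, 0].min(), latent[:, 0].max()), (0.0, 1.0))
```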

The result? In rats, sensorimotor behaviors were partially restored, even with just one well-matched artificial neuron. And critically, this wasn’t just a hard-coded pattern: it was a dynamic, adaptive interaction between the chip and the brain.

Why FPGAs?

FPGAs allow fast, flexible reconfiguration of large neuron populations (1024 in this case), supporting real-time feedback. This is crucial when the goal is to intervene in active sensorimotor loops, where milliseconds matter. However, there’s a tradeoff: complexity and computational cost. Real-time emulation of even 1000 coupled Hodgkin-Huxley neurons is non-trivial. That’s why Elisa's team also looked at dimensionality reduction, focusing on just a few principal components (often 3) of the RFA’s activity to guide stimulation of S1.
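To get a feel for why this is non-trivial, here is a minimal single-neuron Hodgkin-Huxley integration in Python (standard textbook parameters; the FPGA implementation is of course very different). Each neuron carries four coupled state variables stepped at a fine time resolution; multiply by 1024 neurons plus their coupling, and the real-time constraint becomes clear:

```python
import numpy as np

# Classic Hodgkin-Huxley single neuron (textbook parameters, not the FPGA model).
C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3        # uF/cm^2 and mS/cm^2
ENa, EK, EL = 50.0, -77.0, -54.4              # reversal potentials (mV)

def rates(V):
    am = 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
    bm = 4.0 * np.exp(-(V + 65.0) / 18.0)
    ah = 0.07 * np.exp(-(V + 65.0) / 20.0)
    bh = 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    an = 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
    bn = 0.125 * np.exp(-(V + 65.0) / 80.0)
    return am, bm, ah, bh, an, bn

def step(V, m, h, n, I_ext, dt=0.01):          # dt in ms: 10 us resolution
    am, bm, ah, bh, an, bn = rates(V)
    m += dt * (am * (1 - m) - bm * m)          # three gating variables...
    h += dt * (ah * (1 - h) - bh * h)
    n += dt * (an * (1 - n) - bn * n)
    I_ion = gNa * m**3 * h * (V - ENa) + gK * n**4 * (V - EK) + gL * (V - EL)
    V += dt * (I_ext - I_ion) / C              # ...plus the membrane voltage
    return V, m, h, n

V, m, h, n, spikes = -65.0, 0.05, 0.6, 0.32, 0
for _ in range(int(100 / 0.01)):               # 100 ms of simulated time
    V_new, m, h, n = step(V, m, h, n, I_ext=10.0)
    if V < 0.0 <= V_new:                       # count upward zero-crossings
        spikes += 1
    V = V_new
print(f"spikes in 100 ms at 10 uA/cm^2: {spikes}")
```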

From Rats to Humans: The Bottom-Up Path

Donati also shared a related bottom-up study in collaboration with clinical researchers. In tetraplegic patients, electrodes implanted in S1 (sensory cortex) are used to evoke sensory percepts. But a common issue arises: adaptation. After a few seconds, patients stop feeling the stimulus, even when firing rates remain high. To understand this better, they modeled layer 4 of S1, incorporating:

Pyramidal cells - excitatory neurons

PV (Parvalbumin) - inhibitory neurons

SST (Somatostatin) - inhibitory neurons

They built a simple spiking circuit with realistic adaptation properties and showed that at different stimulation frequencies (50, 100, 300 Hz), pyramidal cell firing adapted just like human perception does. This was a powerful demonstration that even minimal neuromorphic models can replicate human sensory phenomena, which is a major validation of the neuromorphic twin as a concept.
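For intuition, here is a toy version of the phenomenon (my own sketch, not the team's circuit, and with made-up parameters): a leaky integrate-and-fire neuron with a spike-triggered adaptation current fires briskly at stimulus onset and then settles to a much lower rate, just as percepts fade while stimulation continues:

```python
import numpy as np

def adapting_lif(stim_hz, t_end=2.0, dt=1e-4):
    # Toy adapting LIF neuron; all parameters are illustrative assumptions.
    tau_m, tau_w = 0.02, 0.5        # membrane and adaptation time constants (s)
    v_th, b = 1.0, 0.25             # spike threshold, adaptation increment
    I = stim_hz / 40.0              # drive grows with stimulation frequency (assumed)
    v, w, spike_times = 0.0, 0.0, []
    for i in range(int(t_end / dt)):
        v += dt * (-v + I - w) / tau_m     # adaptation current w opposes the drive
        w += dt * (-w) / tau_w             # ...and decays slowly between spikes
        if v >= v_th:
            v = 0.0
            w += b                         # each spike strengthens adaptation
            spike_times.append(i * dt)
    st = np.array(spike_times)
    early = np.sum(st < 0.2) / 0.2         # rate in the first 200 ms
    late = np.sum(st > t_end - 0.2) / 0.2  # rate in the last 200 ms
    return early, late

for f in (50, 100, 300):
    early, late = adapting_lif(f)
    print(f"{f:3d} Hz stim: {early:6.1f} Hz at onset -> {late:5.1f} Hz adapted")
```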

Closed Loop - Real Time - Real Impact?

The next goal? To close the loop using the twin, not just to reproduce activity, but to modify stimulation in real time, adapting to the brain’s feedback. But current tools are limited: full closed-loop control at this scale is still computationally very intensive. Still, the twin offers a powerful advantage over pure simulation (see the sketch after this list):

It operates in real time.

It embeds physical computing constraints, just like the brain.

It enables exploration of emergent dynamics within biologically grounded models.
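Schematically, one loop iteration might look like the following skeleton (a hypothetical Python sketch with invented function names; the real system runs on an FPGA, not in Python):

```python
import time

CYCLE = 0.001  # assumed 1 ms loop budget: milliseconds matter in sensorimotor loops

def closed_loop_step(read_spikes, twin_update, compute_stim, apply_stim):
    """One iteration of a hypothetical brain <-> twin closed loop."""
    t0 = time.perf_counter()
    spikes = read_spikes()          # 1. record from the brain (e.g., RFA)
    latent = twin_update(spikes)    # 2. advance the neuromorphic twin's state
    stim = compute_stim(latent)     # 3. map twin dynamics to stimulation (e.g., of S1)
    apply_stim(stim)                # 4. stimulate, closing the loop
    return time.perf_counter() - t0 < CYCLE   # did we meet the deadline?

# Dummy callables, just to exercise the skeleton:
ok = closed_loop_step(lambda: [], lambda s: [0.0], lambda z: 0.0, lambda s: None)
print("met the 1 ms deadline:", ok)
```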

“You start with emulation,” Elisa said, “and only then go back to simulation, now with a better understanding of what matters in solving the real problem, as you are forced to deal with the physical constraints of the system.”

Some Takeaways

Neuromorphic twins are real-time spiking models that replicate specific brain regions, not just in structure but in function.

They could support neuroprosthetics and brain restoration, e.g., in post-stroke motor control.

Minimal spiking networks can reproduce complex sensory adaptation seen in humans.


Jean-Jacques Slotine: What Deep Learning Still Doesn’t Understand About the Brain (and Vice Versa?)

The morning discussions ended with a deeply reflective talk by Jean-Jacques Slotine, who posed a powerful question:

“Everyone says AI is inspired by neuroscience, but can you name one major discovery from neuroscience in the last 50 years that made it into AI?” 

Jean-Jacques reminded us that while machine learning has made incredible strides, its connection to real biological computation is often superficial and/or outdated. Many of AI’s foundational ideas, from Hebbian learning to local node computation, are decades old. Meanwhile, neuroscience has uncovered a rich world of complexity, including the roles of glial cells, astrocyte dynamics, and non-synaptic modulation, most of which is still absent in mainstream AI.

Astrocytes: The Silent Giants of Computation

Jean-Jacques began by highlighting astrocytes, glial cells that often outnumber neurons and can each connect to up to 100,000 synapses. These are not just passive support cells. They regulate neurotransmitter levels, shape synaptic plasticity, and potentially enable new forms of computation. If evolution kept astrocytes around, it probably found them computationally useful. He suggested that modeling astrocytic processes could offer AI novel mechanisms, especially for sparse connectivity, context modulation, and attention as seen in transformer architectures. In fact, he demonstrated how a simplified astrocyte-modulated model can mathematically implement something equivalent to transformer attention. In his words: “Astrocyte-inspired dynamics may offer a biologically grounded alternative to the transformers’ ‘magic’.”

In the model he sketched, T is a modulatory connectivity tensor: in its simplest setting the dynamics approximate attention (as in transformers), while a dense, fully connected T behaves like a deep, Hopfield-style associative memory. This connects astrocyte-inspired dynamics to modern AI: by changing the structure of T, you can move from focused, context-aware computation (attention) to broad, content-addressable memory (associative recall).
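To see why attention and associative memory sit on the same spectrum, here is a small generic sketch (my illustration, not Slotine's formulation; the modulation matrix T here is only a stand-in for his tensor): the same softmax read-out acts as transformer attention for one set of queries/keys/values, and as a Hopfield-style memory retrieval when the keys and values are the stored patterns themselves:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def modulated_attention(Q, K, V, T):
    # Softmax attention with a modulation matrix T applied to the scores.
    # (T is a stand-in for the tensor from the talk; its exact role in
    # Slotine's formulation differs -- this is purely illustrative.)
    scores = (Q @ K.T) / np.sqrt(Q.shape[-1])
    return softmax(scores * T) @ V

rng = np.random.default_rng(1)
n, d = 8, 16

# 1) Attention mode: distinct queries/keys/values, trivial all-ones modulation.
Q, K, V = rng.normal(size=(3, n, d))
out = modulated_attention(Q, K, V, T=np.ones((n, n)))
print("attention output shape:", out.shape)

# 2) Associative-memory mode: keys = values = stored patterns; a noisy pattern
#    as (sharpened) query retrieves the closest stored pattern.
patterns = rng.normal(size=(n, d))
query = patterns[3] + 0.3 * rng.normal(size=d)
retrieved = modulated_attention(4.0 * query[None], patterns, patterns,
                                T=np.ones((1, n)))
print("retrieved pattern 3?", np.argmax(patterns @ retrieved[0]) == 3)
```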

Why Does Deep Learning Work? A Dynamical Systems Perspective

He then pivoted to the mathematical foundations of deep learning, asking a "simple" question:
“Why does overparameterized gradient descent actually work so well?”

Using tools from nonlinear dynamics, he described how the loss landscape in deep networks can contain multiple equilibria (solutions), and how gradient descent often finds flat valleys of global minima, regions in which many solutions generalize well. 

But this only works under one critical condition.
Paths between solutions must not break, meaning the landscape must be connected and smooth. This is where contraction theory comes in: if a system is contracting or semi-contracting (in a specific Riemannian distance metric), then nearby trajectories converge and the paths cannot break. Remarkably, as the dimensionality increases, the likelihood of finding such a stable metric also increases, suggesting that high dimensionality may be the reason deep nets, despite having more parameters than training samples, often succeed.  “Overparameterization isn’t a flaw,” he said, “it’s what makes the space flexible enough to find generalizable solutions.” 
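As a toy illustration of that last point (my own minimal example, not from the talk): gradient descent on a linear model with far more parameters than data points still drives the training loss to zero, and when started from zero it converges to the minimum-norm interpolating solution, one well-behaved point in a whole connected valley of solutions:

```python
import numpy as np

# Overparameterized least squares: 10 samples, 200 parameters.
rng = np.random.default_rng(0)
n, d = 10, 200
X, y = rng.normal(size=(n, d)), rng.normal(size=n)

w, lr = np.zeros(d), 0.01
for _ in range(20_000):
    grad = X.T @ (X @ w - y) / n                 # gradient of the mean squared error
    w -= lr * grad

w_min_norm = X.T @ np.linalg.solve(X @ X.T, y)   # the minimum-norm interpolator
print("train loss:", np.mean((X @ w - y) ** 2))                          # ~ 0
print("distance to min-norm solution:", np.linalg.norm(w - w_min_norm))  # ~ 0
```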

The time derivative of the system’s potential energy shows that, as the system evolves under gradient descent, the cost can only decrease, heading toward the minima. The hand-drawn valley diagram (with the red path) illustrates the idea that in a well-behaved (contracting) system you can travel along a smooth valley of solutions between two equilibria, and the system flows toward this valley over time. Lower on the flipchart is the contraction condition: the distance between trajectories decreases over time or, at worst, stays constant (the λ = 0 case, called semi-contraction).
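In standard contraction-theory notation (my reconstruction of the flipchart, so take the exact form as a sketch), the two statements are: for gradient descent $\dot x = -\nabla L(x)$,

$$\dot V = \nabla L(x)^\top \dot x = -\lVert \nabla L(x) \rVert^2 \le 0,$$

so the cost never increases along trajectories; and a system $\dot x = f(x,t)$ is contracting with rate $\lambda > 0$ in a metric $M(x,t) \succ 0$ if

$$\left(\frac{\partial f}{\partial x}\right)^{\!\top} M + M \,\frac{\partial f}{\partial x} + \dot M \;\preceq\; -2\lambda M,$$

which guarantees $\lVert \delta x(t) \rVert_M \le \lVert \delta x(0) \rVert_M \, e^{-\lambda t}$; with $\lambda = 0$, distances can at worst stay constant (semi-contraction).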

Some Takeaways

  • Astrocytes could provide a biological analog for mechanisms like attention and context-sensitive learning.

  • Gradient descent in high dimensions works in part because of contractive geometry: smooth, connected solution paths enabled by overparameterization.

  • Incorporating ideas from dynamical systems and biology could help reveal why neural networks generalize and guide us toward more robust and efficient AI systems.

Jean-Jacques beautifully merged biological and mathematical insights with intuition. “It’s not just about having more data or bigger networks,” he concluded. “It’s about understanding the structure of the space you’re learning in, and that’s something biology knows quite well...”


    
                                         Proof that scientists can be great artists ;) 🖌 

 

       Notes and sketches by Barbara Webb during morning discussions (🤩 wow)

Notes and sketches by Stan Kerstjens during this morning discussion (so cool🧠)


Later that day, some of us went on the speedboat trip close to the Capocaccia cave that we had postponed on Sunday due to weather conditions (very windy). The wait was worth it: it was sunny, the sea was calm, the mood high, and the views spectacular. Huge thanks to Mihai for organizing this, to all the amazing boat captains (for keeping us safe haha), and to everyone who came along and made this experience so much fun!

Speedboats are coming!!!!! yayy:)

Let's jump on! 

    

That was Day 10, and one of my personal favorites:) 
Thanks for reading, see you in the next one!

    

Comments

  1. Two comments:
    (1) Francesca Santoro's ideas sound brilliant - a much better way of doing neuron/electronic bidirectional communication than large arrays of tiny electrodes.

    (2) On J-J Slotine's question, “Everyone says AI is inspired by neuroscience, but can you name one major discovery from neuroscience in the last 50 years that made it into AI?”: look into the details of neocortical pyramidal neurons, which have both basal and apical dendrites and two sites of integration, with the apical site modulating the basal one. See Adeel Ahsan, https://arxiv.org/abs/2505.06257 (now submitted to NeurIPS 2025), based on decades of work by Bill Phillips and neuroscientists at the HBP and elsewhere; see also Phillips, W. A. et al., “Cellular psychology: relating cognition to context-sensitive pyramidal cells,” Trends Cogn. Sci. (2024), doi:10.1016/j.tics.2024.09.002.


