Day 7: Future Neuromorphic Technologies with G. Indiveri, M. Payvand, R. Sepulchre and S.C. Liu

Author: Shreya Kshirasagar

Following an exhilarating weekend adventure in Alghero - packed with a late-night disco, hiking, climbing, windsurfing, and a variety of other thrilling sports - everyone has returned revitalized and brimming with enthusiasm for the second half of CapoCaccia’25. It's time to channel this energy and make the most of our upcoming experiences!


Giacomo announced the speakers for the day. A gentle reminder to all readers: please upload your content to the cloud server.

A thought-provoking discussion highlights the enduring relevance of John Platt’s 1964 paper on “strong inference”, a rigorous approach to experimental science (physics, biology, neuroscience, etc.). The core idea? Design experiments to maximize uncertainty, ensuring outcomes are equally likely to confirm or refute your hypothesis. This contrasts with common practices where experiments are biased toward confirming preconceived ideas, a pitfall in fields like biology and engineering.

Platt argues that science progresses fastest when researchers actively try to “kill their babies”—rigorously testing and attempting to disprove their own hypotheses. By framing experiments where negative results are as plausible as positive ones, scientists maximize learning and avoid wishful thinking.

The paper’s 1964 origins underscore its timelessness; it remains a vital critique of confirmation bias in research. Thanks to Florian for opening the floor with this reference - who wouldn’t agree he is an absolute genius at creating such flawless patterns? Shreya likes to summarize and quote Florian’s wit in a sci-fi context here: “A common mistake that people make when trying to design something completely foolproof is to underestimate the ingenuity of complete fools.” ~ Douglas Adams, So Long, and Thanks for All the Fish.

We’ve got some new arrivals; take a look:

Lisa Li – University of Michigan – Control Theorist/Computational Neuroscientist

Christian Mayr – TU Dresden – Chips and Algorithms – SpiNNcloud, take a look!

Dan Goodman – Imperial College London - Computational Neuroscientist – He immediately reminds me of SNUFA, and I am excited to learn about his work.

Guido de Croon – TU Delft – Robotics, Drones and Aerospace. I met him during breakfast, and also during the poker session on Sunday night. He took me down memory lane to my Delft days.

A thoughtful scientist, my dear uncle, recommended Feynman’s memoir to me, and it has certainly been a great source of inspiration to pursue electronics for a long time. I would like to quote him, as it fits our topic contextually: “Silicon sculptors seek Feynman’s spark: ‘What we cannot craft, we cannot clock’ - stitching sub-nanometer synapses with stubborn symmetry.” On the same note, today’s topic is ‘Future Neuromorphic Hardware’. Chiara Bartolozzi is moderating the session.

Giacomo Indiveri & Melika Payvand's talk on design choices for future neuromorphic hardware

Take-away questions at the beginning:

  • Why should one worry about doing analog?
  • Why continuous time, when most logic is synchronous?
  • Why spikes?

Giacomo introduces the tutorial by posing three central questions: Why consider analog elements in chip design instead of relying solely on digital synchronous logic? Why compute in continuous time? And why use spiking signals (neuromorphic approaches)? The context is basic research aimed at understanding biological neural computation and solving the problems that non-linguistic animals navigate, using principles from biological nervous systems.

Key Points:

  1. Digital vs. Analog Chips: While most chips (99.99%) use synchronous digital logic (discrete time), Giacomo argues for exploring analog/mixed-signal designs to emulate biological computation.
  2. Why Spikes? Spiking signals are central to neuromorphic engineering, inspired by how biological systems process information.
  3. Research Context: The goal is curiosity-driven, focusing on principles of neural computation in animals—not product development or incremental tech improvements. Giacomo politely dismisses machine learning concerns (e.g., credit assignment) as irrelevant to their focus.
  4. Philosophy: Prioritizes understanding how animals solve problems (e.g., navigation, sensory processing) using neural principles, rather than benchmarks or human-language-dependent tasks.
Giacomo goes on to draw interpretative, microscopic analogies to biology that motivate building sub-threshold circuits. He elaborates on different problems in analog computing, and on what we pay in terms of memory in an analog versus a digital circuit framework. Most of the framework for digital infrastructure is inspired by the early works of Alan Turing, John von Neumann, and McCulloch & Pitts.

In an artistic way, Giacomo conveys just how parallel the brain is. Consider his example: a clock means time-multiplexing. The time step Δt can be controlled; if you have 1000 neurons, you run the simulation for one neuron, update it, store the state, and move on (put it back for neuron 3, and so on). You quickly end up in the gigahertz regime, and if you want to simulate neurons in real time you have to be faster still. Even a GPU is not as parallel as the brain, and the movement of data is the tricky, challenging part. The major distinguishing feature of analog systems, compared to digital ones, is that they use continuous-time signals.
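To make the time-multiplexing argument concrete, here is a minimal sketch (my own illustration, not code from the talk) of a clocked simulator that reuses one update unit for all neurons; the neuron model and numbers are placeholders:

```python
# Minimal sketch (my illustration, not code from the talk): a clocked,
# time-multiplexed LIF simulation. One physical update unit is reused for
# every neuron, which is why the clock must run much faster than the neurons.
import numpy as np

N = 1000            # neurons sharing a single update unit
dt = 1e-4           # simulation time step [s]
tau, v_th = 20e-3, 1.0

v = np.zeros(N)     # membrane states, fetched and written back each step
rng = np.random.default_rng(0)

for step in range(100):                  # simulated time: 100 * dt
    i_in = rng.uniform(0.0, 2.0, N)      # placeholder input currents
    for n in range(N):                   # serial update: neuron 0, 1, 2, ...
        v[n] += dt / tau * (i_in[n] - v[n])
        if v[n] > v_th:
            v[n] = 0.0                   # spike and reset

# Real-time emulation requires N updates within every dt:
print(f"required update rate: {N / dt:.0e} updates/s")   # already 1e7 here
```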

Giacomo quotes McCulloch & Pitts and Mead, which raised some controversial questions, especially from Guido de Croon. Let's look at them.

“Time is unity.” ~ McCulloch & Pitts. “Let time represent itself.” ~ C. Mead.

Some quick, rough impressions of how the arguments evolved:

  • M. Hopkins: what do you think of digital synchronous logic?
  • C. Frenkel: there are both top-down and bottom-up approaches.
  • S. Krestjens: how does time "represent itself", and could this be extrapolated to future technologies?
  • Florian: if you do integration, you need time.
  • C. Mayr: it helps for asynchronous, parallel computing.
  • D. Goodman: computing with time in analog could be difficult.
  • Landauer's limit applies to digital circuits: in digital, if you go fast, you pay; in analog it's different.
  • M. Payvand: processing is done differently in different parts/cores of the chip; it is competitive to get precise performance.
  • C. Mayr: a comment related to clock trees.

Asynchronous clock trees could lead to deadlock. Kwabena Boahen mentions that the silicon retina behaves differently for static scenes versus movement: it is noisy when there is movement. How do we circumvent these challenges?

Why spikes? If you have to be optimal, spikes are crucial, and as an engineer it is natural to think in spikes: DVS cameras and bio-sensors already encode their signals as spikes.

If you have spikes, apply a low-pass filter and you recover the underlying rate for rate-coded signals.
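A minimal sketch of this idea (my own, with an assumed Poisson-like input and an arbitrary filter time constant):

```python
# Minimal sketch: recovering a rate-coded signal from a spike train with a
# first-order (exponential) low-pass filter, as mentioned above.
import numpy as np

dt, tau = 1e-3, 50e-3                          # time step, filter constant [s]
t = np.arange(0.0, 2.0, dt)
rate = 20 + 15 * np.sin(2 * np.pi * t)         # underlying rate [Hz]

rng = np.random.default_rng(1)
spikes = rng.random(t.size) < rate * dt        # Poisson-like spike train

r_hat = np.zeros(t.size)                       # filtered rate estimate
for k in range(1, t.size):
    r_hat[k] = r_hat[k - 1] + dt / tau * (spikes[k] / dt - r_hat[k - 1])

print(f"mean true rate {rate.mean():.1f} Hz, recovered {r_hat.mean():.1f} Hz")
```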

How do we do computation? In analog, in continuous time, using spikes. Giacomo concludes his impressive talk with these seven ways to compute.

7 ways to compute:

  1. Integrate: integration has a limited dynamic range, and there will always be saturation effects, but the non-linearity is useful.
  2. Averaging: population averaging is a good trick. It buys precise computation on a noisy substrate, with the price paid in area (see the sketch after this list).
  3. Representation: this never/hardly gets discussed in AI. Population coding is robust; auditory representations in cortical microcircuits show that one can represent words, phonemes, etc. using the same networks.
  4. Computational primitives: AND gates, logic circuits, … compiler; biology has the same. Meta-plasticity, learning.
  5. Exploit temporal dynamics.
  6. Use attractor dynamics.
  7. Combine weights and symbols.
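A minimal sketch of the population-averaging point (2) above, with illustrative noise numbers: averaging N noisy analog readings shrinks the error roughly as 1/sqrt(N), and the precision is paid for in silicon area.

```python
# Minimal sketch of population averaging: many noisy analog "neurons"
# encode the same quantity; averaging trades area (more units) for precision.
import numpy as np

rng = np.random.default_rng(2)
x_true = 0.7                       # quantity to represent
sigma = 0.2                        # per-neuron noise (device mismatch, etc.)

for n_neurons in (1, 10, 100, 1000):
    readings = x_true + sigma * rng.standard_normal(n_neurons)
    err = abs(readings.mean() - x_true)
    print(f"N={n_neurons:5d}  estimate={readings.mean():.3f}  |error|={err:.4f}")
# Error shrinks roughly as 1/sqrt(N): precision paid for in area.
```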




Shih-Chii Liu & Rodolphe Sepulchre – Approaches for the next-gen neuromorphic chip

Rodolphe talks about circuit design, in the sense of the brain's circuits; he notes that he has never designed a circuit or chip himself.

Elements “restrict” computation, for the better: safety, adaptivity, evolvability. He builds the argument in three steps:


  1. Filter
  2. Associative memory
  3. Brain

Filter

In each category, we need new elements. Circuit theory started with the design of filters: R, C, and L, composed in parallel and series, give linear circuits. (Not quite: you cannot realize every LTI system with just these three elements; the filters you can design this way are passive LTI systems.) Computation means energy, so the battery is the fourth element.
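As a toy illustration of what those passive elements buy you (my own sketch, not from the talk): the simplest filter built from them, a series-R / shunt-C low-pass, integrated with forward Euler.

```python
# Minimal sketch: a first-order RC low-pass (series R, shunt C) driven by
# a unit step, integrated with forward Euler.
import numpy as np

R, C = 10e3, 1e-6          # 10 kOhm, 1 uF -> tau = RC = 10 ms
dt = 1e-5
t = np.arange(0.0, 0.05, dt)
v_in = np.ones(t.size)     # unit step at t = 0

v_c = np.zeros(t.size)     # capacitor (output) voltage
for k in range(1, t.size):
    v_c[k] = v_c[k - 1] + dt / (R * C) * (v_in[k - 1] - v_c[k - 1])

print(f"v_c at t = tau: {v_c[int(R * C / dt)]:.3f}  (expected ~0.632)")
```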

Associative memory

If you remove L, it is neuromorphic, with no dissipation: removing the inductor removes resonance, leaving a filter with the relaxation property, determined by its time constants. Against clocked time and discrete time, Rodolphe puts what he calls perceived time; it is a matter of time constants. It is not easy to learn patterns with LTI systems, which is where the non-linear resistor comes in: with resistors, batteries, and non-linear resistors one can build a Hopfield network.
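A minimal sketch of the Hopfield idea, assuming the standard binary Hopfield dynamics rather than the specific circuit Rodolphe had in mind; the weights play the role of conductances and the sign non-linearity plays the non-linear resistor:

```python
# Minimal sketch (assumption: standard binary Hopfield dynamics): resistive
# couplings (weights) plus a non-linear element (sign) store and recall a pattern.
import numpy as np

rng = np.random.default_rng(3)
pattern = rng.choice([-1, 1], size=32)
W = np.outer(pattern, pattern).astype(float)   # Hebbian weights ~ conductances
np.fill_diagonal(W, 0.0)

state = pattern.copy()
state[:8] *= -1                                 # corrupt 8 of 32 bits
for _ in range(5):                              # synchronous updates; one stored
    state = np.sign(W @ state)                  # pattern, so this converges fast
    state[state == 0] = 1                       # non-linearity = "non-linear resistor"

print("recalled correctly:", bool((state == pattern).all()))
```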

Memristive elements (Leon Chua, 1976): adding relaxation via memristive elements. One needs a pure integrator. Conductance-based models are attractive: each timescale gets its own weight matrix, and the time constants are restricted. Contraction is all you need.
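For concreteness, a toy memristive one-port in the Chua & Kang (1976) sense, i.e. a state equation plus a state-dependent conductance; the parameters here are illustrative, not from the talk:

```python
# Minimal sketch of a memristive one-port: dx/dt = f(x, v) and i = g(x) * v.
import numpy as np

dt = 1e-4
t = np.arange(0.0, 0.1, dt)
v = np.sin(2 * np.pi * 20 * t)        # sinusoidal drive

x = 0.5                               # internal state in [0, 1]
i = np.zeros(t.size)
for k in range(t.size):
    g = 1e-3 + 9e-3 * x               # conductance varies with state [S]
    i[k] = g * v[k]
    x = np.clip(x + dt * 50.0 * v[k], 0.0, 1.0)   # relaxation/drift of the state

# Plotting i against v would show the pinched hysteresis loop that
# characterizes memristive elements (i = 0 whenever v = 0).
print(f"conductance range: {1e-3:.1e} .. {1e-2:.1e} S")
```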


Melika Payvand gives a talk on the 'right way' of building large-scale systems.


Piotr Dudek provided a nice, figurative overview of the von Neumann bottleneck.


Memory hierarchy and the von Neumann bottleneck: computing in digital is largely just moving bits around. An adder needs a few thousand transistors, and for 64 bits you need 64 such blocks; that is how much space it costs. Scaled up to human proportions, you would need this entire hotel just to compute arithmetic, with the L2 cache somewhere in Alghero. Physical distance correlates with the memory hierarchy, and cloud computing corresponds to sending data all the way to Jupiter. Not all roads lead to Rome.

Christian Mayr joins the floor to talk about memory access and communication systems: shuffling bits around. Everyone assumes one big, ideal von Neumann system, but look at distributed systems instead; SpiNNaker is one of them. There is no unlimited physical bandwidth, therefore compute must be distributed.

Asynchronicity: locally coupled memory. Even with optics, you can't just shuffle the bits around. Period. It's super hard.

Dynamic sparsity: we are going for edge systems, and large-scale systems can learn from the general principles of the brain and biological systems. SpiNNaker is near-memory computing; was that an engineering constraint or biology-inspired? Dynamic computation and recurrent networks are exciting, per Christian Mayr.

Mixture-of-experts models compute certain aspects locally; something to read more about.



Melika further stresses quantizing memory: instead of floating point or lots of precision, go to lower precision and reduce the precision of the computation. She points to a nice paper that positions all the AI hardware trends (the link was unfortunately broken).
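A minimal sketch of the precision-reduction idea, assuming plain symmetric int8 uniform quantization rather than any specific scheme from the talk:

```python
# Minimal sketch of uniform weight quantization: float32 weights mapped
# to int8 and back, as in the lower-precision compute Melika advocates.
import numpy as np

rng = np.random.default_rng(4)
w = rng.standard_normal(1000).astype(np.float32)      # full-precision weights

scale = np.abs(w).max() / 127.0                        # symmetric int8 range
w_q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
w_deq = w_q.astype(np.float32) * scale                 # what the hardware "sees"

err = np.abs(w - w_deq).max()
print(f"scale={scale:.4f}, max abs quantization error={err:.4f}")
# Storage drops 4x (32 -> 8 bits); MACs become cheap integer operations.
```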

Summary - Memory and Power Efficiency:

  • Memory hierarchy (SRAM, DRAM) and 3D integration are discussed to address the von Neumann bottleneck. In-memory computing (e.g., resistive crossbars) is proposed for energy-efficient multiply-accumulate operations (a minimal sketch follows after this list).
  • Power-performance-area (PPA) trade-offs are emphasized, with analog systems and sparse activation achieving microwatt-level consumption for edge applications like keyword spotting.
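The crossbar point can be made concrete with a toy model (my own sketch): conductances encode weights, voltages encode inputs, and Kirchhoff's current law performs the multiply-accumulate "in memory".

```python
# Minimal sketch of an in-memory MAC on a resistive crossbar: each column
# current is a dot product of row voltages and column conductances.
import numpy as np

rng = np.random.default_rng(5)
G = rng.uniform(1e-6, 1e-4, size=(8, 4))   # conductances [S], 8 rows x 4 columns
v = rng.uniform(0.0, 0.5, size=8)          # row (input) voltages [V]

i_col = v @ G                               # column currents [A]: the MAC itself
print("column currents (uA):", np.round(i_col * 1e6, 2))
```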

Shih-Chii provided a deep dive into acoustic applications.

Brain-inspired principles can be put into digital or analog systems. Think of four quadrants spanned by continuous vs. discrete time and continuous vs. discrete signals:

  • Continuous time, continuous signals: analog.
  • Discrete time, discrete signals: synchronous digital.
  • Continuous time, discrete signals: asynchronous digital.
  • Discrete time, continuous signals: switched capacitors and CCDs (charge-coupled devices).

From the Carver Mead days come two beautiful curves: power versus SNR. For analog circuits, high SNR is expensive; digital is cheaper there. [Image to be added here.] Binary or ternary networks are, in that sense, sort of spiking.
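As a back-of-the-envelope illustration of those curves (assuming, as in the classic analog-versus-digital analyses, that thermal-noise-limited analog power grows roughly with SNR squared while digital power grows with the number of bits, i.e. logarithmically in SNR; the constants are arbitrary):

```python
# Toy illustration (assumption: analog power ~ SNR^2, digital power ~ bits
# ~ log2(SNR); units and constants are arbitrary, only the scaling matters).
import numpy as np

for snr_db in (20, 40, 60, 80):
    snr = 10 ** (snr_db / 10)               # power SNR
    p_analog = snr ** 2                      # relative units
    p_digital = np.log2(np.sqrt(snr))        # ~ equivalent bits
    print(f"SNR {snr_db} dB: analog ~ {p_analog:.1e}, digital ~ {p_digital:.1f} bits-worth")
# Analog cost explodes as SNR grows; digital grows only logarithmically,
# which is why digital wins at high SNR and analog at low SNR.
```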

Siri-like functionality - voice activity detection, speaker verification, speech enhancement, and speech recognition - covers some of the applications she is looking into.

Shih-Chii explains her audio chip, which consists of a microphone, a feature extractor, and a neural network running behind it. The approach replaces the FFT with simplified models of the cochlea and throws away everything that is not necessary: low-noise amplifiers, rectifiers modelling the cochlear hair cells, and pulses or spikes coming out of the feature extractor that feed into the network.
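A minimal sketch of such a cochlea-style front end (my own approximation, not Shih-Chii's chip): a band-pass filter bank, rectification, and smoothing yield per-channel envelope features in place of an FFT.

```python
# Minimal sketch of a cochlea-style feature extractor: band-pass channels,
# rectification (hair-cell-like), and smoothing to per-channel envelopes.
import numpy as np
from scipy.signal import butter, lfilter

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
audio = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.default_rng(6).standard_normal(t.size)

centers = [250, 500, 1000, 2000, 4000]          # channel center frequencies [Hz]
features = []
for fc in centers:
    b, a = butter(2, [fc * 0.8 / (fs / 2), fc * 1.25 / (fs / 2)], btype="band")
    band = lfilter(b, a, audio)                 # "basilar membrane" channel
    env = np.abs(band)                          # rectifier
    b2, a2 = butter(1, 50 / (fs / 2))           # smooth into an envelope
    features.append(lfilter(b2, a2, env).mean())

print("per-channel features:", np.round(features, 4))
```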

She introduced her work on 'time-step networks - delta-time networks': an update is triggered when the output of the feature extractor changes enough, with variation across layers in the time differences. Keyword spotting (KWS) is evaluated on the Google Speech Commands dataset. Spoken language understanding is the next direction people in ICASSP circles are heading into.

Temporally sparse NNs: 8 mW for KWS, with different thresholds across the layers. The system uses a single-ended microphone; the 8 mW excludes the power of the (analog, low-power) microphone. Board-level power is 500 mW, but a display is involved, and the whole thing runs on a coin battery for 50 days. Shih-Chii demonstrated the chip and board in real time. Running 128 GRUs on the same problem would cost x orders of magnitude more; C. Mayr's group reports 60 µW, while a GRU implementation sits at 4 mW.
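A minimal sketch of the delta/temporal-sparsity idea behind these numbers (my own illustration; the threshold value is arbitrary): a unit transmits only when its activation has changed by more than a threshold, so slowly varying audio features trigger few updates.

```python
# Minimal sketch of a delta-network update rule: send a value only when it
# has changed by more than a threshold since the last transmission.
import numpy as np

rng = np.random.default_rng(7)
x = np.cumsum(0.02 * rng.standard_normal(500))   # slowly varying feature trace

theta = 0.1                 # threshold (per layer on the real chip)
last_sent = 0.0
updates = 0
for val in x:
    if abs(val - last_sent) > theta:             # big-enough change -> update
        last_sent = val
        updates += 1

print(f"updates sent: {updates}/{x.size} time steps "
      f"({100 * updates / x.size:.0f}% -> temporal sparsity)")
```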

General wrap-up: hybrid solutions with system-level optimization.

Summary - System Level Design:

  • Hybrid analog-digital systems and biologically inspired architectures (e.g., cochlea models for audio processing) are showcased. A coin-cell-powered microcontroller demonstrates low-power speech recognition using spiking networks and temporal sparsity.
  • Challenges include scalability, parameter variability, and balancing biological fidelity with engineering constraints.

Don’t forget to try the system in real time while she is here.

There were a couple of discussion groups planned throughout the day. Take a glimpse. 

DG: Energy efficiency and benchmarking, led by Emre Neftci.




DG: Mixed-signal design for structural plasticity, on a chip developed by Giacomo's group at INI.



Summary of the session: 
  • Interactive discussions on neuromorphic principles, memory hierarchy, and hardware-software co-design. Practical demonstrations (e.g., real-time speech intent detection) and calls for collaborative exploration of inductive biases (e.g., local connectivity, multi-scale dynamics).
  • In conclusion, the session bridges biology and engineering, advocating for neuromorphic solutions that prioritize energy efficiency, adaptability, and task-specific optimization while acknowledging the need for interdisciplinary collaboration to address scalability and real-world application challenges.

Some pictures from our hike trip on Sunday.



Tobi saving his boat at 07:00.


Late night working groups call for late night music jamming sessions. We have some talented folks around. 

It's been a bit rough with the waves the last two days, so a general observation was that the neuromorphs skipped their nice swim. Let's hope the weather gets better. See you all tomorrow. :)










