
Think Locally, Act Intelligently: Dr. Sayani Majumdar and the Hardware Revolution Behind Sustainable AI

Published on 24.2.2026
Tampere University
Photo: Harri Hinkka
Artificial intelligence is everywhere – but the energy required to power it is becoming unsustainable. At Tampere University, Sayani Majumdar is rethinking computing from the ground up, drawing inspiration from the human brain to build ultra-efficient hardware that could make truly local, low-power AI a reality.

Artificial intelligence is now embedded in everyday life. Yet behind every query to a large language model, every generated image and every automated recommendation lies a data centre consuming electricity at a staggering scale. At the same time, the human brain runs on roughly the power of a dim light bulb – about 20 watts – while outperforming today’s most sophisticated AI systems in adaptability and complex reasoning. 

The power problem no one is talking about 

That gap between artificial design and nature’s solution sits at the heart of neuromorphic computing: chips designed not by imitating conventional computers, but by learning from the architecture of biological brains. 

“Any large language model running on today’s computing hardware takes megawatts of power, whereas a human brain runs on only 20 watts,” says Dr. Sayani Majumdar, Associate Professor of Electrical Engineering at Tampere University. “How we can learn from nature and implement that in computing hardware – that is mainly our target.” 

The central question, she argues, is whether AI can ever become truly ubiquitous if it remains tethered to centralised computing infrastructure with enormous power requirements. 

The case for local intelligence 

The vision underpinning her work is what she calls “intelligence everywhere”: a future in which everyday devices can learn and make decisions independently, without constantly deferring to remote servers. 

The stakes are clearest in critical situations. If a health monitor detects an irregular heartbeat, or an autonomous vehicle must avoid a collision, the delay caused by sending data to a remote server and waiting for a response is not merely inefficient – it can be life-threatening. 

The case becomes even more stark in space exploration. When Mars is at its furthest point from Earth, a round-trip communication can take close to 45 minutes. 

“Local decision-making, that’s the main key message,” she says. “With low power – whatever sunlight a device gets in space – it has to run the intelligence with that amount of power.” 

Closer to home, the argument is less dramatic but equally urgent. Every time data leaves a device, it becomes vulnerable to attack. For health records and personal data, that risk only grows as systems scale. 

How the brain already solved this 

The brain’s efficiency is not accidental. It is the result of millions of years of evolution optimised to do more with less. 

In conventional chips, operations run at a fixed rhythm regardless of whether anything meaningful is happening. In the brain, neurons fire only when something changes; otherwise, they remain silent. 

“Only when something changes does data get communicated. Otherwise, nothing should be sent.” 

A surveillance camera that transmits footage only when it detects movement, instead of streaming continuously, illustrates this principle in simple form. The brain applies it constantly. The eye alone compresses visual information more than a hundredfold before it reaches the brain. 

Chips built on this event-driven model remain in a near-idle state and activate only when required. The result is a fundamentally different relationship between computation and energy – one where doing nothing costs almost nothing. 
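The event-driven principle can be sketched in a few lines of Python. This is an illustration of the general idea, not the group’s actual hardware: a sensor loop that transmits a reading only when it differs meaningfully from the last one sent. The threshold and the sample values are made up for demonstration.

```python
def event_driven_filter(readings, threshold=1.0):
    """Yield only readings that differ from the last transmitted value.

    Mimics an event-driven ("neuromorphic") sensor: stay silent while
    nothing changes, and communicate only when something does.
    """
    last_sent = None
    events = []
    for value in readings:
        if last_sent is None or abs(value - last_sent) >= threshold:
            events.append(value)   # a "spike": worth communicating
            last_sent = value
        # otherwise: stay silent, which costs almost nothing
    return events

# A mostly static signal with one real change: only 2 of 8 samples are sent.
samples = [20.0, 20.1, 20.05, 20.0, 23.5, 23.4, 23.5, 23.45]
print(event_driven_filter(samples))  # [20.0, 23.5]
```

A continuously streaming system would transmit all eight samples; the event-driven version sends two, which is exactly the surveillance-camera analogy above in code form.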

Rethinking memory itself 

Achieving this in hardware requires confronting a structural limitation in conventional computing: memory and processing are physically separate. Every operation requires shuttling data back and forth between them, consuming energy and introducing delay – a cost that grows as AI workloads expand. 

Dr. Majumdar’s team focuses on ferroelectric materials, used to build memory that retains information even when power is switched off, operates at very low current and integrates with existing chip manufacturing processes. 

Crucially, these materials can store more than binary ones and zeros. They support multiple intermediate states, functioning more like the graded signals of biological synapses than the rigid on/off logic of traditional switches. 

“We showed in a ferroelectric device we could write 32 different states in one single cell, compared to standard binary ‘ones’ and ‘zeroes’. That not only saves costly area on your chip, but also causes less power loss and heating.” 
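The chip-area saving follows from simple arithmetic: a cell with 32 distinguishable states holds log₂(32) = 5 bits, so far fewer cells are needed per stored weight than with binary cells. The back-of-the-envelope sketch below is illustrative only; the weight count and bit width are arbitrary choices, not figures from the team’s devices.

```python
import math

def cells_needed(total_bits, states_per_cell):
    """Number of memory cells required to store a value of `total_bits` bits."""
    bits_per_cell = math.log2(states_per_cell)
    return math.ceil(total_bits / bits_per_cell)

# Storing one million 8-bit synaptic weights:
weights, bits = 1_000_000, 8
binary_cells = cells_needed(bits, 2) * weights       # 8 cells per weight
multilevel_cells = cells_needed(bits, 32) * weights  # 2 cells per weight
print(binary_cells, multilevel_cells)  # 8000000 2000000
```

With 32-state cells, the same weights fit in a quarter of the cells, and fewer cells switching also means less power loss and heating, as the quote notes.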

In practice, her team is developing hardware-level implementations of associative learning for autonomous vehicles. A system trained on multiple sensor inputs can still make safe decisions even if one input is missing – similar to Pavlov’s dog learning to expect food at the sound of a bell, even when no food is present. 
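The Pavlovian association can be illustrated with a toy Hebbian learning rule: a connection weight grows whenever two signals occur together, until one signal alone is enough to trigger the response. This is a deliberately simplified software sketch; the learning rate and threshold are arbitrary, and it does not represent the team’s actual hardware implementation.

```python
def train(pairings, weight=0.0, rate=0.25):
    """Strengthen the bell->response weight each time bell and food co-occur."""
    for bell, food in pairings:
        weight += rate * bell * food   # Hebbian rule: fire together, wire together
    return weight

def responds(bell, weight, threshold=0.5):
    """After training, the bell alone can drive the response above threshold."""
    return bell * weight >= threshold

w = train([(1, 1)] * 4)   # four bell-plus-food pairings
print(responds(1, w))     # True: the bell alone now triggers the response
print(responds(1, 0.0))   # False: an untrained system still needs the food signal
```

The sensor-fusion analogue is the same pattern: a system trained on co-occurring inputs can still reach the right decision when one of them is missing.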

CMOS and Beyond: Devices and Systems Research Group. Photo: Harri Hinkka

A field built at the intersection of everything 

This research does not sit neatly within a single discipline. It draws on physics, materials science, electrical engineering, computer science and neuroscience. Progress in one area depends heavily on advances in the others. 

“In silos, it’s very hard to do anything here. We are trying to take understanding from neuroscience and put it into computer science and electrical engineering.” 

Tampere University, with its strong microelectronics tradition and close industry connections, provides fertile ground for this interdisciplinary work. Its network of nearby industrial partners enables ideas to move from theory to tested hardware faster than in many academic environments. 

For those entering the field, Majumdar’s advice echoes the philosophy behind her research: think broadly and collaborate widely. 

 

Think in a bigger way. Learn to collaborate and build on different ideas. No single device or system can be perfect, but when developed hand in hand, it can do things we don’t even expect it to do.

Sayani Majumdar

 

Nature spent millions of years developing a brain that thinks quickly, adapts continuously and operates on almost no energy. The challenge now is to understand how it achieved that – and to build computing systems that can do the same. 

 

 

Sayani Majumdar

Associate Professor, Thin Film Electronics

Research topics

  • Neuromorphic Computing and Adaptive Sensing for Extreme Edge devices
  • Low-thermal budget ferroelectric memories
  • Atomic Layer Deposited (ALD) thin film devices
     

Fields of expertise

  • Micro- and nanoscale solid-state electronic device design and fabrication; characterization; modelling; and neuromorphic computing.


CMOS and Beyond group 

 

Author: Sujatro Majumdar