Zero-Failure: Dr. Syed Mohsin Abbas and Engineering Near-Flawless Reliability for the 6G Era

Beyond Speed: Why Reliability Defines 6G
As wireless technology races from 5G to 6G, the world is promised faster downloads and seamless streaming. But beyond the flashy speeds lies a more urgent challenge: consistency. In applications where communication errors can have real-world consequences, like remote surgery or autonomous vehicles, the stakes are far higher than a stalled video. To tackle this, Dr. Syed Mohsin Abbas, assistant professor at Tampere University, designs systems that deliver data not just swiftly, but nearly flawlessly. He explains why reliability, not speed, is the defining hurdle of next-generation networks.
This challenge defines the domain of Ultra-Reliable Low-Latency Communication, or URLLC: a design philosophy that treats delay and failure not as inconveniences, but as unacceptable risks.
"From 5G onwards, this new communication scenario, URLLC, will always be there. It is designed for applications which have very stringent requirements for reliability and latency."
These requirements become stark when distance is no longer abstract. When communication controls physical action, even a momentary lapse can have irreversible consequences.
"Take remote surgery, where a robot is operating on a patient and the doctor is on the other side of the world. We cannot afford anything unreliable during transmission. The reliability has to be extreme, and the latency should be very, very low. There should be absolutely no delay."
A Universal Approach to "Noise"
Ensuring that kind of reliability begins deep inside the receiver, where signals arrive distorted by noise, interference, and imperfect channels. Traditionally, wireless systems protect data with specific 'channel coding' schemes, mathematical rules that help correct errors, and pair each scheme with its own dedicated decoder. The approach works, but it fragments both hardware and design effort.
"We have so many different channel coding techniques, and all of them have their own unique decoding methods. However, there is a different paradigm called Universal Decoding, which can be used to decode any kind of code."
Rather than trying to understand every possible encoding strategy, universal decoding takes a different view of the problem: it treats the noise itself as the puzzle, identifying the corrupting pattern directly and subtracting it to recover the transmitted data.
"One of those universal decoding techniques is called GRAND: Guessing Random Additive Noise Decoding. Since 2020, I have been focused on this from a hardware implementation perspective. I design architectures for how these variants of GRAND can be efficiently implemented on hardware."
This shift from decoding structure to decoding disturbance opens the door to flexible, one-size-fits-all receivers. The promise is immense, but in practice, these theoretical innovations must still function within the physical limits of modern electronics.
"The hardware I design has to have low latency, high reliability and resource utilization, and low power consumption. All embedded systems, like the phone we use, are constrained by the battery. We want to use chips that consume less power but deliver more work."
Where the Data Lives: Rethinking Computation Itself
Power consumption is not only about algorithms; it is also about where computation happens. Modern computing still bears the imprint of an old architectural compromise: memory and processing live apart, and energy is spent shuttling data between them.
"In traditional systems, we have computation separate and storage separate. Whenever we need computation, we extract the data from the memory, take it to the CPU, and after computation, store it back. This retrieval and storing results in huge latency and higher power consumption."
One can imagine moving data between memory and CPU as a worker driving an hour to an office just to type a single sentence, then driving an hour home again.
To solve this, Dr. Abbas explores In-Compute Memory, which embeds computational logic directly within the memory itself and eliminates this energy waste.
"In-Compute Memory is a paradigm where you combine the storage and computation at the same time, at the same place. Since you're not moving the data, it means you can save power, and since data is being processed where it is saved, the latency is also reduced."
This architectural shift becomes especially significant as artificial intelligence enters the communication stack itself.
"We are talking about integrating AI with communication. AI itself is very power hungry and requires low latency. One way to solve this problem is using these new storage architectures."
Photo: Antti Yrjönen
The Long Horizon of Research
Many of the technologies that define modern communication began as ideas that arrived too early, underscoring why pursuing theoretical solutions today matters, even if practical adoption takes decades. The history of low-density parity-check (LDPC) codes, integral to present-generation URLLC, offers a stark example of this timeline.
"When Professor Robert Gallager introduced LDPC codes in the 1960s, they were largely ignored at the time. Although their decoding method was theoretically efficient, the computing power required to run it was far beyond what the existing hardware could handle. As a result, the idea was largely set aside until the 1990s, when advances in computing made considering practical implementations possible. By 2017, LDPC became part of the 5G standard."
"That’s the beauty of research. Something discovered back in the 60s became part of commercial products after a long time. All we can do is propose solutions. Who knows, maybe after a few years they can be adopted, or maybe after decades they will be rediscovered."
Tampere's Collaborative Ecosystem
The research environment itself plays a crucial role in fostering these innovations. Contrasting with the more specialized research environments of traditional academia, Tampere University offers a distinctively integrated model.
"In Asia and Canada, I was focused on just one topic. I only had one thing to do. In Finland, it's more of a collaborative kind of work; everybody is part of everything."
This philosophy is embodied in the System-on-Chip (SoC) Hub, where academic research and industrial practice are deliberately intertwined to create a functioning ecosystem of innovation.
"The SoC Hub is a consortium between professors, research teams, and local Finnish companies; Nokia is also part of it. They join forces, and then they create this system. If you want to grow here, the opportunities are limitless."
Author: Sujatro Majumdar
