To find out more about the podcast, go to the Audio Edition: How Can AI Researchers Save Energy? By Going Backward.
Below is a short summary and detailed review of this podcast, written by FutureFactual:
How reversible computing could save energy in AI
Reversible computing offers a way to cut energy use in AI by running calculations backward to avoid erasing information and producing heat. This episode from Quanta Magazine traces the idea from Landauer's principle through Charles Bennett's uncomputation to MIT prototypes that tested low-heat circuits. It explains the tradeoffs between speed, memory, and cooling, and how running more chips slowly in parallel might ease cooling demands in future AI systems. The program also surveys the people and labs pushing reversible hardware toward reality, including Hannah Earley at Cambridge and Vaire Computing, and highlights the open questions that must be answered before reversible computing can deliver practical energy savings for AI workloads.
Introduction: The energy challenge in AI and the reversible idea
As artificial intelligence models grow larger and run on fleets of processors, energy efficiency becomes a central bottleneck. Reversible computing offers a theoretical path to dramatically cut heat production by avoiding the erasure of information—the very act that dissipates energy as heat. The episode summarises a lineage that begins with Rolf Landauer, who in the 1960s showed that information handling has a thermodynamic cost, and continues with Charles Bennett’s 1973 uncomputation concept, which proposes running calculations forward, storing the essential result, and then running the process backward to erase unnecessary data without wasting energy. The discussion frames reversible computing as a response to the physical limits facing conventional chips, especially as transistors shrink and heat becomes harder to manage.
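For a sense of scale, Landauer's bound puts the minimum energy cost of erasing a single bit at k_B·T·ln 2. The short Python sketch below is an illustration added to this summary, not material from the episode; it evaluates that floor at room temperature to show how tiny it is per bit and how it still accumulates over the enormous number of operations an AI data center performs.

```python
# Illustrative only (not from the episode): evaluate the Landauer limit,
# E_min = k_B * T * ln(2), the minimum heat released per erased bit.
import math

k_B = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                   # roughly room temperature, K

e_bit = k_B * T * math.log(2)
print(f"Landauer floor per erased bit: {e_bit:.3e} J")   # ~2.87e-21 J

# The floor is minuscule per bit but never zero, and it accumulates:
print(f"Floor for 1e21 erasures: {e_bit * 1e21:.2f} J")  # a few joules, in principle
```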
"A reversible computer emits much less heat than a conventional one, but she found it must still emit some heat." - Hannah Early
Foundations: From thermodynamics to uncomputation
The narrative moves through Landauer’s principle, which connects information deletion to unavoidable energy loss, and Bennett’s pioneering idea of uncomputation, which leaves only the desired data behind while reversing the rest of the computation. The bread-crumb metaphor—Hansel and Gretel picking up crumbs on the way back home—serves to illustrate how uncomputation keeps the trail of information intact without discarding the initial data. The piece explains that while uncomputation can, in theory, reduce energy use, it often doubles the time required for a calculation, making it impractical without further innovations in memory management and circuit design.
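To make the breadcrumb idea concrete, here is a minimal Python sketch of Bennett-style uncomputation; the code and names are illustrative, assumed for this summary rather than taken from the episode. A toy calculation is built from invertible in-place steps, the answer is copied out reversibly, and the steps are then replayed backward so the scratch data returns to zero instead of being erased, at the cost of roughly doubling the number of steps.

```python
# Toy model of uncomputation (illustrative, not code from the episode).
# Every step is an invertible in-place update, so it can be exactly undone.

def forward(x, trace):
    """Compute f(x) = 3*x + 1 from reversible '+=' steps, recording each one."""
    acc = 0
    for step in (x, 2 * x, 1):
        acc += step            # invertible: can always be undone by '-='
        trace.append(step)     # the 'breadcrumb' trail
    return acc

def backward(acc, trace):
    """Replay the recorded steps in reverse, returning scratch space to zero
    without ever discarding information."""
    for step in reversed(trace):
        acc -= step
    return acc

x = 7
trace = []
scratch = forward(x, trace)          # forward pass: answer plus breadcrumb trail
result = 0 ^ scratch                 # XOR into a zeroed register is a reversible copy
scratch = backward(scratch, trace)   # uncompute: the trail is retraced, not erased
print(x, result, scratch)            # 7 22 0 -- only the input and the answer remain
```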
"Hansel and Gretel picked up their trail of bread crumbs on the way back home." - Charles Bennett, IBM
From theory toward practice: The MIT prototypes and industry doubts
The podcast traces how the idea matured in the 1990s, with MIT engineers building prototype low-heat circuits and researchers like Michael Frank promoting reversible computing as a plausible path to sustained computational progress. Yet the field faced sharp skepticism as conventional chips continued to improve and investors questioned the relevance of a theory that seemed distant from industry needs. Frank recalls that reviewers would say reversible computing sounded useful in theory, but industry didn't know what to do with it, creating a rift between academic innovation and commercial adoption.
"It sounded crazy to industry because it dealt with a problem that seemed so remote." - Michael Frank
Parallel paths and practical considerations: The modern case for reversibility
In the 2000s, efforts shifted toward designing circuits with low heat loss from the ground up, rather than retrofitting existing architectures. The narrative highlights Hannah Earley's rigorous 2022 analysis, which shows that even optimized reversible systems still emit some heat, but that the heat scales with how quickly the system runs. The idea is to run many chips more slowly in parallel, preserving performance while reducing cooling demands. This concept reframes energy efficiency as a function of architecture and timing, rather than a simple attribute of a single device. The discussion also notes investor interest and the emergence of startup and academic collaborations aimed at bringing reversible processors closer to commercialization.
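As a rough, back-of-the-envelope illustration of that timing argument (the linear heat-versus-speed scaling is a standard assumption for adiabatic-style circuits, not a figure quoted in the episode), the sketch below shows why spreading a fixed workload across N chips, each clocked N times slower, keeps throughput while cutting total heat roughly as 1/N.

```python
# Hedged sketch of the "more chips, run slower" argument.
# Assumption: in adiabatic/reversible circuits, energy lost per operation
# scales roughly linearly with switching speed (constants are illustrative).

def total_heat_rate(ops_per_second, num_chips, loss_at_unit_speed=1.0):
    """Heat per second for a fixed workload spread over num_chips chips."""
    speed_per_chip = ops_per_second / num_chips              # each chip runs slower
    energy_per_op = loss_at_unit_speed * speed_per_chip      # linear speed scaling
    return ops_per_second * energy_per_op                    # throughput is unchanged

workload = 1e9  # operations per second, arbitrary units
baseline = total_heat_rate(workload, 1)
for n in (1, 4, 16, 64):
    print(f"{n:3d} chips -> relative heat {total_heat_rate(workload, n) / baseline:.4f}")
# 1.0000, 0.2500, 0.0625, 0.0156: heat falls as 1/N at fixed throughput,
# which is the cooling win the episode describes.
```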
"The most exciting thing would be seeing reversible processors made in practice so we can actually use them." - Torben Aegidius Morgensen, University of Copenhagen
Implications for AI and the road ahead
The episode concludes by considering how reversible computing could fit into the AI landscape. If reversible chips can be built with low heat and appropriate memory management, the energy footprint of AI inference and training could drop substantially, enabling denser data centers and new hardware architectures that complement software advances. The program points to ongoing efforts at Vaire Computing, continued academic research, and industry interest as evidence that reversible computing is moving from a theoretical curiosity toward a potential component of future AI hardware.
"Uncomputation means you are left with only the data you want, and you never lose track of it because none of the initial information is deleted." - Charles Bennett, IBM