Toward Utility-Scale Quantum Computing: New Analysis Suggests a Shorter Path Forward

Industry April 16, 2026

April 14, 2026 -- For decades, the promise of quantum computing has been shaped by a central challenge: building a machine powerful enough to solve meaningful problems requires enormous scale, with millions, if not billions, of qubits working together with high precision. That assumption has shaped both the field's technical roadmap and its timeline, pushing truly useful quantum computers further into the future as research and engineering teams surmount one technical challenge after another in the effort to scale up.

A new study, led by researchers at the California Institute of Technology and the new startup Oratomic, Inc., in collaboration with partners at UC Berkeley, offers a striking shift in that outlook. The work shows that utility-scale, fault-tolerant quantum computers may be achievable with systems on the order of 10,000 qubits, a dramatic reduction that brings the prospect of useful quantum computation much closer to reality.

The work reflects the kind of progress enabled by center-scale efforts that link scientists pursuing different aspects of quantum computing across multiple institutions. In this case, many of the senior authors on the paper are members of the NSF-funded Challenge Institute for Quantum Computation (CIQC), which focuses on experimental and theoretical development of quantum computation. “These kinds of advances build on the excellence our community has developed in basic research,” said Claire Cramer, Executive Director of the CIQC and of Berkeley Quantum, “and it highlights how critically the advancement of scalable quantum technology depends on bringing theory, experiment, and application together.”

Rethinking the limits of quantum error correction

At the heart of the challenge is a deceptively simple problem: quantum systems are fragile.

Unlike classical bits, which can tolerate small errors without consequence, quantum bits (qubits) are exquisitely sensitive to noise. Even tiny disturbances can corrupt a calculation. Compounding the difficulty, the error-correction techniques developed for classical computers cannot simply be carried over, because quantum information cannot be copied and checked the way classical data can. To address this, physicists developed the concept of quantum error correction, a strategy that uses redundancy to protect information. But that solution came with a cost.

“To protect a single ‘logical’ qubit, which is the kind you actually want to compute with, you typically need hundreds or even thousands of physical qubits working together,” explained Harry Levine, one of the authors of the paper and Assistant Professor of Physics at UC Berkeley. “And if you need thousands of logical qubits to do something useful, suddenly you’re talking about millions of physical qubits.”

That scaling problem has loomed over the field for decades. Building a quantum computer large enough to be useful has seemed prohibitively far away.

The new publication revisits this core assumption. By analyzing emerging approaches to quantum error correction, particularly those enabled by neutral atom quantum computers, the team identified a dramatically more efficient path forward. “The key capability here is that instead of using qubits in a fixed pattern on a chip, our atoms can be reconfigured and moved around to directly connect arbitrary atoms together and entangle them,” said Levine. Instead of requiring hundreds or thousands of physical qubits per logical qubit, this capability of “non-local connectivity” powered the development of new error correction strategies that require as few as three or four physical qubits per logical qubit. That change cascades through the entire system.

“Once you reduce that overhead,” Levine said, “the whole picture changes. Instead of needing millions of qubits, it starts to look like you might be able to do meaningful computations with something like 10,000.”
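To make that shift concrete, the arithmetic can be sketched in a few lines of Python. The specific values below (2,500 logical qubits, an overhead of 1,000 physical qubits per logical qubit for conventional approaches, and 4 for the new low-overhead codes) are illustrative assumptions chosen to match the rough ranges quoted above; they are not figures reported in the study.

    # Back-of-envelope comparison of physical-qubit counts under the two
    # error-correction overhead regimes discussed above. The figures are
    # illustrative assumptions, not numbers taken from the study.

    def physical_qubits(logical_qubits: int, overhead_per_logical: int) -> int:
        """Physical qubits needed to encode the given number of logical qubits."""
        return logical_qubits * overhead_per_logical

    logical_needed = 2_500            # assume a few thousand logical qubits for a useful computation
    conventional_overhead = 1_000     # hundreds to thousands of physical qubits per logical qubit
    low_overhead = 4                  # as few as three or four with non-local connectivity

    print(f"Conventional codes: {physical_qubits(logical_needed, conventional_overhead):,} physical qubits")
    print(f"Low-overhead codes:  {physical_qubits(logical_needed, low_overhead):,} physical qubits")
    # Conventional codes: 2,500,000 physical qubits
    # Low-overhead codes:  10,000 physical qubits

Even with these rough numbers, the gap between the two regimes spans more than two orders of magnitude, which is what moves the target from millions of physical qubits down to roughly 10,000.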

Levine emphasized that this does not mean the problem is solved. While research groups have recently progressed to controlling arrays of hundreds to thousands of atoms, building and controlling a quantum system at the proposed scale remains an enormous technical challenge. But the nature of the challenge has shifted from one of sheer scale to one of engineering precision.

“It’s still very hard,” Levine noted. “But it now feels like something that could happen in years rather than decades.”

From theory to impact

The implications extend beyond abstract benchmarks. The study focuses in part on problems related to cryptography, an area where quantum computers could, in principle, outperform classical systems in ways that matter for security. The analysis suggests that cryptographically relevant problems could be tackled by systems not dramatically larger than those already under development today, assuming continued advances in control and coherence.

At the same time, the work points to a broader class of applications. If utility-scale quantum computers can be built at smaller sizes, the range of problems they can address expands, and the timeline for addressing them shortens accordingly.

Just as importantly, the study highlights a growing dynamic within the field: ideas developed in one area of quantum research are increasingly influencing others. Techniques for error correction, once tied to specific hardware platforms, are now being adapted and reimagined across multiple systems.

“There’s a lot of cross-pollination happening,” Levine said. “One platform comes up with a new idea, and others realize they can incorporate it into their own designs. That back-and-forth is driving progress in a really exciting way.”

A broader ecosystem of discovery

For Berkeley’s quantum community, the work underscores the importance of a deeply interconnected research ecosystem.

Theoretical advances, such as those pursued at the Simons Institute for the Theory of Computing and in multiple departments across campus, play a central role in identifying new computational possibilities and frameworks. Research into how to use quantum computers for applications of interest, such as calculating the properties of complex molecules, ensures efficient use of both existing and future computational resources. Experimental efforts across the Departments of Physics and Electrical Engineering and Computer Science translate those ideas into physical systems. And emerging industry partnerships, supported by initiatives like the Quantum Nexus, help bridge the gap between discovery and the practical realization of this new technology.

“This work highlights the role that the basic research conducted by centers like the CIQC plays in identifying new, scalable approaches to quantum computing,” said Claire Cramer, Executive Director of the CIQC and Executive Director of Berkeley Quantum. “It’s a powerful example of how foundational ideas can evolve into practical pathways for technology.”

That perspective is echoed by Umesh Vazirani, research director for quantum computing at the Simons Institute, whose work sits at the intersection of theory and application: “Advances like this show how rethinking the fundamentals can dramatically change what is possible. It’s a reminder that theoretical insights are not abstract. They are often the key to unlocking real technological progress.”

A decade-long arc of discovery

For Levine, the story is also a personal one. The platform at the center of this work, neutral atom arrays, is one he helped to develop during his Ph.D. in the lab of Mikhail Lukin at Harvard University, at a time when the work was considered purely exploratory.

“When I started, this was basic science, just devising creative new ways to control atoms with light and using them to learn about quantum physics,” he said. “This approach gained a lot of traction as a tool for scientific exploration, but for a while it wasn’t clear how the pieces were going to fit together.” Only in the past several years has it become clear how powerful that line of inquiry proved to be. “It’s actually shocking,” Levine reflected. “This non-local connectivity: it was just five years ago that we started playing with the idea of moving atoms around in this way. It seemed like a fun and exciting new ingredient, but we had no idea the implications this would have. And now this capability is at the heart of these major new advances in error correction and quantum computation.”

That trajectory, from curiosity-driven research to technological promise, is precisely what initiatives like CIQC are designed to support.

Across Berkeley and its partner institutions, researchers are pursuing a wide range of ideas in quantum science, many of which may not have obvious immediate applications. But as this work demonstrates, today’s exploratory research can quickly become tomorrow’s breakthrough. “It’s really exciting to imagine,” Levine said, “what ideas students are working on right now that will turn into entirely new technologies ten years from now.”

The path to utility-scale quantum computing remains challenging, requiring advances across hardware, control, and theory. But the direction is becoming clearer. By reducing one of the field’s most daunting barriers, this work reframes the central question from whether useful quantum computers are possible to how quickly they can be realized.

It also reinforces a central idea behind efforts like Quantum Nexus and its partners: that the ideas emerging from research labs today, shaped by students and early-career scientists, will define the next wave of quantum technologies.