Researchers Use Quantum-Centric Supercomputing to Simulate 12,635-Atom Protein Complex

Industry | May 6, 2026

May 04, 2026 -- A team from Cleveland Clinic, RIKEN, and IBM used two IBM quantum computers and two supercomputers to model biologically relevant molecules at scale.

The scale of chemistry simulations with quantum computing has increased dramatically in just the last few months. In the latest milestone for the field, researchers from Cleveland Clinic, RIKEN, and IBM used a quantum-centric supercomputing (QCSC) framework to calculate the electronic structure of a pair of large protein-ligand complexes, reaching a scale of 12,635 atoms in the largest simulation.

The molecules were T4-Lysozyme, a member of a family of enzymes that support the immune system by degrading the peptidoglycans in bacterial cell walls, and Trypsin, a digestive enzyme produced in the pancreas. The team simulated these proteins bound to molecules they interact with in nature and immersed in liquid water, at scales of 11,608 atoms and 12,635 atoms respectively. Bringing together an international team of researchers from across the United States and Japan made it possible to develop the algorithm and workflow enhancements needed to reach this milestone.

The researchers achieved this scale just four months after modeling the 303-atom miniprotein Trp-cage using quantum computing for the first time. Today’s new result not only demonstrates a 40-fold increase in system size compared to the Trp-cage result, but also a 210-fold improvement in accuracy over previous state-of-the-art QCSC approaches in a specific step of the workflow.

To reach these new heights of scale and accuracy, the researchers refined both classical and quantum methods used in the workflow. They performed quantum sampling on two 156-qubit IBM Quantum Heron r2 processors, and then processed the resulting data using the classical supercomputers Fugaku and Miyabi-G. High-performance computing experts from RIKEN joined the team and played a key role in the work.

While the method does not yet outperform the best classical approaches, it shows that quantum computing is a useful tool for scientific research today, and the trajectory points to still-better results ahead.

“[This result] is one of those things you dream about,” said Kenneth Merz, PhD, lead author on the paper and leader of the Merz lab at Cleveland Clinic.

Why researchers are pursuing quantum computing for chemistry

The universe runs on quantum mechanics. Chemistry in particular is governed directly by quantum mechanical processes. That means that a well-controlled, programmable quantum system will likely be the best tool for modeling it computationally.

Merz said he’s seen dramatic improvements in the ability of computers to model chemistry in his career. In the late 1980s, improvements in chip design drove hundredfold improvements in power and speed. Parallel processing and GPUs drove their own multi-order-of-magnitude improvements.

“But what we’re finding is, the pace of improvement in classical computing is really slowing down. If we want another order-of-magnitude-or-two bump, quantum computing is probably the way to go,” Merz said.

Accurate electronic structure calculations on classical computers become more challenging as system size increases. Classical methods alone can efficiently model certain aspects of protein behavior, but high-accuracy quantum-mechanical treatments of entire proteins remain impractical.

QCSC brings together those decades of progress in classical computing with the impressive capabilities of today’s quantum computers.

The initial Trp-cage result on which this new work builds relies on a wave function-based embedding technique (EWF), which fragments the calculation into computationally tractable pieces called “clusters.” Classical computers solve the simpler clusters. Then, a quantum computer uses a method called sample-based quantum diagonalization (SQD) to solve the more complex clusters—those involving more entanglement between atoms in the miniprotein. The classical computers then stitch the molecule back together.
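The division of labor described above can be pictured in a few lines. This is a toy sketch only: the `entanglement` score and the threshold are hypothetical placeholders, not the criterion the researchers actually use to classify clusters.

```python
def route_clusters(clusters, threshold=0.5):
    """Send weakly entangled clusters to a classical solver and strongly
    entangled ones to the quantum SQD solver (toy routing rule)."""
    classical = [c for c in clusters if c["entanglement"] < threshold]
    quantum = [c for c in clusters if c["entanglement"] >= threshold]
    return classical, quantum

clusters = [
    {"name": "cluster_a", "entanglement": 0.1},  # simple: classical solver
    {"name": "cluster_b", "entanglement": 0.9},  # complex: quantum SQD
]
classical, quantum = route_clusters(clusters)
print([c["name"] for c in classical], [c["name"] for c in quantum])
```

After both solvers finish, the per-cluster solutions are recombined classically, which is the "stitching" step the paragraph describes.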

This workflow performs at a level comparable with established classical methods. And because it breaks large molecules into bite-sized pieces, it offers enormous potential for scaling. The researchers expect that as quantum technology improves in the next few years, this workflow could soon lead to better results than any classical method.

This is exciting, Merz said, because a better method for computational chemistry could offer enormous benefits to society. If researchers had a reliable method for predicting the behavior of new molecules before synthesizing and testing them in laboratories, the pace of pharmaceutical development, new materials science, and general chemistry research could all significantly accelerate.

“Better lifesaving drugs, faster. Better materials for the technology in your home or for national infrastructure. What I’m saying is: better chemistry workflows really mean ways to help you and future generations lead better, healthier lives,” Merz said.

How the team passed 12,000 atoms

To model the proteins T4-Lysozyme and Trypsin, the team used up to 94 qubits across two quantum computers, ran 9,200 circuits for over 100 hours, and collected 1.3 billion measurement outcomes. That makes this work the most resource-intensive known QCSC execution for quantum chemistry to date.

Where the Trp-cage simulation modeled the miniprotein alone, the T4-Lysozyme and Trypsin simulations captured protein-ligand pairs in solution: each simulation included a binding molecule along with the surrounding water molecules that proteins work in, making the results more realistic models of protein behavior.

Reaching this scale from 303 atoms required more than just additional compute time or bigger supercomputers. The researchers needed improved algorithm design and thoughtful input from HPC experts at RIKEN to reach the scale of Trypsin.

Breaking Trp-cage into workable clusters was a computationally intensive process, but achievable using limited HPC resources. At that scale, EWF relies on a partial understanding of how individual electrons interact across the molecule to find good break points.

In conventional implementations of the EWF method, creating the fragment “bath”—that is, selecting the highly entangled orbitals in the environment around individual fragments—is prohibitively expensive for molecules the size of Trypsin, said Mario Motta, IBM researcher and co-author of this work. That’s because the creation of the fragment bath is performed with Møller–Plesset second‑order perturbation theory or “MP2” calculations, which are computationally demanding.

Due to how MP2 calculations scale, doubling the size of the molecule multiplies the classical computing resources the EWF method requires by 2⁵, or 32. So, a 606-atom molecule would require 32 times more computational effort to break into clusters than the 303-atom Trp-cage. That’s not practical.
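The arithmetic follows directly from MP2's conventional O(N⁵) cost scaling. This sketch models only the cost ratio, not the MP2 calculation itself:

```python
def mp2_cost_ratio(n_atoms_new: int, n_atoms_ref: int) -> float:
    """Relative classical cost of MP2 under the conventional
    O(N^5) scaling model, using atom count as a proxy for system size."""
    return (n_atoms_new / n_atoms_ref) ** 5

# Doubling from Trp-cage's 303 atoms to 606 multiplies the cost by 2**5 = 32.
print(mp2_cost_ratio(606, 303))
```

At Trypsin's scale of roughly 12,600 atoms, the same model gives a cost factor in the millions relative to Trp-cage, which is why a locality-based workaround was essential.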

The good news is that any given electron in a molecule like Trypsin is “localized”: it interacts meaningfully only with its immediate surroundings. Dr. Merz and his Cleveland Clinic colleagues reasoned that so-called linear-scaling methods could be leveraged to simplify the calculation.

“Information that comes from more than 7-10 angstroms away doesn’t really affect the cluster at a quantum mechanical level, in this molecule. Entanglement is already dead and gone at that distance. So, one can restrict their MP2 bath expansion to a sphere centered around each atom,” Motta said.

By refining EWF to only consider those most important, local interactions, it became feasible to implement at the scale of Trypsin, Motta said.
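A toy illustration of that locality cutoff, with hypothetical names and a made-up geometry; the actual restricted bath construction is more involved than a simple distance filter:

```python
import numpy as np

CUTOFF = 8.0  # angstroms; an assumed midpoint of the 7-10 Å window quoted above

def atoms_in_bath(center: np.ndarray, coords: np.ndarray,
                  cutoff: float = CUTOFF) -> np.ndarray:
    """Indices of atoms within `cutoff` of a fragment center. Only these
    would enter the restricted MP2 bath expansion in this toy model."""
    dists = np.linalg.norm(coords - center, axis=1)
    return np.where(dists <= cutoff)[0]

# Toy geometry: three atoms along the x-axis at 0, 5, and 20 angstroms.
coords = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0], [20.0, 0.0, 0.0]])
print(atoms_in_bath(coords[0], coords))  # the atom 20 Å away is excluded
```

Because each sphere contains a roughly constant number of atoms regardless of total molecule size, the cost of building all the baths grows linearly rather than as N⁵.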

While refining the classical methods, the team also implemented a novel approach to SQD that enabled them to scale beyond what was previously possible. They call this method TrimSQD.

SQD addresses one of the fundamental challenges of electronic structure calculations: the number of possible configurations of a molecule’s electrons grows combinatorially with the molecule’s size. The quantum computer samples this vast space, identifying key configurations for the classical computer to focus on. The classical computer uses the resulting information to find a solution. This is the innovation that has made a number of important quantum chemistry results possible in the last 18 months, including the integration of SQD with EWF which enabled the benchmark Trp-cage calculation.
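The combinatorial growth described above is easy to quantify. A minimal sketch, using the standard counting of electronic configurations (ways to place spin-up and spin-down electrons in a set of orbitals):

```python
from math import comb

def n_configurations(n_orbitals: int, n_alpha: int, n_beta: int) -> int:
    """Number of electronic configurations for n_alpha spin-up and
    n_beta spin-down electrons distributed over n_orbitals orbitals."""
    return comb(n_orbitals, n_alpha) * comb(n_orbitals, n_beta)

# The space explodes quickly: half-filled systems of growing size.
for n in (10, 20, 40):
    print(n, n_configurations(n, n // 2, n // 2))
```

Even 40 orbitals yield on the order of 10¹⁹ configurations, which is why the quantum computer is used to sample the important ones rather than enumerating the space.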

TrimSQD improves on the combined EWF and SQD workflow by better identifying the useful pieces for the quantum computer to focus on. It works by breaking the search space into subspaces that can each be searched individually.
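As a loose illustration only (the article does not describe TrimSQD's actual partitioning criterion), one can picture the idea as grouping sampled electron configurations into sectors, here by electron count as a stand-in rule, so each sector becomes its own, smaller search problem:

```python
from collections import defaultdict

def partition_samples(bitstrings):
    """Group sampled configurations (bitstrings, one bit per orbital)
    into subspaces keyed by electron count, so each subspace can be
    searched and diagonalized independently."""
    subspaces = defaultdict(set)
    for b in bitstrings:
        subspaces[b.count("1")].add(b)
    return dict(subspaces)

# Noisy sampling can mix in configurations with the wrong electron count;
# partitioning isolates them instead of letting them clutter one big search.
samples = ["0011", "0101", "0111", "0011", "1110"]
print(partition_samples(samples))
```

Searching several small piles is cheaper than one large one, and configurations that land in the wrong sector (the "deadwood") are easier to spot and discard.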

The ground state is a superposition of a very large number—combinatorially large, in fact—of electronic configurations. Some configurations contribute significantly and some do not. Theoretical chemists like Klaus Ruedenberg call the significant configurations “livewood” and the others “deadwood.” Searching for a significant configuration is “like trying to solve a very twisted puzzle,” Motta said.

Imagine trying to assemble Jacques-Louis David’s painting “The Coronation of Napoleon” out of many similar-looking pieces, Motta said. Those pieces are the livewood. But someone has mixed in pieces from Van Gogh’s “Starry Night” and Kahlo’s “The Two Fridas”: a non-optimal quantum circuit or noise on the quantum device may have introduced configurational deadwood. SQD would dig through one large pile of puzzle pieces to find the relevant ones. TrimSQD separates the problem into multiple smaller piles, where Napoleon and Josephine’s “livewood” faces stand out more clearly against the clutter.

The improved EWF workflow and TrimSQD fed into a paradigmatic example of quantum-centric supercomputing at scale. The team distributed the quantum sampling work across two Heron r2s—ibm_cleveland, located at Cleveland Clinic, and ibm_kobe at RIKEN. Then, they split the task of diagonalizing the subspaces that the quantum computers returned between the Fugaku supercomputer at RIKEN and Miyabi-G, a GPU-accelerated supercomputer operated by the University of Tokyo and the University of Tsukuba. QPUs, GPUs, and CPUs all contributed as part of a problem-solving compute architecture, offering a vision of the future of supercomputing.

Where next?

“I think this study may get people off the sidelines,” Merz said, adding that results like this are coming years sooner than he would have predicted as recently as 2024.

This work shows that quantum computing can be a useful tool for chemistry today, he said. And as technology improves, this workflow will only grow more powerful. The methods in this research port easily to future, fault-tolerant quantum computers like IBM Quantum Starling, expected in 2029.

Already, his team is working with collaborators on related implementations in materials science, and there are clear opportunities for quantum exploration across biology, chemistry, and drug discovery.

“It’s amazing. They’ve developed a computer with 156 qubits you can entangle,” Merz said. “Nothing like that exists in nature. And it’s only going to get more and more sophisticated.”

He said he hopes to see other researchers, particularly chemists, take this work in new directions.

This work shows that QCSC advances best when quantum and HPC researchers work together. It was made possible by access to HPC resources at Cleveland Clinic, RIKEN, Michigan State University, and the University of Tokyo.