Argonne’s Aurora Supercomputer Helps Power Breakthrough Simulations of Quantum Materials
November 14, 2025 -- Quantum materials have the potential to transform future computing, energy, and electronics technologies. But understanding and controlling their unusual electronic and magnetic properties requires some of the world’s most powerful computing hardware and software.
Using three U.S. Department of Energy (DOE) supercomputers, researchers from the University of Southern California (USC) and DOE’s Lawrence Berkeley National Laboratory developed new ways to model these complex systems with greater precision than ever before. The team worked with Aurora at Argonne National Laboratory, Frontier at Oak Ridge National Laboratory, and Perlmutter at Berkeley Lab. Together, they improved the open-source BerkeleyGW software to achieve a new level of accuracy in simulating the behavior of quantum materials.
To uncover what drives these properties, the researchers are using the code to track how electrons move and interact inside the materials. This task pushes the limits of even the most advanced computers.
“We’re performing simulations at a scale that is unprecedented,” said Mauro Del Ben, a computational scientist at Berkeley Lab. “This is not just one specific type of calculation. We’re pushing the level of the theory beyond the state of the art, moving from static pictures of how electrons behave to dynamic simulations that couple the motion of the electrons with the motion of the nuclei.”
“This is a very important step for understanding phenomena like superconductivity or the performance of transistors and optical devices,” he added.
Their work was named a finalist for the Association for Computing Machinery’s 2025 Gordon Bell Prize, which honors outstanding achievement in high-performance computing.
Aurora played a key role in the team’s groundbreaking research. Its large memory capacity and scalable architecture made it possible to carry out memory-intensive simulations of systems with tens of thousands of atoms. These runs allowed the researchers to capture quantum effects across larger and more complex systems.
“A lot of our applications are very memory hungry,” said Benran Zhang, a Ph.D. student at USC and first author of the study. “Aurora provided the capacity and the software stack to run some of the dynamic simulations that would not have been possible otherwise.”
Capturing quantum effects at new scales
The unusual properties of quantum materials come from how their electrons move and interact with each other and with atomic vibrations called phonons. These combined effects, known as many-body interactions, control how a material conducts electricity, absorbs light and stores energy. Simulating these effects accurately has long been a major challenge.
“We’re now in an era where quantum materials research is focusing on a wide range of emerging systems and many-body phenomena,” said Zhenglu Li, assistant professor of materials science at USC. “As materials get more complex, our simulations need to be more accurate to help us understand how these materials will behave in the real world.”
Density functional theory (DFT) is one of the most widely used methods for studying electron behavior in materials, but because it relies on approximations that simplify complex interactions, it can miss important details and features. To simulate electron behavior more precisely, the USC–Berkeley Lab team uses the GW approach with the BerkeleyGW code. The name “GW” comes from the two quantities it calculates: G, the Green’s function that describes how an electron moves through a material, and W, the screened Coulomb interaction that describes how electrons influence one another. Together, they provide a more realistic picture of how electrons interact, leading to more accurate predictions of a material’s properties.
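As a compact way to see where the name comes from, the GW approximation can be written in its standard textbook form (the equation is given here for context and does not appear in the article) as an electron self-energy built from the product of G and W:

    \Sigma(1,2) \approx i\, G(1,2)\, W(1^{+},2)

The self-energy \Sigma is what corrects the approximate DFT energy levels into quasiparticle energies that can be compared directly with experiment.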
“To give a sense of the difference, we often look at a material’s band gap, which is a fundamental property of semiconductors and insulators that determines how it absorbs light,” Li said. “For the band gap of silicon, standard DFT can be off by 50 percent or more. With the GW approach, the error is only a few percent compared to experiment. That level of accuracy matters because the band gap tells us which wavelengths of sunlight silicon can absorb, and silicon is one of the most important materials for solar energy.”
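To put that comparison in rough numbers (these are commonly cited literature values for silicon, not figures from the article): the measured band gap of silicon is about 1.1 eV, while standard DFT in common approximations predicts roughly 0.6 eV, a relative error of

    \frac{\left| 0.6\,\mathrm{eV} - 1.1\,\mathrm{eV} \right|}{1.1\,\mathrm{eV}} \approx 45\%

whereas GW quasiparticle calculations typically land within about 0.1 eV of the measured value, consistent with the few-percent error Li describes.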
Building on the GW approach, the team developed a capability called GW perturbation theory, or GWPT, which couples the key quantum interactions within a single framework. This capability enables simulations that were previously out of reach, allowing the team to predict properties that are critical for designing nanodevices, including how materials conduct electricity and manage heat.
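In schematic terms, drawing on the published GWPT literature rather than on equations given in this article, the central quantity is the electron-phonon coupling matrix element, which GWPT evaluates from the change in the GW self-energy \Sigma (rather than the change in the DFT potential) when the atoms are displaced along a phonon mode \nu:

    g^{\mathrm{GW}}_{mn\nu}(\mathbf{k},\mathbf{q}) \,=\, \langle \psi_{m,\mathbf{k}+\mathbf{q}} \,|\, \Delta_{\mathbf{q}\nu} \Sigma \,|\, \psi_{n,\mathbf{k}} \rangle

Evaluating this coupling from the self-energy is what brings electron-phonon physics to the same many-body level of accuracy as the GW band structure itself.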
“This is a really unique feature that no other code has,” Li said. “We can now predict the so-called electron-phonon coupling phenomena at a level that agrees well with experiments, which is a big step toward designing materials directly from theory.”
The team’s work also expands the capability of the GW method to handle far larger and more complex systems. Using DOE’s exascale supercomputers, their simulations reached over one exaflops on Frontier and more than 0.7 exaflops on Aurora, establishing new benchmarks for performance and scale in quantum-mechanical calculations. One exaflops is equivalent to a quintillion, or a billion billion, calculations per second.
Building on a decade of progress
The researchers’ success builds on more than a decade of effort to make BerkeleyGW faster, more flexible and better suited to evolving computer architectures, including exascale systems equipped with graphics processing units (GPUs). Their earlier work with the code was recognized as a Gordon Bell Prize finalist in 2020.
“We started these efforts about 10 years ago, when GPU-based architectures were still exotic,” Del Ben said. “Over the years we’ve learned how to adapt to new systems so our software can make the best use of whatever hardware scientists have access to.”
That portability has allowed the team to optimize BerkeleyGW for vastly different computing platforms. The result is a code that performs efficiently on Intel (Aurora), AMD (Frontier) and NVIDIA (Perlmutter) GPUs, ensuring that it remains highly usable as architectures continue to evolve.
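Directive-based GPU offload is one common way to keep a single source code running efficiently across Intel, AMD and NVIDIA hardware; the article does not describe how BerkeleyGW achieves its portability, so the short C sketch below illustrates only that general approach and is not the authors’ code. The same OpenMP target directives can be compiled for any of the three GPU vendors by switching the compiler and offload backend.

    /* Illustrative only: directive-based GPU offload in the performance-
     * portability style described above; this is not BerkeleyGW source code. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        const int n = 1 << 20;
        double *x = malloc(n * sizeof *x);
        double *y = malloc(n * sizeof *y);
        if (!x || !y) return 1;
        for (int i = 0; i < n; ++i) { x[i] = 1.0; y[i] = 2.0; }

        double sum = 0.0;
        /* The loop is offloaded to whichever GPU the compiler targets; map()
         * moves the arrays to device memory, and the reduction returns the
         * accumulated sum to the host. Without OpenMP the pragma is ignored
         * and the loop simply runs on the CPU. */
        #pragma omp target teams distribute parallel for \
            map(to: x[0:n], y[0:n]) map(tofrom: sum) reduction(+: sum)
        for (int i = 0; i < n; ++i) {
            sum += x[i] * y[i];
        }

        printf("dot product = %f\n", sum);
        free(x);
        free(y);
        return 0;
    }

The appeal of this style for a large scientific code is that the numerical kernels remain ordinary loops in the host language, with vendor-specific details handled by the compiler rather than by separate hand-written GPU kernels.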
With the release of BerkeleyGW 4.0, the team’s improvements are now available to the broader research community, enabling more scientists to study complex materials on current and future supercomputers.
“It’s already a very impressive achievement for our developer community,” said Del Ben. “Being a finalist for the Gordon Bell twice—once at pre-exascale and now at exascale—is something very few groups have accomplished. And the real impact is that this work will continue enabling new levels of accuracy and entirely new kinds of science.”
The study, “Advancing Quantum Many-Body GW Calculations on Exascale Supercomputing Platforms,” was authored by Benran Zhang, Zhenglu Li, and Chih-En Hsu of USC; Mauro Del Ben, Daniel Weinberg, Steven Louie, and Jack Deslippe of Berkeley Lab; Aaron Altman, Yuming Shi and Felipe da Jornada of Stanford University; James White III of DOE’s Oak Ridge National Laboratory; and Derek Vigil-Fowler of DOE’s National Renewable Energy Laboratory.
Access to ALCF computing resources was awarded through DOE’s Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program.