EPFL Researchers Showcase Quantum Theory Breakthroughs at QIP 2026
January 26, 2026 -- The researchers Yihui Quek and Thomas Vidick are both recent hires to the School of Computer and Communication Sciences (IC), reflecting the strong growth of the quantum field at EPFL. Work by Rotem Arnon, who will soon join EPFL IC from the Weizmann Institute of Science, and by her students will also be presented at the conference.
1. Information-Computation gaps in quantum learning (Sitan Chen, Weiyuan Gong, Jonas Haferkamp, Yihui Quek)
Many quantum learning settings have a natural 'complexity' parameter that measures how hard they should be to learn. Some learning problems are solvable in principle given the amount of data available, yet actually processing that data will never be efficient, even for the cleverest algorithms. In this paper, Quek and her co-authors work on identifying when quantum learning problems fall into this "possible but hard" regime, so that researchers do not waste time on what will ultimately be a futile search for efficient algorithms.
They show that many quantum learning problems become computationally intractable long before they become information-theoretically impossible, and they introduce a framework to identify those regimes, saving researchers from chasing algorithms that are unlikely to exist.
2. The Learning Stabilizers with Noise problem (Alexander Poremba, Yihui Quek, Peter W. Shor)
Modern cryptography relies on certain problems that seem extremely hard to solve, even with powerful computers. One of these, the Learning Parities with Noise (LPN) problem, asserts that decoding a random error-correcting code under bit-flip noise (taking a message, encoding it in a random way, and then flipping some of the bits at random) is generically hard. LPN is thus a statement about the average-case complexity of classical decoding, which makes it useful for building cryptographic systems that are secure against quantum computers.
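To make the structure of LPN concrete, here is a minimal sketch of how noisy parity samples are generated. The dimension, noise rate, and seed below are illustrative choices, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

n = 32        # dimension of the hidden secret (illustrative)
eta = 0.125   # probability of flipping each parity bit (illustrative)
s = rng.integers(0, 2, size=n)   # the secret the attacker must recover

def lpn_sample():
    """Return one noisy parity sample (a, b) with b = <a, s> XOR noise."""
    a = rng.integers(0, 2, size=n)   # uniformly random query vector
    noise = int(rng.random() < eta)  # flip the answer with probability eta
    b = (int(a @ s) + noise) % 2     # noisy inner product modulo 2
    return a, b

# An eavesdropper sees many pairs (a, b); the LPN assumption is that
# recovering s from such samples is hard on average.
samples = [lpn_sample() for _ in range(100)]
```

Without the noise term, Gaussian elimination would recover s easily; the random bit flips are what make the problem conjecturally hard.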
In conjunction with another paper, "Average-case complexity of quantum stabilizer decoding" (Andrey Boris Khesin, Jonathan Z. Lu, Alexander Poremba, Akshar Ramkumar, Vinod Vaikuntanathan), Quek and her collaborators asked whether decoding a quantum error-correcting code is as hard as decoding a noisy classical one, and whether that hardness could likewise serve as a basis for cryptography. To explore this, they introduce the Learning Stabilizers with Noise (LSN) problem, a natural quantum analog of LPN: you are given noisy quantum states that were encoded using a known, randomly chosen quantum error-correcting code, and your task is to recover the original quantum information. Together, this set of papers gives strong evidence that the problem is indeed hard. It also proves that decoding a random stabilizer code with even a single logical qubit is already at least as hard as decoding a random classical code at constant rate, the maximally hard regime, showing that the "easiest" random quantum decoding problem is already as hard as the "hardest" classical one.
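The LSN problem concerns random stabilizer codes, where decoding is conjectured to be hard. As a toy contrast, the sketch below decodes the simplest fixed stabilizer code, the 3-qubit bit-flip code, with plain state vectors. It only illustrates what "recovering quantum information from a noisy encoded state" means and has none of the hardness of LSN.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Encode one logical qubit as alpha|000> + beta|111> (bit-flip code).
alpha, beta = 0.6, 0.8                 # arbitrary logical amplitudes
state = np.zeros(8, dtype=complex)
state[0b000], state[0b111] = alpha, beta

def apply_x(psi, qubit):
    """Apply a Pauli-X (bit flip) to the given qubit (0 = leftmost)."""
    out = np.empty_like(psi)
    for idx in range(8):
        out[idx ^ (1 << (2 - qubit))] = psi[idx]
    return out

# Noise channel: flip one qubit chosen uniformly at random.
noisy = apply_x(state, rng.integers(0, 3))

def syndrome(psi):
    """Read the stabilizers Z0Z1 and Z1Z2 (deterministic for X errors)."""
    idx = int(np.argmax(np.abs(psi)))  # any basis state in the support
    bits = [(idx >> (2 - q)) & 1 for q in range(3)]
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

# Each single-qubit flip leaves a unique syndrome fingerprint; undo it.
lookup = {(1, 0): 0, (1, 1): 1, (0, 1): 2}
syn = syndrome(noisy)
corrected = apply_x(noisy, lookup[syn]) if syn != (0, 0) else noisy

assert np.allclose(corrected, state)   # the logical qubit is recovered
```

For this tiny, known code a syndrome table makes decoding trivial; the point of the papers is that for a random stabilizer code, no such efficient decoder should exist.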
3. Hamiltonian Decoded Quantum Interferometry (Alexander Schmidhuber, Jonathan Z. Lu, Noah Shutty, Stephen Jordan, Alexander Poremba, Yihui Quek)
This paper responds to recent work by Jordan et al. (2025) that introduced Decoded Quantum Interferometry (DQI), a promising new quantum algorithm that naturally interpolates between sampling and optimization by turning both into a kind of decoding task. This raised the question of whether the DQI framework can be extended to general, non-diagonal, non-commuting Pauli Hamiltonians: quantum models whose interactions cannot all be simplified at once, leading to complex, fully quantum behavior.
This work answers this question in the affirmative by introducing Hamiltonian Decoded Quantum Interferometry (HDQI). HDQI is a strict and powerful generalization of DQI that extends the decoding-based idea to a wide range of quantum many-body problems, including preparing thermal states and sampling low-energy states. For many important quantum systems, their approach also leads to efficient classical algorithms, sometimes improving on earlier quantum-only methods. Quek and her collaborators further show how HDQI can be applied to more general quantum systems, such as spin glasses and models with genuinely quantum interactions.
4. Derandomised tensor product gap amplification for quantum Hamiltonians (Thiago Bergamaschi, Tony Metger, Thomas Vidick, Tina Zhang)
This paper proposes a method for amplifying the hardness of families of local Hamiltonians, which are used to model many-body systems in condensed matter physics. In quantum complexity theory, problems involving local Hamiltonians play a role similar to that of famously hard problems like 3SAT in classical computing, making them among the hardest problems whose solutions quantum computers can efficiently verify.
Although computing the minimal energy of a local Hamiltonian exactly is known to be intractable, it is currently unknown whether there is an efficient (quantum polynomial-time) algorithm that approximates it. The paper takes a step toward showing that no such algorithm exists, by identifying classes of Hamiltonians that may exhibit a form of "approximation resistance." These results are inspired by the quantum PCP conjecture, a major open problem that asks whether quantum systems can be inherently hard to approximate, much like certain classical optimization problems.
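For the technically inclined, the problem in question has a standard precise formulation, the k-local Hamiltonian problem, stated here in generic notation not taken from the paper.

```latex
% The k-local Hamiltonian problem (standard formulation).
% Input: a Hamiltonian on n qubits given as a sum of m local terms,
\[
  H \;=\; \sum_{j=1}^{m} h_j , \qquad \lVert h_j \rVert \le 1 ,
\]
% where each h_j acts nontrivially on at most k qubits, together with
% thresholds a < b satisfying b - a \ge 1/\mathrm{poly}(n).
% Task: decide which of the two promises holds for the ground-state energy:
\[
  \lambda_{\min}(H) \le a
  \qquad \text{or} \qquad
  \lambda_{\min}(H) \ge b .
\]
```

This promise problem is QMA-complete, which is the precise sense in which local Hamiltonians are among the hardest problems quantum computers can efficiently verify; the quantum PCP conjecture asks whether it remains QMA-hard when the gap b - a grows to a constant fraction of m.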
5. Quantum Computational Entropies (Noam Avidan, Thomas A. Hahn, Joseph M. Renes, Rotem Arnon)
A major strength of quantum information theory is that it can sometimes reduce complex information-processing tasks to a few simple, entropy-based quantities. Two such quantities, the smooth min-entropy and the smooth max-entropy, have become standard tools in the field, with concrete operational interpretations such as how well a secret can be guessed or how much entanglement can be distilled. Over the past decade, much of modern quantum information theory has been driven by a mathematical framework built around these entropies. In practice, however, real quantum devices are constrained by running time, circuit depth, and noise. This motivates a complementary, computationally bounded perspective, which gives rise to "pseudo" objects that expose gaps between ideal information-theoretic guarantees and what realistic observers can achieve.
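The "guessing" interpretation mentioned above has an exact form: for a classical value A correlated with quantum side information B, the conditional min-entropy equals the negative logarithm of the optimal guessing probability, a standard identity due to König, Renner, and Schaffner (the notation is generic, not taken from the paper).

```latex
\[
  H_{\min}(A \mid B)_\rho \;=\; -\log p_{\mathrm{guess}}(A \mid B)_\rho ,
\]
% where the optimal guessing probability maximizes over all measurements
% \{M_a\} on the side information B:
\[
  p_{\mathrm{guess}}(A \mid B)_\rho
  \;=\; \max_{\{M_a\}} \; \sum_a p_a \, \mathrm{Tr}\!\left[ M_a \, \rho_B^{a} \right] ,
\]
% with p_a the probability of the value a and \rho_B^{a} the corresponding
% conditional state of B.
```

The smooth variant further optimizes this quantity over states epsilon-close to the true state; very roughly, the computational counterparts studied in this work can be thought of as restricting the guesser to computationally bounded strategies, though this is a heuristic reading rather than the paper's formal definition.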
Because entropies sit at the heart of the field, a natural next step is to build computational counterparts of them. In this work, the researchers define and study computational min- and max-entropies. These new quantities are motivated by their operational meaning and, crucially, satisfy mathematical properties that earlier approaches could not establish, marking a step toward a computational quantum information theory that weaves practical efficiency constraints into the core concepts of the field.


