NIST Researchers Demonstrate That Superconducting Neural Networks Can Learn on Their Own
August 18, 2025 -- Using detailed simulations, researchers at the National Institute of Standards and Technology (NIST) and their collaborators have demonstrated that a class of neural networks – electronic circuits inspired by the human brain – can learn new tasks on their own. After initial training, the NIST superconducting neural networks learned new tasks 100 times faster than previous neural networks.
Neural networks make decisions by mimicking the way neurons in the human brain work together. By adjusting the flow of information among neurons, the human brain identifies new phenomena, learns new skills, and weighs different options when making decisions.
The NIST scientists focused their efforts on superconducting neural networks, which transmit information at high speed, in part because they allow current to flow without resistance. Once cooled to just 4 degrees above absolute zero, these networks also consume much less energy than other neural networks, including the networks of neurons in the human brain.
In their new design, NIST scientist Michael Schneider and his colleagues found a way to manipulate the building blocks of a superconducting neural network so that it could perform a type of self-learning known as reinforcement learning. Neural networks use reinforcement learning to learn new tasks such as acquiring a new language.
A key element in any neural network is how preferences, or weights, are assigned to pathways in the electrical circuitry, akin to the way the brain assigns weights to different neural pathways. Weights are adjusted in a neural network using the proverbial carrot-and-stick method: pathways that provide the correct answer are strengthened, and those that lead to incorrect answers are weakened. In the NIST system, the learning is self-contained because the hardware that makes up the circuitry itself determines the size and direction of these weight changes, requiring no external control or additional computation to learn new tasks.
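Although the NIST rule is implemented directly in superconducting circuitry, the carrot-and-stick idea itself can be sketched in a few lines of software. The Python snippet below is an illustrative analogy only, not the NIST circuit design: several candidate pathways compete, and the weight of whichever pathway was used is nudged up after a correct answer and down after an incorrect one. The success rates, learning rate, and the occasional-random-choice exploration strategy are all assumptions invented for the sketch.

```python
import random

random.seed(0)

# Toy carrot-and-stick model (a software analogy, not the NIST circuit):
# four candidate signal pathways, one of which (unknown to the learner)
# gives the correct answer most often. All values here are assumptions.
SUCCESS_RATE = [0.2, 0.4, 0.9, 0.3]   # chance each pathway answers correctly
LEARNING_RATE = 0.1                   # assumed size of each weight change
EPSILON = 0.1                         # chance of trying a random pathway

weights = [0.0] * len(SUCCESS_RATE)

def choose_pathway():
    """Usually follow the strongest pathway; occasionally explore another."""
    if random.random() < EPSILON:
        return random.randrange(len(weights))
    return max(range(len(weights)), key=lambda i: weights[i])

for trial in range(2000):
    pathway = choose_pathway()
    correct = random.random() < SUCCESS_RATE[pathway]
    reward = 1.0 if correct else -1.0            # carrot (+1) or stick (-1)
    weights[pathway] += LEARNING_RATE * reward   # strengthen or weaken

print("learned weights:", [round(w, 2) for w in weights])
# The most reliable pathway (index 2) ends up with the largest weight.
```

In the NIST design, the analogous strengthening and weakening would be carried out by the superconducting components themselves rather than by a processor running a loop like this one.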
The researchers reported their findings online March 4 in Unconventional Computing.
The NIST design offers two advantages. First, it enables the network to learn continually as new data becomes available. Without that capability, the entire network would have to be retrained from scratch each time researchers add data or alter the desired outcome.
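As a toy illustration of that continual learning, the sketch above can simply keep running when the task changes: because every update is driven by the latest outcome, the weights re-adapt in place instead of being relearned from scratch. As before, every value is an assumption, and the clamp on the weights is a stand-in for the physical limits any hardware component would impose.

```python
import random

random.seed(2)

# Continuing the toy carrot-and-stick model: partway through, a different
# pathway becomes the reliable one (new data, or a new desired outcome).
# The same outcome-driven updates keep running, so nothing is retrained
# from scratch. Weights are clamped to [-1, 1] as a stand-in for the
# physical limits of real components.
success_rate = [0.2, 0.4, 0.9, 0.3]
weights = [0.0] * len(success_rate)

for trial in range(4000):
    if trial == 2000:                        # the task changes mid-stream
        success_rate = [0.9, 0.4, 0.2, 0.3]
    if random.random() < 0.1:                # occasionally explore
        pathway = random.randrange(len(weights))
    else:                                    # otherwise follow the strongest
        pathway = max(range(len(weights)), key=lambda i: weights[i])
    reward = 1.0 if random.random() < success_rate[pathway] else -1.0
    weights[pathway] = max(-1.0, min(1.0, weights[pathway] + 0.1 * reward))

print("final weights:", [round(w, 2) for w in weights])
# Pathway 0, the newly reliable one, now carries the largest weight.
```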
Second, the design automatically adjusts the weighting of different pathways in the network to accommodate the slight variations in the size and electrical properties of hardware components that can arise during fabrication. That flexibility is a huge benefit for training neural networks, which ordinarily require precise programming of weight values.
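That tolerance shows up in the toy model as well: below, each pathway applies its weight updates with its own randomly drawn "fabrication" gain, and the feedback loop still singles out the reliable pathway, because the updates respond to actual outcomes rather than to assumed component values. The gain range, like everything else in the sketch, is an invented illustration.

```python
import random

random.seed(1)

# Toy illustration of tolerance to fabrication variation: each pathway
# applies its weight changes with its own device-dependent gain, drawn at
# random. Outcome-driven feedback still finds the reliable pathway because
# it never assumes the components have exact, uniform properties.
SUCCESS_RATE = [0.2, 0.4, 0.9, 0.3]
GAIN = [random.uniform(0.7, 1.3) for _ in SUCCESS_RATE]  # per-device spread
weights = [0.0] * len(SUCCESS_RATE)

for trial in range(2000):
    if random.random() < 0.1:                # occasionally explore
        pathway = random.randrange(len(weights))
    else:                                    # otherwise follow the strongest
        pathway = max(range(len(weights)), key=lambda i: weights[i])
    reward = 1.0 if random.random() < SUCCESS_RATE[pathway] else -1.0
    weights[pathway] += GAIN[pathway] * 0.1 * reward  # uneven hardware step

print("gains:  ", [round(g, 2) for g in GAIN])
print("weights:", [round(w, 2) for w in weights])
# Pathway 2 still ends with the largest weight despite the uneven gains.
```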
The design also has the potential to dramatically speed up the training of neural networks and to use significantly less energy than training designs based on semiconductors or software, Schneider said. With simulations demonstrating the feasibility of the hardware approach, Schneider and his colleagues now plan to build a small-scale self-learning superconducting neural network.