Quantinuum Is Developing New Frameworks for Artificial Intelligence
February 2, 2024 -- While recent years have seen incredible advancements in Artificial Intelligence (AI), no one really knows how these ‘first-gen’ systems actually work. New work at Quantinuum is helping to develop frameworks for AI that we can understand, making it interpretable and accountable, and therefore far more fit for purpose.
The current fascination with AI systems built around generative Large Language Models (LLMs) is entirely understandable, but lost amid the noise and excitement is the simple fact that AI tech in its current form is basically a “black box” that we can’t look into or examine in any meaningful way. This is because when computer scientists first set out to make machines that could ‘think’ in a human-like way, they turned to our best model of a thinking machine: the human brain. The brain essentially consists of neural networks, so computer scientists developed artificial neural networks.
However, just as we don’t fully understand how human intelligence works, we don’t really understand how current artificial intelligence works either – neural networks are notoriously difficult to interpret. This is broadly described as the “interpretability” issue in AI.
Interpretability is crucial for all kinds of reasons: AI has the power to cause serious harm alongside immense good, so it is critical that users understand why a system makes the decisions it does. When we read and hear about ‘safety concerns’ with AI systems, interpretability and accountability are the key issues.
At Quantinuum we have been working on this issue for some time – since well before AI systems such as generative LLMs became fashionable. Our AI team, based in Oxford, has focused on developing frameworks for “compositional models” of artificial intelligence. Our aim is to build artificial intelligence that is interpretable and accountable. We do this in part by using a branch of mathematics called “category theory” that has been applied to everything from classical computer programming to neuroscience.
Category theory has proven to be a sort of “Rosetta stone”, as John Baez put it, for understanding our universe in an expansive sense: it is helpful for things as seemingly disparate as physics and cognition. In a very general sense, categories represent things and ways to go between things – in other words, a general science of systems and processes. Using this basic framework to understand cognition, we can build new artificial intelligences that are more useful to us, and we can build them on quantum computers, which promise remarkable computing power.
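To make “systems and processes” concrete, here is a minimal sketch in Python – purely illustrative, and not drawn from any Quantinuum library – of what a category looks like in code: objects are system types, morphisms are processes between them, and the only operations are composition and identity.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Morphism:
    """A process from one system (dom) to another (cod)."""
    dom: str
    cod: str
    fn: Callable

    def __rshift__(self, other: "Morphism") -> "Morphism":
        """Sequential composition: run self, then other (self >> other)."""
        assert self.cod == other.dom, "processes must be composable"
        return Morphism(self.dom, other.cod, lambda x: other.fn(self.fn(x)))

def identity(obj: str) -> Morphism:
    """The do-nothing process on a system: the unit of composition."""
    return Morphism(obj, obj, lambda x: x)

# Reading a toy image-recognition pipeline as composed processes:
# Image --preprocess--> Image --classify--> Label
preprocess = Morphism("Image", "Image", lambda px: [p / 255 for p in px])
classify = Morphism("Image", "Label",
                    lambda px: "bright" if sum(px) > len(px) / 2 else "dark")

pipeline = preprocess >> classify
print(pipeline.fn([200, 220, 240]))  # -> "bright"
```

The payoff of this discipline is interpretability: a model assembled explicitly from named parts and their compositions can be inspected piece by piece, rather than treated as an undifferentiated tangle of weights.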
Our AI team, led by Dr Stephen Clark, Head of AI at Quantinuum, has published a new paper applying these concepts to image recognition. The team used its compositional quantum framework for cognition and AI to demonstrate how concepts like shape, color, size, and position can be learned by machines – including quantum computers.
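As a rough illustration of what “compositional” means here – a toy sketch under our own simplifying assumptions, not the trained quantum circuits used in the paper – each conceptual domain (color, shape, and so on) can be given its own small state space, with a composite concept formed as the tensor product of its parts:

```python
import numpy as np

def qubit_state(theta: float) -> np.ndarray:
    """A single-qubit state cos(theta/2)|0> + sin(theta/2)|1>."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

# One qubit per conceptual domain (a deliberately tiny encoding):
red = qubit_state(0.0)          # "color" domain
green = qubit_state(np.pi)      # a color orthogonal to red
circle = qubit_state(0.0)       # "shape" domain

# A composite concept such as "red circle" is the tensor product of
# its per-domain states: the whole is built from labelled parts.
red_circle = np.kron(red, circle)
green_circle = np.kron(green, circle)

# Overlap between composites measures conceptual similarity...
similarity = abs(red_circle @ green_circle) ** 2
print(similarity)  # ~0: same shape, but orthogonal colors

# ...and it factors into per-domain overlaps, because
# <a (x) b, c (x) d> = <a, c> * <b, d> for tensor products.
lhs = red_circle @ green_circle
rhs = (red @ green) * (circle @ circle)
print(np.isclose(lhs, rhs))  # True: similarity decomposes by domain
```

Because each composite factors into labelled, per-domain pieces, the model can say exactly why two concepts are judged similar or different – precisely the kind of accountability that a monolithic neural network cannot offer.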
“In the current environment with accountability and transparency being talked about in artificial intelligence, we have a body of research that really matters, and which will fundamentally affect the next generation of AI systems. This will happen sooner than many anticipate,” said Ilyas Khan, Quantinuum’s founder.
This paper is part of a larger body of work in quantum computing and artificial intelligence that holds great promise for our future. As the authors say, “the advantages this may bring, especially with the advent of larger, fault-tolerant quantum computers in the future, is still being worked out by the research community, but the possibilities are intriguing at worst and transformational at best.”