A powerhouse team has launched a new initiative called ‘Doing AI Differently’, which calls for a human-centred approach to the future development of AI.
For years, we’ve treated AI’s outputs as if they were the results of a giant maths problem. But the researchers behind this project – from The Alan Turing Institute, the University of Edinburgh, AHRC-UKRI, and the Lloyd’s Register Foundation – say that’s the wrong way to look at it.
What AI creates are essentially cultural artefacts. They’re more like a novel or a painting than a spreadsheet. The problem is that AI is producing this “culture” without understanding any of it. It’s like someone who has memorised a dictionary but has no idea how to hold a real conversation.
This is why AI often fails when “nuance and context matter most,” says Professor Drew Hemment, Theme Lead for Interpretive Technologies for Sustainability at The Alan Turing Institute. The system just doesn’t have the “interpretive depth” to get what it’s really saying.
What’s more, most of the AI in the world is built on just a handful of similar designs. The report calls this the “homogenisation problem”, and argues that future AI development must overcome it.
Imagine if every baker in the world used the exact same recipe. You’d get a lot of identical and, frankly, boring cakes. With AI, it means the same blind spots, the same biases, and the same limitations get copied and pasted into thousands of tools we use every day.
We saw this happen with social media. It was rolled out with simple goals, and we’re now living with the unintended societal consequences. The ‘Doing AI Differently’ team is sounding the alarm to make sure we don’t make that same mistake with AI.
The team has a plan to build a new kind of AI, one they call Interpretive AI. It’s about designing systems from the very beginning to work the way people do: with ambiguity, multiple viewpoints, and a deep understanding of context.
The vision is to create interpretive technologies that can offer multiple valid perspectives instead of just one rigid answer. It also means exploring alternative AI architectures to break the mould of current designs. Most importantly, the future isn’t about AI replacing us; it’s about creating human-AI ensembles where we work together, combining our creativity with AI’s processing power to solve huge challenges.
This has the potential to touch our lives in very real ways. In healthcare, for example, your experience with a doctor is a story, not just a list of symptoms. An interpretive AI could help capture that full story, improving your care and your trust in the system.
For climate action, it could help bridge the gap between global climate data and the unique cultural and political realities of a local community, creating solutions that actually work on the ground.
A new international funding call is launching to bring researchers from the UK and Canada together on this mission. But we’re at a crossroads.
“We’re at a pivotal moment for AI,” warns Professor Hemment. “We have a narrowing window to build in interpretive capabilities from the ground up.”
For partners like Lloyd’s Register Foundation, it all comes down to one thing: safety.
“As a global safety charity, our priority is to ensure future AI systems, whatever shape they take, are deployed in a safe and reliable manner,” says their Director of Technologies, Jan Przydatek.
This isn’t just about building better technology. It’s about creating an AI that can help solve our biggest challenges and, in the process, amplify the best parts of our own humanity.