NVIDIA aims to close AI’s language gap

News Room

While AI might feel ubiquitous, it primarily operates in a tiny fraction of the world’s 7,000 languages, leaving a huge portion of the global population behind. NVIDIA aims to fix this glaring blind spot, particularly within Europe.

The company has just released a powerful new set of open-source tools aimed at giving developers the power to build high-quality speech AI for 25 different European languages. This includes major languages, but more importantly, it offers a lifeline to those often overlooked by big tech, such as Croatian, Estonian, and Maltese.

The goal is to let developers create the kind of voice-powered tools many of us take for granted, from multilingual chatbots that actually understand you to customer service bots and translation services that work in the blink of an eye.

The centrepiece of this initiative is Granary, an enormous library of human speech. It contains around a million hours of audio, all curated to help teach AI the nuances of speech recognition and translation.

To make use of this speech data, NVIDIA is also providing two new AI models designed for language tasks:

  • Canary-1b-v2, a large model built for high accuracy on complex transcription and translation jobs.
  • Parakeet-tdt-0.6b-v3, which is designed for real-time applications where speed is everything.

If you’re keen to dive into the science behind it, the paper on Granary will be presented at the Interspeech conference in the Netherlands this month. For the developers eager to get their hands dirty, the dataset and both models are already available on Hugging Face.

The real magic, however, lies in how this data was created. We all know that training AI requires vast amounts of data, but getting it is usually a slow, expensive, and frankly tedious process of human annotation.

To get around this, NVIDIA’s speech AI team – working with researchers from Carnegie Mellon University and Fondazione Bruno Kessler – built an automated pipeline. Using their own NeMo toolkit, they were able to take raw, unlabelled audio and whip it into high-quality, structured data that an AI can learn from.
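The article doesn't detail the pipeline's internals, but the general shape of automated pseudo-labelling can be sketched in a few lines: transcribe unlabelled clips with an existing model, then keep only the high-confidence results as training pairs. Everything below (the stub `transcribe` function, the filenames, the confidence threshold) is invented for illustration and is not NVIDIA's actual NeMo pipeline:

```python
# Illustrative sketch of pseudo-labelling: turn raw, unlabelled audio into
# (audio, text) training pairs by transcribing with an existing model and
# keeping only high-confidence outputs. Stub model and threshold are
# invented for illustration.

def transcribe(clip: str) -> tuple[str, float]:
    """Stand-in for a real ASR model: returns (text, confidence)."""
    fake_outputs = {
        "clip_hr_001.wav": ("dobar dan svima", 0.94),
        "clip_et_002.wav": ("tere hommikust", 0.91),
        "clip_mt_003.wav": ("", 0.20),  # noisy clip, low confidence
    }
    return fake_outputs[clip]

def build_dataset(clips: list[str], min_confidence: float = 0.8) -> list[dict]:
    """Keep only clips whose machine transcript clears the quality bar."""
    dataset = []
    for clip in clips:
        text, confidence = transcribe(clip)
        if confidence >= min_confidence and text:
            dataset.append({"audio": clip, "text": text})
    return dataset

pairs = build_dataset(["clip_hr_001.wav", "clip_et_002.wav", "clip_mt_003.wav"])
print(pairs)  # the noisy low-confidence clip is filtered out
```

Filtering on model confidence is what lets a pipeline like this trade human annotation for compute while still keeping the data quality high.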

This isn’t just a technical achievement; it’s a huge leap for digital inclusivity. It means a developer in Riga or Zagreb can finally build voice-powered AI tools that properly understand their local languages. And they can do it more efficiently: the research team found Granary data effective enough that roughly half as much of it is needed to reach a target accuracy level compared with other popular datasets.

The two new models demonstrate this power. Canary is frankly a beast, offering translation and transcription quality that rivals models three times its size, but with up to ten times the speed. Parakeet, meanwhile, can chew through a 24-minute meeting recording in one go, automatically figuring out which language is being spoken. Both models are smart enough to handle punctuation and capitalisation and to provide word-level timestamps, all of which are required for building professional-grade applications.

By putting these powerful tools and the methods behind them into the hands of the global developer community, NVIDIA isn’t just releasing a product. It’s kickstarting a new wave of innovation, hoping to create a world where AI speaks your language, no matter where you’re from.

(Photo by Aedrian Salazar)

See also: DeepSeek reverts to Nvidia for R2 model after Huawei AI chip fails

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

