Meta releases an AI model that can transcribe and translate close to 100 languages
In its quest to develop AI that can understand a range of different dialects, Meta has created an AI model, SeamlessM4T, that can translate and transcribe close to 100 languages across text and speech.
Available in open source along with SeamlessAlign, a new translation data set, SeamlessM4T represents what Meta calls a “significant breakthrough” in AI-powered speech-to-speech and speech-to-text translation.
“Our single model provides on-demand translations that enable people who speak different languages to communicate more effectively,” Meta writes in a blog post shared with TechCrunch. “SeamlessM4T implicitly recognizes the source languages without the need for a separate language identification model.”
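Because the model weights are openly released, a developer can experiment with it in a few lines of code. The sketch below is illustrative only: it assumes the Hugging Face transformers integration and the "facebook/hf-seamless-m4t-medium" checkpoint, neither of which is described in Meta's announcement, and it shows a simple text-to-text translation from English to French.

```python
# Illustrative sketch only: assumes the Hugging Face `transformers`
# integration and the "facebook/hf-seamless-m4t-medium" checkpoint,
# which are not part of Meta's announcement.
from transformers import AutoProcessor, SeamlessM4TModel

processor = AutoProcessor.from_pretrained("facebook/hf-seamless-m4t-medium")
model = SeamlessM4TModel.from_pretrained("facebook/hf-seamless-m4t-medium")

# Text-to-text translation: English ("eng") to French ("fra").
inputs = processor(text="Hello, how are you?", src_lang="eng", return_tensors="pt")
tokens = model.generate(**inputs, tgt_lang="fra", generate_speech=False)
print(processor.decode(tokens[0].tolist()[0], skip_special_tokens=True))

# The same generate() call can instead return a waveform for
# speech output if generate_speech=False is omitted, which is how the
# model covers speech-to-speech and text-to-speech tasks as well.
```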
SeamlessM4T is something of a spiritual successor to Meta’s No Language Left Behind, a text-to-text machine translation model, and Universal Speech Translator, one of the few direct speech-to-speech translation systems to support the Hokkien language. And it builds on Massively Multilingual Speech, Meta’s framework that provides speech recognition, language identification and speech synthesis tech across more than 1,100 languages.
Meta isn’t the only one investing resources in developing sophisticated AI translation and transcription tools.
Beyond the wealth of commercial services and open source models already available from Amazon, Microsoft, OpenAI and a number of startups, Google is creating what it calls the Universal Speech Model, part of the tech giant’s larger effort to build a model that can understand the world’s 1,000 most-spoken languages. Mozilla, meanwhile, spearheaded Common Voice, one of the largest multi-language collections of voices for training automatic speech recognition algorithms.
But SeamlessM4T is among the more ambitious efforts to date to combine translation and transcription capabilities into a single model.
In developing it, Meta says that it scraped publicly available text (on the order of “tens of billions” of sentences) and speech (4 million hours) from the web. In an interview with TechCrunch, Juan Pino, a research scientist at Meta’s AI research division and a contributor on the project, wouldn’t reveal the exact sources of the data, saying only that there was “a variety” of them.
Not every content creator agrees with the practice of leveraging public data to train models that could be used commercially. Some have filed lawsuits against businesses building AI tools on top of publicly available data, arguing that the vendors should be compelled to provide credit if not compensation — and clear ways to opt out.
But Meta claims that the data it mined — which might contain personally identifiable information, the company admits — wasn’t copyrighted and came primarily from open source or licensed sources.
Whatever the case, Meta used the scraped text and speech to create the training data set for SeamlessM4T, called SeamlessAlign. Researchers aligned 443,000 hours of speech with texts and created 29,000 hours of “speech-to-speech” alignments, which “taught” SeamlessM4T how to transcribe speech to text, translate text, generate speech from text and even translate words spoken in one language into words in another language.
Meta claims that on an internal benchmark, SeamlessM4T was more robust to background noise and “speaker variations” in speech-to-text tasks than the current state-of-the-art speech transcription model. It attributes this to the rich combination of speech and text data in the training data set, which Meta believes gives SeamlessM4T a leg up over speech-only and text-only models.
“With state-of-the-art results, we believe SeamlessM4T is an important breakthrough in the AI community’s quest toward creating universal multitask systems,” Meta wrote in the blog post.
But one wonders what biases the model might contain.
A recent piece in The Conversation points out the many flaws in AI-powered translation, including different forms of gender bias. For example, Google Translate once presupposed that doctors were male while nurses were female in certain languages, while Bing’s translator rendered phrases like “the table is soft” as the feminine “die Tabelle” in German, which refers to a table of figures.
Speech recognition algorithms, too, often contain biases. A study published in the Proceedings of the National Academy of Sciences showed that speech recognition systems from leading companies were twice as likely to incorrectly transcribe audio from Black speakers as from white speakers.
Unsurprisingly, SeamlessM4T isn’t unique in this regard.
In a whitepaper published alongside the blog post, Meta reveals that the model “overgeneralizes to masculine forms when translating from neutral terms” and performs better when translating from the masculine reference (e.g., pronouns like “he” in English) for most languages.
Moreover, in the absence of gender information, SeamlessM4T prefers translating the masculine form about 10% of the time — perhaps due to an “overrepresentation of masculine lexica” in the training data, Meta speculates.
Meta makes the case that SeamlessM4T doesn’t add an outsize amount of toxic text in its translations, a common problem with translation and generative AI text models at large. But it’s not perfect. In some languages, like Bengali and Kyrgyz, SeamlessM4T makes more toxic translations — that is to say, hateful or profane translations — about socioeconomic status and culture. And in general, SeamlessM4T is more toxic in translations dealing with sexual orientation and religion.
Meta notes that the public demo for SeamlessM4T contains a filter for toxicity in inputted speech as well as a filter for potentially toxic outputted speech. That filter’s not present by default in the open source release of the model, however.
The larger issue with AI translation, not addressed in the whitepaper, is the loss of lexical richness that can result from overreliance on these systems. Unlike AI, human interpreters make choices unique to them when translating one language into another. They might explicate, normalize, or condense and summarize, creating fingerprints known informally as “translationese.” AI systems might generate more “accurate” translations, but those translations could come at the expense of translation variety and diversity.
That’s probably why Meta advises against using SeamlessM4T for long-form translation and certified translations, like those recognized by government agencies and translation authorities. Meta also discourages deploying SeamlessM4T for medical or legal purposes, presumably an attempt to cover its bases in the event of a mistranslation.
That’s wise; there have been at least a few instances where AI mistranslations have resulted in law enforcement mistakes. In September 2012, police erroneously confronted a Kurdish man for financing terrorism because of a mistranslated text message. And in 2017, a cop in Kansas used Google Translate to ask a Spanish-speaking driver for permission to search his car for drugs, but because the translation was inaccurate, the driver didn’t fully understand what he’d agreed to and the case was eventually thrown out.
“This single system approach reduces errors and delays, increasing the efficiency and quality of the translation process, bringing us closer to making seamless translation possible,” Pino said. “In the future, we want to explore how this foundational model can enable new communication capabilities — ultimately bringing us closer to a world where everyone can be understood.”
Let’s hope humans aren’t left completely out of the loop in that future.