Nvidia shows AI model that can modify voices, generate novel sounds
Published by Jessica Weisman-Pitts
Posted on November 25, 2024
3 min read · Last updated: January 28, 2026

By Stephen Nellis
(Reuters) – Nvidia on Monday showed a new artificial intelligence model for generating music and audio that can modify voices and generate novel sounds – technology aimed at the producers of music, films and video games.
Nvidia, the world’s biggest supplier of chips and software used to create AI systems, said it does not have immediate plans to publicly release the technology, which it calls Fugatto, short for Foundational Generative Audio Transformer Opus 1.
It joins other technologies shown by startups such as Runway and larger players such as Meta Platforms that can generate audio or video from a text prompt.
Santa Clara, California-based Nvidia’s version generates sound effects and music from a text description, including novel sounds such as making a trumpet bark like a dog.
What makes it different from other AI technologies is its ability to take in and modify existing audio, for example by taking a line played on a piano and transforming it into a line sung by a human voice, or by taking a spoken word recording and changing the accent used and the mood expressed.
“If we think about synthetic audio over the past 50 years, music sounds different now because of computers, because of synthesizers,” said Bryan Catanzaro, vice president of applied deep learning research at Nvidia. “I think that generative AI is going to bring new capabilities to music, to video games and to ordinary folks that want to create things.”
While companies such as OpenAI are negotiating with Hollywood studios over whether and how AI could be used in the entertainment industry, the relationship between tech and Hollywood has become tense, particularly after Hollywood star Scarlett Johansson accused OpenAI of imitating her voice.
Nvidia’s new model was trained on open-source data, and the company said it is still debating whether and how to release it publicly.
“Any generative technology always carries some risks, because people might use that to generate things that we would prefer they don’t,” Catanzaro said. “We need to be careful about that, which is why we don’t have immediate plans to release this.”
Creators of generative AI models have yet to determine how to prevent abuse of the technology, such as users generating misinformation or infringing copyrights by reproducing copyrighted characters.
OpenAI and Meta similarly have not said when they plan to release to the public their models that generate audio or video.
(Reporting by Stephen Nellis in San Francisco; Editing by Will Dunham)
Artificial intelligence (AI) refers to the simulation of human intelligence in machines programmed to think and learn like humans.
Generative AI is a type of artificial intelligence that can create new content, such as text, images, or audio, based on training data.
Sound effects are artificially created or enhanced sounds used in various media, including films, music, and video games, to enhance the auditory experience.
Audio modification involves altering existing audio recordings to change aspects like pitch, tone, or speed, often used in music production.
Open-source data refers to data that is freely available for anyone to use, modify, and share, often used in training AI models.
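As a rough illustration of the audio modification described above (this is not Nvidia's Fugatto, only a minimal sketch using the open-source librosa library, with a hypothetical file name and arbitrary parameter values), the following Python snippet shifts a recording's pitch and changes its speed independently:

import librosa
import soundfile as sf

# Load a recording (hypothetical file) at its native sampling rate.
y, sr = librosa.load("melody.wav", sr=None)

# Raise the pitch by four semitones while keeping the duration unchanged.
shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=4)

# Slow the clip to 80% of its original speed while keeping the pitch unchanged.
stretched = librosa.effects.time_stretch(y, rate=0.8)

# Write the modified versions to new files.
sf.write("melody_pitch_up.wav", shifted, sr)
sf.write("melody_slow.wav", stretched, sr)

Models like Fugatto go further by steering such transformations with text prompts rather than fixed parameters, according to Nvidia's description.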