On August 2, Meta, the parent company of Facebook and Instagram, unveiled AudioCraft, a collection of generative artificial intelligence (AI) models for making music and audio from multiple inputs.
The suite includes MusicGen and AudioGen, which generate new audio from text-based inputs, and EnCodec, which Meta says “allows for higher quality music generation with fewer artifacts.”
Meta stated in the announcement that its MusicGen model was trained using either music it owns or music that was “specifically licensed.”
The disclosure comes amid a heated debate over training AI on works protected by intellectual property rights across artistic domains, including a lawsuit accusing Meta of copyright infringement in its AI training.
Meta has made MusicGen and AudioGen available to developers and the “research community” in a range of model sizes. The company said that as it builds more sophisticated controls, it hopes the models will prove useful to both professionals and amateurs in the music business.
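For developers curious to experiment, the sketch below shows how a short clip might be generated with MusicGen through Meta's open-source audiocraft package. It follows the usage pattern published in the project's repository around release time; the checkpoint name "facebook/musicgen-small", the eight-second duration and the example prompt are illustrative assumptions, and the exact API may differ between versions.

```python
# Minimal sketch of text-to-music generation with Meta's audiocraft package
# (pip install audiocraft). Model name and parameters are assumptions based on
# the project's published examples and may change in later releases.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

# Load the smallest checkpoint; larger sizes trade generation speed for quality.
model = MusicGen.get_pretrained("facebook/musicgen-small")
model.set_generation_params(duration=8)  # length of each generated clip in seconds

# One text description per clip to generate.
descriptions = ["lo-fi hip hop beat with mellow piano and soft drums"]
wav = model.generate(descriptions)  # returns a batch of audio waveforms

# Write each clip to disk as a .wav file with loudness normalization.
for idx, clip in enumerate(wav):
    audio_write(f"musicgen_sample_{idx}", clip.cpu(), model.sample_rate, strategy="loudness")
```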
“We think MusicGen can transform into a new type of instrument, just like synthesizers when they first appeared,” the company wrote in the announcement.
In a recent interview with Cointelegraph, Harvey Mason Jr., CEO of the Recording Academy, made a similar comparison, likening the advent of AI-generated music to the early days of synthesizers entering the music scene.
Meta's release comes shortly after Google introduced MusicLM, its own set of generative AI tools that convert text into music.
Google said in May that its AI Test Kitchen platform was accepting “early testers” of those tools. Meta, like many other digital behemoths such as Google and Microsoft, has been racing to build and deploy ever more powerful AI models.
On August 1, Meta announced the release of new AI chatbots with distinct personalities, which users of its platforms can employ as search assistants and as “fun products to play with.”