Meta Releases Open-Source AI Music Generator: MusicGen
Meta has recently launched MusicGen, its own AI-powered music generator, which is open-source. Unlike Google, Meta made the source code readily available to researchers and developers. MusicGen turns a text description into roughly 12 seconds of audio, and the tool can be “steered” with reference audio so that it follows both a melody and a description.
MusicGen’s Training and Data Source
Meta says that MusicGen is trained on 20,000 hours of music, including 10,000 “high-quality” licensed music tracks and 390,000 instrument-only tracks from Shutterstock and Pond5. Meta does not provide the code used to train the model but has made available pre-trained models that only require appropriate hardware – chiefly a GPU with around 16GB of memory – to run.
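Those pre-trained checkpoints are distributed through Meta’s open-source audiocraft library. The sketch below shows the general shape of loading a checkpoint and generating a clip from a text prompt; the `facebook/musicgen-small` checkpoint name and the exact helper calls are based on audiocraft’s published API, and actually running it assumes the library is installed (`pip install audiocraft`) and, for the larger models, a GPU with around 16 GB of memory.

```python
# Minimal sketch of generating audio with a pre-trained MusicGen checkpoint
# via Meta's audiocraft library. Running this downloads the model weights;
# the larger checkpoints need a GPU with roughly 16 GB of memory.

CLIP_SECONDS = 12  # MusicGen generates clips of up to about 12 seconds


def generate_clips(prompts, checkpoint="facebook/musicgen-small"):
    """Generate one audio clip per text prompt and save each as a .wav file."""
    from audiocraft.models import MusicGen
    from audiocraft.data.audio import audio_write

    model = MusicGen.get_pretrained(checkpoint)
    model.set_generation_params(duration=CLIP_SECONDS)

    # Text-only generation; returns a tensor of shape [batch, channels, samples]
    wav = model.generate(prompts)
    for i, clip in enumerate(wav):
        audio_write(f"clip_{i}", clip.cpu(), model.sample_rate, strategy="loudness")


if __name__ == "__main__":
    generate_clips(["ambient chiptunes music"])
```

For melody-steered generation of the kind described above, audiocraft’s melody-conditioned checkpoints accept a reference waveform alongside the text prompt instead of text alone.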
Performance Analysis
MusicGen’s songs are reasonably melodic, especially for basic prompts such as “ambient chiptunes music,” and on a par with or slightly better than the results from Google’s AI music generator, MusicLM. However, the output is not yet good enough to threaten to replace human musicians; the clip generated for “jazzy elevator music” is one example.
Next, MusicGen was prompted with a more complicated description, “Lo-fi slow BPM electro chill with organic samples,” and this time, it outperformed MusicLM in terms of musical coherence.
Furthermore, Google, in an effort to forestall copyright and other issues around generative music tools, has implemented filters on the public version of MusicLM, so prompts that mention specific artists are blocked.
Ethical and Legal Concerns
Generative music is clearly improving, yet many legal and ethical matters remain unresolved. AI like MusicGen “learns” from existing music to produce similar effects, and artists and generative AI users increasingly worry that the resulting deepfake music violates the copyright of artists, labels, and other rights holders. Homemade tracks that use generative AI to produce familiar-sounding music that can be passed off as authentic, or close enough, have gone viral, and music labels have flagged them to streaming partners, citing intellectual property concerns.
It remains legally unsettled whether “deepfake” music infringes the rights of artists, labels, and other rights holders. Lawsuits presently making their way through the courts will likely have a bearing on music-generating AI, including one concerning the rights of artists whose work is used for AI model training without their knowledge or consent.
Frequently Asked Questions (FAQs)
What is MusicGen?
MusicGen is an AI-powered music generator developed by Meta. It can turn text into an audio clip that is 12 seconds long and can be steered with reference audio to follow both the melody and description.
Is MusicGen open-source?
Yes, Meta has made the source code for MusicGen readily available to researchers and developers.
What is MusicGen’s training source?
MusicGen is trained on 20,000 hours of music that includes 10,000 high-quality licensed music tracks and 390,000 instrument-only tracks sourced from Shutterstock and Pond5.
Can MusicGen replace human musicians?
Although MusicGen’s output is somewhat melodic, it is not close to the level of human musicians.
What are the ethical and legal concerns of generative music?
Generative music tools raise a lot of ethical and legal concerns. MusicGen learns to produce similar effects from existing music, and some artists and generative AI users are worried that deepfake music may infringe the copyright of artists, labels, and other rightsholders.
What are the lawsuits in progress that may affect music-generating AI systems?
Several lawsuits presently making their way through the courts may have a bearing on music-generating AI systems. One of these pertains to the rights of artists who did not give their consent for their work to be used to train AI models. Meta, for its part, says MusicGen was trained only on licensed music.