Meta (formerly Facebook) has unveiled MusicGen, an AI-powered music generator that turns text descriptions and melodies into audio compositions. As part of its commitment to openness and collaboration, Meta has made MusicGen's code and models publicly available for research purposes and to foster reproducibility within the music community.
Felix Kreuk, a Research Engineer at Meta AI Research, took to Twitter to introduce MusicGen as a simple and controllable music generation model. MusicGen stands out from previous approaches by adopting a single-stage transformer language model (LM) built on top of the EnCodec audio tokenizer. This design uses efficient token interleaving patterns, eliminating the need for multiple cascaded models such as hierarchical or upsampling structures.
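The interleaving idea can be sketched in a few lines. EnCodec represents audio as several parallel streams of tokens (one per codebook); a "delay" interleaving pattern offsets stream k by k steps so a single autoregressive LM can predict all streams together instead of chaining separate models. The sketch below is a simplified illustration of that pattern, not Meta's actual implementation; the function name and padding value are invented for clarity:

```python
def delay_interleave(codes, pad=-1):
    """Apply a 'delay' interleaving pattern to codebook token streams.

    codes: list of K lists, each of length T (token ids for one codebook).
    Each codebook k is shifted right by k steps, so at any position the
    model conditions on earlier codebooks' tokens for the same timestep.
    Returns K lists of length T + K - 1, padded with `pad`.
    """
    K = len(codes)
    out = []
    for k, row in enumerate(codes):
        # k pad tokens in front (the delay), K-1-k behind to keep lengths equal
        out.append([pad] * k + row + [pad] * (K - 1 - k))
    return out


# Two codebooks, three timesteps: stream 1 lags stream 0 by one step.
streams = [[1, 2, 3], [4, 5, 6]]
print(delay_interleave(streams))  # [[1, 2, 3, -1], [-1, 4, 5, 6]]
```

Flattening these aligned streams into one sequence is what lets a single-stage transformer handle multi-codebook audio tokens.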
To ensure the model's robustness and versatility, Meta trained MusicGen on an extensive dataset consisting of 20,000 hours of music. This dataset comprises 10,000 meticulously selected high-quality licensed music tracks, as well as a staggering 390,000 instrument-only tracks sourced from the ShutterStock and Pond5 stock media libraries.
While Meta's MusicGen brings a new dimension to AI-generated music, other tech giants have also entered this space. In May, Google opened up 'MusicLM', an experimental AI tool that generates high-fidelity music across various genres from text descriptions. First announced in January, MusicLM is now available to the public through the AI Test Kitchen app, accessible in web browsers and on Android and iOS devices.
The emergence of AI-powered music generators like MusicGen and MusicLM underscores the growing interest and potential of AI in the creative domain. These tools provide musicians, composers, and enthusiasts with powerful resources to experiment, inspire, and create new musical compositions. By making the code and models openly accessible, Meta and Google aim to encourage collaboration and further advancements in music generation research, ultimately enriching the music community and facilitating the exploration of AI's creative capabilities.
Inputs from IANS