Adobe’s latest generative AI experiment aims to help people create and customize music without any professional audio experience. Announced during the Hot Pod Summit in Brooklyn on Wednesday, Project Music GenAI Control is a new prototype tool that allows users to generate music using text prompts and then edit that audio without jumping over to dedicated editing software.
Users start by inputting a text description that will generate music in a specified style, such as “happy dance” or “sad jazz.” Adobe says its integrated editing controls then allow users to customize those results, adjusting elements like repeating patterns, tempo, intensity, and structure. Sections of music can be remixed, and audio can be generated as a repeating loop for people who need things like backing tracks or background music for content creation.
Adobe also says the tool can adjust the generated audio “based on a reference melody” and extend the length of audio clips if you want to make a track long enough for things like a fixed-length animation or a podcast segment. The actual user interface for editing generated audio hasn’t been revealed yet, so we’ll need to use our imaginations for now.
Adobe says the public Project Music GenAI Control demo used public domain content, but it’s not clear whether the tool will let users upload their own audio as reference material or how far clips can be extended. We have asked Adobe to clarify this and will update this article if we hear back.
While similar tools are already available or being developed — such as Google’s MusicLM and Meta’s open-source AudioCraft — these only allow users to generate audio via text prompts, with little to no support for editing the music output. That means you’d have to keep generating audio from scratch until you get the results you want or manually make those edits yourself using audio editing software.
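For a sense of what that prompt-only workflow looks like, here’s a minimal sketch using Meta’s open-source AudioCraft library and its MusicGen model. The checkpoint name, clip duration, and prompts are illustrative assumptions, and the exact function signatures may vary between AudioCraft versions; the point is that the text prompt and the generation settings are the only levers, with no post-generation editing.

```python
# Minimal sketch: text-prompt-only music generation with Meta's open-source
# AudioCraft library (MusicGen). Checkpoint name, duration, and prompts are
# illustrative; API details may differ across AudioCraft versions.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

# Load a pretrained MusicGen checkpoint.
model = MusicGen.get_pretrained("facebook/musicgen-small")

# Everything is decided up front: clip length in seconds, plus the prompt text.
model.set_generation_params(duration=10)

# One clip is generated per text description -- the prompt is the only control.
prompts = ["happy dance", "sad jazz"]
wavs = model.generate(prompts)

# Write each clip to disk; changing the result means re-prompting and regenerating.
for idx, wav in enumerate(wavs):
    audio_write(f"clip_{idx}", wav.cpu(), model.sample_rate, strategy="loudness")
```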
“One of the most exciting things about these new tools is that they aren’t just about generating audio,” said Nicholas Bryan, a senior research scientist at Adobe Research, in a press release. “They’re taking it to the level of Photoshop by giving creatives the same kind of deep control to shape, tweak, and edit their audio. It’s a kind of pixel-level control for music.”
Project Music GenAI Control is being developed in collaboration with the University of California and the School of Computer Science at Carnegie Mellon University. Adobe describes it as an “early-stage” experiment, so while these features may eventually be incorporated into the company’s existing editing tools like Audition and Premiere Pro, it’s going to take some time. The tool isn’t available to the public yet, and no release date has been announced. You can track Project Music GenAI Control’s development — alongside other experiments Adobe is working on — over at the Adobe Labs website.