YouTube is testing new AI capabilities that will allow you to create musical compositions with just text or humming.
Google is testing new artificial intelligence capabilities for YouTube that will allow users to create musical compositions simply by providing a text prompt or humming a melody.
The first feature, called Dream Track, has already been deployed to a small group of US-based content creators on the platform. It is designed to automatically generate 30-second audio clips mimicking the style of prominent musical artists who have partnered with YouTube.
Dream Track currently supports the styles of nine well-known musicians: Alec Benjamin, Charlie Puth, Charli XCX, Demi Lovato, John Legend, Papoose, Sia, T-Pain and Troye Sivan.
A demo video showed how entering a prompt such as “A ballad about how opposites attract, upbeat acoustic” into Dream Track can produce a short track in Charlie Puth’s characteristic style.
In addition, YouTube has demonstrated Music AI tools that let creators craft 30-second musical pieces without playing any instruments. In one demo video, a hummed melody paired with the text prompt “saxophone solo” was transformed into a saxophone tune.
Later this year, participants in YouTube’s Music AI incubator program will gain access to these creation tools. The features are powered by Lyria, a generative music model developed by Google’s DeepMind division. According to DeepMind, tracks created with Lyria will carry an inaudible SynthID watermark that persists even if the audio is modified, allowing AI-generated content to be identified.
Music composition via artificial intelligence is part of a broader proliferation of AI-enabled generative content creation. Alongside exciting possibilities, these technologies could also reshape musical traditions and popular conceptions of artistic talent. Their impact remains to be seen.