The creators of YouTube Shorts, the short-form video feature on Google LLC’s video platform, will soon be able to use tools powered by generative artificial intelligence to create music for their videos in the style of various famous artists.

YouTube partnered with Google DeepMind, the company’s AI research laboratory, to build the musical experiment, called Dream Track for Shorts, on the advanced AI model Lyria. The company also unveiled an upcoming set of music AI tools, which Google says will help creators produce music in new ways.

Powering the whole experiment is Lyria, a new generative AI model for music created by DeepMind. According to the research lab, Lyria excels at generating music that includes both instrumentals and vocals, and it can also listen to an existing piece of music and continue it.

“Music contains huge amounts of information - consider every beat, note, and vocal harmony in every second,” Google said in the announcement, noting where Lyria is designed to excel. “When generating long sequences of sound, it’s difficult for AI models to maintain musical continuity across phrases, verses, or extended passages.”

The initial trial of Lyria will run on Shorts, which are at most 60 seconds long, making them a natural staging ground for AI-produced music, although the experiment generates only a 30-second soundtrack. A limited set of users will gain access to the generative AI model, which will allow them to type a text prompt describing the kind of music they want, and it will produce a soundtrack, including harmony, melody and vocals, for their video.

A number of famous musical artists joined the experiment and lent their musical styles and voices to DeepMind, including John Legend, Sia, Charli XCX, Louis Bell and T-Pain. Users merely need to pick the artist they want their AI-generated soundtrack to sound like and enter their prompt, and the model will generate sound appropriate to that genre and style.

“When I was first approached by YouTube, I was cautious and still am; AI is going to transform the world and the music industry in ways we do not yet fully understand,” said Charli XCX, whose song “Speed Drive” was featured in the blockbuster “Barbie” movie. “This experiment will offer a small insight into the creative opportunities that could be possible and I’m interested to see what comes out of it.”

Google also previewed AI Music Tools, a set of Lyria-powered experimental tools that it will roll out to creators in the coming months to assist with their creative process, limited only by their imagination. In a demonstration, Google showed how Lyria could take a hummed or sung section of music and transform it into an instrumental horn line. The same could be done with a segment of music produced by a MIDI keyboard or another instrument, by changing it into a realistic choir, or by using the model to match instrumental accompaniment to a vocal track. According to the company, Lyria-powered music tools could be used to create entire new music segments or instrumentals from scratch, or to transform audio from one style or instrument to another. Artists who wanted to completely change their presentation and style overnight could go from folk music to heavy metal to see what that would do to their music.

The same team also revealed that content published by the Lyria model is watermarked by a system called SynthID.