Holy $#!% have things advanced in AI music these past few months!
For the past year, I've been deeply involved in learning, using, and assembling AI websites and programs to work for me as a product designer. The improvements are often amazing and come in large leaps. Image generation alone has been mind-blowing. It's not perfect, though; much is still hit-and-miss.
As for video generation, it's far too new and early -- and I'm not impressed with the overall look of these seconds-long clips. It needs to be less random; currently there's no consistency. I find it annoying and boring. But enough about that -- back to the music . . .
Okay, here's a song I generated.
This means I wrote the lyrics, fully described what I wanted included in the song, and generally guided its direction bit by bit. The best way to describe creating AI music is that you're like a producer in a recording studio, keeping the parts you like and saying, "Well, that part's not right -- how about doing it this way instead?"
My first attempt at doing a Crosby, Stills, and Nash song
with a cameo appearance of Paul McCartney towards the end
I'm using Udio.com, which lets you develop a full song by building it in half-minute sections. Udio is in beta as of this writing, and it's still free to use, BUT the $10/month tier is well worth it if you really want to generate your own songs. There's a huge element of randomness in most of this process. I find I generally need about 12-32 takes to get the 30+ second clips I want.