Please forgive me for my ignorance on this topic! I'm just an AI enthusiast.
So, everyone knows one can take words and pictures and put them through AI like LLMs and ML image generators, respectively, and crank out AI words and images. And at least from what I understand about image generators, one can make checkpoints and LoRAs to stylize a model with unique training data (e.g., an artist's work).
Is there AI development yet that takes clips of songs in someone's style and then starts cranking out music in that style? Does it even exist?
The Dudesy George Carlin AI special on YouTube has, I think, brought to light some powerful ways AI is and will be used (and sometimes sued over, in Dudesy's case), and how it will change how content is generated and consumed in the future. (The Dudesy thing is different because they took someone else's likeness; it would have been better if they had used only their own material, which is closer to what I'm talking about with music generation in this thread.)
I'm not a great artist or anything, it's just a hobby, but I have a lot of recordings of music in my style, done completely for fun (personal use) and made from scratch. I'm sure I'm not the only one. I've seen people take their own styles and make LoRAs and checkpoints from images to generate their own likeness using tools like Stable Diffusion, and it's pretty incredible.
I could see this being a service that musicians would use in the future as a way of learning about their own music.
Many people have made their own music from scratch and have the MP3 clips (so the data, I would think, exists), and I'm just wondering if there are developments in AI for music similar to how Stable Diffusion works with images?