DeepMind releases Lyria 2 music generation model
Google DeepMind has updated its Music AI Sandbox, adding features like Lyria 2 for high-fidelity audio and Lyria RealTime for real-time creation, enhancing collaboration and creativity for U.S. musicians.
Google DeepMind has announced updates to its Music AI Sandbox, enhancing its features and expanding access for musicians, producers, and songwriters in the U.S. The Music AI Sandbox, initially launched in 2023, is designed to integrate AI into the music creation process, allowing artists to explore new sounds and overcome creative blocks. The latest version includes Lyria 2, a music generation model that produces high-fidelity audio, and Lyria RealTime, which enables real-time music creation and performance. Key features of the Music AI Sandbox include tools for generating new musical ideas, extending existing pieces, and editing music with fine control over mood and style. The initiative emphasizes collaboration with musicians to ensure the tools are practical and beneficial. Artists have reported positive experiences using the Sandbox, noting its potential to inspire creativity and streamline the production process. Google aims to responsibly deploy these generative technologies while gathering feedback from the music community to refine their offerings.
- Google DeepMind has enhanced its Music AI Sandbox with new features and broader access for U.S. musicians.
- The updated Sandbox includes Lyria 2 for high-fidelity music generation and Lyria RealTime for interactive music creation.
- Key features allow users to generate, extend, and edit music, facilitating creative exploration.
- The initiative emphasizes collaboration with musicians to ensure the tools meet their needs.
- Artists have reported positive experiences, highlighting the Sandbox's potential to inspire creativity and aid in music production.
Related
YouTube in talks with record labels over AI music deal
YouTube is in talks with major record labels to license AI tools replicating artists' music. Some artists are wary of devaluation concerns. Negotiations aim to involve select artists for AI music generation.
Pushing the Frontiers of Audio Generation
Google DeepMind has advanced audio generation technology, enabling natural digital interactions and long-form dialogues. Their latest model improves efficiency and quality while emphasizing responsible AI development and future integration with other media.
CEO of AI Music Company Says People Don't Like Making Music
Mikey Shulman, CEO of Suno AI, believes music creation is often unenjoyable due to its challenges, aiming to make it accessible through AI, despite facing legal issues and criticism regarding authenticity.
YueAI – Create Professional Music with AI, No Musical Expertise Required
YueAI is an AI music creation platform that allows users to generate and customize professional-quality songs across various genres quickly, making music creation accessible and cost-effective for everyone.
Music Generation AI Models
AI music generation models are transforming production by simplifying sound creation. The market is projected to reach $2.8 billion by 2031, enhancing rather than replacing traditional artistry and fostering appreciation for live music.
- Many users express concern that AI-generated music may flood the market with low-quality content, overshadowing genuine artistic efforts.
- Some commenters appreciate AI tools for enhancing creativity and enabling those with physical limitations to create music.
- There is a philosophical debate about the value of art, questioning whether it is defined by the artist or the consumer.
- Several users criticize the lack of accessibility and hands-on experience with the AI tools, feeling that they are more about marketing than genuine innovation.
- Others highlight the importance of human touch and emotion in music, arguing that AI cannot replicate the unique qualities of live performances.
I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.
- Joanna Maciejewska
You could add music
“Music itself is going to become like running water or electricity. So take advantage of these last few years, because this will never happen again. Get ready for a lot of touring, because that's the only unique experience left.”
While Bowie had different reasoning for making that statement, it's interesting to think that with AI-generated music, his idea of "music like water or electricity" might finally come true.
Does one believe that the value of the art piece (be it music, paintings, film, or whatever) is created in the mind of the artist, or is it created in the mind of the consumer?
If you believe only in the former, AI art is an oxymoron and pointless. If you believe only in the latter, you're likely to rejoice at the explosion of new content and culture we can expect in the coming years.
As far as I can tell, though, most regular people think the truth is somewhere in between these two extremes, where both the creator's and the consumer's thoughts matter in unison. Culture is about where the two meet each other and help each other grow. But most of the arguments I've seen online seem to ignore or miss this dichotomy of views entirely, which unfortunately reduces the quality of the debate considerably.
No one wants to hear other people's AI songs because they lack meaning and novelty.
AI image and short-video generation can create novelty and interest. But when the medium requires more from the person, like reading a book or watching a movie, the level of AI acceptance goes down. We'll accept an AI-generated email or ad copy, but not an AI-generated playlist, and certainly not a deepfake of someone from reality. That's what people want from AI: a blending of real life into a fantasy generator, but no one is offering that yet.
The best use of Suno for me has been the ease with which you can generate diss tracks: I ask Gemini to write diss-track lyrics about specific topics, and then I have Suno generate the actual track. It's very cathartic when you're sitting at home in the dark because the power company continues to fail.
Anyway, I hope I can get access; I think it would be fun to vibe some new music. Although this UI looks severely limited in the capabilities it provides. Why aren't the people who build these tools innovating more? It would be cool if you could generate a song and then have it split into multiple tracks that you can remix and tweak independently. Maybe a section of a track is pretty good but you want to switch out a specific instrument. Maybe describe what kind of beats you want to the tool and have it generate multiple potential interpretations, which you can start to combine and build up into a proper track. I think ideally I'd be able to describe what kind of mood or vibe I'm going for, without having to worry about any of the music theory behind it, and the tool should generate what I want.
I've just recently re-discovered the joy of writing my own songs, and playing them with (actual) instruments. It's something I get immense pleasure from, and for once, I'm actually getting some earned traction. In another life, I may have been a musician, and it's something I fantasize about regularly.
With all these AI music-generation tools, the world is about to be flooded with a ton of low-effort, low-quality music. It's going to absolutely drown out anyone trying to make music honestly, and kill budding musicians in their crib.
I suppose this is the same existential crisis that other professions/skills are also going through now. The feeling of a loss of purpose, or a loss of a fantasy in learning a new skill and switching careers, is pretty devastating.
Lyria 2 is currently available to a limited number of trusted testers.
But have you ever attended live music shows? Have you ever 'felt' the music? Even someone singing at a local bar feels and hits different.
AI can never bring feelings. That will never change. Even science fiction agrees with that.
So bring all the AI you want everywhere; some things are irreplaceable by the electronic world.
The 2-3 clips I listened to in the article sounded awful (my own subjective opinion).
"Country of residence (this current phase of the experiment is only available to users based in the U.S. for now, but feel free to submit interest and stay tuned for updates): "
Soon, hiring people for commercial background music might be rare. Think AI for jingles, voiceovers, maybe even the models and visuals. Cafes can use AI-generated music too – in a way, the owner curates or "creates" it based on their taste.
But there are still interesting parts to human music making: the unpredictability and social side of live shows, for example. Maybe future music releases could even be interactive, letting listeners easily tweak tracks? Like this demo: https://glicol.org/demo#ontherun
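To make the "interactive release" idea concrete, here is a rough sketch (not how the linked Glicol demo works) of a track that ships as a renderer with a couple of listener-facing knobs instead of a fixed file; the synthesis, parameter names, and output files are all invented for the illustration.

```python
# Minimal sketch of a listener-tweakable release: the same short loop is
# re-rendered from two exposed parameters (tempo, brightness). Illustrative
# only; the pattern and synthesis are made up for this example.
import wave
import numpy as np

SR = 44100  # sample rate in Hz

def render_loop(bpm: float = 120.0, brightness: float = 0.5) -> np.ndarray:
    """Render one bar of a simple bass pulse whose tone the listener controls."""
    beat = 60.0 / bpm
    t = np.linspace(0, beat, int(SR * beat), endpoint=False)
    notes = [55.0, 55.0, 82.5, 73.4]  # a four-note pattern, in Hz
    bar = []
    for f in notes:
        # Saw-ish tone: summed harmonics, with 'brightness' scaling the upper ones.
        tone = sum((brightness ** (k - 1)) / k * np.sin(2 * np.pi * k * f * t)
                   for k in range(1, 8))
        tone *= np.exp(-4.0 * t / beat)  # simple decay envelope
        bar.append(tone)
    return np.concatenate(bar)

def save_wav(path: str, audio: np.ndarray) -> None:
    pcm = np.int16(audio / np.max(np.abs(audio)) * 32767 * 0.8)
    with wave.open(path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(SR)
        f.writeframes(pcm.tobytes())

# Two different "listener settings" of the same release.
save_wav("loop_mellow.wav", render_loop(bpm=110, brightness=0.3))
save_wav("loop_bright.wav", render_loop(bpm=128, brightness=0.8))
```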
Now imagine that, without mastering a specific instrument or skill, you can create the music of your own mindspace, which for me is rarely the music I hear, and often a deviation from what I do hear.
I'm sure this isn't quite what's being offered yet, but every time I grasp my instruments with my trademark touch of inevitable futility, I hope I make it to a time when I can produce what my lack of virtues presently prohibits. It's not the physical acrobatics or mathematical showcasing of great music that I want - it's the end result of the music itself.
Music is a cultural practice; this is just organised sound.
Maybe one day AIs will be able to participate in cultural practices like humans do, as sentient beings, but current generative AI models do not.
A machine doing any of this would not have caused a meltdown among musicians in the '80s. A punk rock band would not feel threatened by this, and neither would Prince.
The sad truth is that human output is so averaged out now that most of it will be replaced.
I don't think audio files are the right output for deep learning music models. It'd be more useful to pro musicians to describe some parameters for synths, or describe a MIDI bassline, or describe tunings for a plugin, and then have the model generate those, which can then be tweaked similar to how we now code with LLMs. But generating muddy, poorly mixed WAVs with purple-prose lyrics is only an interesting deep learning demo at this point, not an advancement in music itself.
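As a minimal sketch of the kind of editable artifact this comment describes (not anything DeepMind offers), a model could emit MIDI that a musician drags into a DAW and tweaks, unlike a flattened WAV. This example uses the mido library; the note pattern and instrument choice are invented for the illustration.

```python
# Emit a simple one-bar bassline as a standard MIDI file that any DAW can edit.
import mido
from mido import Message, MetaMessage, MidiFile, MidiTrack

TICKS = 480  # ticks per beat (mido's default resolution)

mid = MidiFile(ticks_per_beat=TICKS)
track = MidiTrack()
mid.tracks.append(track)

track.append(MetaMessage('set_tempo', tempo=mido.bpm2tempo(124), time=0))
track.append(Message('program_change', program=38, time=0))  # GM Synth Bass 1

# Four hits, each lasting half a beat; note numbers are arbitrary for the demo.
for note in (36, 36, 43, 41):
    track.append(Message('note_on', note=note, velocity=96, time=0))
    track.append(Message('note_off', note=note, velocity=0, time=TICKS // 2))

mid.save('bassline.mid')  # open in a DAW and tweak notes, timing, or the patch
```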
Prompt: Hazy, fractured UK Garage, Bedroom Recording, Distorted and melancholic. Instrumental. A blend of fractured drum patterns, vocal samples that have been manipulated and haunting ambient textures, featuring heavy sub-bass, distorted synths, sparse melodic fragments.
https://www.youtube.com/watch?v=cNog4qB-mHQ&t=5s&pp=2AEFkAIB
It's pretty fun :)
They still haven't learned, wow.
Someone in there really wants to drive Google to the ground.
"We made something really fancy"
"Oh you wanted to try it out for yourself instead of just reading our self-congratulatory tech demos article? How about fuck you!"
Yeah fuck you too Google, this is why your AI competitors are eating you alive, and good riddance
Everyone wants the Star Trek future, but we all forget that there is only one Captain Kirk and his small crew. Most of us will be sitting around at home doing laundry and cleaning the workplaces of the robots owned by large corporations.