[Keeping Tempo With Music Biz] — Developing Ethical AI Tools To Support Songwriters & Composers: An Interview with DAACI’s Dr. Joe Lyske

Dr. Joe Lyske: I have been lucky enough to work as a media composer since the early 90s, writing first for the BBC on radio packages, and then for TV advertising, film and documentaries. When you get into the advertising music scene, you realize two things: firstly, you have no time! Six hours was our usual turnaround time for a 30-second ad. Secondly, you can’t ever afford writer’s block, as you just don’t have the luxury of being able to chew on an idea and wait for inspiration to strike. So I started to experiment with algorithms to help give me ideas. I would roll these algorithmic “dice” to give me inspiration with chord schemes and melodic ideas, all designed to help me think outside of my own creative box, so to speak.
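
The "algorithmic dice" idea can be illustrated with a toy sketch: weighted random choices over a pool of diatonic chords, rolled to suggest a short chord scheme. Everything here — the chord pool, the weights, the cadence rule — is an invented illustration of the general technique, not anything from Lyske's or DAACI's actual tools.

```python
import random

# Toy "algorithmic dice" for chord inspiration: weighted rolls over the
# diatonic chords of C major, ending on the tonic for a simple close.
# Chords and weights are illustrative choices only.
DIATONIC = ["C", "Dm", "Em", "F", "G", "Am"]
WEIGHTS = [4, 2, 1, 3, 3, 2]  # bias the dice toward I, IV and V

def roll_chord_scheme(length=4, seed=None):
    """Roll a chord scheme of `length` bars, resolving home at the end."""
    rng = random.Random(seed)
    scheme = rng.choices(DIATONIC, weights=WEIGHTS, k=length - 1)
    scheme.append("C")  # always resolve to the tonic
    return scheme

print(roll_chord_scheme(4, seed=42))
```

A seed makes a roll repeatable, so a promising scheme can be recalled; omitting it gives fresh dice each time.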

JL: Well, I was fortunate enough to study for my Masters in Composition for Screen with Jo Horovitz at the Royal College of Music. Jo and I used to discuss the craft of writing and its rich cultural history, but he knew I was looking for something more: the kind of “Why are we here?” question of music. “What is it? Why do we feel so moved by it? How can we use this information to be better at writing it?” The nature of these questions led me to realize an Arts PhD in music itself wasn’t going to give me the answers I was looking for, so I obtained a PhD in Computer Science and Electronic Engineering, specializing in AI and music.

During my studies, I was challenged to question my own reality on a scientific level. This took me down what, at the time, felt like a very dark hole. I actually gave everything up for a year at one point as, through academic discovery, I had come to the stark realization that music doesn’t, in fact, exist. “Music,” as interpreted by an individual, is a figment of our imagination and therefore scientifically unmeasurable. In short, everything we think we know about music – or worse, is taught to us about music – is wrong. It’s a very long and often challenging conversation I find myself still having with colleagues, musicians and scientists but in short, at the time, I felt like I had spent over 20 years of my life living a lie. This questioning, this rigor as I pulled everything apart, helped me with the breakthrough that has led to the success of DAACI. I realized that the true power of music lies in the minds of the musicians that create it, not the audio signals that those minds create. It’s the people, the creative minds, that I wanted to explore, define and enhance with DAACI.

JL: Music is a set of codes and conventions that communicate to specific audiences who are “in the know” about the language of their subculture. This is why parents tend to think their kids listen to noise, and why kids think their parents’ music is boring or twee. With DAACI, we can capture the aesthetic properties of how an individual composer communicates, thus allowing their mind to write through the AI and even collaborate with other AI composers’ minds. This gives DAACI the ability to communicate to different audiences in an authentic way. DAACI is also taught, not trained, so it does not consume compositions and recordings and then start generating. You have to tell it your intentions as a composer or analyst, teach it how you write and make aesthetic choices. Then, it writes “you” for you, and you know it genuinely is “you.”

JL: DAACI stands for Definable Aleatoric Artificial Composition Intelligence. Most people know what we mean when we say “Artificial Composition Intelligence;” it’s the “Definable Aleatoric” part that’s really interesting. Aleatoric means “chance” music, like the pizzicato strings when there are spiders all over the back of Indiana Jones in the opening of Raiders of the Lost Ark. “Definable Aleatoric” simply means we can define the direction of the “chanciness” of the output. Unlike other generative AI systems, we can be very specific with DAACI about what we want the composition to “feel” like or do for us, in order to tell a musical story across time. We can dial certain emotional connotations up and down, as well as be specific about every textural element within the output composition.
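
One way to picture "definable aleatoric" output is chance with a dial: a parameter that blends pure uniform randomness with a target distribution, so the "chanciness" can be steered toward a desired feel. The function, option names and weights below are invented for illustration; this is a sketch of the general concept, not DAACI's implementation.

```python
import random

def definable_choice(options, target_weights, intensity, rng=None):
    """Chance with a definable direction.

    intensity = 0.0 -> pure uniform chance over `options`;
    intensity = 1.0 -> always follow `target_weights`.
    """
    rng = rng or random.Random()
    n = len(options)
    total = sum(target_weights)
    # Blend a uniform distribution with the target distribution.
    weights = [
        (1 - intensity) * (1 / n) + intensity * (w / total)
        for w in target_weights
    ]
    return rng.choices(options, weights=weights, k=1)[0]

# Dial the output toward tense string textures (illustrative labels).
textures = ["sparse pizzicato", "tremolo swells", "sustained pads"]
tension = [5, 3, 1]
print(definable_choice(textures, tension, intensity=0.8, rng=random.Random(7)))
```

Raising `intensity` dials the named quality up without removing chance entirely; lowering it lets the dice wander more freely.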

JL: The Natural Series has evolved from the development of all the patented AI and musician-led core technologies we’ve been working on since this all began. Each element in the Series meets the needs of different audiences and use cases, from creators who want to enhance their process with tools and plugins like Natural Drums, to music rightsholders looking for solutions like Natural Edits.

Edits is all about using DAACI’s musical knowledge to edit works that are already written. This allows rightsholders to have works in their catalogs tracked, edited and repurposed by music supervisors, directors, and producers on the spot. So many times with music briefs, producers are sent a track and told, “Imagine the bit at 2 minutes 27 seconds and then add the bit at 7 minutes 33 seconds for the packshot!” This tends to get lost in translation. Natural Edits allows this to be done to the track on the spot so nothing needs to be left to the imagination. 

We’ve just released Natural Drums Beta to our new Open Beta Community, a place for people to learn more about our approach and test our tools. It’s our first VST plugin for standard digital audio workstations (DAWs) like Logic, GarageBand and Ableton that helps composers with their creative flow. The feedback we’re getting from community members is so important, and I’m pleased to say, amazingly positive! Writers are saying they like how they can very quickly brief the plugin and it then performs for them just like a sophisticated session drummer.

JL: These words are all related. You cannot have a responsible offering if it is not ethical, and it cannot be ethical unless you can trace the origins of the creative process and be transparent about the AI’s contribution, or what it has used to learn to write. DAACI is in effect a platform. It allows trained writers to both contribute to it and use it to write their own works. We can trace the origins of every note it generates to the original composer or analyst that taught it. This way we know how to pay composers for their contributions, as well as give users the security to know that their interaction with the AI is something they have full ownership of.

JL: Well, at DAACI we’re all composers and performing musicians, so we’ve never fallen into the mistake so many competitors have of creating generative music tools without considering rightsholders at the core of the company. We work with academic institutions, sample providers, musicians and performing rights organizations, and we’re designing tech that “bakes in” protection for rightsholders, as well as for users and contributors to the AI.

JL: This was a fantastic achievement for us at TMC2, DAACI’s venture studio, and the end of a very long road of fighting for intellectual property rights recognition. The law in the UK meant AI architectures, the training method, and the final trained artifacts were not patentable. This meant that UK investors in AI could never achieve the level of protection afforded to US investors. We changed all that.

On a more specific note, this ruling was on our Emotional Perception Patent that is in DAACI offerings. It is a significant step towards general AI in that it gives the AI the ability to perceive human emotion and human emotional responses to media and art. This means the AI can listen to a piece of music, or look at a picture or film, and then auto-tag the work, as well as find similar “feeling” works. This all happens in the same “space” so to speak. So, we can offer the AI a piece of film and ask, “What piece of music would go well with this? Or what piece would contrast this?” Or, we can give it a piece of music and say, “What film assets would make a good music video?” This will take the music sync industry to a whole new level.

JL: I see DAACI empowering a new era where musicians and artists create and monetize compositions – and even textural elements of compositions – in ways that have never been possible before now. Imagine putting the principles for how you write a string texture into an AI, then proving how, over time, this has contributed revenue to you through its use in hundreds of compositions. You could now sell the publishing rights to that textural element through an exchange without ever asking permission of an A&R person, or indeed a publisher. You don’t need to be trusted, because the income from the textural element is written in public ledgers that record you clearly as the owner. You don’t need permission because the exchange is not going through some other company that you have to impress to be recognized. This is the freedom composers deserve, and where DAACI has a clear roadmap to the future.