The music industry’s worst fears materialized in 2023, and they sounded a lot like American rapper Drake. A song called “Heart on My Sleeve” appeared online, a convincing fake duet between Drake and The Weeknd.
It quickly racked up millions of streams before anyone could explain who made it or where it came from.
The track did not just go viral. It broke the illusion that anyone was still in control.
In the rush to respond, a new type of infrastructure is quietly emerging. It is not built to stop generative music altogether, but to make it traceable.
Generative music refers to songs created by artificial intelligence models, which use massive datasets to imitate the styles of real artists. Detection systems are now being embedded across the music pipeline, from the tools that train these AI models to the platforms where songs are uploaded, the databases that handle licensing rights, and the algorithms that recommend songs to listeners.
“I have seen AI-generated songs appearing across playlists, and honestly, it is getting out of hand,” said Anthony Keith Gakuru, a music curator based in Kigali.
“Fake artist names are created, their AI-generated tracks uploaded, and they collect royalties. This takes away from real artistes who are already sharing a limited pool of revenue. I am really hoping for solutions as AI keeps advancing so fast.”
The goal is not just to catch synthetic songs after they are released. The idea is to identify them early, tag them with metadata, and control how they move through the system.
Metadata refers to information embedded in the digital file, such as who made the song, what tools were used, and whether AI was involved.
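Conceptually, such provenance metadata is just a set of structured fields that travels with the audio file. A minimal sketch in Python, with illustrative field names only (real systems use their own schemas, such as ID3 tags or C2PA-style manifests):

```python
import json
from dataclasses import dataclass, asdict, field

@dataclass
class TrackProvenance:
    # Illustrative fields, not an industry standard.
    title: str
    creator: str
    ai_generated: bool
    tools_used: list = field(default_factory=list)

def to_metadata_json(p: TrackProvenance) -> str:
    """Serialize the provenance fields so they can be embedded alongside the audio."""
    return json.dumps(asdict(p), indent=2)

meta = TrackProvenance(
    title="Example Track",
    creator="unknown uploader",
    ai_generated=True,
    tools_used=["voice-cloning model"],
)
print(to_metadata_json(meta))
```

A platform's upload pipeline could then read these fields to decide how to label or recommend the track.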
Startups are building these detection tools directly into licensing workflows. Companies like YouTube and Deezer have developed internal systems to flag AI-generated audio as it is uploaded and influence how it appears in search results and recommendations.
Other companies, including Audible Magic, Pex, Rightsify, and SoundCloud, are expanding features that monitor, moderate, and label AI content across the industry. What is taking shape is a fragmented but growing network of companies treating AI detection as basic infrastructure needed to keep the music industry functioning fairly.
Some companies are focusing on tagging AI-generated music from the moment it is created. For instance, Vermillio and Musical AI are developing systems that scan finished tracks for signs of synthetic elements and automatically tag them in the metadata.
Vermillio’s TraceID system goes even further. It breaks songs into “stems,” which are the individual components of a track, such as vocal tone, melody, and lyrics.
TraceID can flag which parts of a song have been generated by AI, allowing rights holders to detect mimicry even when only small parts of a song borrow from original work.
This level of detection is more advanced than systems like YouTube's Content ID, which often fails to catch subtle imitations. Vermillio predicts that authenticated licensing powered by tools like TraceID could generate billions in new revenue by licensing AI-generated work before it is released, rather than policing it after it spreads.
The goal is not to shut down AI entirely but to make sure that creators get proper credit and compensation when their work influences AI-generated songs.
In some cases, companies are going even further back in the process by analyzing the datasets used to train AI models. By measuring how much these models borrow from specific artists, they hope to develop more precise licensing systems based on creative influence rather than legal battles after the fact.
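In principle, once per-artist influence on a model can be measured, licensing shares follow from simple normalization. A toy sketch, using hypothetical influence scores (the numbers and function are illustrative, not any company's actual method):

```python
def royalty_shares(influence: dict) -> dict:
    """Convert raw per-artist influence scores into proportional royalty shares."""
    total = sum(influence.values())
    if total == 0:
        return {artist: 0.0 for artist in influence}
    return {artist: score / total for artist, score in influence.items()}

# Hypothetical scores for how strongly a model drew on each artist's catalog.
scores = {"Artist A": 0.6, "Artist B": 0.3, "Artist C": 0.1}
shares = royalty_shares(scores)
for artist, share in shares.items():
    print(f"{artist}: {share:.0%}")
```

The hard part, of course, is producing the influence scores in the first place; the split itself is trivial once they exist.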
This approach recalls past disputes, such as the “Blurred Lines” lawsuit, where artists argued over how much one song borrowed from another. The difference now is that AI attribution could happen before release, avoiding years of courtroom fights.
Musical AI is also building a system that tracks provenance, which means tracing where data comes from and how it is used. “Attribution should not start when the song is done. It should start when the model begins learning,” said Sean Power, Musical AI’s cofounder. “We are trying to quantify creative influence, not just catch copies.”
Platforms like Deezer have already put some of these tools to work. Deezer’s internal system flags fully AI-generated tracks at the time of upload and limits their visibility in playlists and recommendations, especially if the content appears spam-like or designed to game the system.
As of April, Deezer's Chief Innovation Officer, Aurélien Hérault, said the system was flagging roughly 20 percent of new daily uploads as fully AI-generated, more than double the share from just a few months earlier. The tracks remain available on the platform but are not promoted. Hérault said Deezer soon plans to begin labeling these tracks directly for users.
“We are not against AI at all,” Hérault said. “But a lot of this content is being uploaded in bad faith, not to create art, but to exploit the system. That is why we are paying so much attention.”
Some groups are taking detection to the training data itself. One such effort is Spawning AI’s DNTP, or Do Not Train Protocol. This is an opt-out system that allows artists and rights holders to label their work as off-limits for AI model training.
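In spirit, an opt-out system means a model's data pipeline must check each work against a registry before training on it. A purely hypothetical sketch (the real Do Not Train Protocol defines its own registry and lookup mechanism, which is not shown here):

```python
# Hypothetical opt-out filter: drop any track whose rights holder
# has marked it as off-limits for AI model training.
def filter_training_set(tracks, opted_out_ids):
    return [t for t in tracks if t["id"] not in opted_out_ids]

catalog = [
    {"id": "track-001", "artist": "Artist A"},
    {"id": "track-002", "artist": "Artist B"},
]
opted_out = {"track-002"}  # Artist B has opted out of training
allowed = filter_training_set(catalog, opted_out)
print([t["id"] for t in allowed])
```

The design question the article raises is not the filter itself but who maintains the registry of opted-out works, and whether model builders honor it.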
While visual artists already have similar tools, the music industry is still catching up. There is little agreement yet on how to standardize consent and licensing across the global industry. Regulation may eventually force consistency, but for now, adoption remains scattered.
Support from major AI companies has been inconsistent, and some critics warn that the system will only succeed if governed by independent organizations.
“The opt-out protocol needs to be nonprofit, overseen by several actors, to be trusted,” said tech researcher and musician Mat Dryhurst. “No one should trust something as sensitive as consent to a single private company that could disappear or do much worse.”
For now, the race is on to build this layer of technology fast enough to keep up with AI's rapid evolution and protect human creativity's place in the music industry. In the meantime, if you want to approach it the way I do: whenever you see vocals that feel too perfect, or unexpected collaborations paired with that odd, yellow-tinted AI cover art popping up on your screen, just skip them.
I can see how AI might help creators break boundaries or invent entirely new genres, but I doubt I will ever want to listen to a machine sing. There is simply no soul.