Spotify on Thursday announced a series of updates to its AI policies designed to better showcase when AI is used in making music and to reduce spam.
The company said it will adopt an upcoming industry standard from DDEX for identifying and labeling AI use in music credits, and will soon roll out a new music spam filter to catch bad actors.
Under the DDEX standard, labels, distributors, and music partners submit standardized AI disclosures as part of a track's credits. These disclosures provide detailed information about how AI was used, for example whether it contributed to vocals, instrumentation, or post-production.

“We know that artists and producers are incorporating AI into different parts of their creative workflows,” Sam Duboff, global head of marketing and policy at Spotify, said at a press conference Wednesday. “This industry standard allows for more accurate and nuanced disclosures that don’t force songs into a false binary, where a song must be categorized as either AI or not AI at all,” he said.
As part of the same announcement, Spotify clarified its impersonation policy: unauthorized AI voice clones, deepfakes, and other forms of vocal replicas and spoofing are not permitted and will be removed from the platform.
While the DDEX standard is still in development, Spotify says it has already received commitments from 15 labels and distributors that plan to adopt it, and it believes its own move can signal to others that it’s time to follow.
Because AI tools make it easier than ever for anyone to release music, Spotify also has new plans to cut down on the spam that can result. This fall, the company will roll out a new music spam filter that identifies spam tactics, tags the offending tracks, and stops recommending them to users.

“AI has made it easier than ever for bad actors to upload large volumes of tracks, create duplicates, and use SEO tricks to manipulate search or recommendation systems,” Duboff said. “But we know that AI has made these issues increasingly sophisticated and has accelerated them, and that new types of mitigation are needed.”
The company said it will deploy the filter gradually to make sure it targets the right signals, and it will add more signals over time as the market evolves.

Relatedly, Spotify will work with distributors to address what are known as “profile mismatches,” a scheme in which someone fraudulently uploads music to another artist’s profile across streaming services. The company said it hopes to catch many of these attempts before the music is even released.
Despite the changes, Spotify executives stressed that they still support artists’ use of AI when it’s done responsibly. “We are not here to punish artists who use AI authentically and responsibly. Our hope is that AI production tools will allow artists to be more creative than ever,” Duboff said. “But we are here to stop the bad actors who are gaming the system. We can only benefit from all the good aspects of AI if we actively protect against the downsides,” he said.
Spotify’s updates arrive as the amount of AI-generated music rapidly increases across the industry. This summer, an AI-generated band called Velvet Sundown went viral on the service, prompting complaints from users about the lack of transparency around labeling AI tracks. Meanwhile, streaming rival Deezer recently shared that around 18% of the music uploaded to its service every day, or more than 20,000 tracks, is fully AI-generated.
While Spotify doesn’t share its own metrics on the issue, Duboff told reporters, “The reality is that all streaming services have roughly the same catalogue.”
“People tend to deliver music to all services,” he explained. He added that uploaded tracks don’t necessarily mean that anyone will listen to them or that AI music is making money. “We know that AI use is increasingly not binary, but rather a spectrum of how artists and producers use it.”