
Spotify has announced a comprehensive set of measures to address the growing misuse of generative AI on its platform, strengthening protections for artists and ensuring transparency for listeners.
The company revealed that it had already removed more than 75 million spam tracks in the past year and will now roll out stricter policies and new technologies to tackle challenges such as voice cloning, fraudulent uploads, and music spam.
Among the initiatives is a reinforced impersonation policy, which prohibits vocal deepfakes and any other use of an artist’s voice without their explicit consent. Spotify is also collaborating with distributors to prevent fraudulent content from being linked to legitimate artist profiles and is expanding resources to resolve content mismatch cases.
To curb platform abuse, Spotify is preparing to launch a new music spam filter this autumn. The system will detect and reduce the visibility of accounts involved in bulk uploads, duplicates, artificially short tracks, and other exploitative tactics that threaten royalty distribution.
In parallel, Spotify will support a new industry-standard AI disclosure system developed by DDEX. This will allow labels and rights holders to specify how AI was used in creating a track—whether in vocals, instrumentation, or post-production—with disclosures displayed on the platform once submissions begin.
The company emphasized that these measures are designed to safeguard royalties, protect artist identities, and increase transparency, rather than penalize responsible AI use. To encourage widespread adoption, Spotify is partnering with leading distributors including DistroKid, CD Baby, Believe, EMPIRE, and FUGA.
While acknowledging that AI is reshaping music creation, Spotify stressed that its mission remains clear: to ensure fairness for artists and preserve listener trust, treating all music equally regardless of the tools used in its production.