It is a challenge, acknowledges Manuel Moussallam, Deezer’s head of research, but he says the company’s detection technology has so far maintained a low false positive rate, and research to better understand hybrid cases, in which AI is only partially used, is ongoing.
Yet others see such concerns as a distraction.
“There is a lobbying message to say ‘we can’t draw the line, and therefore we shouldn’t do anything’,” says David Hoffman, a professor at Duke University in North Carolina who studies the impact of AI-generated music on artists’ livelihoods.
He argues platforms should at least label fully AI-generated tracks and assess the scale of the remaining issue from there.
And listeners appear to want labels: in the Deezer–Ipsos poll, around 80% of respondents said AI-generated music should be clearly labelled, though views on filtering were more divided.
“Listeners deserve awareness,” says singer-songwriter Tift Merritt, who works with Hoffman as a practitioner-in-residence at Duke, citing the way we put nutritional labels on food or tell consumers whether it is organic.
Many speculate that what is really stopping Spotify from embracing labelling and filtering is economics.
Spotify is trying to optimise for platform growth, says Prey from Oxford. Keeping recommendation systems as “unencumbered and free to operate as possible” helps with that.
Detecting AI-generated content would add cost, Hoffman notes, and it may also be cheaper to serve up AI music.
