a book about AI that I'm reading has this explanation of the algo -

“By 2018, however, it was clear that YouTube's new reward function also had problems. A longer viewing time didn't necessarily mean that viewers were happy with the suggested videos—it often meant that they were appalled, outraged, or couldn't tear themselves away. It turned out that YouTube's algorithm was increasingly suggesting disturbing videos, conspiracy theories, and bigotry. As a former YouTube engineer noted, the problem seemed to be that videos like these do tend to make people watch more of them, even if the effect of watching them is terrible. In fact, the ideal YouTube users, as far as the AI is concerned, are the ones who have been sucked into a vortex of YouTube conspiracy videos and now spend their entire lives on YouTube. The AI is going to start suggesting whatever they're watching to other people so that more people will act like them. In early 2019, YouTube announced that it was going to change its reward function again, this time to recommend harmful videos less often. What will change? As of this writing, it remains to be seen.”

source [aiweirdness.com]
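
the mechanism is easier to see as a toy sketch: if the reward is just predicted watch time, whatever is stickiest wins, no matter how awful. here's a rough python illustration — every name, number, and the `flagged_borderline` label is invented for illustration, not anything from YouTube's actual system:

```python
# Toy sketch of a watch-time reward function. All names and numbers
# are hypothetical, invented for illustration -- not YouTube's code.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    predicted_watch_minutes: float  # model's estimate of how long a user watches
    flagged_borderline: bool        # hypothetical policy-classifier label

def watch_time_reward(video: Video) -> float:
    """Pre-2019-style objective: reward is just expected watch time."""
    return video.predicted_watch_minutes

def penalized_reward(video: Video, penalty: float = 0.2) -> float:
    """One plausible tweak: scale down flagged videos' scores so they
    get recommended less often, as the 2019 announcement implies."""
    score = video.predicted_watch_minutes
    return score * penalty if video.flagged_borderline else score

candidates = [
    Video("calm gardening tutorial", 4.0, False),
    Video("outrage-bait conspiracy rabbit hole", 11.0, True),
]

# Ranked purely by watch time, the conspiracy video tops the list...
print(max(candidates, key=watch_time_reward).title)
# ...but with the penalty applied, it no longer does.
print(max(candidates, key=penalized_reward).title)
```

the point of the toy: nothing in the first reward function knows or cares *why* someone keeps watching, which is exactly the failure mode the book describes.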

