YouTube is killing the ability to comment on most videos featuring minors hosted on its platform, the company announced earlier today. In a lengthy blog post authored by its PR team, the company cited concerns over predatory behavior as the main reason for the move, confirming only a handful of exceptions.
A band-aid fix to a growing infection
Google’s subsidiary said the new commenting restrictions are launching alongside a revised content classifier meant to be twice as effective at identifying questionable comments, all without impacting creators’ monetization. As always, the firm shared basically no concrete details on the matter.
“Security through obscurity” is a highly contested concept in the industry, with many prominent experts arguing it simply doesn’t work. In the context of YouTube in particular, one doesn’t have to look far back to find the company promising sweeping changes to how it handles illegal, borderline, and otherwise questionable content, all thanks to the magic of artificial intelligence.
Yet not even AI managed to do much over the last 18 months, which is roughly when YouTube was hit with its last major advertiser boycott over extremist videos abusing the platform to recruit susceptible individuals.
YouTube’s nuclear approach to an issue that has apparently been growing out of its control comes only a week after a major advertiser pulled its spending from the platform over questionable videos of minors, mostly “prank” videos designed for Internet attention that ended up endangering children.
While the initial comment-killing wave is set to focus on videos featuring young minors, YouTube said content with older minors, as well as any other video type suspected of being capable of “attracting” that kind of attention, will be reviewed as well. Its guidelines and vague communications still leave plenty of room for such issues to be handled on a case-by-case basis, assuming they’re ever fully addressed, all without YouTube technically breaking its own terms of service and policies, which are growing more convoluted by the week.
More to come
More recently, YouTube’s efforts to combat questionable content have led it to shift some of its focus away from problematic audiences and toward the creators who attract them, even in cases where those creators could hardly be blamed for the existence of such communities based on the contents of their videos.
The unexpected strategy shift went as far as to cause a bizarre row between YouTube and filmmakers over thumbnails, namely the company’s insistence on replacing carefully crafted video preview images with AI-generated imagery. The project never moved past the experimental phase.
YouTube already reacted to that development by deleting millions of equally questionable comments left on such content, but Nestle, the company that prompted the response earlier this month, has yet to signal any intent to return to the world’s most popular video platform. Nestle pulled advertising across all of its major brands, including KitKat, Nescafe, Gerber, and Nestea, having ordered the change on a global level.
Moving forward, YouTube intends to work with a small number of channels featuring minors whose creators and teams are willing to actively moderate their comment sections rather than rely solely on the platform’s automated tools.
No specifics were given in this regard either, save for YouTube suggesting this status will be a highly volatile one. It hence remains unclear how the company would counter straightforward, and illegal, attempts by malicious creators to damage rivals’ engagement rates by organizing online brigades to spam their comment sections with content YouTube is likely to find questionable. Critics warn this is yet another serious problem with the platform that no amount of AI will solve for the time being.
The change itself has only just started rolling out and won’t be fully in effect on a global level until late spring, YouTube said.