
YouTube The Enforcer Removed Millions Of Videos & Comments In Q3 2018

YouTube has confirmed the removal of millions of comments and videos in the third quarter of 2018 due to what the company refers to as breaches of its community guidelines. In total, the video-sharing site removed 7.8 million videos and 224 million comments between July and September.

In terms of the videos specifically, and in spite of headlines often focusing on the more extreme side of YouTube, the company was quick to point out that the majority of videos taken down in the period did not involve violence, extremism, or child safety issues. Instead, YouTube states that in September alone, more than 80-percent of the videos that were taken down, and 90-percent of the channels that had videos removed, revolved around “spam or adult content.”

Speed matters and YouTube wants you to know it’s acting fast

YouTube, among others, has been under increasing pressure from various agencies and interest groups to ensure that it acts swiftly when removing content that’s deemed inappropriate, and much of today’s announcement focused on this point, with YouTube keen to explain that it is not only acting faster than ever before, but that the content in question is reaching fewer users. For example, YouTube has reiterated that its approach to content management is based on a combination of real people and machines. In terms of the latter, YouTube explains machine detection accounted for 81-percent of the videos that were flagged as ‘spam or adult,’ and of that 81-percent, 74.5-percent had been removed before receiving a single view. While not quite to the same level, even the more serious content is apparently being taken down at speed, with YouTube stating that in September “well over 90-percent” of the content taken down for infringing either its violent extremism or child safety guidelines was removed before receiving ten views on the site.
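For a sense of scale, here is a rough back-of-the-envelope calculation based on the figures YouTube cites. The report itself does not break out these derived totals, and it is assumed here that the 81-percent machine-detection figure applies across all 7.8 million removals, so the results are illustrative approximations rather than official numbers.

```python
# Rough, illustrative math using the figures YouTube cites.
# Assumption: the 81-percent machine-detection rate applies across
# all 7.8 million Q3 removals (the report's wording is ambiguous).

total_removed = 7_800_000    # videos removed in Q3 2018
machine_share = 0.81         # share first flagged by machine detection
zero_view_share = 0.745      # share of machine-flagged videos removed with no views

machine_flagged = total_removed * machine_share
removed_unseen = machine_flagged * zero_view_share

print(f"Machine-flagged removals: ~{machine_flagged:,.0f}")  # ~6,318,000
print(f"Removed before any view:  ~{removed_unseen:,.0f}")   # ~4,706,910
```

In other words, on these assumptions, somewhere in the region of 4.7 million videos were caught and removed before a single person saw them.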

YouTube also cracking down on comments

YouTube was equally keen to highlight its efforts to ensure the service remains a safe place in terms of comments, which in itself is likely to be a tougher job than video monitoring, as YouTube notes the 224 million comments that were taken down represent just “a fraction” of the comments posted to the site during any given quarter. In an almost identical sentiment to that made for videos, YouTube points out that the “majority” of comments taken down had nothing to do with offense or extremism, but were instead spam-based. The company says the crackdown on spam is now having a positive effect on the overall community, citing that daily users are 11-percent more likely to comment than they were a year before.

Spam proving to be an issue

In an effort to highlight that extreme videos and comments are not as prevalent as some might expect, YouTube is clearly drawing attention to the spam problem it has, as in both instances the vast majority of the removed content was spam-based. Nor does this issue seem to be confined to YouTube, with a number of Google services seemingly suffering in the same respect. Google+, for example, is notorious for spam, and on that platform the problem has grown worse as Google has lessened the support it provides. That’s likely to remain a pressing issue for the few months the service remains operational, and one that’s likely to become even more visible as users leave the platform ahead of its impending closure.

Unlike Google+, Google and YouTube do not have the luxury of allowing spam to persist on the video-sharing platform, and similar to the approach taken with videos, YouTube once again explains it relies heavily on a combination of human and machine intervention to identify “bad actor” comments. In comparison to videos, YouTube has also turned to a form of user-regulation, noting it has introduced new tools that allow content creators to help suppress bad comments; tools which YouTube states are now used by more than one million creators.

Context remains an issue for YouTube

One of the biggest issues YouTube faces with its monitoring approach, whether for videos or comments, is identifying the more subtle instances where something is deemed inappropriate and, by the same token, avoiding false positives, where content is initially, but wrongly, deemed bad. This is made more of an issue by the company’s dependence on machines, as while YouTube is quick to point out how much content machines have flagged, those same figures show the far smaller impact the more contextually accurate human intervention is having. For example, while 7.8 million videos were removed during the quarter, well over six million of those videos were flagged automatically, with fewer than 1.5 million flagged without the help of machines.

YouTube has only recently started providing detailed information on its removal figures, so only limited comparisons with previous quarters can be made. For what it’s worth, in the previous quarter (April through June, 2018), YouTube removed just over 7 million videos, and fewer than one million of them were flagged without the help of automatic identification. The numbers look even worse once the end user is removed from the equation: while fewer than 1.5 million videos were flagged by humans during the third quarter, over 500,000 of those removals were based on user identification, bringing the total identified by the company’s “individual trusted flaggers” down to less than one million.
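Put another way, the breakdown by flagging source can be sketched out as follows. The counts are approximations taken from the bounds quoted above (“well over six million,” “over 500,000,” and so on), so the derived shares are illustrative rather than official.

```python
# Approximate Q3 2018 removal breakdown by flagging source,
# reconstructed from the rounded bounds quoted in the article.

total_removed = 7_800_000                       # all videos removed in Q3 2018
auto_flagged = 6_300_000                        # "well over six million" (approximate)
human_flagged = total_removed - auto_flagged    # "less than 1.5 million"
user_flagged = 500_000                          # "over 500,000" (approximate)
trusted_flagged = human_flagged - user_flagged  # trusted flaggers: "less than one million"

for label, count in [("Automatic", auto_flagged),
                     ("Users", user_flagged),
                     ("Trusted flaggers", trusted_flagged)]:
    print(f"{label:>16}: {count:>9,} ({count / total_removed:.1%})")
# Output (approximate):
#        Automatic: 6,300,000 (80.8%)
#            Users:   500,000 (6.4%)
# Trusted flaggers: 1,000,000 (12.8%)
```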

For some, numbers like these might be enough to reignite the debate on whether enough of the right content is being taken down, and whether the figures cited by YouTube, especially those that point heavily to spam being the main problem, are an accurate reflection of the content that needs to be removed or simply a product of the identification methodology in use. After all, while spam is an annoying issue, it is not the main concern when it comes to unsavory content on a family-friendly platform like YouTube, nor the main reason YouTube is now taking more proactive measures. In the end, while almost 1.5 million videos were removed due to human intervention, that still means the 500,000-plus videos spotted by users were seen by users before being removed.