Google has responded to reports that it benefited from adverts appearing on YouTube channels featuring child abuse, closing the Toy Freaks channel and announcing a subsequent investigation of the platform.
During a year in which Google has already responded to advertisers' concerns over brand safety, after adverts were found running against terrorist propaganda videos without the brands' knowledge, it is now defending itself against further claims that the same has happened on channels featuring children.
The channel, Toy Freaks, started two years ago by Greg Chism and featuring his daughters in a variety of situations, had 8.53 million subscribers when it was closed, making it one of YouTube's 100 most-viewed channels.
Google released a statement following The Times' claim on Saturday that the channel was one that benefitted from featuring children in 'abusive' situations.
“We take child safety extremely seriously and have clear policies against child endangerment. We recently tightened the enforcement of these policies to tackle content featuring minors where we receive signals that cause concern. It’s not always clear that the uploader of the content intends to break our rules, but we may still remove their videos to help protect viewers, uploaders and children. We’ve terminated the Toy Freaks channel for violation of our policies. We will be conducting a broader review of associated content in conjunction with expert Trusted Flaggers,” the statement read.
Google also responded to The Drum's questions about what it planned to do to ensure further examples of child exploitation were not found on the platform.
Asked how quickly it moved upon discovering Toy Freaks' violations, a spokesman said: "As soon as we were notified, we took action." They also highlighted that the company has been developing machine learning capabilities to determine which channels violate its policies.
“We have strict policies against child endangerment on YouTube. We cannot always speak to the intent of families uploading content, but sometimes they may cross a line and violate our Community Guidelines even if they didn’t intend to do anything harmful. In these instances, we may still remove their videos to help protect the children or viewers. We are going to do a broader review of content such as these with expert Trusted Flaggers,” they continued to explain.
On how YouTube was working with advertisers to avoid future brand safety concerns, the spokesman said the company had "ramped up" its training to educate clients and agencies on the safety controls that had been introduced.
“In any situation whereby a brand’s ad runs against content they feel is inconsistent with their values, we work with them to review their current settings and put additional exclusions in place. We continue to be transparent on the process we’ve made with our machine learning algorithms and our dedication to making sure they are reaching the right audience on YouTube,” it was stated.
In August, YouTube revealed that it was planning to introduce more advertising filters to block ads from running against content featuring nudity, violence or political satire.
YouTube is set to host its annual BrandCast event in London later this week.