YouTube is preparing an update to its policies to crack down on creators' ability to generate revenue from "inauthentic" content, including mass-produced videos and other types of repetitive content.
On July 15th, the company will update its YouTube Partner Program (YPP) monetization policies, with more detailed guidelines about what type of content can earn creators money and what cannot.
The exact policy language has not yet been released, but a page on YouTube's Help documentation explains that creators have always been required to upload "original" and "authentic" content. According to the update, the new language will help creators better understand what "inauthentic" content looks like today.
Some YouTube creators were concerned that the update would limit their ability to monetize certain types of videos, such as reaction videos or videos featuring clips, but a post from YouTube's editorial and creator liaison, Rene Ritchie, says that's not the case.
In a video update posted Tuesday, Ritchie said the change is a "minor update" to YouTube's long-standing YPP policies, designed to better identify when content is mass-produced or repetitive.
Ritchie added that this type of content has been ineligible for monetization for years, as it is content that viewers often consider spam.
What Ritchie didn't address, however, is how much easier it has become to create such videos these days.
With the rise of AI technology, YouTube has been flooded with AI slop, a term that refers to low-quality media or content made with generative AI technology. For example, thanks to text-to-video AI tools, it is now common to find AI voiceovers overlaid on photos, video clips, or other repurposed content. Some channels filled with AI music have millions of subscribers. Fake, AI-generated videos about news events, such as the Diddy trial, have racked up millions of views.
In another example, a true crime murder series on YouTube that went viral turned out to be entirely AI-generated, 404 Media reported earlier this year. Even the likeness of YouTube CEO Neal Mohan was used in an AI-generated phishing scam on the site, despite YouTube having tools in place that let users report deepfake videos.
YouTube may downplay the coming changes as a "minor" update or clarification, but the reality is that this type of content is growing, and its creators are profiting from it. It's not surprising, then, that the company wants a clear policy in place that would allow it to enact mass bans of AI slop creators from YPP.