YouTube Bans Two Popular Channels for AI-Generated Fake Movie Trailers Watched by Millions


YouTube has shut down two major channels that were creating AI-generated fake movie trailers.

The channels, Screen Culture and KH Studio, had amassed more than 2 million subscribers and over a billion views combined before being removed. Visitors to the pages now see a message saying, “This page isn’t available. Sorry about that. Try searching for something else.”

Neither Screen Culture, based in India, nor KH Studio, based in Georgia, responded to requests for comment. Earlier this year, YouTube had already suspended ads on both channels after an investigation by Deadline highlighted the rise of AI-made fake trailers on the platform.

The channels briefly regained monetization after labeling videos as “fan trailer,” “parody,” or “concept trailer.” Those labels recently disappeared, however, raising concerns in the fan-made trailer community.

YouTube stated that the channels’ return to their old practices violated its spam and misleading-metadata rules, leading to their permanent termination.

Investigations revealed that Screen Culture combined official film footage with AI-generated images to make trailers that tricked many viewers.

Founder Nikhil P. Chaudhari said his team of a dozen editors used YouTube’s algorithm to their advantage by releasing trailers early and updating them frequently. By March, Screen Culture had created 23 versions of a trailer for The Fantastic Four: First Steps, some of which ranked higher than the official trailer in search results. Other examples included HBO’s Harry Potter series and Netflix’s Wednesday.

The investigation also uncovered that some Hollywood studios, including Warner Bros. Discovery and Sony, quietly ensured that ad revenue from these AI-heavy videos went to them instead of the creators.

Disney took a more direct approach, sending a cease-and-desist letter to Google last week, claiming that Google’s AI tools and services infringe Disney’s copyrights on a “massive scale.”

This move by YouTube signals that the platform is taking AI-generated content seriously, especially when it misleads viewers or infringes on copyrights. It raises questions about the future of AI in creative spaces and the responsibility platforms have to manage it.

It’s a tough but necessary step. AI can be a tool for creativity, but when it misleads millions of viewers, action is required. I’m curious what you think—should YouTube clamp down on AI content this strictly, or allow more freedom? Share your thoughts in the comments.
