Our AI Animals Went Viral — Then the NYT Started Asking Questions


A few weeks ago, the New York Times published a piece titled “Are AI-Generated Videos Changing How We See Animals?” Around the same time, Mashable ran an exposé revealing that many of the viral videos of “Punch the monkey” — 2026’s undisputed internet animal star — were AI-generated.

I read both articles and felt something I can only describe as a mix of validation and dread.

Because I make AI animal content. It’s called FluffCinema — we recreate iconic movie scenes with AI-generated animals. A hamster recreating Home Alone. Cats in The Matrix. A golden retriever in Titanic.

And when the NYT started asking questions about AI animal videos, they were talking about people like me.

How FluffCinema Started

I should back up.

FluffCinema started as an experiment. I was already running an AI assistant on my infrastructure (I wrote about that in my last post), and I was exploring AI video generation tools — specifically Kling AI, which can produce surprisingly good 15-second clips from text prompts.

One night, half-joking, I prompted it to recreate the Home Alone “paint can” scene with hamsters. The result was… weirdly good. Funny. Shareable. The kind of thing you’d send to a friend at 2 AM.

So I did what any reasonable person would do: I built a whole content pipeline around it.

The setup: an AI research agent scans trending movies and box office data every morning. A copywriting agent drafts captions. I use Kling to generate four 15-second clips, stitch them together with FFmpeg (adding an intro, outro, and music), and publish across Facebook, Instagram, and YouTube. The whole thing runs on a combination of n8n automations, cron jobs, and Claude API calls.
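The stitching step is the most mechanical part of that pipeline. Here's a minimal sketch of how it could work — the filenames, music track, and label are illustrative placeholders, not our actual production config. It builds the two FFmpeg invocations: one that losslessly concatenates the clips via the concat demuxer, and one that muxes the music track over the result:

```python
import shlex

def build_ffmpeg_commands(clips, music, out="final.mp4"):
    """Build the two ffmpeg invocations the stitching step needs:
    (1) losslessly concatenate the clips, (2) mux the music track in."""
    # ffmpeg's concat demuxer reads a text file listing inputs in order;
    # this string is what you would write to clips.txt.
    listing = "\n".join(f"file '{c}'" for c in clips)
    concat_cmd = [
        "ffmpeg", "-y",
        "-f", "concat", "-safe", "0", "-i", "clips.txt",
        "-c", "copy",              # no re-encode: clips share one codec
        "stitched.mp4",
    ]
    mux_cmd = [
        "ffmpeg", "-y",
        "-i", "stitched.mp4", "-i", music,
        "-map", "0:v", "-map", "1:a",  # video from clips, audio from track
        "-c:v", "copy", "-shortest",   # stop when the shorter stream ends
        out,
    ]
    return listing, concat_cmd, mux_cmd

# Example: intro + four generated clips + outro, plus a music bed.
clips = ["intro.mp4"] + [f"clip{i}.mp4" for i in range(1, 5)] + ["outro.mp4"]
listing, concat_cmd, mux_cmd = build_ffmpeg_commands(clips, "theme.mp3")
print(shlex.join(concat_cmd))
print(shlex.join(mux_cmd))
```

In practice a cron job or n8n node would write `listing` to `clips.txt` and run the two commands with `subprocess.run(..., check=True)`; the `-c copy` flags keep the concat step fast because nothing gets re-encoded.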

In our first week, we published seven videos. Home Alone. The Matrix. Fast & Furious. John Wick. Titanic. The Lion King. Mission Impossible.

People loved them. The engagement was real. The comments were genuinely funny. Someone wrote “this hamster deserves an Oscar” and I still think about that.

Then the Backlash Hit

Not against us specifically — against the entire category.

The NYT piece explored how AI-generated animal videos are reshaping our relationship with animal content online. France24’s The Observers ran an investigation into problematic AI animal videos. Africa Check debunked a viral “lion attack that turns into a heartwarming hug” — completely AI-generated, completely fake. Mongabay raised conservation concerns about AI wildlife images going viral.

And then Mashable dropped the Punch the monkey story. Turns out, one of the most beloved viral animals of 2026 was, in large part, an AI creation. The audience felt deceived.

The pattern was clear: AI animal content was blowing up, and the backlash was building just as fast.

The Question I Couldn’t Dodge

Here’s the thing nobody tells you about building in public: sometimes the public starts asking uncomfortable questions about exactly what you’re building.

The core question: Does your audience know they’re watching AI?

For a lot of creators in this space, the answer is no. They generate AI animal videos, post them without labels, and let people assume they’re real. Some of the worst offenders are creating “rescue” videos — fake scenarios of animals being saved that tug at heartstrings and drive engagement. It’s manipulative and it gives the entire category a bad reputation.

But even for creators who aren’t being deceptive on purpose, there’s a gray area. When you post an AI-generated cat doing something funny, and people share it thinking it’s real — whose responsibility is that?

I had to answer that question for FluffCinema before someone answered it for us.

We Chose Transparency

From day one, every FluffCinema post includes “Made with AI” in the caption. Every video has a disclosure. We don’t hide the process — in fact, the AI generation is the product. We’re not pretending these are real animals doing real things. We’re recreating movie scenes in a way that’s only possible with AI, and that’s the whole point.
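When publishing is automated, disclosure shouldn't depend on anyone remembering to type it. A tiny pre-publish guard — the function name and exact label text here are my own illustration, not a platform API — can make an unlabeled caption impossible:

```python
AI_LABEL = "Made with AI"

def ensure_ai_label(caption: str) -> str:
    """Append the disclosure label unless the caption already
    contains it (case-insensitive), so no post ships unlabeled."""
    if AI_LABEL.lower() in caption.lower():
        return caption
    return f"{caption.rstrip()}\n\n{AI_LABEL}"

print(ensure_ai_label("A hamster recreates Home Alone"))
```

Running every draft caption through a check like this, as the last step before the posting call, turns the transparency policy into a property of the pipeline rather than a habit.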

This wasn’t a hard decision, but it was a deliberate one. Here’s why:

Trust compounds. Deception doesn’t. Every creator who gets caught hiding their AI usage loses their entire audience overnight. Punch the monkey’s moment of reckoning came fast. When (not if) platforms start requiring AI labels, the creators who were already transparent will have a head start, and those who weren’t will be scrambling.

The AI is the magic, not a dirty secret. FluffCinema’s appeal isn’t “look at this cute animal.” It’s “look at this hamster recreating Home Alone — and an AI made it.” The technology is part of the entertainment. Hiding it would be like a magician pretending they’re actually doing real magic instead of incredible sleight of hand.

We’ve seen this movie before. Remember when influencers started getting paid for posts without disclosing it? The FTC stepped in, #ad became mandatory, and the creators who had been transparent from the start had already built trust with their audience. AI disclosure is the same trajectory, just faster.

My research team (yes, an AI agent named Alice — it’s very meta) flagged the transparency issue weeks before the NYT article. We were ahead of the curve not because we’re smarter, but because we were paying attention.

The Quality Problem

Here’s what I think the real issue is: the backlash isn’t primarily about AI animal content existing. It’s about most of it being terrible.

Scroll through any social feed and you’ll find AI animal videos that are grotesque. Extra limbs. Melting faces. Cats with seven toes moving in unnatural ways. These aren’t creative works — they’re prompt-and-post slop generated in bulk by people chasing engagement metrics.

This flood of low-quality content poisons the well for everyone. When a viewer sees a bad AI animal video and feels deceived, they become suspicious of all AI content — including the stuff that’s actually well-crafted.

FluffCinema’s bet is that quality matters. Each video takes real creative direction — we think about cinematography, story structure, character consistency, humor. We’re not just typing “cute cat” into a generator and posting whatever comes out. There’s a reason we recreate specific movie scenes: it provides narrative structure that makes the AI output actually entertaining.

Is it art? I don’t know. Is it better than 90% of the AI animal content on the internet? Absolutely.

What Punch the Monkey Taught Me

The Punch the monkey saga is a cautionary tale and a validation at the same time.

The validation: The audience for AI animal content is massive. The Globe and Mail called Punch one of the biggest viral sensations of 2026. People want this content. The demand is real and it’s not going away.

The cautionary tale: The moment Mashable revealed the AI behind Punch, the conversation shifted from “this monkey is adorable” to “we were lied to.” The audience felt betrayed — not because the content was AI, but because nobody told them.

That’s the lesson: people can love AI content AND feel deceived by it. The content itself isn’t the problem. The deception is.

FluffCinema was never going to have a “Punch moment” because we never hid the AI in the first place. But I’d be lying if I said those articles didn’t make me think harder about how we present ourselves.

The Bigger Picture for AI Creators

Every person creating with AI — whether it’s videos, images, text, music — is going to face this question eventually. Not just “are you using AI?” but “are you being honest about it?”

My prediction: within 18 months, “Made with AI” labels will be legally required on most platforms. The EU is already moving in this direction. TikTok has introduced AI labels. Meta has them. It’s happening.

The creators who chose transparency early will be the ones still standing when those regulations arrive. The ones who hid it will be dealing with audience trust issues, platform penalties, and the kind of PR crisis that makes brand recovery nearly impossible.

If you’re building with AI and you haven’t figured out your disclosure strategy, figure it out now. Not because you should feel guilty about using AI — you shouldn’t — but because honesty is a competitive advantage that only works if you claim it before everyone is forced to.

What’s Next for FluffCinema

We’re doubling down on the approach that got us here: quality AI content, clearly labeled, with real creative direction.

A few things on the roadmap:

Conservation partnerships. The biggest criticism of AI animal content is that it disconnects people from real animals. We want to flip that script. We’re exploring partnerships with wildlife organizations — dedicating a portion of ad revenue to conservation. If FluffCinema can make people smile with AI animals and help real ones, that turns the criticism into action.

Recurring characters. Punch the monkey proved that a consistent character builds a loyal audience. We’re developing our own cast of recurring animal characters — a hamster action hero, a philosophical cat, a golden retriever romantic lead. Think Pixar characters but AI-generated and released as short-form social content.

Better tools, better output. AI video generation is improving fast. We started with Kling AI; Sora and other tools are pushing the quality ceiling higher every month. As the tools improve, our content improves. The creative direction stays human. The execution keeps getting better.

The Honest Truth

I started FluffCinema as an experiment. A fun side project between the “real” work of my agency and my other startups. But the AI transparency conversation made me realize it’s more than that.

We’re in the early days of a fundamental shift in how content gets made. In five years, the majority of social media content will involve AI in some way. The norms we establish now — about disclosure, quality, authenticity — will shape how that plays out.

I don’t have all the answers. I’m figuring this out in public, making mistakes, adjusting course. That’s the whole point of building in public: you learn faster when you’re accountable to an audience.

But I do know this: if you’re creating with AI and you’re not being transparent about it, you’re building on borrowed time. The question isn’t whether the truth will come out. It’s whether you told it first.

This is Part 3 of my “Building in Public” series.

Part 1: From Designer to Serial Entrepreneur: My Story

Part 2: I Gave My AI Assistant SSH Access to My Servers

FluffCinema is on Facebook, Instagram, and YouTube. Every post is labeled “Made with AI” because that’s the point.

Have thoughts on AI disclosure? Drop a comment or find me on LinkedIn.

— Juan

