
We’re swimming in AI slop. Here’s how to tell the difference.

AI-generated video slop is filling feeds. It’s time to learn how to tell if something is real.

Sora app icon displayed in AI folder on smartphone screen
Cheng Xin/Getty Images

If your feed isn’t already filled with AI-generated video slop, it’s only a matter of time.

Meta and OpenAI will make sure of it. Meta recently announced its endless slop-feed Vibes, made up entirely of AI-generated content: cats, dogs, and blobs. And that’s just in Mark Zuckerberg’s initial video post about it.

OpenAI’s new Sora app offers a different flavor of slop. Like TikTok, Sora has a For You page for vertically scrolling through content. But the scariest part of Sora is how real it looks. One feature, called Cameo, lets users make videos of themselves, their friends, and any public-facing profile that grants access. This means videos of Sam Altman hanging out with Charizard or grilling up Pikachu are making the rounds on social media. And, of course, Jake Paul videos are also starting to circulate.

It’s just the beginning, and the technology is only getting better. To help navigate it, we spoke with Hayden Field, senior AI reporter at The Verge. Field and Today, Explained co-host Sean Rameswaram discuss why these tech giants are doubling down on AI video and how to spot it; along the way, they even get fooled by one.

Below is an excerpt of the conversation, edited for length and clarity. There’s much more in the full podcast, so listen to Today, Explained wherever you get podcasts, including Apple Podcasts, Pandora, and Spotify.

What is Mark Zuckerberg trying to do with Vibes?

That is the million-dollar question. These companies, especially Meta right now, really want to keep us consuming AI-generated content and they really want to keep us on the platform.

I think it’s really just about Zuckerberg trying to make AI a bigger piece of the everyday person’s life and routine, getting people more used to it and also putting a signpost in the ground saying, “Hey, look, this is where the technology is at right now. It’s a lot better than it was when we saw Will Smith eating spaghetti.”

How did it get so much better so fast? Because yes, this is not Will Smith eating spaghetti.

AI now trains itself a lot of the time. It can get better and train itself at getting better. One of the big things standing in their way is really just compute. And all these companies are building data centers, making new deals every day. They’re really working on getting more compute, so that they can push the tech even more.


Let’s talk about what OpenAI is doing. They just released something called Sora 2. What is Sora?

Sora is their new app and it’s basically an endless scroll AI-generated video social media app. So you can think of it as an AI-generated TikTok in a way. But the craziest part, honestly, is that you can make videos of yourself and your friends too, if they give you permission. It’s called a Cameo and you record your own face moving side to side. You record your voice speaking a sequence of numbers and then the technology can parody you doing any number of things that you want.

So that’s kind of why it’s so different than Meta’s Vibes and why it feels different when you’re scrolling through it. You’re seeing videos of real people and they look real. I was scrolling through and seeing Sam Altman drinking a giant juice box or any number of other things. It looks like it’s really Sam Altman or it looks like it’s really Jake Paul.

How do you know whether what you’re seeing is real in an era when it’s getting harder to discern?

These tips I’m about to give you aren’t foolproof, but they will help a bit. If you watch something long enough, you’ll probably find one of the telltale signs that something’s AI-generated.

One of them is inconsistent lighting. It’s hard sometimes for AI to get the vibes of a place right. If there’s a bunch of lamps — maybe it’s really dark in one corner, maybe it doesn’t have the realistic quality of sunlight — that could be something you could pick up on. Another thing is unnatural facial expressions that just don’t seem quite right. Maybe someone’s smiling too big or they’re crying with their eyes too open. Another one is airbrushed skin, skin that looks too perfect. And then finally, background details that might disappear or morph as the video goes on. This is a big one.

Taylor Swift, actually — some of her promo for her new album apparently had a Ferris wheel in the background and the spokes kind of blurred as it moved.

Anything else out there that we should be looking for?

I just wish we had more rules about this stuff and how it should be disclosed. For example, OpenAI does have a safeguard: every video you download from Sora has a watermark, or at least most do. Some pro users can download one without a watermark.

Oh, cool, so if you pay them money, you can lose the watermark. Very nice.

But the other thing is I’ve seen a bunch of YouTube tutorials saying, “Here’s how to remove the Sora watermark.”

Do companies like OpenAI or Meta care if we can tell if this is real or not? Or is that exactly what they want?

They say they care. So I guess that’s all we can say right now. But it’s hard because by the very nature of technology like this, it’s going to be misused. So you just have to see if you can stem that misuse as much as possible, which is what they’re trying to do. But we’re going to have to wait and see how successful they are at that. And right now, if history is any guide, I’m a little concerned.
