

What two years of AI development can tell us about Sora

If you want to know the future of OpenAI’s latest tool, take a look at Midjourney and DALL-E 2.

A screenshot of a video generated by Sora, OpenAI’s new generative video model, showing sea creatures like fish and dolphins with legs, riding bicycles on top of an ocean. Sora/OpenAI
Kelsey Piper
Kelsey Piper is a contributing editor at Future Perfect, Vox’s effective altruism-inspired section on the world’s biggest challenges. She explores wide-ranging topics like climate change, artificial intelligence, vaccine development, and factory farms, and also writes the Future Perfect newsletter.

Remember when AI art generators became widely available in 2022 and suddenly the internet was full of uncanny pictures that were very cool but didn’t look quite right on close inspection? Get ready for that to happen again — but this time for video.

Last week, OpenAI released Sora, a generative AI model that produces videos based on a simple prompt. It’s not available to the public yet, but CEO Sam Altman showed off its capabilities by taking requests on X, formerly known as Twitter. Users replied with short prompts: “a monkey playing chess in a park,” or “a bicycle race on ocean with different animals as athletes.” It’s uncanny, mesmerizing, weird, beautiful — and prompting the usual cycle of commentary.

Some people are making strong claims about Sora’s negative effects, expecting a “wave of disinformation” — but while I (and experts) think future powerful AI systems pose really serious risks, claims that a specific model will bring the disinformation wave upon us have not held up so far.

Others are pointing at Sora’s many flaws as representing fundamental limitations of the technology — which was a mistake when people did it with image generator models and which, I suspect, will be a mistake again. As my colleague A.W. Ohlheiser pointed out, “just as DALL-E and ChatGPT improved over time, so could Sora.”

The predictions, both bullish and bearish, may yet pan out — but the conversation around Sora and generative AI would be more productive if people on all sides took into greater account all the ways in which we’ve been proven wrong these last couple of years.

What DALL-E 2 and Midjourney can teach us about Sora

Two years ago, OpenAI announced DALL-E 2, a model that could produce still images from a text prompt. The high-resolution fantastical images it produced were quickly all over social media, as were the takes on what to think of it: Real art? Fake art? A threat to artists? A tool for artists? A disinformation machine? Two years later, it’s worth a bit of a retrospective if we want our takes on Sora to age better.

DALL-E 2’s release was only a few months ahead of Midjourney and Stable Diffusion, two popular competitors. They each had their strengths and weaknesses. DALL-E 2 did more photorealistic pictures and adhered a little better to prompts; Midjourney was “artsier.” Collectively, they made AI art available at the click of a button to millions.

Much of the societal impact of generative AI then didn’t come directly from DALL-E 2, but from the wave of image models it ushered in. Likewise, we might expect that the important question about Sora isn’t just what Sora can do, but what its imitators and competitors will be able to do.

Many people thought that DALL-E and its competitors heralded a flood of deepfake propaganda and scams that’d threaten our democracy. While we may well see an effect like that someday, those warnings now seem premature. The effect of deepfakes on our democracy “always seems just around the corner,” analyst Peter Carlyon wrote in December, noting that most propaganda continues to be of a more boring kind — for example, remarks taken out of context, or images of one conflict mislabeled and shared as being from another.

Presumably at some point this will change, but there should be some humility about claims that Sora will be that change. It doesn’t take deepfakes to lie to people, and they remain an expensive way to do it. (AI generations are relatively cheap, but if you’re going for something specific and convincing, that’s much pricier. A tsunami of deepfakes implies a scale that spammers mostly can’t afford at the moment.)

But the place where it seems most crucial to me to remember the last two years of AI history is when I read criticisms of Sora’s images for being clumsy, stilted, inhuman, or obviously flawed. It’s true, they are. Sora “does not accurately model the physics of many basic interactions,” OpenAI’s research release acknowledges, adding that it has trouble with cause and effect, mixing up left and right, and following a trajectory.


Nearly identical criticisms were, of course, made of DALL-E 2 and Midjourney — at least at first. Early coverage of DALL-E 2 highlighted its incompetencies, from creating horrifying monstrosities whenever you asked for multiple characters in a scene to giving people claws instead of hands. AI experts argued that the inability of AI to handle “compositionality” — or instructions about how to compose the elements of a scene — reflected a shortcoming fundamental to the technology.

In practice, though, models got better at fulfilling highly specific prompts and users got better at prompting, and as a result it’s possible today to create images with complex and detailed scenes. Nearly all of the entertaining deficiencies were corrected in DALL-E 3, released last year, and in the latest updates to Midjourney. Today’s image generators can do hands and crowd scenes fine.

In the time between DALL-E 2 and Sora, AI image generation has gone from a party trick to a massive industry. Many of the things DALL-E 2 couldn’t do, DALL-E 3 could. And if DALL-E 3 couldn’t, a competitor often could. That’s the perspective that’s crucial to keep in mind when you read prognostication about Sora: you’re likely looking at the early steps of a major new capability, one that could be used for good or malicious purposes, and while it’s possible to oversell it, it’s also very easy to sell it short.

Instead of overcommitting to any particular perspective on what Sora and its successors will or won’t be able to do, it’s worth admitting some uncertainty about where this is headed. It’s much easier to say, “This technology will keep improving by leaps and bounds” than to guess the specifics of how that will play out.

A version of this story originally appeared in the Future Perfect newsletter. Sign up here!
