
Finally, a realistic roadmap for getting AI companies in check

It’s time for AI regulators to move fast and break things.

A smartphone screen shows GitHub’s Copilot tool, against a computer screen showing the AI GPT-4.
CFOTO/Future Publishing via Getty Images
Sigal Samuel
Sigal Samuel is a senior reporter for Vox’s Future Perfect. She writes primarily about the future of consciousness, tracking advances in artificial intelligence and neuroscience and their staggering ethical implications. Before joining Vox, Sigal was the religion editor at the Atlantic.

New AI systems are coming at us so fast and furious that it might seem like there’s nothing we can do to stop them long enough to make sure they’re safe.

But that’s not true. There are concrete things regulators can do right now to prevent tech companies from releasing risky systems.

In a new report, the AI Now Institute — a research center studying the social implications of artificial intelligence — offers a roadmap that specifies exactly which steps policymakers can take. It’s refreshingly pragmatic and actionable, thanks to the government experience of authors Amba Kak and Sarah Myers West. Both former advisers to Federal Trade Commission chair Lina Khan, they focus on what regulators can realistically do today.

The big argument is that if we want to curb AI harms, we need to curb the concentration of power in Big Tech.

To build state-of-the-art AI systems, you need resources — a gargantuan trove of data, a huge amount of computing power — and only a few companies currently have those resources. These companies amass millions that they use to lobby government; they also become “too big to fail,” with even governments growing dependent on them for services.

So we get a situation where a few companies get to set the terms for everyone: They can build hugely consequential AI systems and then release them how and when they want, with very little accountability.

“A handful of private actors have accrued power and resources that rival nation-states while developing and evangelizing artificial intelligence as critical social infrastructure,” the report notes.

What the authors are highlighting is the hidden-in-plain-sight absurdity of how much power we’ve unwittingly ceded to a few actors that are not democratically elected.

When you think about the risks of systems like ChatGPT and GPT-4-powered Bing — like the risk of spreading disinformation that can fracture democratic society — it’s wild that companies like OpenAI and Microsoft have been able to release these systems at their own discretion. OpenAI’s mission, for example, is “to ensure that artificial general intelligence benefits all of humanity” — but so far, the company, not the public, has gotten to define what benefiting all of humanity entails.

The report says it’s past time to claw back power from the companies, and it recommends some strategies for doing just that. Let’s break them down.


Concrete strategies for gaining control of AI

One of the absurdities of the current situation is that when AI systems produce harm, it falls to researchers, investigative journalists, and the public to document the harms and push for change. But that means society is always carrying a heavy burden and scrambling to play catch-up after the fact.

So the report’s top recommendation is to create policies that place the burden on the companies themselves to demonstrate that they’re not doing harm. Just as a drugmaker has to prove to the FDA that a new medication is safe enough to go to market, tech companies should have to prove that their AI systems are safe before they’re released.

That would be a meaningful improvement over existing efforts to better the AI landscape, like the burgeoning industry in “audits,” where third-party evaluators peer under the hood to get transparency into how an algorithmic system works and root out bias or safety issues. It’s a good step, but the report says it shouldn’t be the primary policy response, because it tricks us into thinking of “bias” as a purely technical problem with a purely technical solution.

But bias is also about how AI is used in the real world. Take facial recognition. “It is not social progress to make black people equally visible to software that will inevitably be further weaponized against us,” Zoé Samudzi noted in 2019.

Here, again, the report reminds us of something that should be obvious but so often gets overlooked. Instead of taking an AI tool as a given and asking how we can make it fairer, we should start with the question: Should this AI tool even exist? In some cases, the answer will be no, and then the right response is not an audit, but a moratorium or a ban. For example, pseudoscience-based “emotion recognition” or “algorithmic gaydar” tech should not be deployed, period.

The tech industry is nimble, often switching tactics to suit its goals. Sometimes it goes from resisting regulation to claiming to support it, as we saw when it faced a chorus calling for bans on facial recognition. Companies like Microsoft supported soft moves that served to preempt bolder reform; they prescribed auditing the tech, a much weaker stance than banning police use of it altogether.

So, the report says, regulators need to keep their eyes peeled for moves like this and be ready to pivot if their approaches get co-opted or hollowed out by industry.

Regulators also need to get creative, using different tools in the policy toolbox to gain control of AI, even if those tools aren’t usually used together.

When people talk about “AI policy,” they sometimes think of it as distinct from other policy areas like data privacy. But “AI” is just a composite of data, algorithms, and computational power. So data policy is AI policy.

Once we remember that, we can consider approaches that limit data collection, not only to protect consumer privacy, but also as mechanisms to mitigate some of the riskiest AI applications. Limit the supply of data and you’re limiting what can be built.

Similarly, we might not be used to talking about AI in the same breath as competition law or antitrust. But we’ve already got antitrust laws on the books and the Biden administration has signaled that it’s willing to boldly and imaginatively apply those laws to target the concentration of power among AI companies.

Ultimately, the biggest hidden-in-plain-sight truth that the report reveals is that humans are in control of which technologies we deploy and when. Recent years have seen us place moratoria and bans on facial recognition tech; in the past, we’ve also organized a moratorium and created bright-line prohibitions in the field of human genetics. Technological inevitability is a myth.

“There is nothing about artificial intelligence that is inevitable,” the report says. “Only once we stop seeing AI as synonymous with progress can we establish popular control over the trajectory of these technologies.”
