I host Vox’s science podcast, Unexplainable (which you should listen to!). We’re always looking for the open questions at the forefront of science, so you can imagine that we’ve spent a fair amount of time exploring the potential of AI. We did a whole series called the Black Box last year, which got into how no one really knows how AI works, how it’s hard to predict what it might look like someday, and how it’s even harder to say how it’ll impact our world.


But all that focus on the future misses some really interesting stuff happening right now. I love messing around with chatbots — I use them for fun, I use them at work — but when I talk to friends and coworkers, a lot of them don’t even know where to start with AI. So I wanted to talk to someone who has practical suggestions for how to use AI right now to help with all kinds of things: brainstorming, editing, different kinds of creativity. And I couldn’t think of anyone better than Ethan Mollick. He’s a professor at Wharton, where he studies entrepreneurship and innovation, and he writes a lot about AI on his Substack, One Useful Thing, and in his recent bestseller, Co-Intelligence: Living and Working with AI. And while he’s very thoughtful about what the future might look like, he doesn’t let that get in the way of trying to understand how AI can be used in the present.
So, when I was asked to contribute something to The Highlight Podcast, where each month a Vox journalist calls up someone whose work we think the world should know a little more about, I knew exactly who to call. Below you’ll find an edited version of our conversation. Every month, we very thoughtfully remove a few of the extra good parts so that you’ll be moved to listen to the podcast episode, and enjoy this conversation in its natural habitat.
What follows is a partial transcript of our conversation, edited for clarity and length. For a longer version of the conversation, check out the podcast. You can listen here.
Noam Hassenfeld
There’s all this debate about what AI is, whether it’s super smart, whether it’s sentient, what it could be someday. But you also write a lot about practical stuff like, what it can do to help us right now. So I’m wondering if you can just tell me about your relationship with AI and maybe how you got interested in it in the first place.
Ethan Mollick
Sure. I’ve been AI-adjacent for a long time in my career. When I was getting my PhD, I also worked at the MIT Media Lab for their AI group, with one of the fathers of AI, a guy named Marvin Minsky. And I was the non-technical person who was like, how do we translate this stuff to the outside world? And I got very involved in thinking about how we use AI and tools like that for education. So my passion for the last 15 years has been, how do we help people at scale become better entrepreneurs, better managers? How do we teach at scale? So we were building games and simulations, and playing with AI for a long time.
When generative AI hit, I was one of the people thinking about this technologically, and it just turns out that a lot of people weren’t paying attention. So being someone who’s actively experimenting, who is part of a university setting, who cares about testing stuff, who understands organizations and education, it just sort of spiraled. I became somebody who was using these systems, which meant people were talking to me about these systems. And when I started publishing on it and doing academic research, that sort of put me in the center of this space.
Noam Hassenfeld
How do you think most people are thinking about these kinds of AIs, and how do you think they should be thinking about them?
Ethan Mollick
So first of all, I don’t think most people are thinking about that very much at all. When I’m in groups of people, most people have not used these systems very much. They’ve kind of bounced off of them. The people who do use them are deep users. I think there’s a lot of ways to think of them. Again, this is a general-purpose technology in every sense we measure it. General-purpose technologies are the rare, once-in-a-generation (or longer) technologies that influence every aspect of work and life. So part of what makes this a general-purpose technology is that it’s applicable to many different things. I think when people start using these systems, what it means is deeply dependent on context. As an educator, I think a lot about what this does for teaching. As somebody who likes building software, I think about it from a software deployment perspective. As someone who just likes to do creative stuff for fun, I think about that. So there is a little bit of the blind men and the elephant problem happening here.
Noam Hassenfeld
One of the things that I found most fascinating reading your book is specifically how you use AIs to help you be creative. I’m wondering if you can just walk me through a little bit of what it is like, the actual process of communicating with an AI in order to help your creativity.
Ethan Mollick
For me, a lot of it is unsticking, so it’s less about generation of lots of ideas. It’s easy to get stuck on ideas. For example: I cannot make this transitional sentence work between these two topics. So I ask it for 30 versions of this sentence in radically different styles, and that would often unstick or inspire me. Another thing I use, there’s an analogy in the book about how AI training is like an amateur chef. I was stuck for analogies to use, and I was like, give me 25 analogies I could work with that might explain how AI trains in an unsupervised way. And then some of this is just about creative play.
Noam Hassenfeld
There’s a part in your book where you talk about how the way to get a good idea is to have a ton of bad ideas, and you’re talking about, give me 25 versions of this or 30 versions of this. I do this all the time when I’m trying to come up with a title for a podcast or something like that: I’ll throw out tons of bad ideas, and I find that maybe I’ve done 15 bad ideas and the 16th one is good, but those 15 were kind of hard. And there’s a sense in which if you use a chatbot, you can get those 15 bad ideas for free. And that can help. Is that part of what you’re thinking about when you say, give me 25 ideas, 30 versions of this?
Ethan Mollick
So there’s a couple of reasons for doing this. The first is based on the science of ideation, which is that volume matters. We actually have studies showing that you need lots of ideas to have a good idea, because every idea has variance associated with it, right? In a lot of fields, variance is bad. But in idea generation, it’s easy to reject all the bad ideas. So high variance means some of these ideas are really bad and some are really good; you just need a lot of them so you can pluck the best idea out of that.
Noam Hassenfeld
And you can kind of get a sense, right, of the bad ideas. You can be like, oh, now I see that idea on paper. It’s bad. Go the other way.
Ethan Mollick
And we’re pretty good at filtering these, right? So having a volume of ideas is really valuable. That idea has to have variance though, right? If the ideas are all very similar to each other, a thousand similar ideas aren’t great. So the AI is really good at generating a lot of ideas, and with good prompting, it turns out it generates fairly high-variance ideas as well. Like, part of the reason why you want a group of people brainstorming is to have lots of people come up with ideas. So the AI can kind of work like an individual brainstorm, adding a lot of ideas to the conversation. And so that’s a really powerful technique.
Noam Hassenfeld
And then this, to me, just feels like it’s prompting interaction. Right? So it’s giving me one to 20 different ideas of things that I can figure out which work for me. And then I can maybe go back and forth with people on them. I can see what feels good or bad.
Ethan Mollick
Yes. I mean, this is the co-intelligence idea, right? The AI is really good, but it maxes out right now at the 89th percentile of performance. Even in its best categories, you’re better than that at whatever you’re really good at and care about. So this is a co-intelligence to help you figure it out. This is an advantage for you. It’s a prosthesis for your imagination, which is something we have never had before.
Noam Hassenfeld
Another thing I want to talk about here is not just idea generation, but the editing and shaping process of refining your ideas. And there’s a part in the book where you talk about creating AI personalities to edit your book. So you have Ozymandias and Mnemosyne and Steve, I think? I wonder if you can tell me about these personalities and how they work.
Ethan Mollick
So I gave the AI characters to play, each with a specific goal in mind. One goes through the task of making things sharper. One goes through the task of looking for connections. But I also had this character, Steve, who was trying to act like a normal reader, so trying to surface what a reader would think when encountering this, someone who doesn’t have a lot of experience but reads a lot of popular science books. And that also gave me perspective, because it’s an outside view. So part of this is that they’re outside views. You can reject or accept them, but they’re valuable.
Noam Hassenfeld
Just so I can understand correctly, you had three different AI personalities functioning as editors. How helpful did you find that? Like what were the changes you noticed in your writing from that process?
Ethan Mollick
A lot of this is about getting yourself out of your own head and having connections to stuff. So to me, the hardest part is going back and kind of killing my darlings and doing the other loop, and especially not getting lost and making this too complicated. There are a lot of things to explain. I have a lot of ideas. I want to make sure that they have some clarity around them, and usually I accept about a third of the ideas and reject about two-thirds, which feels pretty good. So there’s agency over the decision-making, which I think is an important part of working with AI creatively.
Noam Hassenfeld
At 1 am, you can just be talking to the AI and be like, give me a bunch of edits. You don’t have to worry about bothering them at all.
Ethan Mollick
Absolutely. And I think one of the great things, then, is that it handles the stuff that would have burned a human reader’s time, the more basic stuff. The AI is not great at the overall big picture. It’s getting better, but it’s not like an editor. But there’s a lot of stuff that the editor may miss, or would burn their time on because it’s less important, right? What about the basics? What about the simple fact that you never connected these two paragraphs together? What about the idea that this could be more vivid? The harder, more taste-based questions, like whether this should be chapter two or chapter three, can stick with the editor, and it saves them their time too.
Noam Hassenfeld
Okay. So you’re bringing up some of the limitations or flaws here. And I think that’s a huge thing to get into. I feel like most of the conversation I often hear about working with chatbots is just how much they get wrong all the time. I’m wondering if you worry about that when you’re using it as part of your process to write things like a book.
Ethan Mollick
The AI definitely gets stuff wrong, right? And it gets stuff wrong for a wide variety of reasons that we can go into, ranging from hallucination to limits on knowledge, bad prompting. A lot of different things are happening there. I think that part of what happens, though, is that people who are nervous about AI, for a variety of reasons, good and bad, will often jump on the flaws as a big issue. And they are an issue, right? But I think humans are flawed too. And my standard I often use is “best available human.” Is the AI better or worse than the best human I have access to at that moment? Are you okay with dealing with errors and omissions or not?
I also think that a lot of things you think the AI can’t do, it actually can do. As one example, I posted a few months back on Twitter that I had found something AI can’t do: it couldn’t do crossword puzzles. And within a couple of hours, a computer science professor at Princeton was like, no, you just did bad prompting; it could do crossword puzzles. You just didn’t ask questions the right way.
Noam Hassenfeld
What do you think are the biggest perils with this? Maybe either the way the technology could develop or the way people could use this in a not-ideal way.
Ethan Mollick
It’s a general-purpose technology. Even if we leave aside all of the questions about what happens long-term, about how smart this gets, there are going to be a lot of negative effects. There already are. Deepfakes are unstoppable, right? We’re gonna have to live in a world where people are producing involuntary deepfakes of people. And I don’t know what we’re gonna do about that. We’re gonna have to think about how we deal with that legally. Just in the micro area, like school, everybody can now cheat really well. It turns out homework was actually very valuable. Essays were valuable to assign as homework. Now anyone can generate essays on demand. That’s not great. We have to rethink how we build education around that. And there’s the constant worry about the fact that 80th-percentile performance is pretty good. Most criminals and terrorists are pretty bad at their jobs; if they get up to 80th-percentile performance, what does that mean?
And that leaves aside future developments and worries about what these systems can do in the long term. That leaves aside job questions about how jobs get changed, and part of the role of government and organizations that care about this stuff, and the AI companies themselves, is to try and mitigate downside risk. I don’t have control over the downside risk. What I can do is try and find use cases that are positive, increase the ability of the world to do better, solve longstanding problems. So we have to be aware of the downside as well as the upside. I’m an optimist, but I’m a pragmatic optimist.