<?xml version="1.0" encoding="UTF-8"?><feed
	xmlns="http://www.w3.org/2005/Atom"
	xmlns:thr="http://purl.org/syndication/thread/1.0"
	xml:lang="en-US"
	>
	<title type="text">Gary Marcus | Vox</title>
	<subtitle type="text">Our world has too much noise and too little context. Vox helps you understand what matters.</subtitle>

	<updated>2023-03-03T19:25:16+00:00</updated>

	<link rel="alternate" type="text/html" href="https://www.vox.com/author/gary-marcus" />
	<id>https://www.vox.com/authors/gary-marcus/rss</id>
	<link rel="self" type="application/atom+xml" href="https://www.vox.com/authors/gary-marcus/rss" />

	<icon>https://platform.vox.com/wp-content/uploads/sites/2/2024/08/vox_logo_rss_light_mode.png?w=150&amp;h=100&amp;crop=1</icon>
		<entry>
			
			<author>
				<name>Gary Marcus</name>
			</author>
			
			<title type="html"><![CDATA[Elon Musk thinks we’re close to solving AI. That doesn’t make it true.]]></title>
			<link rel="alternate" type="text/html" href="https://www.vox.com/future-perfect/23622120/elon-musk-solve-ai-chatgpt-wrong" />
			<id>https://www.vox.com/future-perfect/23622120/elon-musk-solve-ai-chatgpt-wrong</id>
			<updated>2023-03-03T14:25:16-05:00</updated>
			<published>2023-03-03T07:30:00-05:00</published>
			<category scheme="https://www.vox.com" term="Artificial Intelligence" /><category scheme="https://www.vox.com" term="Future Perfect" /><category scheme="https://www.vox.com" term="Innovation" /><category scheme="https://www.vox.com" term="Technology" /><category scheme="https://www.vox.com" term="Technology &amp; Media" />
							<summary type="html"><![CDATA[Elon Musk is at or near the top of pretty much every AI influencer list I have ever seen, despite the fact he doesn&#8217;t have a degree in AI and seems to have only one academic journal article in the field, which received little notice.&#160; There&#8217;s not necessarily anything wrong with that; Yann LeCun was [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="Musk speaks at the opening of a new Tesla plant in 2022. | Christian Marquardt/Getty Images" data-portal-copyright="Christian Marquardt/Getty Images" data-has-syndication-rights="1" src="https://platform.vox.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/24474156/GettyImages_1239417462.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
	Musk speaks at the opening of a new Tesla plant in 2022. | Christian Marquardt/Getty Images	</figcaption>
</figure>
<p>Elon Musk is at or near the top of pretty much every AI influencer list I have ever seen, despite the fact that he doesn&rsquo;t have a degree in AI and seems to have <a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/npqu.11427">only one</a> academic journal article in the field, which received little notice.&nbsp;</p>

<p>There&rsquo;s not necessarily anything wrong with that; Yann LeCun was trained in physics (the same field as one of Musk&rsquo;s <a href="https://www.thedp.com/article/2022/11/elon-musk-penn-grad-wharton-twitter-auction#:~:text=Musk%20graduated%20from%20the%20College%20and%20Wharton%20in%201997%20with,up%20company%2C%20according%20to%20Fortune.">two undergraduate degrees</a>) but is justifiably known for his pioneering work in machine learning. I&rsquo;m known for my AI work, too, but I trained in cognitive science. The most important paper I ever wrote for AI was in <a href="https://pubmed.ncbi.nlm.nih.gov/9892549/">a psychology journal</a>. It&rsquo;s perfectly fine for people to influence different fields, and Musk&rsquo;s work on <a href="https://www.nytimes.com/interactive/2022/11/14/technology/tesla-self-driving-flaws.html">driverless cars</a> has undoubtedly influenced the development of AI.&nbsp;</p>

<p>But an awful lot of what he says about AI has been wrong. Most notoriously, none of his forecasts about timelines for self-driving cars have been correct. In October 2016, he <a href="https://www.nbcnews.com/business/autos/driverless-tesla-will-travel-l-nyc-2017-says-musk-n670206">predicted</a> that a Tesla would drive itself from California to New York by 2017. (It didn&rsquo;t.) Tesla has deployed a technology called &ldquo;Autopilot,&rdquo; but everybody in the industry knows that name is a fib, <a href="https://www.reuters.com/business/autos-transportation/tesla-is-sued-by-drivers-over-alleged-false-autopilot-full-self-driving-claims-2022-09-14/">more marketing than reality</a>. Teslas are <a href="https://www.reuters.com/business/autos-transportation/tesla-flags-its-cars-not-ready-be-approved-fully-self-driving-this-year-2022-10-20/">nowhere close to being able to drive themselves</a>; the software remains so buggy seven years after Tesla started rolling it out that a human driver must pay attention at all times.&nbsp;</p>

<p>Musk also seems to consistently misunderstand the relationship between natural (human) intelligence and artificial intelligence. He&rsquo;s repeatedly&nbsp;<a href="https://www.youtube.com/watch?v=HM23sjhtk4Q">argued</a>&nbsp;that Teslas don&rsquo;t need&nbsp;<a href="https://en.wikipedia.org/wiki/Lidar">Lidar</a>&nbsp;&mdash; a sensing system that virtually every other autonomous vehicle company relies on &mdash; on the basis of a misleading <a href="https://twitter.com/elonmusk/status/1447588987317547014">comparison</a> between human vision and cameras in driverless cars.&nbsp;While it&rsquo;s true that humans don&rsquo;t need Lidar to drive, current AI doesn&rsquo;t seem anywhere close to being able to understand and deal with a full array of road conditions without it. Driverless cars need Lidar as a crutch precisely because they don&rsquo;t have human-like intelligence.</p>

<p>Teslas can&rsquo;t even consistently avoid <a href="https://www.cbsnews.com/news/tesla-cars-crashes-emergency-vehicles/">crashing into stopped emergency vehicles</a>, a problem that the company has failed to solve for more than five years. For reasons still not publicly disclosed, the perceptual and decision-making systems for the cars haven&rsquo;t yet managed to drive with sufficient reliability without human intervention. Musk&rsquo;s claim is like saying that humans don&rsquo;t need to walk because cars don&rsquo;t have feet. If my grandmother had wheels, she&rsquo;d be a car.&nbsp;</p>
<h2 class="wp-block-heading">ChatGPT isn’t the profound AI advance that it seems</h2>
<p>Despite a spotty track record, Musk continues to make pronouncements about AI, and when he does, people take them seriously. His latest, first <a href="https://www.cnbc.com/2023/02/15/elon-musk-co-founder-of-chatgpt-creator-openai-warns-of-ai-society-risk.html?recirc=taboolainternal">reported</a> by CNBC and picked up widely thereafter, took place a few weeks ago at the World Government Summit in Dubai. Some of what Musk said is, in my professional judgment, spot-on &mdash; and some of it is way off.</p>

<p>What was most wrong was his implication that we are close to solving AI &mdash; or reaching so-called &ldquo;artificial general intelligence&rdquo; (AGI) with the flexibility of human intelligence &mdash; claiming that ChatGPT &ldquo;has illustrated to people just how advanced AI has become.&rdquo;</p>

<p>That&rsquo;s just silly. To some people, especially those who haven&rsquo;t been following the AI field, the degree to which ChatGPT can mimic human prose seems deeply surprising. But it&rsquo;s also deeply flawed. A truly superintelligent AI would be able to tell true from false, to reason about people and objects and science, and to be as versatile and quick in learning new things as humans are &mdash; none of which the current generation of chatbots is capable of. All ChatGPT can do is predict text that might be plausible in different contexts based on the enormous body of written work it&rsquo;s been trained on, but it has no regard for whether what it spits out is true.&nbsp;</p>
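<p>The next-word-prediction idea above can be sketched with a toy example: a tiny bigram model (an enormous simplification of ChatGPT&rsquo;s actual architecture, used purely for illustration) that picks each next word because it is statistically plausible given its training text, with no notion of whether the resulting sentence is true.</p>

```python
import random

def build_bigram_model(corpus):
    """Count, for each word, which words follow it in the training text."""
    model = {}
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model.setdefault(prev, []).append(nxt)
    return model

def generate(model, start, length, seed=0):
    """Repeatedly pick a plausible next word; truth never enters into it."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        candidates = model.get(out[-1])
        if not candidates:
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

# Both continuations of "of" are equally "plausible" to the model,
# even though only one of them is true.
corpus = "the moon is made of rock the moon is made of cheese"
model = build_bigram_model(corpus)
print(generate(model, "the", 6))
```

<p>The toy model will happily emit &ldquo;the moon is made of cheese,&rdquo; because plausibility relative to training text, not truth, is all it optimizes for &mdash; the same basic limitation, at a vastly larger scale, that the paragraph above describes.</p>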

<p>That makes ChatGPT incredibly fun to play with, and if handled responsibly, sometimes <a href="https://www.nature.com/articles/d41586-023-00500-8">it can even be useful</a>, but it doesn&rsquo;t make it genuinely smart. The system has tremendous trouble telling the truth, <a href="https://cybernews.com/tech/chatgpts-bard-ai-answers-hallucination/">hallucinates</a> routinely, and sometimes struggles with basic math. It doesn&rsquo;t understand what a number is. In this example, sent to me by the AI researcher Melanie Mitchell, ChatGPT can&rsquo;t understand the relation between a pound of feathers and two pounds of bricks, foiled by the <a href="https://twitter.com/GaryMarcus/status/1607023594957045761?s=20">ridiculous guardrail</a> system that prevents it from using hateful language but also keeps it from directly answering many questions, which <a href="https://twitter.com/elonmusk/status/1622625034538811395?s=20">Musk himself has complained about</a> elsewhere.&nbsp;</p>
<img src="https://platform.vox.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/24473552/Picture1.png?quality=90&#038;strip=all&#038;crop=0,0,100,100" alt="" title="" data-has-syndication-rights="1" data-caption="" data-portal-copyright="" />
<p>Examples of ChatGPT fails like this are legion across the internet. Together with NYU computer scientist Ernest Davis and others, I have assembled <a href="https://garymarcus.substack.com/p/large-language-models-like-chatgpt">a whole collection of them</a>; feel free to contribute your own. OpenAI often fixes them, but new errors continue to appear. Here&rsquo;s one of my current favorites:</p>
<figure class="wp-block-embed is-type-rich is-provider-twitter wp-block-embed-twitter alignnone"><div class="wp-block-embed__wrapper">
<blockquote class="twitter-tweet" data-dnt="true" data-conversation="none"><p lang="en" dir="ltr">The suspense was killing me. <a href="https://t.co/7dCrfAcOUn">pic.twitter.com/7dCrfAcOUn</a></p>&mdash; DJ Strouse (@djstrouse) <a href="https://twitter.com/djstrouse/status/1605964340234010626?ref_src=twsrc%5Etfw">December 22, 2022</a></blockquote>
</div></figure>
<p>These cases illustrate that, despite superficial appearances to the contrary, ChatGPT can&rsquo;t reason, has no idea what it&rsquo;s talking about, and absolutely cannot be trusted. It has no real moral compass and has to rely on crude guardrails that try to prevent it from going evil <a href="https://garymarcus.substack.com/p/inside-the-heart-of-chatgpts-darkness">but can be broken without much difficulty</a>. Sometimes it gets things right because the text you type into it is close enough to something it&rsquo;s been trained on, but that&rsquo;s incidental. Being right <em>sometimes</em> is not a sound basis for artificial intelligence.</p>

<p>Musk is <a href="https://www.theinformation.com/articles/fighting-woke-ai-musk-recruits-team-to-develop-openai-rival">reportedly</a> looking to build a ChatGPT rival &mdash; &ldquo;TruthGPT,&rdquo; as he <a href="https://twitter.com/elonmusk/status/1626533667408596992">put it</a> recently &mdash; but this also misses something important: Truth just isn&rsquo;t part of GPT-style architectures. It&rsquo;s fine to want to build new AI that addresses the fundamental problems with current language models, but that would require a very different design, and it&rsquo;s not clear that Musk appreciates how radical the changes will need to be.</p>

<p>Where the stakes are high, companies are already figuring out that truth and GPT aren&rsquo;t the closest of friends. JPMorgan just <a href="https://www.wsj.com/articles/jpmorgan-restricts-employees-from-using-chatgpt-2da5dc34">restricted</a> its employees from using ChatGPT for business, and <a href="https://www.bloomberg.com/news/articles/2023-02-24/citigroup-goldman-sachs-join-chatgpt-crackdown-fn-reports">Citigroup and Goldman Sachs</a> quickly followed suit. As Yann LeCun <a href="https://twitter.com/ylecun/status/1621805604900585472">put it</a>, echoing what I&rsquo;ve been <a href="https://garymarcus.substack.com/p/some-things-garymarcus-might-say">saying</a> for years, ChatGPT is an offramp on the road to artificial general intelligence because its underlying technology has nothing to do with the requirements of genuine intelligence.&nbsp;</p>

<p>Last May, Musk <a href="https://twitter.com/elonmusk/status/1531328534169493506?lang=en">said</a> he&rsquo;d be &ldquo;surprised if we don&rsquo;t have AGI by&rdquo; 2029. I registered my doubts then, <a href="https://twitter.com/GaryMarcus/status/1531752767835938816">offered</a> to bet him $100,000 (that&rsquo;s real money for me, if <a href="https://www.cnn.com/2023/02/27/tech/elon-musk-richest-man-again/index.html">not so much for him</a>), and wrote up a set of conditions. Many people in the field shared my sentiment that on predictions like these, Musk is all talk and no action. By the next day, without planning to, I&rsquo;d raised <a href="https://fortune.com/2022/06/03/elon-musk-artificial-intelligence-agi-tesla-500k-bet/">another $400,000</a> for the bet from fellow AI experts. Musk never got back to us. If he really believed what he&rsquo;s saying, he should have.</p>
<h2 class="wp-block-heading">We should still be very worried</h2>
<p>If Musk is wrong about when driverless cars are coming, <a href="https://garymarcus.substack.com/p/sub-optimal">naive</a> about what it takes to build human-like robots, and grossly off on the timeline for general intelligence, he <em>is</em> right about something: Houston, we do have a problem.</p>

<p>At the Dubai event last month, Musk told the crowd, &ldquo;One of the biggest risks to the future of civilization is AI.&rdquo; I still think nuclear war and climate change might be bigger, but these last few weeks, especially with the shambolic introductions of new AI search engines by <a href="https://www.nytimes.com/2023/02/16/technology/bing-chatbot-transcript.html">Microsoft</a> and <a href="https://www.vox.com/recode/2023/2/7/23590069/bing-openai-microsoft-google-bard">Google</a>, lead me to think that we are going to see more and more primitive and unreliable artificial intelligence products rushed to market.&nbsp;</p>

<p>That may not be precisely the kind of AI Musk had in mind, but it does pose clear and present dangers. New concerns are appearing seemingly every day, ranging from unforeseen <a href="https://theconversation.com/chatgpt-and-cheating-5-ways-to-change-how-students-are-graded-200248">consequences</a> in education to the <a href="https://www.scientificamerican.com/article/ai-platforms-like-chatgpt-are-easy-to-use-but-also-potentially-dangerous/">possibility</a> of massive, automated misinformation campaigns. Extremist organizations, like the alt-right social network Gab, have already <a href="https://twitter.com/jjvincent/status/1627971614653444097">begun</a> announcing intentions to build their own AI.&nbsp;&nbsp;</p>

<p>So don&rsquo;t go to Musk for specific timelines about AGI or driverless cars. But he still makes a crucial point: We have new technology on our hands, and we don&rsquo;t really know how this is all going to play out. When he <a href="https://www.reuters.com/business/autos-transportation/musk-ai-stresses-me-out-2023-03-02/">said</a> this week that &ldquo;we need some kind of, like, regulatory authority or something overseeing AI development,&rdquo; he may not have been at his most eloquent, but he was absolutely right.</p>

<p>We aren&rsquo;t, in truth, all that close to AGI. Instead, we are unleashing a seductive yet haphazard and truth-disregarding AI that maybe nobody anticipated. But the takeaway is still the same. We should be worried, no matter how smart (or not) it is.&nbsp;</p>

<p><a href="http://garymarcus.com/"><em>Gary Marcus</em></a><em> (</em><a href="https://twitter.com/GaryMarcus"><em>@garymarcus</em></a><em>) is a scientist, bestselling author, and entrepreneur. He founded the startup Geometric Intelligence, which was acquired by Uber in 2016. His most recent book,&nbsp;</em><a href="http://rebooting.ai/">Rebooting AI</a><em>, co-authored with Ernest Davis, was named one of Forbes&rsquo; 7 Must-Read Books About Artificial Intelligence. His new podcast, Humans versus Machines, will launch this spring.</em></p>
						]]>
									</content>
			
					</entry>
	</feed>
