<?xml version="1.0" encoding="UTF-8"?><feed
	xmlns="http://www.w3.org/2005/Atom"
	xmlns:thr="http://purl.org/syndication/thread/1.0"
	xml:lang="en-US"
	>
	<title type="text">A.W. Ohlheiser | Vox</title>
	<subtitle type="text">Our world has too much noise and too little context. Vox helps you understand what matters.</subtitle>

	<updated>2024-06-20T19:27:35+00:00</updated>

	<link rel="alternate" type="text/html" href="https://www.vox.com/author/aw-ohlheiser" />
	<id>https://www.vox.com/authors/aw-ohlheiser/rss</id>
	<link rel="self" type="application/atom+xml" href="https://www.vox.com/authors/aw-ohlheiser/rss" />

	<icon>https://platform.vox.com/wp-content/uploads/sites/2/2024/08/vox_logo_rss_light_mode.png?w=150&amp;h=100&amp;crop=1</icon>
		<entry>
			
			<author>
				<name>A.W. Ohlheiser</name>
			</author>
			
			<title type="html"><![CDATA[Misinformation is winning the war on misinformation]]></title>
			<link rel="alternate" type="text/html" href="https://www.vox.com/technology/356010/misinformation-internet-online-data-ai" />
			<id>https://www.vox.com/?p=356010</id>
			<updated>2024-06-20T15:27:35-04:00</updated>
			<published>2024-06-21T07:00:00-04:00</published>
			<category scheme="https://www.vox.com" term="Big Tech" /><category scheme="https://www.vox.com" term="Meta" /><category scheme="https://www.vox.com" term="Social Media" /><category scheme="https://www.vox.com" term="Technology" />
							<summary type="html"><![CDATA[Misinformation on the internet has never been worse. Or at least that’s my analysis of it, based on vibes.&#160; People on TikTok are eating up videos saying a bunch of inaccurate things about the dangers of sunscreen, while the platform’s on-app Shop propels obscure books containing bogus cures for cancer onto the Amazon bestseller list. [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="A smartphone screen held up against a computer monitor, both with social media platforms showing." data-caption="Online misinformation about everything from elections to vaccines still finds audiences online. | Chris Delmas/AFP via Getty Images" data-portal-copyright="Chris Delmas/AFP via Getty Images" data-has-syndication-rights="1" src="https://platform.vox.com/wp-content/uploads/sites/2/2024/06/gettyimages-1246499584.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
	Online misinformation about everything from elections to vaccines still finds audiences online. | Chris Delmas/AFP via Getty Images	</figcaption>
</figure>
<p class="has-text-align-none">Misinformation on the internet has never been worse. Or at least that’s my analysis of it, based on vibes.&nbsp;</p>

<p class="has-text-align-none">People on TikTok are eating up videos saying a bunch of inaccurate things about the <a href="https://www.npr.org/sections/shots-health-news/2024/06/17/nx-s1-5002030/sunscreen-tiktok-misinformation-melanoma">dangers of sunscreen</a>, while the platform’s on-app Shop propels <a href="https://www.vox.com/24152358/tiktok-shop-ads-lost-book-of-herbal-remedies-bestseller">obscure books</a> containing bogus cures for cancer onto the Amazon bestseller list. Meanwhile, the presumed Republican nominee for president is fresh off what <a href="https://www.nytimes.com/2024/03/17/us/politics/trump-disinformation-2024-social-media.html">appears to be a successful push</a> to neuter efforts to address disinformation campaigns about the election. Also, <a href="https://www.vox.com/technology/351189/google-ai-overview-section-230">Google’s AI Overview search</a> results told people to <a href="https://www.theverge.com/2024/5/23/24162896/google-ai-overview-hallucinations-glue-in-pizza">put glue on pizza</a>.&nbsp;</p>

<p class="has-text-align-none">But this is all anecdotal. Can I prove my hunch with data? Sadly, no. The data I — or more accurately, researchers with actual expertise on this — would need to do that is locked behind the opaque doors of the companies that run the platforms and services on which the internet’s worst nonsense is hosted. Evaluating the reach of misinformation today is a grueling and indirect process with imperfect results.&nbsp;</p>

<p class="has-text-align-none">For my final newsletter contribution, I wanted to find a way to assess the state of misinformation online. As I’ve covered this topic again and again, one question keeps popping into my head: Do companies like Google, Meta, and TikTok even <em>care</em> about meaningfully tackling this problem? </p>

<p class="has-text-align-none">The answer to this question, too, is imperfect. But there are some things that might lead to an educated guess.&nbsp;</p>

<h2 class="wp-block-heading has-text-align-none"><strong>Ways to measure misinformation are disappearing</strong></h2>

<p class="has-text-align-none">One of the most important things a journalist can do while writing about the spread of bad information online is to find a way to measure its reach. There’s a huge difference between a YouTube video with 1,000 views and one with 16 million, for instance. But lately, some of the key metrics used to put supposedly “viral” misinformation into context have been disappearing from public view.&nbsp;&nbsp;</p>

<p class="has-text-align-none">TikTok <a href="https://www.washingtonpost.com/technology/2024/02/08/tiktok-remove-data-criticism-gaza/">disabled view counts for popular hashtags</a> earlier this year, shifting instead to simply showing the number of <em>posts</em> made on TikTok using the hashtag. <a href="https://www.fastcompany.com/91067850/crowdtangles-former-ceo-has-questions-about-metas-decision-to-close-the-research-tool-in-an-election-year">Meta is shutting down</a> CrowdTangle, a once-great tool for researchers and journalists looking to closely examine how information spreads across social media platforms, in August, just a couple of months before the 2024 election. And Elon Musk <a href="https://www.theverge.com/2024/6/11/24176247/x-likes-hidden-private-rollout">decided to make “likes” private</a> on the platform, a decision that, to be fair, is bad for accountability but could have some benefits for normal users of X. </p>

<p class="has-text-align-none">Between all this and <a href="https://www.fastcompany.com/91040397/under-elon-musk-x-is-denying-api-access-to-academics-who-study-misinformation">declining access to platform APIs</a>, researchers are limited in how much they can really track or speak to what’s going on.&nbsp;</p>

<p class="has-text-align-none">“How do we track things over time? Apart from relying on the platform&#8217;s word,” said <a href="https://www.heinz.cmu.edu/faculty-research/profiles/sen-ananya/">Ananya Sen</a>, an assistant professor of information technology and management at Carnegie Mellon University, whose recent research looks at how companies inadvertently fund misinformation-laden sites when they use large ad tech platforms. </p>

<p class="has-text-align-none">The disappearance of these metrics is basically the opposite of what a lot of experts on manipulated information recommend. Transparency and disclosure are “key” components of reform efforts like the Digital Services Act in the EU, said <a href="https://yjernite.github.io/">Yacine Jernite</a>, machine learning and society lead for Hugging Face, an open-source data science and machine learning platform. </p>

<p class="has-text-align-none">“We&#8217;ve seen that people who use [generative AI] services for information about elections may get misleading outputs,” Jernite added, “so it&#8217;s particularly important to accurately represent and avoid over-hyping the reliability of those services.” </p>

<p class="has-text-align-none">It’s generally better for an information ecosystem when people know more about what they’re using and how it works. And while some aspects of this fall under media literacy and information hygiene efforts, a portion of this has to come from the platforms and their boosters. Hyping up an AI chatbot as a next-generation search tool sets expectations that aren’t fulfilled by the service itself. </p>

<h2 class="wp-block-heading has-text-align-none"><strong>Platforms don’t have much incentive to care&nbsp;&nbsp;</strong></h2>

<p class="has-text-align-none">Platforms aren’t just amplifying bad information, <a href="https://www.technologyreview.com/2021/11/20/1039076/facebook-google-disinformation-clickbait/">they’re making money off it</a>. From TikTok Shop purchases to ad sales, if these companies take meaningful, systemic steps to change how disinformation circulates on their platforms, they might work against their business interests.&nbsp;&nbsp;</p>

<p class="has-text-align-none">Social media platforms are designed to show you things you want to engage with and share. AI chatbots are designed to give the illusion of knowledge and research. But neither of these models is great for evaluating veracity, and doing so often requires limiting the scope of a platform working as intended. Slowing or narrowing how a platform like this works means less engagement, which means no growth, which means less money.&nbsp;</p>

<p class="has-text-align-none">“I personally can&#8217;t imagine that they would ever be as aggressively interested in addressing this as the rest of us are,” said Evan Thornburg, a bioethicist who posts on TikTok as <a href="https://www.tiktok.com/@gaygtownbae?lang=en">@gaygtownbae</a>. “The thing that they&#8217;re able to monetize is our attention, our interest, and our buying power. And why would they whittle that down to a narrow scope?” </p>

<p class="has-text-align-none">Many platforms begrudgingly began efforts to take on misinformation after the 2016 US elections, and again at the beginning of the Covid pandemic. But since then, there’s been kind of a pullback. Meta <a href="https://www.washingtonpost.com/technology/2023/05/23/meta-layoffs-misinformation-facebook-instagram/">laid off</a> employees from teams involved with content moderation in 2023, and rolled <a href="https://www.washingtonpost.com/politics/2023/06/16/meta-rolls-back-covid-misinformation-rules/">back its Covid-era rules</a>. Maybe they’re sick of being held responsible for this stuff at this point. Or, as technology changes, they see an opportunity to move on from it.   </p>

<h2 class="wp-block-heading has-text-align-none"><strong>So do they care?&nbsp;</strong></h2>

<p class="has-text-align-none">Again, it’s hard to quantify the efforts by major platforms to curb misinformation, which leaves me leaning once again on informed vibes. For me, it feels like major platforms are backing away from prioritizing the fight against misinformation and disinformation, and that there’s a general kind of fatigue out there on the topic more broadly. That doesn’t mean that nobody is doing anything.&nbsp;&nbsp;&nbsp;</p>

<p class="has-text-align-none"><a href="https://www.npr.org/2022/10/28/1132021770/false-information-is-everywhere-pre-bunking-tries-to-head-it-off-early">Prebunking</a>, which involves preemptively fact-checking rumors and lies before they gain traction, is super promising, especially when applied to election misinformation. Crowdsourced fact-checking is also an interesting approach. And to the credit of platforms themselves, they do continue to update their rules as new problems emerge.   </p>

<p class="has-text-align-none">There’s a way in which I have some sympathy for the platforms here. This is an exhausting topic, and it’s tough to be told, over and over, that you’re not doing enough. But pulling back and moving on doesn’t stop bad information from finding audiences over and over. While these companies assess how much they care about moderating and addressing their platform’s capacity to spread lies, the people targeted by those lies are getting hurt.&nbsp;</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>A.W. Ohlheiser</name>
			</author>
			
			<title type="html"><![CDATA[Your social media diet is becoming easier to exploit]]></title>
			<link rel="alternate" type="text/html" href="https://www.vox.com/technology/354963/artificial-intelligence-media-misinformation" />
			<id>https://www.vox.com/?p=354963</id>
			<updated>2024-06-12T16:11:52-04:00</updated>
			<published>2024-06-13T09:00:00-04:00</published>
			<category scheme="https://www.vox.com" term="Artificial Intelligence" /><category scheme="https://www.vox.com" term="Innovation" /><category scheme="https://www.vox.com" term="Technology" /><category scheme="https://www.vox.com" term="Technology &amp; Media" />
							<summary type="html"><![CDATA[It’s weird and often a bit scary to work in journalism right now. Misinformation and disinformation can be indistinguishable from reality online, as the growing tissues of networked nonsense have ossified into “bespoke realities” that compete with factual information for your attention and trust. AI-generated content mills are successfully masquerading as real news sites. And [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="A hand holding a cellphone displaying a ChatGPT logo." data-caption="" data-portal-copyright="Jaap Arriens/NurPhoto via Getty Images" data-has-syndication-rights="1" src="https://platform.vox.com/wp-content/uploads/sites/2/2024/06/gettyimages-2155035740.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p class="has-text-align-none">It’s weird and often a bit scary to work in journalism right now. Misinformation and disinformation can be indistinguishable from reality online, as the growing tissues of networked nonsense have ossified into “<a href="https://www.vox.com/technology/353958/online-lies-invisible-rulers-book-successful-misinformation">bespoke realities</a>” that compete with factual information for your attention and trust. AI-generated content mills are <a href="https://www.nytimes.com/2024/06/06/technology/bnn-breaking-ai-generated-news.html">successfully masquerading as real news sites</a>. And at some of those real news organizations (for instance, <a href="https://nymag.com/intelligencer/article/can-will-lewis-survive-the-big-mess-at-the-washington-post.html">my former employer</a>) there has been an exhausting trend of internal unrest, loss of confidence in leadership, and waves of layoffs.</p>

<p class="has-text-align-none">The effects of these changes are now coming into focus. The Pew-Knight research initiative on Wednesday <a href="https://www.pewresearch.org/journalism/2024/06/12/how-americans-get-news-on-tiktok-x-facebook-and-instagram/?utm_source=AdaptiveMailer&amp;utm_medium=email&amp;utm_campaign=24-06-12%20Cross%20Platforms%20GEN%20DISTRO&amp;org=982&amp;lvl=100&amp;ite=14159&amp;lea=3486942&amp;ctr=0&amp;par=1&amp;trk=a0DQm0000021clFMAQ">released a new report</a> on how Americans get their news online. It’s an interesting snapshot, not just of where people are seeing news — on TikTok, Instagram, Facebook, or X — but also who they’re trusting to deliver it to them. </p>

<p class="has-text-align-none">TikTok users who say they regularly consume news on the platform are just as likely to get news there from influencers as they are from media outlets or individual journalists. But they’re even more likely to get news on TikTok from “other people they don’t know personally.” </p>

<p class="has-text-align-none">And while most users across all four platforms say they see some form of news-related content regularly, only a tiny portion of them actually log on to social media in order to consume it. X, formerly Twitter, is now the only platform where a majority of users say they check their feeds for news, either as a major (25 percent) or minor (40 percent) reason for using it. By contrast, just 15 percent of TikTok users say that news is a major reason they’ll scroll through their For You page. </p>

<p class="has-text-align-none">The Pew research dropped while I was puzzling through how to answer a larger question: How is generative AI going to change media? And I think the new data highlights how complicated the answer is.&nbsp;</p>

<p class="has-text-align-none">There are plenty of ways that generative AI is already changing journalism and the larger information ecosystem. But AI is just one part of an interconnected series of incentives and forces that are reshaping how people get information and what they do with it. Some of the issues with journalism as an industry right now are <a href="https://deadspin.com/the-adults-in-the-room-1837487584/">more or less</a> <a href="https://www.404media.co/elon-musk-tweeted-a-thing/">own goals</a> that no amount of worrying about AI or fretting about subscription numbers will fix.</p>

<p class="has-text-align-none">Here are some of the things to look out for, however:&nbsp;</p>

<h2 class="wp-block-heading has-text-align-none">AI can make bad information sound more legit</h2>

<p class="has-text-align-none">It’s hard to fact-check an endless river of information and commentary, and rumors tend to spread much faster than verification, especially during a <a href="https://www.vox.com/technology/2023/10/12/23913472/misinformation-israel-hamas-war-social-media-literacy-palestine">rapidly developing crisis</a>. People turn to the internet in those moments for information, for understanding, and for cues on how to help. And that frantic, charged search for the latest updates has long been easy to manipulate for bad actors who know how to do it. Generative AI can make that even easier.  </p>

<p class="has-text-align-none">Tools like ChatGPT can mimic the voice of a news article, and the technology has a history of “hallucinating” <a href="https://www.theguardian.com/commentisfree/2023/apr/06/ai-chatgpt-guardian-technology-risks-fake-article">citations to articles</a> and reference material that doesn’t exist. Now, people can use an AI-powered chatbot to essentially cloak bad information in all the trappings of verified information. </p>

<p class="has-text-align-none">“What we’re not ready for is the fact that there are basically these machines out there that can create plausible sounding text that has no relationship to the truth,” Julia Angwin, the founder of Proof News and a longtime data and technology journalist, recently told the <a href="https://journalistsresource.org/media/ai-journalism-julia-angwin/">Journalists Resource</a>.</p>

<p class="has-text-align-none">“For a profession that writes words that are meant to be factual, all of a sudden you’re competing in the marketplace — essentially, the marketplace of information — with all these words that sound plausible, look plausible and have no relationship to accuracy,” she noted. </p>

<p class="has-text-align-none">A flood of plausible-sounding text has implications beyond journalism, too. Even for people who are pretty good at determining whether an email or an article is trustworthy or not, AI-generated text might mess with nonsense radars. <a href="https://www.vox.com/culture/23827325/publishing-ai-scams-large-language-models">Phishing emails</a> and <a href="https://www.vox.com/24141648/ai-ebook-grift-mushroom-foraging-mycological-society">reference books</a> — not to mention <a href="https://www.vox.com/technology/23746060/ai-generative-fake-images-photoshop-google-microsoft-adobe">photography and video</a> — are already fooling people with AI-generated writing.&nbsp;</p>

<h2 class="wp-block-heading has-text-align-none">AI doesn’t understand jokes</h2>

<p class="has-text-align-none">It didn’t take very long for <a href="https://www.vox.com/technology/351189/google-ai-overview-section-230">Google’s AI Overview</a> tool, which generates automated responses to search queries right on the results page, to start creating some pretty questionable results.&nbsp;</p>

<p class="has-text-align-none">Famously, Google’s AI Overview told searchers to <a href="https://www.theverge.com/2024/5/30/24168344/google-defends-ai-overviews-search-results">put a little glue on pizza</a> to make the cheese stick better, drawing from a <a href="https://www.businessinsider.com/google-ai-glue-pizza-i-tried-it-2024-5">joke answer on Reddit</a>. Others found Overview answers instructing searchers to change their blinker fluid, referencing a joke that’s popular on car maintenance forums (blinker fluid does not exist). Another Overview answer encouraged eating rocks, apparently because of <a href="https://www.theonion.com/geologists-recommend-eating-at-least-one-small-rock-per-1846655112">an Onion article</a>. These errors are funny, but AI Overview isn’t just falling for joking Reddit posts. </p>

<p class="has-text-align-none">Google’s response to the Overview issues said that the tool’s inability to parse satire from serious answers is partially due to <a href="https://www.wired.com/story/the-complexity-of-simply-searching-for-medical-advice/">“data voids.”</a> That’s when a specific search term or question doesn’t have a lot of serious or informed content written about it online, meaning that the top results for a related query will probably be less reliable. (I’m familiar with data voids from writing about health misinformation, where bad results are a real problem.) One solution to data voids is for there to be more reliable content about the topic at hand, created and verified by experts, reporters, and other people and organizations who can provide informed and factual information. But as Google sweeps up more and more eyeballs to internal results, rather than external sources, the company’s also removing some incentives for people to create that content in the first place.&nbsp;&nbsp;</p>

<h2 class="wp-block-heading has-text-align-none">Why should a non-journalist care?</h2>

<p class="has-text-align-none">I worry about this stuff because I am a reporter who has covered information weaponization online for years. This means two things: I know a lot about the spread and consequences of misinformation and rumor, and I make a living by doing journalism and would very much like to continue to do that. So of course, you might say, I care. AI might be coming for my job! </p>

<p class="has-text-align-none">I’m a little skeptical of the idea that generative AI, a tool that does not do original research and doesn’t really have a good way of verifying the information it does surface, will be able to replace a practice that is, at its best, an information-gathering method that relies on doing original work and verifying the results. When they’re used properly and that use is disclosed to readers, I <a href="https://mashable.com/article/chatgpt-ai-and-adhd">don’t think</a> these tools are useless for <a href="https://www.vox.com/technology/2023/12/14/24000435/chatbot-therapy-risks-and-potential">researchers</a> and reporters. In the right hands, generative AI is just a tool. What generative AI can do, in the hands of bad actors and a phalanx of grifters — or when deployed to maximize profit without regard for the informational pollution it creates — is fill your feed with junky and inaccurate content that sounds like news but isn’t. Although AI-generated nonsense might be posing a threat to the media industry, journalists like me aren’t the target for it. It’s you.</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>A.W. Ohlheiser</name>
			</author>
			
			<title type="html"><![CDATA[Why lying on the internet keeps working]]></title>
			<link rel="alternate" type="text/html" href="https://www.vox.com/technology/353958/online-lies-invisible-rulers-book-successful-misinformation" />
			<id>https://www.vox.com/?p=353958</id>
			<updated>2024-06-05T16:55:19-04:00</updated>
			<published>2024-06-06T06:30:00-04:00</published>
			<category scheme="https://www.vox.com" term="Social Media" /><category scheme="https://www.vox.com" term="Tech policy" /><category scheme="https://www.vox.com" term="Technology" />
							<summary type="html"><![CDATA[About a month ago, I wrote about a viral book of “Lost” herbal remedies that had, at the time, sold 60,000 copies on the TikTok Shop despite appearing to violate some of the app’s policies on health misinformation. The book’s sales were boosted by popular videos from wellness influencers on the app, some of which [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="A protester outside of the Supreme Court holds up a sign reading &quot;Hopkins Study: Fauci Lied&quot; as its holder participates in a demonstration." data-caption="Conservative demonstrators who allege that the government pressured or colluded with social media platforms to censor right-leaning content under the guise of fighting misinformation protest outside the US Supreme Court in Washington, DC, March 18, 2024. | Saul Loeb/AFP via Getty Images" data-portal-copyright="Saul Loeb/AFP via Getty Images" data-has-syndication-rights="1" src="https://platform.vox.com/wp-content/uploads/sites/2/2024/06/gettyimages-2087317578.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
	Conservative demonstrators who allege that the government pressured or colluded with social media platforms to censor right-leaning content under the guise of fighting misinformation protest outside the US Supreme Court in Washington, DC, March 18, 2024. | Saul Loeb/AFP via Getty Images	</figcaption>
</figure>
<p class="has-text-align-none">About a month ago, I <a href="https://www.vox.com/24152358/tiktok-shop-ads-lost-book-of-herbal-remedies-bestseller">wrote about</a> a viral book of “Lost” herbal remedies that had, at the time, sold 60,000 copies on the TikTok Shop despite appearing to violate some of the app’s policies on health misinformation. The book’s sales were boosted by popular videos from wellness influencers on the app, some of which had millions of views, who claimed inaccurately that the once obscure 2019 book contained natural cures for cancer and other ailments. </p>

<p class="has-text-align-none">The influencers, along with TikTok, made money off the sale of this misleading book. I brought all this to the attention of TikTok. The videos I flagged to a company spokesperson were removed after a review for violating TikTok’s policies banning health misinformation.&nbsp;</p>

<p class="has-text-align-none">Nonetheless, the book remained for sale in the shop, new influencers stepped in, and I haven’t stopped seeing TikTok Shop promotions for this book, <em>The Lost Book of Herbal Remedies,</em> since. </p>

<p class="has-text-align-none">“This right here is the reason they’re trying to ban this book,” said one TikTok Shop seller’s video, as he pointed to the book’s list of herbal cancer treatments. Later, he urged his viewers to click through on a link to the Shop listing and buy right away because “it probably won’t be around forever because of what’s inside.” </p>

<p class="has-text-align-none">The video got more than 2 million views in two days. Click through the link as instructed and you’ll see that sales for the book have doubled since my article came out. <em>The Lost Book of Herbal Remedies</em> has sold more than 125,000 copies through the TikTok Shop e-commerce platform alone. The book’s popularity doesn’t stop there, though: as of June 5, it is the No. 6 bestselling book on Amazon and has been on Amazon’s bestseller list for seven weeks and counting.   </p>

<h2 class="wp-block-heading has-text-align-none"><strong>The “Invisible Rulers” of online attention</strong></h2>

<p class="has-text-align-none">I was thinking about my experience digging into <em>The Lost Book of Herbal Remedies</em> while reading the forthcoming book <a href="https://www.hachettebookgroup.com/titles/renee-diresta/invisible-rulers/9781541703377/?lens=publicaffairs"><em>Invisible Rulers</em></a>, by Stanford Internet Observatory researcher Renee DiResta. The book examines and contextualizes how bad information and “<a href="https://www.nytimes.com/2023/11/30/opinion/political-reality-algorithms.html">bespoke realities</a>” became so powerful and prominent online. She charts how the “collision of the rumor mill and the propaganda machine” on social media helped to form a trinity of influencer, algorithm, and crowd that work symbiotically to catapult pseudo-events, Twitter Main Characters, and conspiracy theories that have captured attention and shattered consensus and trust. </p>

<p class="has-text-align-none">DiResta’s book is part history, part analysis, and part memoir, as it spans from pre-internet examinations of the psychology of rumor and propaganda to the biggest moments of online conspiracy and harassment from the social media era. In the end, DiResta applies what she’s learned in a decade of closely researching online disinformation, manipulation, and abuse, to her personal experience of being the target of a series of baseless accusations that, despite their lack of evidence, prompted Rep. Jim Jordan, as chair of the House subcommittee on Weaponization of the Federal Government, to <a href="https://www.propublica.org/article/jim-jordan-information-requests-universities-disinformation">launch an investigation</a>. </p>

<p class="has-text-align-none">There’s a really understandable instinct that, I think, a lot of people have when they read about online misinformation or disinformation: They want to know why it’s happening and who is to blame, and they want that answer to be easy. Hence, <a href="https://www.cjr.org/the_media_today/nature_study_trump_bots_twitter.php">meme-ified arguments</a> about “Russian bots” causing Trump to win the presidential election in 2016. Or, perhaps, pushes to deplatform one person who went viral by saying something wrong and harmful. Or the belief that we can content-moderate our way out of online harms altogether.  </p>

<p class="has-text-align-none">DiResta’s book explains why these approaches will always fall short. Blaming the “algorithm” for a dangerous viral trend might feel satisfying, but the algorithm has never worked without human choice. As DiResta writes, “virality is a collective behavior.” Algorithms can surface and nudge and entangle, but they need user data to do it effectively.  </p>

<h2 class="wp-block-heading has-text-align-none"><strong>Parables, panics, and prevention</strong></h2>

<p class="has-text-align-none">Writing about individual viral rumors, conspiracy theories, and products can sometimes feel like telling parables: The <em>Lost Book of Herbal Remedies</em> becomes instructive on the ability of anything to become a TikTok Shop bestseller, so long as the influencers pushing the product are good enough at it.</p>

<p class="has-text-align-none">Most of these parables in the misinformation space do not have neat or happy endings. Disinformation reporter Ali Breland,<a href="https://www.motherjones.com/politics/2024/06/how-q-became-everything-big-feature-wayfair-balenciaga/"> in his final piece for Mother Jones</a>, wrote about how QAnon became “everything.” To do so, Breland begins with the parable of Wayfair, the cheap furniture seller that became the center of a moral panic about pedophiles. </p>

<p class="has-text-align-none">This moment in online panic history, which also features heavily in DiResta’s book, happened in the summer of 2020, after many QAnon influencers and activity hubs had been banned from mainstream social media (which, incidentally, I interviewed DiResta about at the time <a href="https://www.technologyreview.com/2020/07/26/1005609/qanon-facebook-twitter-youtuube/">for a piece questioning </a>whether such a move happened too late to have any meaningful effect on QAnon’s influence). </p>

<p class="has-text-align-none">Here’s what happened: Somebody online noticed that Wayfair was selling expensive cabinets. The cabinets had feminine names. The person drew some mental dots and connected them: surely, these listings must be coded evidence of a child trafficking ring. The idea caught fire in QAnon spaces and quickly spread <a href="https://www.technologyreview.com/2020/08/26/1007611/how-qanon-is-targeting-evangelicals/">beyond</a> the paranoia enclaves. The wild and debunked idea co-opted a real hashtag used to raise awareness about actual human trafficking, which <a href="https://www.washingtonpost.com/dc-md-va/interactive/2021/wayfair-qanon-sex-trafficking-conspiracy/">interfered with real investigations</a>. </p>

<p class="has-text-align-none">Breland, in his <em>Mother Jones</em> piece, tracks how the central tenets of the QAnon conspiracy theory stretched way beyond its believers and stayed there. Now, “[W]e are in an era of obsessive, odd, and sprawling fear of pedophilia—one where QAnon’s paranoid thinking is no longer bound to the political fringes of middle-aged posters and boomers terminally lost in the cyber world,” he wrote. </p>

<p class="has-text-align-none">The Wayfair moral panic didn’t become a trend simply because of bad algorithms; it was evidence that QAnon’s earlier grab for attention had worked. Platforms could ban its hashtags and its influencers, but the crowd remained, and we were, to some degree, in it.&nbsp;</p>

<p class="has-text-align-none">The <em>Lost Book of Herbal Remedies</em> became a bestseller by flowing through some well-worn grooves. The influencers promoting it knew what they could and couldn’t say from a moderation standpoint, and when those who broke the rules were removed, new influencers stepped up to earn those commissions. My article, and my efforts to bring this trend to the attention of TikTok, didn’t really do anything to slow the demand for this inaccurate book. So, what would work?  </p>

<p class="has-text-align-none">DiResta’s ideas for this echo conversations that have been happening among misinformation experts for some time. There are some things platforms absolutely should be doing from a moderation standpoint, like removing automated trending topics, introducing friction to engaging with some online content, and generally giving users more control over what they see in their feeds and from their communities. DiResta also notes the importance of education and prebunking, which is a more preventative version of addressing false information that focuses on the <a href="https://www.npr.org/2022/10/28/1132021770/false-information-is-everywhere-pre-bunking-tries-to-head-it-off-early">tactics and tropes</a> of online manipulation. Also, transparency.  </p>

<p class="has-text-align-none">Would people be more likely to believe that there’s not a vast conspiracy to censor conservatives on social media if there were a public database of moderation actions from platforms? Would people be less eager to buy a book of questionable natural cures if they knew more about the commissions earned by the influencers promoting it? I don’t know. Maybe!&nbsp;</p>

<p class="has-text-align-none">I do know this, though: After a decade of covering online culture and information manipulation, I don’t think I’ve ever seen things as bad as they are now. It’s worth at least trying something. </p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>A.W. Ohlheiser</name>
			</author>
			
			<title type="html"><![CDATA[Congress’s online child safety bill, explained]]></title>
			<link rel="alternate" type="text/html" href="https://www.vox.com/technology/352251/kosa-congress-online-child-safety-bill-explained" />
			<id>https://www.vox.com/?p=352251</id>
			<updated>2024-05-29T16:23:33-04:00</updated>
			<published>2024-05-30T07:00:00-04:00</published>
			<category scheme="https://www.vox.com" term="Big Tech" /><category scheme="https://www.vox.com" term="Congress" /><category scheme="https://www.vox.com" term="Facebook" /><category scheme="https://www.vox.com" term="Google" /><category scheme="https://www.vox.com" term="Instagram" /><category scheme="https://www.vox.com" term="Meta" /><category scheme="https://www.vox.com" term="Politics" /><category scheme="https://www.vox.com" term="Privacy &amp; Security" /><category scheme="https://www.vox.com" term="Snapchat" /><category scheme="https://www.vox.com" term="Social Media" /><category scheme="https://www.vox.com" term="Tech policy" /><category scheme="https://www.vox.com" term="Technology" /><category scheme="https://www.vox.com" term="TikTok" /><category scheme="https://www.vox.com" term="Twitter" /><category scheme="https://www.vox.com" term="WhatsApp" /><category scheme="https://www.vox.com" term="YouTube" />
							<summary type="html"><![CDATA[It’s tough to feel urgency about something that progresses in slow motion. Bear with me, though, because it is time, once again, to care about the Kids’ Online Safety Act, otherwise known as KOSA, a federal bill that was designed to protect children from online harms.&#160; The bill has been hanging around in Congress in [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="The logos for Facebook, Messenger, and Instagram’s mobile apps are seen on a close up photograph of a mobile phone screen. " data-caption="The bill, with versions in the House and Senate, would require online platforms to take actions to protect the safety of users under the age of 17. | Nikolas Kokovlis/NurPhoto via Getty Images" data-portal-copyright="Nikolas Kokovlis/NurPhoto via Getty Images" data-has-syndication-rights="1" src="https://platform.vox.com/wp-content/uploads/sites/2/2024/05/gettyimages-2150468146.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
	The bill, with versions in the House and Senate, would require online platforms to take actions to protect the safety of users under the age of 17. | Nikolas Kokovlis/NurPhoto via Getty Images	</figcaption>
</figure>
<p class="has-text-align-none">It’s tough to feel urgency about something that progresses in slow motion. Bear with me, though, because it is time, <a href="https://www.vox.com/recode/2023/2/15/23599879/congress-children-safety-online-big-tech">once again</a>, to care about the Kids’ Online Safety Act, otherwise known as <a href="https://www.congress.gov/bill/118th-congress/senate-bill/1409/text">KOSA</a>, a federal bill that was designed to protect children from online harms.&nbsp;</p>

<p class="has-text-align-none">The bill has been hanging around in Congress in some form since 2022, when Sens. Richard Blumenthal (D-CT) and Marsha Blackburn (R-TN) <a href="https://www.washingtonpost.com/technology/2022/02/16/kids-online-safety-act-unveiled-blackburn-blumenthal/">introduced their bipartisan response</a> to a series of congressional hearings and investigations into online child safety. While KOSA’s specific provisions have changed in the years since, the central goal of the legislation remains the same: legislators want to make platforms more responsible for the well-being of kids who use their services, and provide tools to parents so that they can manage how younger people use the internet.&nbsp;</p>

<p class="has-text-align-none">The danger posed to minors by the&nbsp;internet has long been simultaneously a <a href="https://www.apa.org/topics/social-media-internet/health-advisory-adolescent-social-media-use">real threat</a> and a <a href="https://www.washingtonpost.com/technology/2024/02/01/online-safety-hearing-opposition/">moral panic</a>. It&#8217;s a political issue that has bipartisan support, while also appearing to be extremely difficult to govern without infringing on First Amendment protections.&nbsp;</p>

<p class="has-text-align-none">KOSA was born after Facebook whistleblower Frances Haugen revealed, among other things, <a href="https://www.vox.com/recode/2021/10/3/22707940/frances-haugen-facebook-whistleblower-60-minutes-teen-girls-instagram">that Meta had evidence</a> its platforms were harming the mental health of teens, and did nothing to mitigate those harms (Facebook has <a href="https://www.nytimes.com/2021/10/03/technology/whistle-blower-facebook-frances-haugen.html">previously said</a> that it believes Haugen’s claims are misleading). The environment in which the bill’s sponsors sought support, however, is rife with evidence of how such legislation might be misused for partisan reasons.</p>

<p class="has-text-align-none">The conservative think tank Heritage Foundation <a href="https://x.com/Heritage/status/1660111875818790913?s=20">has said directly</a> that it would seek to use measures like KOSA to restrict access to content about sexual and gender identity online. And while revised versions of the bill seek to address this concern, Fight for the Future, a digital rights advocacy group, has gathered <a href="https://www.fightforthefuture.org/news/2024-05-20-new-letter-lgbtq-and-reproductive-rights-groups-tell-congress-to-take-their-concerns-about-kosa-seriously-trans-youth-and-abortion-seekers-are-at-risk/">a coalition of organizations</a> that believe the current version of the bill <a href="https://19thnews.org/2024/03/why-some-lgbtq-groups-oppose-the-current-kids-online-safety-act/">still leaves LGBTQ+ youth vulnerable</a> to censorship and harm by limiting self-expression and cutting off minors from access to information.</p>

<h2 class="wp-block-heading"><strong>So, what is KOSA, exactly?&nbsp;</strong></h2>

<p class="has-text-align-none">Here’s why we’re talking about KOSA now: The latest <a href="https://www.congress.gov/bill/118th-congress/senate-bill/1409/text">Senate version</a> of the bill has enough votes to pass. And recently, legislators in the <a href="https://www.congress.gov/bill/118th-congress/house-bill/7891/text">House introduced their own version</a> of the bill, which differs in some ways from the Senate version but is <a href="https://www.nbcnews.com/politics/congress/congress-online-data-privacy-bills-kosa-big-tech-rcna153576">on track</a> to go before the full Energy and Commerce Committee in June.&nbsp;</p>

<p class="has-text-align-none">The House bill is progressing alongside <a href="https://energycommerce.house.gov/posts/the-american-privacy-rights-act-puts-people-in-control-of-their-data">another privacy measure</a> that more generally addresses data security standards. The two KOSA bills have <a href="https://www.vox.com/recode/2023/2/15/23599879/congress-children-safety-online-big-tech">bipartisan support</a>, and follow a <a href="https://www.vox.com/technology/24139556/internet-without-tiktok-biden-signs-ban">successful push to pass a law</a> that could ban the short-video platform TikTok.&nbsp;</p>

<p class="has-text-align-none">Both KOSA bills aim to achieve their goals by requiring the following:&nbsp;&nbsp;</p>

<ul class="wp-block-list">
<li><strong>Online services covered by the bill would need to take measures to prevent harm to users under the age of 17</strong>. The House and Senate bills have different definitions of the platforms and harms to which this provision would apply. Both have language requiring platforms to mitigate harm related to certain mental health disorders, compulsive social media usage, physical violence, sexual exploitation, and drug use.&nbsp;</li>
</ul>

<ul class="wp-block-list">
<li><strong>Covered sites would have to introduce limitations into the design of their platform on how minors use it.</strong> For instance, KOSA would require platforms to limit the ability of other users to communicate with minors, limit personalized recommendation features for minors, and limit features that encourage minors to spend more time on the app — including infinite scroll and autoplay, features that are characteristic of TikTok’s For You page and widely imitated by other social media platforms.&nbsp;</li>
</ul>

<ul class="wp-block-list">
<li><strong>These platforms would also need to offer parental tools</strong> that allow management of a minor user’s privacy, ability to purchase in-app items, and time spent on the platform. Platforms would also need to have a reporting system specifically for content that may cause harm to a minor.&nbsp;</li>
</ul>

<h2 class="wp-block-heading"><strong>The downsides of prioritizing online safety</strong></h2>

<p class="has-text-align-none">If it passes — and that’s still a big if — KOSA would be the first major reform to rules governing online child safety since the Children’s Online Privacy Protection Act (COPPA), a 1998 law that regulates how a wide range of sites must handle information collected on users under 13. While COPPA does allow companies to collect information on these users with parental consent, the rules have, <a href="https://www.axios.com/2021/12/09/instagram-tik-tok-age-verification-child-safety">practically speaking</a>, led many major platforms to simply ban users under 13 from having an account at all.&nbsp;&nbsp;</p>

<p class="has-text-align-none">No matter the intent, however, many privacy advocates are skeptical of KOSA. While some recent changes won the support of a few national organizations, the measure has struggled to gain support from LGBTQ+ organizations, which are concerned that the provisions could be used to restrict younger people’s access to resources about their identity. And while KOSA has undergone a few major revisions to address those fears, <a href="https://19thnews.org/2024/03/why-some-lgbtq-groups-oppose-the-current-kids-online-safety-act/">not all advocates are convinced</a>.&nbsp;</p>

<p class="has-text-align-none">The ACLU remains skeptical of KOSA, for instance. In <a href="https://www.aclu.org/press-releases/revised-kids-online-safety-act-is-an-improvement-but-congress-must-still-address-first-amendment-concerns">a statement earlier this year</a>, the civil liberties organization said that the bill would still harm the First Amendment rights of adults by incentivizing the removal of anonymous browsing on wide swaths of the internet, and by encouraging platforms to “censor protected speech” in order to ensure compliance with the bill’s provisions. Likewise, the <a href="https://www.eff.org/deeplinks/2024/02/dont-fall-latest-changes-dangerous-kids-online-safety-act">Electronic Frontier Foundation</a> has called the revised KOSA measure an “unconstitutional censorship bill” that would provide too much power to state attorneys general to determine how these provisions are actually enforced.&nbsp;</p>

<p class="has-text-align-none">Pushes to regulate the internet have a <a href="https://www.vox.com/recode/2022/3/14/22971618/earn-it-sesta-fosta-children-safety-internet-laws">deep connection</a> to calls to protect children from its harms. That makes sense in some ways: The internet hosts a great deal of things that can be harmful to kids and adults alike, from privacy violations to networked harassment to incentivizing sensationalist and inaccurate content. But online access is always a both/and situation: the internet is both harmful to and a lifeline for young people. And it seems that organizations representing the interests of many marginalized communities aren’t convinced that KOSA will balance this. </p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>A.W. Ohlheiser</name>
			</author>
			
			<title type="html"><![CDATA[You searched Google. The AI hallucinated an answer. Who’s legally responsible?]]></title>
			<link rel="alternate" type="text/html" href="https://www.vox.com/technology/351189/google-ai-overview-section-230" />
			<id>https://www.vox.com/?p=351189</id>
			<updated>2024-05-23T13:45:12-04:00</updated>
			<published>2024-05-23T09:00:00-04:00</published>
			<category scheme="https://www.vox.com" term="Artificial Intelligence" /><category scheme="https://www.vox.com" term="Big Tech" /><category scheme="https://www.vox.com" term="Google" /><category scheme="https://www.vox.com" term="Innovation" /><category scheme="https://www.vox.com" term="Technology" />
							<summary type="html"><![CDATA[Google’s shift toward using AI to generate a written answer to user searches instead of providing a list of links ranked algorithmically by relevance was inevitable. Before AI Overview — introduced last week for US users — Google had Knowledge Panels, those information boxes that appear toward the top of some searches, incentivizing users to [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="Google CEO Sundar Pichai stands on a stage in front of a photo gallery at Google&#039;s I/O developer conference. " data-caption="Google CEO Sundar Pichai speaks at Google I/O in Mountain View, California, on May 14. At the developer conference, everything revolved around the topic of artificial intelligence (AI). | &lt;p&gt;Christoph Dernbach/picture alliance via Getty Images&lt;/p&gt;" data-portal-copyright="&lt;p&gt;Christoph Dernbach/picture alliance via Getty Images&lt;/p&gt;" data-has-syndication-rights="1" src="https://platform.vox.com/wp-content/uploads/sites/2/2024/05/gettyimages-2152425718.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
	Google CEO Sundar Pichai speaks at Google I/O in Mountain View, California, on May 14. At the developer conference, everything revolved around the topic of artificial intelligence (AI). | <p>Christoph Dernbach/picture alliance via Getty Images</p>	</figcaption>
</figure>
<p class="has-text-align-none">Google’s shift toward using AI to generate a written answer to user searches instead of providing a list of links ranked algorithmically by relevance was inevitable. Before <a href="https://developers.google.com/search/docs/appearance/ai-overviews">AI Overview</a> — introduced last week for US users — Google had Knowledge Panels, those information boxes that appear toward the top of some searches, incentivizing users to get their answers directly from Google, rather than clicking through to a result.&nbsp;</p>

<p class="has-text-align-none">AI Overview summarizes search results for a portion of queries, right at the top of the page. The results draw from multiple sources, which are cited in a drop-down gallery under the summary.&nbsp;As with any AI-generated response, these answers vary in quality and reliability.&nbsp;</p>

<p class="has-text-align-none">Overview has told users to <a href="https://qz.com/google-ai-overview-blinker-fluid-1851488465">change their blinker fluid</a> — which does not exist — seemingly because it picked up on joke responses from forums where users seek car advice from their peers. In a test I ran on Wednesday, Google was able to correctly generate instructions for doing a pushup, drawing heavily from the instructions in a<a href="https://www.nytimes.com/2022/05/18/well/move/how-to-master-the-push-up.html"> New York Times article</a>. Less than a week after launching this feature, Google announced that it is <a href="https://mashable.com/article/google-ai-overviews-ads">trying out ways to incorporate ads</a> into its generative responses.&nbsp;</p>

<p class="has-text-align-none">I’ve been writing about Bad Stuff online for years now, so it’s not a huge surprise that, upon gaining access to AI Overview, I started googling a bunch of things that might cause the generative search tool to pull from unreliable sources. The results were mixed, and they seemed to rely a lot on the exact phrasing of my question.&nbsp;</p>

<p class="has-text-align-none">When I typed in queries asking for information on two different people who are widely associated with dubious natural “cures” for cancer, I received one generated answer that simply repeated that person’s claims uncritically. For the other name, the Google engine declined to create generative responses.&nbsp;</p>

<p class="has-text-align-none">Results on basic first aid queries — such as how to clean a wound — pulled from reliable sources to generate an answer when I tried it. Queries about “detoxes” repeated unproven claims and were missing important context.&nbsp;</p>

<p class="has-text-align-none">But rather than try to get a handle on how reliable these results are overall, there’s another question to ask here: If Google’s AI Overview gets something wrong, who is responsible if that answer ends up hurting someone?&nbsp;</p>

<h2 class="wp-block-heading">Who’s responsible for AI?</h2>

<p class="has-text-align-none">The answer to that question may not be simple, according to Samir Jain, the vice president of policy at the Center for Democracy and Technology. <a href="https://www.vox.com/recode/2020/5/28/21273241/section-230-explained-supreme-court-social-media">Section 230</a> of the<a href="https://firstamendment.mtsu.edu/article/communications-decency-act-and-section-230/#:~:text=Congress%20enacted%20the%20Communications%20Decency,explicit%20materials%20on%20the%20internet."> 1996 Communications Decency Act</a> largely protects companies like Google from liability over the third-party content posted on its platforms because Google is not treated as a publisher of the information it hosts.</p>

<p class="has-text-align-none">It’s “less clear” how the law would apply to AI-generated search answers, Jain said. AI Overview makes Section 230 protections a little messier because it’s harder to tell whether the content was created by Google or simply surfaced by it.&nbsp;</p>

<p class="has-text-align-none">“If you have an AI overview that contains a hallucination, it&#8217;s a little difficult to see how that hallucination wouldn&#8217;t have at least in part been created or developed by Google,” Jain said. But a hallucination is different from surfacing bad information. If Google’s AI Overview quotes a third party that is itself providing inaccurate information, the protections would still likely apply.&nbsp;&nbsp;</p>

<p class="has-text-align-none">A bunch of other scenarios are stuck in a gray area for now: Google’s generated answers are drawing from third parties but not necessarily directly quoting them. So is that original content, or is it more like the snippets that appear under search results?&nbsp;</p>

<p class="has-text-align-none">While generative search tools like AI Overview represent new territory in terms of Section 230 protections, the risks are not hypothetical. Apps that say they can <a href="https://www.washingtonpost.com/technology/2024/03/18/ai-mushroom-id-accuracy/">use AI to identify mushrooms</a> for would-be foragers are already available in app stores, despite evidence that these tools aren’t super accurate. Even in Google’s demo of their new video search, a&nbsp;factual error was generated, <a href="https://www.theverge.com/2024/5/14/24156729/googles-gemini-video-search-makes-factual-error-in-demo">as The Verge noticed</a>.&nbsp;&nbsp;</p>

<h2 class="wp-block-heading">Eating the source code of the internet</h2>

<p class="has-text-align-none">There’s another question here beyond when Section 230 may or may not apply to AI-generated answers: the incentives that AI Overview does or does not contain for the creation of reliable information in the first place. AI Overview relies on the web continuing to contain plenty of researched, factual information. But the tool also seems to make it harder for users to click through to those sources.&nbsp;</p>

<p class="has-text-align-none">“Our main concern is about the potential impact on human motivation,” Jacob Rogers, associate general counsel at the Wikimedia Foundation, said in an email. “Generative AI tools must include recognition and reciprocity for the human contributions that they are built on, through clear and consistent attribution.”&nbsp;&nbsp;</p>

<p class="has-text-align-none">The Wikimedia Foundation hasn’t seen a major drop in traffic to Wikipedia or other Wikimedia projects as a direct result of AI chatbots and tools to date, but Rogers said that the foundation was monitoring the situation. Google has, in the past, relied on Wikipedia to populate its Knowledge Panels, and draws from its work to provide fact-check pop-up boxes on, for instance, YouTube videos on controversial topics.&nbsp;&nbsp;&nbsp;</p>

<p class="has-text-align-none">There’s a central tension here that’s worth watching as this technology becomes more prevalent. Google has an incentive to present its AI-generated answers as authoritative. Otherwise, why would you use them?&nbsp;</p>

<p class="has-text-align-none">“On the other hand,” Jain said, “particularly in sensitive areas like health, it will probably want to have some kind of disclaimer or at least some cautionary language.”&nbsp;</p>

<p class="has-text-align-none">Google’s AI Overview contains a small note at the bottom of each result clarifying that it is an experimental tool. And, based on my unscientific poking around, I’d guess that Google has opted for now to avoid generating answers on some controversial topics.&nbsp;&nbsp;</p>

<p class="has-text-align-none">The Overview will, with some tweaking, generate a response to questions about its own potential liability. After a couple of dead ends, I asked Google, “Is Google a publisher.”&nbsp;</p>

<p class="has-text-align-none">“Google is not a publisher because it doesn’t create content,” begins the reply. I copied that sentence and pasted it into another search, surrounded by quotes. The search engine found 0 results for the exact phrase.</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>A.W. Ohlheiser</name>
			</author>
			
			<title type="html"><![CDATA[Teletherapy can really help, and really hurt]]></title>
			<link rel="alternate" type="text/html" href="https://www.vox.com/technology/24158103/betterhelp-online-therapy-privacy-issues" />
			<id>https://www.vox.com/technology/24158103/betterhelp-online-therapy-privacy-issues</id>
			<updated>2024-05-16T14:23:53-04:00</updated>
			<published>2024-05-16T14:30:00-04:00</published>
			<category scheme="https://www.vox.com" term="Health" /><category scheme="https://www.vox.com" term="Mental Health" /><category scheme="https://www.vox.com" term="Privacy &amp; Security" /><category scheme="https://www.vox.com" term="Technology" />
							<summary type="html"><![CDATA[The US has a therapist shortage. Even if you can find one, good luck snagging an appointment with a therapist who is affordable or covered by your insurance.&#160; That&#8217;s where online therapy platforms like BetterHelp come in. Even if you&#8217;re not a user, you&#8217;ve almost certainly heard of them. For years, BetterHelp blanketed the online [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="Jakub Porzycki/NurPhoto via Getty Images" data-has-syndication-rights="1" src="https://platform.vox.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/25450890/1541927546.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p>The US has a <a href="https://data.hrsa.gov/topics/health-workforce/shortage-areas">therapist shortage</a>. Even if you can find one, <a href="https://www.vox.com/2023/8/4/23815827/mental-health-therapy-services-health-insurance">good luck snagging an appointment</a> with a therapist who is affordable or covered by your insurance.&nbsp;</p>

<p>That&rsquo;s where online therapy platforms like BetterHelp come in. Even if you&rsquo;re not a user, you&rsquo;ve almost certainly heard of them. For years, BetterHelp blanketed the online world with advertisements and enlisted an army of podcasters, <a href="https://www.vox.com/influencers" data-source="encore">influencers</a>, and creators to produce sponsored content that promotes their service as a solution.</p>

<p>According to podcast analytics firm Magellan AI,&nbsp;BetterHelp <a href="https://www.magellan.ai/research/quarterly-benchmark-report">spent an estimated </a>$24.6 million on podcast ads in the first quarter of 2024, more than any other company in the audio space. These ads, sometimes personal and direct to camera, sometimes funny, and always earnest, have turned BetterHelp into probably the most recognizable sponsor of stuff people listen to on the internet.&nbsp;(Disclosure: BetterHelp advertises on several <a href="https://corp.voxmedia.com/" data-source="encore">Vox Media</a> podcasts.)</p>

<p>But the efficiency behind BetterHelp&rsquo;s infinite expansion,&nbsp;it seems, can come at a cost to patients, as about 800,000 current or former clients are learning this week. <a href="https://www.theverge.com/2023/3/2/23622227/betterhelp-customer-data-advertising-privacy-facebook-snapchat">In 2023</a>, the FTC said that BetterHelp shared the sensitive data it collected on its users with advertisers, seemingly without their consent and without provisions in place to limit how that data was then used. <a href="https://apnews.com/article/betterhelp-ftc-health-data-privacy-befca40bb873661d1f8986bb75d8df07">According to the AP</a>, BetterHelp has said it was simply adhering to practices that were &ldquo;standard for the industry.&rdquo;</p>

<p>BetterHelp and the FTC eventually reached a settlement, and now anyone who signed up for BetterHelp between August 1, 2017, and December 31, 2020, <a href="https://www.ftc.gov/news-events/news/press-releases/2024/05/betterhelp-customers-will-begin-receiving-notices-about-refunds-related-2023-privacy-settlement-ftc">is being notified of their eligibility</a> for a refund.<a href="https://www.betterhelp.com/ftc-settlement/"> In a recent statement</a>, BetterHelp said that the settlement was &ldquo;not an admission of wrongdoing,&rdquo; and clarified that the company did not share the &ldquo;private information&rdquo; of its members, such as their names or clinical data from sessions, with third parties.</p>

<p>There&rsquo;s a lesson here about applying the optimization and efficiency tactics of a tech startup to something as vital and as sensitive as <a href="https://www.vox.com/mental-health" data-source="encore">mental health</a> care for people who may be in crisis. When these strategies intersect with a field that requires expertise and treats people in vulnerable moments, something&rsquo;s almost bound to go awry. If you&rsquo;ve been paying attention to the race to optimize therapy with technology, this is<a href="https://www.statnews.com/2022/12/13/telehealth-facebook-google-tracking-health-data/"> a lesson you&rsquo;ve seen taught before</a>.&nbsp;&nbsp;&nbsp;</p>
<h2 class="wp-block-heading">Mental health care goes remote</h2>
<p>The start of the <a href="https://www.vox.com/coronavirus-covid19" data-source="encore">Covid-19 pandemic</a> in 2020 caused an abrupt shift in therapy access. Regulations and insurance policies restricting the feasibility of teletherapy were loosened. Some therapists and patients found that meeting online worked better for them. As <a href="https://www.vox.com/science-and-health/21427156/what-is-teletherapy-mental-health-online-pandemic">Brian Resnick wrote for Vox at the time</a>, the onset of the pandemic was a &ldquo;much-needed kick forward into the 21st century&rdquo; for mental <a href="https://www.vox.com/health-care" data-source="encore">health care</a>.&nbsp;</p>

<p>For those seeking therapy, though, the new ease of finding virtual care meant sorting through waves of targeted ads on social media from companies that might not prioritize the quality of care over growing their businesses. Cerebral, founded in early 2020, enticed patients with easy, subscription-based access to virtual psychiatric treatment, including prescriptions for medication treating ADHD. But mental health professionals and patients raised worrying questions about the quality of that treatment. Underpaid providers were, <a href="https://www.bloomberg.com/news/features/2022-03-11/cerebral-app-over-prescribed-adhd-meds-ex-employees-say">according to Bloomberg</a>, pressured to meet the expectations of patients who signed up for the service after seeing ads on social media promising quick and seamless access to medications.&nbsp;</p>

<p>These concerns led to investigations. The New York attorney general&rsquo;s office <a href="https://ag.ny.gov/press-release/2023/attorney-general-james-secures-740000-online-mental-health-provider-its">fined Cerebral $740,000</a> over its &ldquo;burdensome&rdquo; cancellation policies and for manipulating online reviews. Cerebral, like BetterHelp, <a href="https://www.ftc.gov/news-events/news/press-releases/2024/04/proposed-ftc-order-will-prohibit-telehealth-firm-cerebral-using-or-disclosing-sensitive-data">also recently settled with the FTC</a>, which had accused the company of disclosing sensitive user data for advertising purposes, misleading customers over its cancellation policies, and violating the Opioid Addiction Recovery Fraud Prevention Act. As a result of the settlement, Cerebral has agreed to pay more than $7 million.&nbsp;</p>

<p>The patient experience with services like these will vary. Plenty of people who sign up for teletherapy through services like BetterHelp will have their needs met. I currently use a telehealth service provided by my insurance that matched me with a local practitioner in order to access medication, and I&rsquo;m happy with it.&nbsp;</p>

<p>But during a more acute mental health crisis in 2021, I signed up for another teletherapy and medication platform that I&rsquo;d seen advertised. While it was indeed easy for me to gain access to treatments for my anxiety, I ended up dropping my practitioner, and the service, months later after she missed an emergency appointment I&rsquo;d made, and then spent the majority of the make-up appointment venting to me about a personal crisis in her life.&nbsp;</p>
<h2 class="wp-block-heading">When things go wrong</h2>
<p>Mistakes happen, but when services in something as vital as mental health go wrong, people get hurt. I still think a lot about a <a href="https://www.wsj.com/articles/the-failed-promise-of-online-mental-health-treatment-11671390353">2022 story in the Wall Street Journal</a>, which detailed what this can look like.&nbsp;</p>

<p>Caleb Hill, a young adult who had been kicked out of his family home after coming out as gay, signed up for BetterHelp, requesting a therapist who specializes in <a href="https://www.vox.com/lgbtq" data-source="encore">LGBTQ</a>+ issues. He was instead matched with a therapist whose private practice offers Christian counseling, and who told Hill that &ldquo;either you sacrifice your family or you sacrifice being gay,&rdquo; Hill told the paper. A former BetterHelp employee told the Journal that part of the issue was how the company&rsquo;s focus on growth leads to minimal training and oversight for its therapists:&nbsp;</p>

<p>&ldquo;I felt they were treated like Uber drivers,&rdquo; Sonya Bruner, BetterHelp&rsquo;s first clinical director and later a consultant to the company, told the Journal. &ldquo;There are a lot of good counselors on there,&rdquo; she said, &ldquo;but you also find counselors who aren&rsquo;t, who do the minimum. They don&rsquo;t get paid a lot, so they&rsquo;ll phone it in.&rdquo;</p>

<p>I&rsquo;ve thought a lot about the roles that technology can and can&rsquo;t play in increasing access to mental health care like therapy. I&rsquo;ve tried whenever possible to keep an open mind. I can see, for example, that there&rsquo;s a pretty compelling argument to be made that,<a href="https://www.vox.com/technology/2023/12/14/24000435/chatbot-therapy-risks-and-potential"> deployed ethically</a>, chatbots and other implementations of AI can help improve patient access and results when seeking mental health care.&nbsp;</p>

<p>Even at their best, though, these tools and services are basically patches in an <a href="https://www.vox.com/23890764/healthcare-insurance-marketplace-open-enrollment-employer-sponsored-united-blue-cross-shield-aetna">expensive system </a>that creates its own barriers to care. And when technological solutions to mental health access do go wrong, the cost is steep: private, sensitive data is sacrificed to feed growth. And people, often in vulnerable situations, get hurt.&nbsp;</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>A.W. Ohlheiser</name>
			</author>
			
			<title type="html"><![CDATA[How TikTok Shop ads turned an obscure, inaccurate book into a bestseller]]></title>
			<link rel="alternate" type="text/html" href="https://www.vox.com/24152358/tiktok-shop-ads-lost-book-of-herbal-remedies-bestseller" />
			<id>https://www.vox.com/24152358/tiktok-shop-ads-lost-book-of-herbal-remedies-bestseller</id>
			<updated>2024-05-08T18:13:13-04:00</updated>
			<published>2024-05-09T09:00:00-04:00</published>
			<category scheme="https://www.vox.com" term="Social Media" /><category scheme="https://www.vox.com" term="Technology" /><category scheme="https://www.vox.com" term="TikTok" />
							<summary type="html"><![CDATA[If you&#8217;ve spent enough time scrolling through TikTok, you might have seen a video from an account like @tybuggyreviews, a handle with half a million followers that exclusively posts videos selling products through the TikTok Shop.&#160; The creator, whose verified Instagram account identifies him as Tarik Garrett, used the @tybuggyreviews account to pitch viewers on [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="Photo Illustration by Omar Marques/SOPA Images/LightRocket via Getty Images" data-has-syndication-rights="1" src="https://platform.vox.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/25439470/1786821049.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p>If you&rsquo;ve spent enough time scrolling through <a href="https://www.vox.com/tiktok" data-source="encore">TikTok</a>, you might have seen a video from an account like @tybuggyreviews, a handle with half a million followers that exclusively posts videos selling products through the TikTok Shop.&nbsp;</p>

<p>The creator, whose<a href="https://www.instagram.com/itstybuggy/"> verified Instagram account</a> identifies him as Tarik Garrett, used the @tybuggyreviews account to pitch viewers on supplements, water flossers, earbuds, workout machines, bible study guides, probiotics for women to help &ldquo;that smell down there,&rdquo; watch bands, inspirational hoodies, inspirational T-shirts, face massagers, foot massagers, rhinestone necklaces, oil pulling kits, and colon cleanses.&nbsp;</p>

<p>In the TikTok Shop, creators earn a commission for each sale linked to their account. Garrett&rsquo;s product videos got tens of thousands of views. A few even topped a million views. But nothing from his account took off quite like his sales pitch for an obscure 2019 publication called <em>The</em> <em>Lost Book of Herbal Remedies</em>.&nbsp;</p>

<p>&ldquo;Now I see why they&rsquo;re trying to remove TikTok. This book right here? This book of herbal remedies? They do not want us to see this book,&rdquo; Garrett said at the beginning of one Shop video, referring to a new <a href="https://www.vox.com/technology/24139556/internet-without-tiktok-biden-signs-ban">US law</a> that requires TikTok&rsquo;s Chinese parent company to either sell the app or face a ban. TikTok is <a href="https://www.nbcnews.com/tech/tech-news/tiktok-sues-us-government-says-ban-violates-first-amendment-rcna151059">challenging the law in court,</a> arguing that lawmakers citing <a href="https://www.vox.com/defense-and-security" data-source="encore">national security</a> concerns as a reason to pass the bill did not adequately argue why those concerns should supersede the First Amendment. The law, to be clear, does not cite the <em>Lost Book of Herbal Remedies</em>&rsquo;s availability on the TikTok Shop as a reason for banning the platform.</p>

<p>Garrett posted his pitch for the book on April 15. As of May 7, the video had more than 16 million views. Garrett opened the book and showed pages of its recommendations, urging users to take screenshots (and purchase a copy of their own) before it&rsquo;s too late.&nbsp;</p>

<p>The camera lingered on a list of plants that, the book claimed, were treatments for cancer, drug addiction, heart attacks, and herpes. As of Wednesday, the listing for <em>The Lost Book of Herbal Remedies</em> that Garrett linked to has more than 60,000 sales on the TikTok Shop. To put that number in perspective, appearing on a bestseller list <a href="https://www.vox.com/culture/2017/9/13/16257084/bestseller-lists-explained">generally requires 5,000&ndash;10,000 sales in a week</a>.</p>

<p>And that interest isn&rsquo;t staying exclusively on TikTok. <a href="https://www.vox.com/google" data-source="encore">Google</a> search interest in the book&rsquo;s title <a href="https://trends.google.com/trends/explore?date=today%203-m&amp;geo=US&amp;q=lost%20book%20of%20herbal%20remedies&amp;hl=en">spiked </a>on the same day Garrett posted his video. <em>The Lost Book of Herbal Remedies</em> was, as of Wednesday, May 8, ranked No. 10 on <a href="https://www.vox.com/amazon" data-source="encore">Amazon</a>&rsquo;s bestseller list for books, and has appeared toward the top of Amazon&rsquo;s bestseller rankings for the past three weeks.&nbsp;</p>

<p>I sent a handful of Garrett&rsquo;s videos advertising the book, along with about a half dozen additional widely viewed videos from other creators promoting <em>The Lost Book of Herbal Remedies</em>, to TikTok for comment. A spokesperson for TikTok said that videos linking to Shop products must abide by both the community guidelines, which ban medical misinformation, and Shop <a href="https://www.vox.com/policy" data-source="encore">policies</a>, which do not allow misleading content. If a video violates only the Shop policies, they said, they&rsquo;ll simply remove the link to the Shop but keep the content up. If it violates community guidelines, the video comes down.&nbsp;</p>

<p>In Garrett&rsquo;s case, the violations were enough for TikTok to remove his product review account. Garrett did not respond to a series of emailed questions.&nbsp;</p>
<h2 class="wp-block-heading">How e-commerce took over TikTok</h2>
<p>TikTok has long been good at guessing what its users might want to see, but less good at monetizing that trick. When the platform <a href="https://www.vox.com/technology/2023/9/14/23872449/tiktok-shop-for-you-page-ads">launched its Shop feature in the United States last fall</a>, the For You page shifted, pushing video after video like those made by @tybuggyreviews in the hope that users will start buying the products that go viral on TikTok directly from its store.&nbsp;</p>

<p>The result became a For You page with constant interruptions from random product pitches. Right now, for instance, my For You page shows me a bunch of creators dancing to a <a href="https://www.tiktok.com/@bodowartke/video/7360660825725619489?lang=en">German song about rhubarb</a>, a bunch of pet birds behaving poorly, chaotic nonbinary people, and lots of ads from alternative <a href="https://www.vox.com/health" data-source="encore">wellness</a> creators trying to sell me oils, mushrooms, and books.&nbsp;</p>

<p>The Shop ads I see, like much of the content pushed to me on TikTok, are personalized, though my TikTok Shop recommendations are heavily influenced by my reporting on stories like this one. Your results may differ. And yet, it is clear that TikTok has catapulted the <em>Remedies </em>book into relevance beyond a niche audience. The company earns money off of the explosion of sales on the shop, some of which come from creators who are explicitly promoting unproven cancer &ldquo;cures&rdquo; and conspiracy theories about the platform.&nbsp;</p>

<p>Like the Shadow Work Journal, a workbook that went super viral on TikTok Shop several months ago as a <a href="https://www.vox.com/mental-health" data-source="encore">mental health</a> tool &mdash; <a href="https://www.theatlantic.com/technology/archive/2023/09/shadow-work-journal-popularity-tiktok-diy-self-help/675483/">despite its dubious effectiveness</a> &mdash; <em>The Lost Book of Herbal Remedies</em> is part of a swell of wellness creators, brands, and products that have found success reaching new audiences on TikTok Shop.&nbsp;&nbsp;</p>

<p>Shop videos have become a sort of &ldquo;loophole&rdquo; for health misinformation on TikTok, said Evan Thornburg, a bioethicist who posts on TikTok as <a href="https://www.tiktok.com/@gaygtownbae?lang=en">@gaygtownbae</a> and studies mis/disinformation and <a href="https://www.vox.com/public-health" data-source="encore">public health</a>. Creators, and those with something to sell, know that <a href="https://www.vox.com/technology/23902094/tiktok-shop-wellness-trend-castor-oil">Shop videos will get privileged on For You pages</a>. Some creators may use those videos to promote dangerous health claims. In other cases, Thornburg noted, &ldquo;the creator promoting the material isn&rsquo;t necessarily spouting off disinformation, but the material that they&rsquo;re convincing people to purchase is.&rdquo;&nbsp;</p>
<h2 class="wp-block-heading">A recipe for misinformation</h2>
<p><em>The Lost Book of Herbal Remedies </em>appears to be a case of both: The book contains misleading information, and creators are circulating misleading health claims in order to sell books. A video with nearly 1 million views promoting the book&rsquo;s TikTok Shop listing is basically a series of ominous, <a href="https://www.vox.com/2023/4/28/23702644/artificial-intelligence-machine-learning-technology" data-source="encore">AI</a>-generated images with an AI voiceover. The video claims that the book contains secrets previously locked away in an ancient book located in the &ldquo;Vatican library,&rdquo; and that <em>The Lost Book of Herbal Remedies</em> was previously only available on the &ldquo;dark web&rdquo; before surfacing on TikTok. (Not true: The book is for sale on Amazon and on the author&rsquo;s website, and appears to be available through some academic and public library systems.) Another Shop video with more than 1 million views is captioned, &ldquo;Cure for over 550 diseases, even cancer.&rdquo;&nbsp;</p>

<p>I scanned through a copy of <em>The Lost Book of Herbal Remedies</em> this week. The 300-page book contains a disclaimer noting that it&rsquo;s intended to &ldquo;provide information about natural medicine, cures, and remedies that people have used in the past,&rdquo; that it is not medical advice, and that some of the &ldquo;remedies and cures found within do not comply with FDA guidelines.&rdquo; It&rsquo;s split into two parts: an alphabetical listing of ailments and conditions alongside the plants that the authors believe can cure or treat them, and an alphabetical list of plants, sorted by region, with instructions on how to prepare them.&nbsp;</p>

<p>The book&rsquo;s list of ailments includes proposed treatments for cancer, several STDs, mental health disorders, and digestive issues, among many other things. A few stand out: The book lists cures for smallpox, strep, and staph infections. There&rsquo;s an emergency medicine section that includes plant remedies for serious medical conditions like internal bleeding and poisoning.&nbsp;</p>

<p>Flip to the entries for the plants and you&rsquo;ll find lists of claims referring to research that is not cited. An entry promoting ashwagandha&rsquo;s &ldquo;anti tumor effects&rdquo; and ability to &ldquo;kill &#8230; cancerous cells&rdquo; refers to &ldquo;research,&rdquo; but does not note that, while there is some indication that ashwagandha can slow the growth of cancer cells, <a href="https://www.mskcc.org/cancer-care/integrative-medicine/herbs/ashwagandha">these studies </a>were conducted on rodents and have yet to be replicated in humans.&nbsp;</p>

<p>Nicole Apelian, one of the book&rsquo;s authors, did not reply to an emailed request for comment. While she is active on TikTok, it&rsquo;s not her main social media presence. Her TikTok bio encourages her 17,000 followers there to check her out on <a href="https://www.vox.com/instagram-news" data-source="encore">Instagram</a>, where she has 100,000 followers. Apelian also runs Nicole&rsquo;s Apothecary, an herbal shop mentioned in the book that sells some of the tinctures she recommends, sells memberships to an online &ldquo;Academy&rdquo; for fans of her book, and advertises her paid appearances and workshops.&nbsp;</p>
<h2 class="wp-block-heading">The endless whack-a-mole</h2>
<p>As a journalist, there&rsquo;s a pattern that becomes evident when writing about health misinformation on social media: something gets views, you assess the real or potential harm and try to understand its context, you contact the company to ask about the harmful thing. Maybe the video or post or group is taken down, maybe it&rsquo;s not. The company gives you a statement, refers you to their policies on misinformation, and then you publish the article. This happens <a href="https://www.vox.com/technology/23902094/tiktok-shop-wellness-trend-castor-oil">over</a> and <a href="https://www.vox.com/technology/2023/7/29/23811639/tiktok-borax-challenge-dangerous-laundry-detergent">over</a> and <a href="https://www.technologyreview.com/2020/05/07/1001469/facebook-youtube-plandemic-covid-misinformation/">over </a>and <a href="https://www.washingtonpost.com/lifestyle/style/they-turn-to-facebook-and-youtube-to-find-a-cure-for-cancer--and-get-sucked-into-a-world-of-bogus-medicine/2019/06/25/6df3ddae-7cdc-11e9-a5b3-34f3edf1351e_story.html">over</a> because writing about misleading health information is a game of whack-a-mole that feels <a href="https://www.vox.com/technology/24127540/huberman-lab-science-misleading-information-andrew-huberman-podcasts-joe-rogan-health-medicine">harder and harder to win</a>.&nbsp;</p>

<p>Thornburg, the bioethicist, noted a couple of reasons why I can&rsquo;t climb out of this purgatory. First, meaningful moderation of a platform like TikTok is somewhat implausible. Social <a href="https://www.vox.com/media" data-source="encore">media companies</a> are &ldquo;never going to prioritize the amount of labor that would need to consistently be put into misinformation management,&rdquo; they said.&nbsp;</p>

<p>Most sites rely on a combination of human moderators and AI, and it&rsquo;s difficult to create automated moderation tools that don&rsquo;t also censor allowed content. For example: health misinformation targeting minority communities often taps into<a href="https://www.technologyreview.com/2020/12/28/1015613/covid-vaccine-doctor-tiktok-instagram/"> legitimate distrust of medical professionals and institutions</a> that have roots in recent history. An AI tool designed to moderate keywords associated with this sort of targeted misinformation might also sweep up criticism of <a href="https://www.vox.com/health-care" data-source="encore">health care</a> systems in general.&nbsp;</p>

<p>And second, the creators who profit off health misinformation are really good at figuring out what they can say where, and what Thornburg calls &ldquo;life boating&rdquo; their audiences from one platform to another as needed. &ldquo;You will have people who will drive interest in something through TikTok because the virality and the algorithm are aggressive,&rdquo; Thornburg said. Then, their profile will link out to their Instagram or Linktree or <a href="https://www.vox.com/youtube" data-source="encore">YouTube</a> channel.&nbsp;&nbsp;</p>

<p>Health misinformation on social media is a million cross-pollinating moving targets. TikTok Shop is a hot spot right now. Later, it might be something else on another platform. Chasing this content from platform to platform, harm to harm, viral video to viral video, is exhausting. I am exhausted.&nbsp;</p>

<p>At the end of our interview, Thornburg shared the question that drives a lot of their work in this space: &ldquo;Who do we consider accountable for these things that are harmful and regulate them or hold them to certain standards?&rdquo; Often, it&rsquo;s not really the person behind an individual piece of content who drives the incentives for making it.&nbsp;</p>

<p>As a result of my reporting, Garrett&rsquo;s account was taken down, along with a few other popular videos advertising a book that has already sold tens of thousands of copies. As long as the incentives remain, it won&rsquo;t be long until the next product promising a miracle starts polluting my For You page.</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>A.W. Ohlheiser</name>
			</author>
			
			<title type="html"><![CDATA[The misleading information in one of America’s most popular podcasts]]></title>
			<link rel="alternate" type="text/html" href="https://www.vox.com/technology/24127540/huberman-lab-science-misleading-information-andrew-huberman-podcasts-joe-rogan-health-medicine" />
			<id>https://www.vox.com/technology/24127540/huberman-lab-science-misleading-information-andrew-huberman-podcasts-joe-rogan-health-medicine</id>
			<updated>2024-05-02T13:12:22-04:00</updated>
			<published>2024-05-02T13:05:00-04:00</published>
			<category scheme="https://www.vox.com" term="Podcasts" /><category scheme="https://www.vox.com" term="Social Media" /><category scheme="https://www.vox.com" term="Technology" />
							<summary type="html"><![CDATA[Sometimes, misleading information is easy to spot, traveling in the same conspiracy-theory-slicked grooves it has for decades. The same ideas that undermined belief in the safety of Covid-19 vaccines have been around for more than a century, adapting the same message to suit new media formats, new epidemics, and new influential endorsements. In a way, [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="Andrew Huberman, a neurobiology professor and host of the Huberman Lab podcast, attending INBOUND 2023 in Boston, Mass. | Photo by Chance Yeh/Getty Images for HubSpot" data-portal-copyright="Photo by Chance Yeh/Getty Images for HubSpot" data-has-syndication-rights="1" src="https://platform.vox.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/25386746/1666644893.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
	Andrew Huberman, a neurobiology professor and host of the Huberman Lab podcast, attending INBOUND 2023 in Boston, Mass. | Photo by Chance Yeh/Getty Images for HubSpot	</figcaption>
</figure>
<p>Sometimes, misleading information is easy to spot, traveling in the same conspiracy-theory-slicked grooves it has for decades. The same ideas that undermined belief in the safety of Covid-19 vaccines have been around for <a href="https://www.washingtonpost.com/news/wonk/wp/2015/02/05/meet-the-crunchy-chemical-hating-anti-vaccine-conspiracy-theorists-from-100-years-ago/">more than a century</a>, adapting the same message to suit new media formats, new epidemics, and new <a href="https://www.thecut.com/2022/02/anti-vaxx-celebrities-are-coming-out-of-the-woodwork.html">influential endorsements</a>. In a way, George Bernard Shaw&rsquo;s<a href="https://sites.utexas.edu/ransomcentermagazine/files/2014/10/Q_RM_281_S528_001.jpg"> outspoken opposition</a> to the smallpox <a href="https://www.nytimes.com/1931/07/27/archives/shaw-letter-calls-vaccination-crime-author-of-doctors-dilemma-is.html">vaccine</a> in the first half of the 20th century is not unlike, say, <a href="https://www.nytimes.com/2021/11/05/sports/football/coronavirus-aaron-rodgers.html">Aaron Rodgers&rsquo;s misleading statements</a> about the Covid-19 vaccines.&nbsp;&nbsp;</p>

<p>Such misleading information is relatively easy to see. But spotting other kinds of misleading information is more like <a href="https://exoplanets.nasa.gov/alien-worlds/ways-to-find-a-planet/#">identifying planets in other star systems</a>. It&rsquo;s difficult to find such a planet by just taking a direct image; the radiation from the star the planet orbits can obscure it. Instead, you might look for the shadow in front of the star or the &ldquo;wobble&rdquo; of a star caused by the gravitational pull of an orbiting planet. You find it by looking around it.</p>

<p>Over time, with this kind of misleading information, you learn to spot the wobble, the tells that something might not be right. This is what happened for me when I began to listen to <a href="https://www.hubermanlab.com/"><em>Huberman Lab</em></a><em> </em>last fall.&nbsp;</p>

<p><em>Huberman Lab</em> is one of the most popular podcasts in the country, led by Stanford neuroscientist Andrew Huberman. His most ardent fans &mdash; and there <a href="https://vidiq.com/youtube-stats/channel/UC2D2CMWXMOVWx7giW1n3LIg/">are millions</a> &mdash; tend to be fitness enthusiasts, self-optimizers, and crossover listeners who heard about his podcast from other influencers in the Joe Rogan Extended Universe. Huberman looms large in the minds of his biggest fans. If you&rsquo;re outside of that circle, perhaps you heard of his work after a <a href="https://nymag.com/intelligencer/article/andrew-huberman-podcast-stanford-joe-rogan.html#/">New York magazine profile</a> earlier this year detailed his personal conduct.&nbsp;</p>

<p>The podcast&rsquo;s premise is simple: presenting science-based overviews and conversations on a broad range of topics, from <a href="https://www.youtube.com/watch?v=n9IxomBusuw">longevity</a> to <a href="https://www.youtube.com/watch?v=CJIXbibQ0jI">mental health</a> to <a href="https://www.youtube.com/watch?v=q37ARYnRDGc">nutrition</a>. <a href="https://time.com/6290594/andrew-hubman-lab-podcast-interview/">A fawning profile in Time magazine</a> last summer credited Huberman with getting America to care about science again. More than anything, though, the episodes I listened to conveyed a promise: If you want to optimize your body and mind, science has the answers, and all we need to do is listen. It&rsquo;s a riveting promise, one that Huberman is not alone in making.&nbsp;</p>

<p>Silicon Valley, in particular, is filled with <a href="https://www.nytimes.com/2024/01/12/business/bryan-johnson-longevity-blueprint.html">wellness guides</a> and<a href="https://www.theguardian.com/science/2022/feb/17/if-they-could-turn-back-time-how-tech-billionaires-are-trying-to-reverse-the-ageing-process"> well-funded laboratories </a>seeking the secret to living the best and longest life. There are other well-credentialed promises of cures and solutions circulating, especially on podcasts, a format that seems to lend itself to this slippage between the reputable and the freewheeling.&nbsp;</p>

<p>Huberman&rsquo;s rise to popularity during the Covid-19 pandemic should have been a win for information: Huberman, an associate professor of neurobiology at Stanford with what appeared to be an active lab, was a respected researcher in his field of visual neuroscience, and he filled his multi-hour podcast episodes with citations and caution.</p>

<p>Popular science communication isn&rsquo;t always the best science communication. The implicit pact that Huberman&rsquo;s podcast makes with its audience &mdash; that it will, if you listen and follow, help you optimize your life &mdash; has turned the podcast into a powerful force that shapes how his audience of millions understands science. But listeners of <em>Huberman Lab</em> may be, at times, hearing what some call an illusion.&nbsp;</p>
<h2 class="wp-block-heading">When good communication goes bad</h2>
<p>In late March, <a href="https://nymag.com/intelligencer/article/andrew-huberman-podcast-stanford-joe-rogan.html#/">New York magazine reported</a> that Huberman&rsquo;s Stanford laboratory &ldquo;barely exists&rdquo; and that, according to multiple women who dated him during his rise to fame, Huberman had manipulated and lied to his partners (Huberman&rsquo;s spokesperson denied both of these allegations to the magazine, which shares a corporate owner with Vox).&nbsp;</p>

<p>The profile was one tell &mdash; obscuring aspects of his personal and professional lives. But even before it came out, subject-matter experts on the topics Huberman covered had been questioning some of the science of the podcast itself.&nbsp;</p>

<p>This liminality, or in-betweenness, of <em>Huberman Lab</em> is key to its success. When speaking about vaccines, Huberman is no Alex Jones or Aaron Rodgers. He&rsquo;s a real scientist who cites real studies. He approaches topics that might end up drawing scrutiny with a great deal of caution.</p>

<p>For example, Huberman never <em>tells </em>his audience to avoid the flu vaccine. All he&rsquo;s saying is that he doesn&rsquo;t take it himself. And yet, the subtext is there. &ldquo;Now, personally, I don&rsquo;t typically get the flu shot. And the reason for that is that I don&rsquo;t tend to go into environments where I am particularly susceptible to getting the flu,&rdquo; Huberman said in an <a href="https://www.hubermanlab.com/episode/how-to-prevent-treat-colds-flu">episode earlier this year </a>on avoiding and treating the cold and flu.</p>

<p>He went on: &ldquo;When you take the flu shot, you&rsquo;re really hedging a bet. You&rsquo;re hedging a bet against the fact that you will be or not be exposed to that particular strain of flu virus that&rsquo;s most abundant that season, or strains of flu virus that are most abundant that season, and that the flu shot that you&rsquo;re taking is directed at those particular strains.&rdquo; Make the choice that&rsquo;s right for you, Huberman says. Talk to your doctor.&nbsp;</p>

<p>&ldquo;He&rsquo;s a good communicator, right? That&rsquo;s why he&rsquo;s a star,&rdquo; <a href="https://www.ualberta.ca/law/faculty-and-research/health-law-institute/people/timothycaulfield.html">Tim Caulfield</a>, a professor of health law and science policy at the University of Alberta, told me in late 2023. Huberman often does a &ldquo;very good job&rdquo; talking about the science behind a topic he&rsquo;s exploring in an episode, Caulfield added, but &ldquo;in the end, the overall takeaway, I think, is less supported by the science than the impression you&rsquo;re given listening to the episode.&rdquo;&nbsp;</p>

<p>Instead of recommending a flu shot, Huberman introduces his listeners to a series of other ideas. <a href="https://www.immunologic.org">Andrea Love</a>, a microbiologist, immunologist, and science communicator herself, wrote a four-part <a href="https://immunologic.substack.com/p/hubermans-cold-and-flu-podcast-hypes?r=58g9p&amp;utm_campaign=post&amp;utm_medium=web&amp;triedRedirect=true">newsletter series</a> addressing Huberman&rsquo;s claims in greater detail. She says he promoted possibly using a sauna to improve immune function, <a href="https://immunologic.substack.com/p/hubermans-cold-and-flu-podcast-hypes?r=58g9p&amp;utm_campaign=post&amp;utm_medium=web&amp;triedRedirect=true">citing a study</a> that had just 20 participants and <a href="https://www.tandfonline.com/doi/full/10.1080/02656736.2023.2179672">did not directly measure</a> immune function. She says he promoted the potential use of <a href="https://www.mcgill.ca/oss/article/critical-thinking-health-and-nutrition/you-probably-dont-need-green-ag1-smoothie">unproven supplements</a>, including those sold by AG1, a company that partners with Huberman and <a href="https://www.hubermanlab.com/sponsors">sponsors his podcast.</a> Huberman and his spokesperson did not respond to a request for comment on Love&rsquo;s characterization of this episode.</p>

<p>For Love, it was easy to see <em>Huberman Lab</em> as sleight of hand even before the New York magazine story was published. The ingredients were there: Huberman is a magnetic personality capable of capturing attention with implied promises of the secrets to longevity, a perfect body, a perfect mind, even perfect sleep &mdash; much of which he says can be achieved with the help of the supplements that he himself advertises.&nbsp;</p>

<p>Love was part of a cohort of<a href="https://twitter.com/LabMuffin/status/1628913655113986048"> scientists</a> and <a href="https://www.conspirituality.net/episodes/163-the-huberman-paradox-jonathan-jarry">public health communicators</a> who raised concerns about Huberman&rsquo;s wildly popular podcast over several months. When Huberman had Robert Lustig on as a guest, those concerns grew louder. Lustig is a pediatric endocrinologist at the University of California San Francisco (UCSF), but he&rsquo;s perhaps best known for <a href="https://www.theguardian.com/lifeandstyle/2014/aug/24/robert-lustig-sugar-poison">arguing that sugar</a>, particularly fructose, is a &ldquo;toxin.&rdquo; Love, who said that Lustig&rsquo;s claims about the uniquely causal relationship between fructose and childhood obesity remain <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4822166/">unproven</a>, listened to the conversation between the two scientists. (Disclosure: I recently accepted a contract for non-editorial freelance work at <a href="https://www.ucsfhealth.org/">UCSF Health</a>.)</p>

<p>&ldquo;I was floored with how many different types of misinformation he was able to shove into a single episode,&rdquo; Love said earlier this year, after listening to the majority of Huberman&rsquo;s three-hour interview with Lustig. Like many of Huberman&rsquo;s lengthy episodes, this one racked up <a href="https://www.youtube.com/watch?v=n28W4AmvMDE">millions of views</a> on YouTube alone. <a href="https://www.theverge.com/2023/11/29/23981468/apple-replay-spotify-wrapped-podcasts-rogan-crime-junkie-alex-cooper">In 2023,</a> <em>Huberman Lab</em> was the eighth most listened to podcast on Apple Podcasts, and the third most popular on Spotify.&nbsp;</p>

<p>As she listened, she took notes, marking moments where she felt the podcast omitted important facts, misinterpreted the progression of disease, or provided confusing information to listeners.&nbsp;</p>

<p>At one point, Lustig cited a study that he said &ldquo;showed&rdquo; ultra-processed foods inhibit bone growth &mdash; one that, according to Huberman&rsquo;s exchange with Lustig, used human subjects in Israel to test its claims. Love tracked down the <a href="https://www.nature.com/articles/s41413-020-00127-9">2021 paper easily</a>. &ldquo;This was in vivo &#8211; IN RODENTS,&rdquo; she wrote in her notes.</p>

<p>In her view, the podcast was &ldquo;outright LYING to listeners.&rdquo;&nbsp;</p>

<p>A spokesperson for Andrew Huberman responded to a request for comment by noting that the podcast team &ldquo;review studies mentioned on the podcast by guests, however the conclusions drawn by guests are their own and our guests are the foremost experts in their fields.&rdquo; The show links to referenced studies in the show notes for each episode.</p>
<h2 class="wp-block-heading">Misleading information can be hard to see</h2>
<p>Nailing down Huberman&rsquo;s beliefs is, likewise, tricky, straddling the line between endorsement and implication. In October, Huberman commented on an<a href="https://www.instagram.com/p/CthQLWUuKIn/?hl=en"> Instagram post</a> by his friend Joe Rogan promoting an interview with Robert F. Kennedy Jr., <a href="https://www.vox.com/politics/24112624/rfk-jr-kennedy-shanahan-running-mate-2024-election">the presidential candidate</a> who was once a respected environmental lawyer but is now perhaps best known for promoting conspiracy theories about vaccines, including those for Covid-19.&nbsp;</p>

<p>&ldquo;I&rsquo;m eager to listen to this and to learn more about Robert&rsquo;s stance on a number of issues. Whenever I run into him at the gym, he is extremely gracious and asks lots of questions about science and, by my observation, trains hard too!&rdquo; Huberman&rsquo;s verified Instagram account posted.&nbsp;</p>

<p>When I told Caulfield about this post, he described it as &ldquo;infuriating.&rdquo; Huberman and his spokesperson did not respond to a request for comment on his post about Kennedy.&nbsp;</p>

<p>&ldquo;Any kind of legitimization and normalization of that rhetoric, especially by someone who professes to be informed by science and has the credentials of a renowned institution behind him should be ashamed of doing that,&rdquo; he said.&nbsp;</p>

<p>Huberman&rsquo;s relationship to the information in his podcast can be viewed through a series of glancing blows; through the subtext of deciding not to take the flu vaccine himself and telling that to his audience; through serious questions about how he handles himself in romantic relationships; and through the selection of his guests, the framing of his episodes, and his friends.&nbsp;</p>

<p>Although Huberman has not directly responded to the New York magazine piece since its publication, his friends in the podcasting world, along with several more right-leaning media personalities, have called it a hit piece, and dismissed criticism of Huberman as either<a href="https://www.youtube.com/watch?v=i6UNeIC6PQk"> sloppy </a>or mean-spirited. &ldquo;Andrew should be celebrated. Period,&rdquo; <a href="https://twitter.com/lexfridman/status/1772381127971627402">wrote Lex Fridman</a>, a computer scientist and podcaster who has long been one of Huberman&rsquo;s friends.&nbsp;And it appears his podcast viewers <a href="https://www.theverge.com/2024/4/3/24119336/andrew-huberman-spotify-audiobook-apple-nyt-audio">are still tuning in</a>.&nbsp;</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>A.W. Ohlheiser</name>
			</author>
			
			<title type="html"><![CDATA[Imagining an internet without TikTok]]></title>
			<link rel="alternate" type="text/html" href="https://www.vox.com/technology/24139556/internet-without-tiktok-biden-signs-ban" />
			<id>https://www.vox.com/technology/24139556/internet-without-tiktok-biden-signs-ban</id>
			<updated>2024-04-24T17:48:37-04:00</updated>
			<published>2024-04-25T07:00:00-04:00</published>
			<category scheme="https://www.vox.com" term="Privacy &amp; Security" /><category scheme="https://www.vox.com" term="Social Media" /><category scheme="https://www.vox.com" term="Tech policy" /><category scheme="https://www.vox.com" term="Technology" />
							<summary type="html"><![CDATA[The bill to require TikTok to separate from its Chinese parent company or face a nationwide ban made it to President Joe Biden&#8217;s desk on Wednesday as part of a huge foreign aid package that passed through Congress this week. And Biden, as he previously promised, signed the bill into law.&#160;&#160; ByteDance now has nine [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="Photo illustration by Joe Raedle/Getty Images" data-has-syndication-rights="1" src="https://platform.vox.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/25416552/2150026435.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
</figure>
<p>The bill to require <a href="https://www.vox.com/tiktok" data-source="encore">TikTok</a> to separate from its Chinese parent company or<a href="https://www.vox.com/politics/24094839/tiktok-ban-bill-congress-pass-biden"> face a nationwide ban </a>made it to <a href="https://www.vox.com/joe-biden" data-source="encore">President Joe Biden</a>&rsquo;s desk on Wednesday as part of a <a href="https://www.vox.com/politics/2024/4/22/24137598/ukraine-aid-tiktok-ban-house-mike-johnson">huge foreign aid package</a> that passed through <a href="https://www.vox.com/congress" data-source="encore">Congress</a> this week. And Biden, as he previously promised, <a href="https://www.theverge.com/2024/4/24/24139036/biden-signs-tiktok-ban-bill-divest-foreign-aid-package">signed the bill into law</a>.&nbsp;&nbsp;</p>

<p>ByteDance now has nine months to sell TikTok, a deadline that Biden can opt to extend once by 90 days. And while TikTok could avoid a ban with a successful sale or<a href="https://www.platformer.news/tiktok-ban-bill-senate-legal-challenge-first-amendment/"> court challenge</a>, the new law means Americans might want to start imagining an online world without TikTok.&nbsp;</p>

<p>The push to either <a href="https://www.vox.com/2023/3/23/23653325/tiktok-ban-us-china-congress" data-source="encore">ban TikTok</a> or excise the platform from its owner has been around for years. For instance, then-<a href="https://www.vox.com/donald-trump" data-source="encore">President Trump</a><a href="https://www.washingtonpost.com/technology/2020/07/31/tiktok-trump-divestiture/"> announced plans to ban the app</a> in the summer of 2020, although Trump now says <a href="https://www.washingtonpost.com/politics/2024/04/22/trump-tiktok-ban-reversal-biden/">he thinks banning TikTok is a bad idea </a>and that people should be mad at Biden about it.&nbsp;</p>

<p>The threat of a TikTok ban has always been a little weird and complicated, drawing from a mixture of valid concerns and questionable moral panics about the ills of social media. As I&rsquo;ve written previously, TikTok&rsquo;s moderation failures and <a href="https://www.vox.com/privacy" data-source="encore">data privacy</a> concerns are <a href="https://www.vox.com/technology/24100104/banning-tiktok-us-senate-ineffective-and-harmful-bill">hardly unique</a>, even as some lawmakers seem to persist in holding <a href="https://www.theverge.com/2023/3/24/23654831/tiktok-congressional-hearing-xenophobia-china">TikTok uniquely responsible</a> for perpetuating them.&nbsp;</p>

<p>With that in mind, let&rsquo;s break down the implications of this new law, why it&rsquo;s happening, and what the internet would look like if TikTok disappeared.&nbsp;</p>
<h2 class="wp-block-heading">What you need to know about the ban</h2>
<p>Now that Biden has signed the bill into law, ByteDance has at least nine months &mdash; and possibly one year &mdash; to sell TikTok. It&rsquo;s not clear, though, whether the law will survive a court challenge, which TikTok has already vowed to mount.&nbsp;</p>

<p>The government is likely prepared for this, as the new law was the result of<a href="https://www.nytimes.com/2024/04/24/technology/tiktok-ban-congress.html"> years of planning</a> by lawmakers, a push that triggered waves of opposition from TikTok executives and the app&rsquo;s huge user base, which includes 170 million Americans, according to TikTok.&nbsp;</p>

<p>Congress made one earlier attempt to pass such a ban in March. That bill, which passed the House but didn&rsquo;t make it through the Senate, gave ByteDance just six months to sell TikTok. The new version&rsquo;s extended deadline may have helped sway some people in the Senate to vote for the bill. It certainly didn&rsquo;t hurt that the TikTok ban <a href="https://apnews.com/article/tiktok-ban-congress-bill-1c48466df82f3684bd6eb21e61ebcb8d">was attached to a $95 billion aid deal </a>that would provide support to Ukraine and <a href="https://www.vox.com/israel" data-source="encore">Israel</a>.&nbsp;</p>

<p>TikTok CEO Shou Chew <a href="https://www.tiktok.com/@tiktok/video/7361448925972155679">said Wednesday</a> that the app wasn&rsquo;t &ldquo;going anywhere&rdquo; and that the company believes the courts will ultimately find the ban unconstitutional, <a href="https://www.vox.com/politics/24094839/tiktok-ban-bill-congress-pass-biden">violating the First Amendment</a>. To prevail, the US government would need to meet a high standard to prove that a ban is necessary to protect the nation&rsquo;s security and privacy. Montana&rsquo;s statewide ban of TikTok was <a href="https://www.npr.org/2023/11/30/1205735647/montana-tiktok-ban-blocked-state">blocked by a federal judge </a>late last year as a likely violation of the First Amendment. The state is <a href="https://www.reuters.com/legal/montana-appealing-ruling-that-blocked-state-barring-tiktok-use-2024-01-03/">appealing that decision</a>.&nbsp;</p>

<p>ByteDance could also, you know, sell. However, the Chinese government <a href="https://www.axios.com/2023/03/23/china-tiktok-bytedance-forced-sale">has previously said</a> that it would oppose a forced sale of TikTok.&nbsp;</p>
<h2 class="wp-block-heading">Why is this happening?</h2>
<p>Great question!&nbsp;</p>

<p>The lawmakers leading the charge on this ban have <a href="https://www.cbsnews.com/news/tiktok-ban-congress-reasons-why/">cited national security concerns</a> stemming from the app&rsquo;s Chinese ownership. Specifically, they&rsquo;ve mentioned the possibility of the Chinese government accessing the data of American users and using the app to spread propaganda or influence foreign elections. Members of Congress have referred to information they learned in security briefings about the potential for TikTok to harm American interests, but <a href="https://www.bloomberg.com/opinion/articles/2024-04-23/if-tiktok-is-such-a-threat-show-us-the-receipts?leadSource=uverify%20wall">the contents of those briefings are not public</a>.</p>

<p>Arguments in favor of banning TikTok note that the app&rsquo;s Chinese ownership puts user data at risk of access by an unfriendly foreign government; critics note that the Chinese government could access a lot of the same data by simply <a href="https://www.scientificamerican.com/article/tiktok-ban-data-privacy-security/">buying it from a data broker</a>.&nbsp;</p>

<p>There&rsquo;s another driving force here, though. As we&rsquo;ve <a href="https://www.vox.com/technology/24100104/banning-tiktok-us-senate-ineffective-and-harmful-bill">previously noted</a>, the current push for a ban in Congress <a href="https://www.washingtonpost.com/technology/2023/11/13/tiktok-facebook-instagram-gaza-hastags/">gained a lot of attention </a>after a viral but unfounded accusation spread that TikTok was<a href="https://www.vox.com/culture/23997305/tiktok-palestine-israel-gaza-war"> brainwashing the youth of America </a>with anti-Israel content in the first days of the Israel-Hamas war. That narrative seemed to rekindle a lot of fears about the power of TikTok to become a propaganda tool.&nbsp;</p>
<h2 class="wp-block-heading">What changes if TikTok goes away in the US?</h2>
<p>This isn&rsquo;t the first time a major force in <a href="https://www.vox.com/internet-culture" data-source="encore">internet culture</a> has <a href="https://www.technologyreview.com/2022/12/15/1065013/twitter-brain-death/">faced extinction</a> (<a href="https://www.washingtonpost.com/technology/2023/09/28/extremely-online-vine-secret-meeting/">RIP Vine</a>). In fact, this churn is increasingly part of being online now. But TikTok, arguably, is the most influential online platform in the US, and it <a href="https://www.technologyreview.com/2020/08/06/1006079/instagram-reels-byte-triller-clash-tiktok-ban/">won&rsquo;t be easily </a>replaced. If TikTok goes away, other platforms will try to jump into the void it will leave.&nbsp;&nbsp;</p>

<p>As the <a href="https://www.washingtonpost.com/technology/2024/04/24/tiktok-ban-benefits-meta-google/">Washington Post&rsquo;s Will Oremus wrote</a>, a TikTok ban would provide an open space for <a href="https://www.vox.com/meta" data-source="encore">Meta</a> and <a href="https://www.vox.com/google" data-source="encore">Google</a> to move in. Meta has already adapted a lot of TikTok&rsquo;s features via Reels, and Google&rsquo;s <a href="https://www.vox.com/youtube" data-source="encore">YouTube</a> has its Shorts video format, but neither quite has the cultural force behind them that TikTok has right now.&nbsp;</p>

<p>A lot of bigger creators &mdash; those with resources, managers, and huge followings &mdash; will be able to switch to another platform, if they haven&rsquo;t already. Everyone else might see things shift more dramatically.&nbsp;</p>

<p>Earlier this year, Zari A. Taylor, a doctoral candidate at the University of North Carolina at Chapel Hill who studies media and culture, <a href="https://www.vox.com/technology/24100104/banning-tiktok-us-senate-ineffective-and-harmful-bill">explained to me </a>that the biggest loss to online culture if TikTok goes away will be in the uniqueness of how the platform promotes videos into user feeds. TikTok is good at recommending videos by accounts with small followings, whose makers are often not professional content creators. These creators &ldquo;don&rsquo;t have the audience that could help them evolve into other areas of the entertainment industry,&rdquo; she said, and will likely lose their audience should the ban stand.</p>

<p>In some ways, the constant threat of a ban has already taken a toll on TikTok&rsquo;s appeal for creators. After the first wave of TikTok ban threats back in 2020,<a href="https://www.technologyreview.com/2020/08/14/1006875/tiktok-ban-influencers-ryan-beard-hank-green/"> I spoke with Ryan Beard,</a> a creator who at the time had nearly 2 million TikTok followers. The threat of a ban from President Trump sent his livelihood into a spiral, and he accelerated his efforts to get views and followers on other apps. These days, he&rsquo;s all but stopped posting on TikTok and has instead become a <a href="https://www.youtube.com/c/mrbeard6001">commentary YouTuber</a>.&nbsp;</p>

<p>When TikTok rose in influence, it was better than any other app at showing users what they wanted to see, for better or for worse. Short, vertical-format videos might be available on any old platform these days, but the format doesn&rsquo;t replicate what keeps people scrolling.&nbsp;&nbsp;</p>

<p>Some, like Beard, will turn their modest TikTok success into views on another platform. For many others, even the threat of the ban is a harsh reminder of the realities of making content on the internet: Your livelihood is tied to the success and attention of platforms you don&rsquo;t control.&nbsp;</p>

<p><em>A version of this story was published in the Vox Technology newsletter. </em><a href="https://www.vox.com/pages/newsletters"><em><strong>Sign up here</strong></em></a><em> so you don&rsquo;t miss the next one!</em></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>A.W. Ohlheiser</name>
			</author>
			
			<title type="html"><![CDATA[The slow death of Twitter is measured in disasters like the Baltimore bridge collapse]]></title>
			<link rel="alternate" type="text/html" href="https://www.vox.com/technology/24113765/twitter-x-misinformation-baltimore-bridge-collapse" />
			<id>https://www.vox.com/technology/24113765/twitter-x-misinformation-baltimore-bridge-collapse</id>
			<updated>2024-03-27T16:03:35-04:00</updated>
			<published>2024-03-28T07:30:00-04:00</published>
			<category scheme="https://www.vox.com" term="Social Media" /><category scheme="https://www.vox.com" term="Technology" /><category scheme="https://www.vox.com" term="Twitter" />
							<summary type="html"><![CDATA[Line up a few years&#8217; worth of tragedies and disasters, and the online conversations about them will reveal their patterns.&#160; The same conspiracy-theory-peddling personalities who spammed X with posts claiming that Tuesday&#8217;s Baltimore bridge collapse was a deliberate attack have also called mass shootings &#8220;false flag&#8221; events and denied basic facts about the Covid-19 pandemic. [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="Baltimore’s Francis Scott Key Bridge collapsed after being struck by a cargo ship on March 26. | Scott Olson/Getty Images" data-portal-copyright="Scott Olson/Getty Images" data-has-syndication-rights="1" src="https://platform.vox.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/25358359/2117494777.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
	Baltimore’s Francis Scott Key Bridge collapsed after being struck by a cargo ship on March 26. | Scott Olson/Getty Images	</figcaption>
</figure>
<p>Line up a few years&rsquo; worth of tragedies and disasters, and the online conversations about them will reveal their patterns.&nbsp;</p>

<p>The same <a href="https://archive.is/gvOL9">conspiracy-theory-peddling</a> personalities who spammed X with posts claiming that Tuesday&rsquo;s <a href="https://www.vox.com/2024/3/26/24112776/baltimore-bridge-collapse-francis-scott-key-maryland-cargo-ship-explainer-analysis">Baltimore bridge collapse </a>was a deliberate attack have also called mass shootings &ldquo;false flag&rdquo; events and <a href="https://www.independent.co.uk/tv/news/andrew-tate-tucker-carlson-ukraine-covid-b2373786.html">denied basic facts </a>about the <a href="https://www.vox.com/coronavirus-covid19" data-source="encore">Covid-19 pandemic</a>. A Florida Republican running for Congress <a href="https://archive.is/QLoKA">blamed &ldquo;DEI&rdquo; </a>for the bridge collapse as racist comments <a href="https://newrepublic.com/post/180134/fox-news-racist-conspiracy-theory-baltimore-key-bridge-collapse">about immigration</a> and Baltimore <a href="https://twitter.com/chrislhayes/status/1772694080239063284">Mayor Brandon Scott</a> circulated among the far right. These comments echo Trump, who in 2019 called Baltimore a &ldquo;disgusting, rat and rodent infested mess,&rdquo; and in 2015 <a href="https://www.washingtonpost.com/politics/i-would-fix-it-fast-in-2015-trump-criticized-obama-for-not-doing-enough-to-help-baltimore/2019/07/29/d202ade2-b207-11e9-8f6c-7828e68cb15f_story.html">blamed</a> President Obama for the unrest in the city.&nbsp;</p>

<p>As conspiracy theorists compete for attention in the wake of a tragedy, others seek engagement through <a href="https://www.garbageday.email/p/ceos-want-podcasters-now">dubious </a><a href="https://www.404media.co/baltimore-amateur-bridge-engineers-have-logged-on/">expertise</a>, juicy speculation, or stolen video clips. The boundary between <a href="https://www.vox.com/technology/2023/10/12/23913472/misinformation-israel-hamas-war-social-media-literacy-palestine">conspiracy theory and engagement bait is permeable</a>; unfounded and provocative posts often outpace the trickle of verified information that follows any sort of major breaking news event. Then, the conspiracy theories become content, and a lot of people marvel and express outrage that they exist. Then they kind of forget about the raging river of Bad Internet until the next national tragedy.</p>

<p>I&rsquo;ve seen it so many times. I became a breaking news reporter in 2012, which means that in internet years, I have the experience of an almost ancient entity.&nbsp;The collapse of the Francis Scott Key bridge into the Patapsco River, though, felt a little different from most of these moments for me, for two reasons.&nbsp;</p>

<p>First, it was happening after a few big shifts in what the internet even is, as <a href="https://www.vox.com/twitter" data-source="encore">Twitter</a>, once a go-to space for following breaking news events, became an <a href="https://www.technologyreview.com/2022/12/15/1065013/twitter-brain-death/">Elon Musk-owned factory</a> for verified accounts with bad ideas, while <a href="https://www.vox.com/technology/24079459/sora-openai-video-tool-world-simulator">generative AI tools </a>have superpowered grifters wanting to make plausible text and visual fabrications. And second, I live in Baltimore. People I know commute on that bridge, which forms part of the city&rsquo;s Beltway. Some of the <a href="https://www.thebaltimorebanner.com/community/transportation/key-bridge-collapse-YDNMRSLMDREE7ADUZJQFQJ3WDA/">workers who fell,</a> now presumed dead, lived in a neighborhood across the park from me.&nbsp;</p>
<h2 class="wp-block-heading">The local cost of global misinformation</h2>
<p>On Tuesday evening, I called <a href="https://twitter.com/LisaESnowden">Lisa Snowden</a>, the editor-in-chief of the <a href="https://baltimorebeat.com/">Baltimore Beat</a><em> &mdash;</em> the city&rsquo;s Black-owned alt-weekly &mdash; and an influential presence in Baltimore&rsquo;s still pretty active X community. I wanted to talk about how following breaking news online has changed over time.&nbsp;</p>

<p>Snowden was up during the <a href="https://www.thebaltimorebanner.com/baltimore/key-bridge-collapse-ship-pilot-LKDWFY237JFR5NCR7KXZIWRDVA/">early morning hours </a>when the bridge collapsed. Baltimore&rsquo;s X presence is small enough that journalists like her generally know who the other journalists are working in the city, especially those reporting on Baltimore itself. Almost as soon as news broke about the bridge, though, she saw accounts she&rsquo;d never heard of before speaking with authority about what had happened, sharing unsourced video, and speculating about the cause.&nbsp;&nbsp;</p>

<p>Over the next several hours, the misinformation and racism about Baltimore snowballed on X. For Snowden, this felt a bit like an invasion into a community that had so far survived the slow death of what was once Twitter by simply staying out of the spotlight.</p>

<p>&ldquo;Baltimore Twitter, it&rsquo;s usually not as bad,&rdquo; Snowden said. She sticks to the people she follows. &ldquo;But today I noticed that was pretty much impossible. It got extremely racist. And I was seeing other folks in Baltimore also being like, &lsquo;This might be what sends me finally off this app.&rsquo;&rdquo;&nbsp;&nbsp;</p>

<p>Here are some of the tweets that got attention in the hours after the collapse: Paul Szypula, a MAGA <a href="https://www.vox.com/influencers" data-source="encore">influencer</a> with more than 100,000 followers on X, <a href="https://archive.is/jhmGL">tweeted</a> &ldquo;Synergy Marine Group [the company that owned the ship in question] promotes DEI in their company. Did anti-white business practices cause this disaster?&rdquo; alongside a screenshot of a page on the company&rsquo;s website that discussed the existence of a diversity and inclusion policy.&nbsp;</p>

<p>That tweet got more than 600,000 views. Another far-right influencer speculated that there was some connection between the collapse and, I guess, Barack Obama? I don&rsquo;t know. <a href="https://archive.is/mZSRg">The tweet</a> had racked up 5 million views as of midday Wednesday.</p>

<p>Being online during a tragic event is full of consequential nonsense like this, ideas and conspiracy theories that are inane enough to fall into the fog of <a href="https://www.washingtonpost.com/news/the-intersect/wp/2017/06/23/the-three-old-rules-that-explain-basically-the-entire-internet-in-2017/">Poe&rsquo;s Law</a> and yet harmful to actual people, and particularly painful to see when it&rsquo;s your community being turned into views. Sure, there are<a href="https://www.vox.com/technology/2023/10/12/23913472/misinformation-israel-hamas-war-social-media-literacy-palestine"> best practices you can follow</a> to try to contribute to a better information ecosystem in these moments. Those practices matter. But for Snowden, the main thing she can do as her newsroom gets to work reporting on the impact of this disaster on the community here is to let time march on.</p>

<p>&ldquo;In a couple days, this terrible racist mob, or whatever it is, is going to be onto something else,&rdquo; Snowden said. &ldquo;Baltimore &#8230; people are still going to need things. Everybody&rsquo;s still going to be working. So I&rsquo;m just kind of waiting it out,&rdquo; she said. &ldquo;But it does hurt.&rdquo;</p>

<p><em>A version of this story was published in the Vox Technology newsletter. </em><a href="https://www.vox.com/pages/newsletters"><em><strong>Sign up here</strong></em></a><em> so you don&rsquo;t miss the next one!</em></p>
						]]>
									</content>
			
					</entry>
	</feed>
