<?xml version="1.0" encoding="UTF-8"?><feed
	xmlns="http://www.w3.org/2005/Atom"
	xmlns:thr="http://purl.org/syndication/thread/1.0"
	xml:lang="en-US"
	>
	<title type="text">Andrew Gelman | Vox</title>
	<subtitle type="text">Our world has too much noise and too little context. Vox helps you understand what matters.</subtitle>

	<updated>2016-11-06T16:59:06+00:00</updated>

	<link rel="alternate" type="text/html" href="https://www.vox.com/author/andrew-gelman" />
	<id>https://www.vox.com/authors/andrew-gelman/rss</id>
	<link rel="self" type="application/atom+xml" href="https://www.vox.com/authors/andrew-gelman/rss" />

	<icon>https://platform.vox.com/wp-content/uploads/sites/2/2024/08/vox_logo_rss_light_mode.png?w=150&amp;h=100&amp;crop=1</icon>
		<entry>
			
			<author>
				<name>Andrew Gelman</name>
			</author>
			
			<title type="html"><![CDATA[Be skeptical when polls show the presidential race swinging wildly]]></title>
			<link rel="alternate" type="text/html" href="https://www.vox.com/the-big-idea/2016/11/6/13540646/poll-shifts-misleading-clinton-leads-trump" />
			<id>https://www.vox.com/the-big-idea/2016/11/6/13540646/poll-shifts-misleading-clinton-leads-trump</id>
			<updated>2016-11-06T11:59:06-05:00</updated>
			<published>2016-11-06T12:30:03-05:00</published>
			<category scheme="https://www.vox.com" term="Politics" /><category scheme="https://www.vox.com" term="The Big Idea" />
							<summary type="html"><![CDATA[What do the pre-election polls tell us? We&#8217;ve had some big swings: After the Democratic convention, Hillary Clinton seemed to be locking the election up, then Trump came back to a near-tie, then came a series of events &#8212; three debates, sexual assault revelations, and a war within the Republican party &#8212; which seemed to [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="&lt;a href=&quot;http://www.shutterstock.com&quot;&gt;Shutterstock&lt;/a&gt;" data-has-syndication-rights="1" src="https://platform.vox.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/15922244/shutterstock_382698661.0.0.1537509408.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
</figure>
<p>What do the pre-election polls tell us? We&#8217;ve had some big swings: After the Democratic convention, Hillary Clinton seemed to be locking the election up, then Trump came back to a near-tie, then came a series of events &mdash; three debates, sexual assault revelations, and a war within the Republican party &mdash; which seemed to knock Trump out of the race. Then, more recently, a series of FBI leaks brought the polls back to a near-tie. Put this together, and you get the impression of a volatile electorate: Anything can happen, and all may depend on voters&#8217; reaction to last-minute news of Chris Christie, Melania Trump, or whatever Julian Assange may have up his sleeve.</p>

<p>Actually, though, not many voters change their opinions during the general election campaign. This finding is borne out by <a href="http://www.stat.columbia.edu/~gelman/research/published/swingers.pdf">my research</a> with Sharad Goel, Doug Rivers, and David Rothschild on surveys during the 2012 campaign, <a href="http://www.slate.com/articles/news_and_politics/politics/2016/08/don_t_be_fooled_by_clinton_trump_polling_bounces.html">Alan Abramowitz&#8217;s analysis</a> of polls during the 2016 campaign showing a nearly perfect tracking between Clinton or Trump support in a survey and the proportion of Democrats or Republicans in the sample, and <a href="https://today.yougov.com/news/2016/11/01/beware-phantom-swings-why-dramatic-swings-in-the-p/">an analysis by Ben Lauderdale and Doug Rivers</a> of surveys during the recent campaign.</p>

<p>Why, then, do the polls swing so much? This can mostly be explained by differential nonresponse to pollsters: Clinton goes up when more Democrats answer a survey, and Trump goes up when Democrats are less likely to respond. As Lauderdale and Rivers put it, &#8220;when things are going badly for a candidate, their supporters tend to stop participating in polls.&#8221; This is how new events can have big effects on the polls even if they aren&#8217;t changing many vote intentions.</p>
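<p>To see how this works numerically, here is a minimal sketch (my illustration, not from any of the studies above) of a fixed 50-50 electorate producing double-digit poll &#8220;swings&#8221; purely through changes in who answers the phone:</p>

```python
# Hypothetical example: the electorate is fixed at 50-50, and only the
# response rates of the two parties' supporters change between "weeks."

def poll_topline(dem_response_rate, rep_response_rate, dem_share=0.5):
    """Share of poll respondents supporting the Democrat."""
    dem_respondents = dem_share * dem_response_rate
    rep_respondents = (1 - dem_share) * rep_response_rate
    return dem_respondents / (dem_respondents + rep_respondents)

# Good news week for Clinton: Democrats answer at 10%, Republicans at 8%.
good_week = poll_topline(0.10, 0.08)  # ~0.556, an 11-point "lead"
# Bad news week: the rates flip, with no voter changing their mind.
bad_week = poll_topline(0.08, 0.10)   # ~0.444, an 11-point "deficit"
print(good_week, bad_week)
```

<p>A 2-point gap in response rates is enough to move the topline by more than 10 points, with zero actual opinion change.</p>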

<p>For example, after the FBI letter on Clinton a week or so ago, I predicted that Clinton would fall in the polls &mdash; not because she was going to lose many votes but because the news would pump up a lot of Trump supporters who would then become more enthusiastic about the election and respond to surveys. The preceding weeks had been full of bad news for the Republican candidate, hence his supporters were dejected and not participating in polls. During that period, I and others suspected that Clinton&#8217;s lead in the polls had been exaggerating her strength among the electorate.</p>

<p><strong>Four questions remain:</strong></p>
<ol class="wp-block-list"><li>Why are we talking about all this now; why was differential nonresponse not a part of the conversation in previous elections?</li><li>Is this bias a problem with poll aggregators such as Nate Silver?</li><li>Does likely voter screening fix the problem?</li><li>Supposing that my colleagues and I are right that much of the swing in polls is explainable by differential nonresponse: Would this then also show up as differential voter turnout? In other words, would a failure to respond to polls predict failure to turn up on Election Day?</li></ol>
<p><strong>Here are my answers.</strong></p>
<ol class="wp-block-list"><li>Differential nonresponse is a bigger deal now than it used to be, for two reasons. First, survey response rates are lower: Not too many decades ago, quality polls had response rates over 50 percent; now a survey is lucky to get 10 percent participation. Responding to surveys has become much more optional, so we&#039;d expect differential nonresponse to matter more. Second, the electorate is more polarized, and fewer people change their minds during the campaign. Thus, compared with previous decades, the &quot;signal&quot; of actual swings is lower and the &quot;noise&quot; of nonresponse is higher, and we need to be concerned about this source of bias more than ever before.</li><li>Yes, poll aggregators are subject to this bias, because a poll average or poll-based model is only as good as the surveys that go into it. A forecasting model could attempt to correct for differential nonresponse by adjusting surveys based on the partisanship and recalled vote of their respondents, but this would require more information than poll aggregation sites usually collect.</li><li>No, likely voter screening won&#039;t fix differential nonresponse bias. Actually, it can make the problem worse. Poll fluctuations are driven by fluctuations in enthusiasm, and I&#039;d expect that screening for likely voters &mdash; which is just another measure of interest in the election &mdash; would exacerbate the bias and increase these artifactual swings.</li><li>Finally, what about voter turnout? Turnout in a US presidential general election is about 60 percent, while survey response rates are below 10 percent. Survey response is a far more optional act, so it makes sense to see much bigger swings in differential survey response than in differential turnout. So, yes, differential turnout in voting is real; it&#8217;s just not as big as differential nonresponse in surveys.</li></ol>
<p>Put it together, and what do we have? Very little evidence of opinion swings and months of polls that are consistent with a narrow but stable Clinton lead.</p>

<p><em>Andrew Gelman is a professor of statistics and political science and director of the Applied Statistics Center at Columbia University. He blogs at </em><a href="http://andrewgelman.com/"><em><strong>Statistical Modeling</strong></em></a><em>.</em></p>
<hr class="wp-block-separator" />
<p>The Big Idea is Vox&rsquo;s home for smart, often scholarly excursions into the most important issues and ideas in politics, science, and culture &mdash; typically written by outside contributors. If you have an idea for a piece, pitch us at <a href="mailto:thebigidea@vox.com"><strong>thebigidea@vox.com</strong></a>.</p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Andrew Gelman</name>
			</author>
			
			<title type="html"><![CDATA[Why you should be skeptical of wacky new studies about what sways elections]]></title>
			<link rel="alternate" type="text/html" href="https://www.vox.com/2016/9/9/12862066/election-studies-statistics-bogus-politics" />
			<id>https://www.vox.com/2016/9/9/12862066/election-studies-statistics-bogus-politics</id>
			<updated>2016-09-09T10:30:12-04:00</updated>
			<published>2016-09-09T11:20:08-04:00</published>
			<category scheme="https://www.vox.com" term="2016 Presidential Election" /><category scheme="https://www.vox.com" term="Politics" />
							<summary type="html"><![CDATA[This has been a rough year for pollsters and pundits, with prediction after prediction going painfully awry. Even those supposedly unflappable data journalists have found themselves stepping in it. But it&#8217;s not just the journalists and pollsters. Since I&#8217;m a professor of statistics as well as a blogger who often comments on academic papers that [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="Two scholars claimed this year that Fox News could shift the election by 12 percentage points. | Andy Kropa/Getty Images" data-portal-copyright="Andy Kropa/Getty Images" data-has-syndication-rights="1" src="https://platform.vox.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/5987923/GettyImages-121246189.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
	Two scholars claimed this year that Fox News could shift the election by 12 percentage points. | Andy Kropa/Getty Images	</figcaption>
</figure>
<p>This has been a rough year for pollsters and pundits, with prediction after prediction going painfully awry. Even those supposedly unflappable data journalists have found themselves stepping in it.</p>

<p>But it&rsquo;s not just the journalists and pollsters. Since I&rsquo;m a professor of statistics as well as a blogger who often comments on academic papers that I think misuse numbers, I have a front-row seat to some of the least persuasive academic takes on politics and elections. And it&rsquo;s been a big year for bad studies.</p>

<p>In journalism and polling, premature obituaries of Trump have been one common problem. In July 2015, the New York Times&rsquo;s Nate Cohn <a href="http://www.nytimes.com/2015/07/21/upshot/the-trump-campaigns-turning-point.html">remarked on</a> &#8220;a shift that will probably mark the moment when Trump&rsquo;s candidacy went from boom to bust.&#8221; (That was a reference to Trump crudely dismissing the war record of John McCain, the former Republican presidential nominee.) &#8220;His support will erode,&#8221; Cohn wrote confidently, &#8220;as the tone of coverage shifts from publicizing his anti-establishment and anti-immigration views &hellip; to reflecting the chorus of Republican criticism of his most outrageous comments and the more liberal elements of his record.&#8221;</p>

<p>Whoops. Only a month later, famed number cruncher Nate Silver gave Trump a <a href="http://fivethirtyeight.com/features/donald-trumps-six-stages-of-doom/">2 percent chance</a> of winning the Republican nomination.</p>
<h2 class="wp-block-heading">Gallup throws in the towel</h2>
<p>A couple of months after <em>that</em>, Gallup made the historic announcement that the organization would no longer do horse race&ndash;style election polling. You can see why this might be a smart time to get out of the predictions game.</p>

<p>I&#8217;d love to claim that I&#8217;m above all this myself, but really I too had no idea what would happen during the primary season. Whenever anyone asked me, I&#8217;d point them to <a href="http://campaignstops.blogs.nytimes.com/2011/11/29/why-are-primaries-hard-to-predict/">an article I wrote in 2011</a> explaining why primaries are hard to predict.</p>

<p>In short, in the general election voters have months to make their decisions, the choice is between two candidates who are ideologically distinct, and most voters can rely on party cues. In contrast, primaries come in a rushed sequence, competing candidates tend to be similar in ideology, and (of course) they come from the same party. And with multiple candidates comes the opportunity for strategic voting (casting a vote for someone you dislike to defeat someone you dislike even more), which is a hard thing to model.</p>
<p><q class="center" aria-hidden="true"><span>In recent years we have seen claims that political attitudes and preferences were determined by menstrual cycles and smiley face icons</span></q></p>
<p>In short, I avoided making any embarrassing predictions about primary election winners only by the tactic of avoiding making predictions, period &mdash; an option that was not so available to the Nates Cohn and Silver, who were expected to make real-time predictions (and who, to their credit, <a href="http://www.nytimes.com/2016/05/18/upshot/is-traditional-polling-underselling-donald-trumps-true-strength.html">examined</a> their <a href="http://fivethirtyeight.com/features/how-i-acted-like-a-pundit-and-screwed-up-on-donald-trump/">errors</a> afterward).</p>
<h2 class="wp-block-heading">One study alleged that the Democratic primary was rigged</h2>
<p>But academia has had no shortage of errant &#8220;findings&#8221; as well. This year, perhaps more than others, the internet has been swarming with conspiracy theories &mdash; some of these defended with statistical arguments.</p>
<p>In June, various people pointed me to <a href="https://drive.google.com/file/d/0B6mLpCEIGEYGYl9RZWFRcmpsZk0/view?pref=2&amp;pli=1">a paper</a> by Axel Geijsel and Rodolfo Cortes Barragan, graduate students at Tilburg University and Stanford, respectively, with the portentous title &#8220;Are we witnessing a dishonest election? A between state comparison based on the used voting procedures of the 2016 Democratic Party Primary for the Presidency of the United States of America.&#8221; (Yes, indeed: <em>that</em> presidency.)</p>
<p>The paper, issued before the primary race between Hillary Clinton and Bernie Sanders was decided, made the case that Sanders tended to win in states where electronic voting could be double-checked with a paper trail. Clinton, suspiciously &mdash; or &#8220;suspiciously&#8221; &mdash; tended to win when there was no paper trail. Moreover, Geijsel and Barragan wrote, the inaccuracy of exit polling supposedly rose in states without a paper trail, and the official results seemed biased toward Clinton.</p>

<p>The paper itself did not convince me, as there can be all sorts of differences between different states, and there&rsquo;s no reason to pick just one of these factors and give it a causal interpretation. It&rsquo;s what we call an observational comparison. You never know, fraud could always happen, but the paper <a href="http://andrewgelman.com/2016/06/15/30409/">supplied no useful evidence</a> that this difference was the one driving the election results. (Not that you&rsquo;d need an explanation as to why a 74-year-old socialist fails to win a major party nomination in the United States.)</p>

<p>But if going viral among Bernie followers counted in academia, these students would have tenure already.</p>
<h2 class="wp-block-heading">How much of a kingmaker is Fox News?</h2>
<p>Closer to the mainstream, in June, economics professors Ray Fisman and Andrea Prat, of Boston University and Columbia, posted <a href="http://www.slate.com/articles/news_and_politics/the_dismal_science/2016/06/fox_news_exerts_more_power_over_the_electorate_than_you_might_think.html">a piece</a> in Slate claiming that Fox News support for Donald Trump &#8220;could erase a 12-percentage-point Democratic lead in the popular vote.&#8221;</p>

<p>I&rsquo;m skeptical that this number is anything close to reasonable. After looking at the <a href="http://web.stanford.edu/~ayurukog/cable_news.pdf">cited study</a>, by professors Gregory Martin (political science, Emory) and Ali Yurukoglu (Stanford Business School), it seems to me that Fisman and Prat improperly extrapolated an estimate that was already probably too high.</p>

<p>Martin and Yurukoglu estimated that watching Fox News an extra 2.5 minutes a day increased a voter&rsquo;s probability of voting Republican by 0.3 percentage points. But <a href="https://www.washingtonpost.com/news/monkey-cage/wp/2016/06/20/why-i-dont-believe-the-claim-that-fox-news-can-get-trump-elected/">it&rsquo;s not reasonable</a> to assume that if the time watching the channel continued to grow, the shift in vote preference would continue to be strong and linear &mdash; all the way to 12 percent!</p>
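<p>The extrapolation is easy to check on the back of an envelope (my arithmetic, not the authors&#8217;):</p>

```python
# At 0.3 percentage points per extra 2.5 minutes of daily viewing, how much
# extra viewing would a purely linear extrapolation require to move vote
# preference by the claimed 12 points?
effect_per_block = 0.3    # percentage points of vote probability
minutes_per_block = 2.5   # extra minutes of Fox News per day
target_shift = 12.0       # percentage points claimed in the Slate piece

extra_minutes = target_shift / effect_per_block * minutes_per_block
print(extra_minutes)  # 100.0 extra minutes per day, every day
```

<p>Stretching an effect measured at a couple of minutes a day out to an extra 100 minutes a day is exactly the kind of linear extrapolation the data cannot support.</p>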

<p>In addition, while I trust that the authors found what they reported, there is a well-known tendency for small but variable effects to be overestimated in this sort of statistical study. In general, estimates near zero are discarded and high estimates are reported. We call this the &#8220;statistical significance filter,&#8221; which can turn weak results into robust-seeming ones.</p>
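<p>The filter is easy to demonstrate by simulation (a hypothetical illustration with made-up numbers, not the Fox News data):</p>

```python
import random

# True effect is small (0.1) relative to the noise (standard error 0.1).
# Keep only the "statistically significant" estimates, i.e. |z| > 1.96.
random.seed(1)
true_effect, se = 0.1, 0.1
estimates = [random.gauss(true_effect, se) for _ in range(100_000)]
significant = [e for e in estimates if abs(e / se) > 1.96]

mean_significant = sum(significant) / len(significant)
print(mean_significant)  # well above the true effect of 0.1
```

<p>The estimates that survive the filter average more than double the true effect: reporting only what clears the significance bar builds in exaggeration.</p>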

<p>Regarding partisan news sources, I have more trust in <a href="http://nowpublishers.com/article/Details/QJPS-12099">a study</a> by political scientists Dan Hopkins and <a href="http://www.vox.com/users/Jonathan%20M.%20Ladd">Jonathan Ladd</a> of Georgetown University, who analyze data from a 2000 pre-election poll and find a positive effect of Fox News on support for George W. Bush, but only for Republicans and independents. In summarizing this study, Hopkins writes that media influence &#8220;fosters political polarization. For Republicans and pure independents, Fox News access in 2000 reinforced GOP loyalties.&#8221; Not a lot of room for a 12 percent swing in that claim.</p>
<h2 class="wp-block-heading">Or does Google pick our presidents?</h2>
<p>The next month came <a href="http://www.csmonitor.com/Technology/2016/0717/Will-Google-s-tool-sway-the-presidential-election">a piece</a>, based on work by the research psychologist Robert Epstein &mdash; Epstein also <a href="http://www.politico.com/magazine/story/2015/08/how-google-could-rig-the-2016-election-121548">publicized</a> it last year &mdash; called &#8220;How Google Could Rig the 2016 Election.&#8221; It claimed that &#8220;Google&rsquo;s search algorithm can easily shift the voting preferences of undecided voters by 20 percent or more &mdash; up to 80 percent in some demographic groups &mdash; with virtually no one knowing they are being manipulated. &hellip; Given that many elections are won by small margins, this gives Google the power, right now, to flip upwards of 25 percent of the national elections worldwide.&#8221;</p>

<p>Quite a claim. The numbers, however, came from a highly artificial set of <a href="http://www.pnas.org/content/112/33/E4512.abstract">lab experiments</a> in which participants were asked questions about unfamiliar political candidates after being shown unrealistically rigged search results. The researchers put extremely biased articles favoring one candidate on page one, moderately biased articles on page two, and so on, so participants had to go to pages four and five of a five-page search to find anything strongly favoring the other candidate.</p>

<p>Epstein then compounded his exaggerations by claiming, ridiculously, that the real-world impact of Google on elections would &#8220;undoubtedly be larger&#8221; than in his loaded experiments.</p>

<p>In fact, the real presidential election is not being held in an isolated lab: Voters have many sources of information about Clinton and Trump, beyond those found in (hypothetically) rigged search results. (Full disclosure: Some of my research is funded by Google.)</p>

<p>And it&rsquo;s still only early September! Just wait till next month, when just about any election-related study will get 15 minutes of fame. In recent years we have seen claims that political attitudes and preferences were <a href="http://www.stat.columbia.edu/~gelman/research/published/bayes_management.pdf">determined by menstrual cycles</a>, <a href="http://www.stat.columbia.edu/~gelman/research/published/ChanceEthics12.pdf">smiley faces</a> displayed near survey questions for subliminally short durations, and the mood swings caused by the results of <a href="http://www.washingtonpost.com/blogs/monkey-cage/wp/2014/09/04/heres-how-a-cartoon-smiley-face-punched-a-big-hole-in-democratic-theory/">college football games</a> (really). All of these studies struck me as flawed, either in design or in the analysis of the data. (Follow the links for more details about my doubts.)</p>

<p>I&#8217;m not saying that these studies shouldn&#8217;t have been done (well, in most cases). Researchers should be free to try out all sorts of outside-the-box ideas, and, indeed, in some of these cases I&rsquo;m not criticizing the studies so much as the accompanying hype. But respected news organizations should think twice about dramatic claims about voting and elections, even if they are published in reputable scientific journals.</p>

<p>When it comes to research, election season is silly season, and there always seems to be room for one more story about how irrational those voters are. Who knows what else they&rsquo;ll come up with before November 8?</p>

<p><em>Andrew Gelman is a professor of statistics and political science and director of the Applied Statistics Center at Columbia University. He blogs at </em><a href="http://andrewgelman.com/"><em>Statistical Modeling</em></a><em>.</em></p>
						]]>
									</content>
			
					</entry>
	</feed>
