<?xml version="1.0" encoding="UTF-8"?><feed
	xmlns="http://www.w3.org/2005/Atom"
	xmlns:thr="http://purl.org/syndication/thread/1.0"
	xml:lang="en-US"
	>
	<title type="text">Kris Hammond | Vox</title>
	<subtitle type="text">Our world has too much noise and too little context. Vox helps you understand what matters.</subtitle>

	<updated>2019-03-06T11:33:12+00:00</updated>

	<link rel="alternate" type="text/html" href="https://www.vox.com/author/kris-hammond" />
	<id>https://www.vox.com/authors/kris-hammond/rss</id>
	<link rel="self" type="application/atom+xml" href="https://www.vox.com/authors/kris-hammond/rss" />

	<icon>https://platform.vox.com/wp-content/uploads/sites/2/2024/08/vox_logo_rss_light_mode.png?w=150&amp;h=100&amp;crop=1</icon>
		<entry>
			
			<author>
				<name>Kris Hammond</name>
			</author>
			
			<title type="html"><![CDATA[Teaching machines to avoid our mistakes]]></title>
			<link rel="alternate" type="text/html" href="https://www.vox.com/2016/4/29/11644892/teaching-machines-to-avoid-our-mistakes" />
			<id>https://www.vox.com/2016/4/29/11644892/teaching-machines-to-avoid-our-mistakes</id>
			<updated>2019-03-06T06:33:12-05:00</updated>
			<published>2016-04-29T06:00:42-04:00</published>
			<category scheme="https://www.vox.com" term="Technology" />
							<summary type="html"><![CDATA[The conventional wisdom is that intelligent systems, while good with numbers and maybe facts, are not going to be able to cope with the world of judgment and decision-making. The common assumption is that computers will not be able to deal with the nuance of reasoning that drives the solely human ability to assess what [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="Olga Pink/Shutterstock" data-has-syndication-rights="1" src="https://platform.vox.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/15812335/mistake_olga-pink.0.1462827754.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p>The conventional wisdom is that intelligent systems, while good with numbers and maybe facts, are not going to be able to cope with the world of judgment and decision-making. The common assumption is that computers will not be able to deal with the nuance of reasoning that drives the solely human ability to assess what is happening in the world and then make reasoned decisions in reaction to that assessment.</p>

<p>And herein lies my problem &mdash; the assumption hidden in this belief is that humans are actually good at this sort of reasoning. And it&rsquo;s not clear that this is true. In particular, we seem prone to reasoning mistakes based on biases in decision-making that hinder us every day. Because of this, I believe that current and future intelligent systems are going to end up being partners that help us to improve our decision-making skills by avoiding many of the cognitive biases that plague us.</p>

<p>Last year, I had the privilege of participating in a United Nations working group tasked with crafting a policy document around issues of intelligent autonomous weapons. As the issue includes the possibility of machines killing people, it is one that is even more fraught with challenges than the expanding role of intelligent systems in the workplace, but it raised questions that can be applied across the category.</p>

<p>One particular moment stood out to me. As we talked about the question of proportionality &mdash; the assessment of how many casualties are acceptable given a military goal &mdash; the core assumption from the group was that this kind of decision should never be in the hands of a machine. Not a surprising reaction, and my guess is that this is a commonly held point of view.</p>
<blockquote class="red right"><p>What surprised me was not the idea that machines should never make life-and-death decisions, but the overwhelming assumption and unshakeable belief that people are actually good at such decisions.</p></blockquote>
<p>The drivers behind this assumption include issues of empathy, context, human judgment, dynamically dealing with changing circumstances, etc. Given the current state of machine intelligence, these are perfectly valid arguments. What surprised me, however, was not the idea that machines should never make life-and-death decisions, but the overwhelming assumption and unshakeable belief that people are actually good at such decisions.</p>

<p>The reality is, we&rsquo;re not.</p>

<p>Please understand, I love my human brothers and sisters, but when it comes to many areas of decision-making, we are pretty much goofballs. Just looking at the work of Richard Thaler (&ldquo;<a href="http://www.amazon.com/Misbehaving-Behavioral-Economics-Richard-Thaler/dp/0393080943/">Misbehaving: The Making of Behavioral Economics</a>&rdquo;), Daniel Kahneman (&ldquo;<a href="http://www.amazon.com/Thinking-Fast-Slow-Daniel-Kahneman/dp/0374533555">Thinking, Fast and Slow</a>&rdquo;) and Dan Ariely (&ldquo;<a href="http://www.amazon.com/Predictably-Irrational-Revised-Expanded-Decisions/dp/0061353248">Predictably Irrational: The Hidden Forces That Shape Our Decisions</a>&rdquo;), we see that even under the most controlled of situations, our decision-making skills are faulty. We cherry-pick data to fit our worldviews, prefer inaction to action, misunderstand nearly everything related to probability, and prefer decisions skewed in the direction of avoiding failure rather than achieving success.</p>

<p>One such bias is <em>anchoring</em>, the common human tendency to rely heavily on the first piece of information offered (the &ldquo;anchor&rdquo;) when making decisions. For example, in an effort to contextualize numbers, we try to find other numbers to compare them to, but if we don&rsquo;t have any relevant numbers at hand, we tend to use the most recent one we have seen or heard.</p>

<p>So let&rsquo;s say my son approaches me asking for more allowance just as I am reading the news that <a href="http://www.usatoday.com/story/tech/2016/02/03/cisco-acquire-iot-vendor-jasper-14-billion/79779896/">Cisco is acquiring another company for $1.4 billion</a>. Whatever increase he is requesting will seem small to me because I&rsquo;ve just been exposed to such a large sum. If I have been looking at the current price of a gallon of gas ($2.56), however, he is not going to get such a positive response.</p>

<p>The amazing thing is that these numbers sway our decision-making even though they are irrelevant; they have nothing to do with the decision at hand. Whatever we hear first tends to provide a starting point against which other numbers will be viewed and compared. And while it skews our thinking, it does so without our even noticing.</p>
<blockquote class="red right"><p>My favorite bias is one that almost killed me &mdash; the <em>status quo fallacy</em>. This is our tendency to view the world as being the same over time, even in the face of change.</p></blockquote>
<p>Other biases, such as <em>confirmation bias</em>, make it hard for us to see the evidence that is placed in front of us if it is in conflict with our beliefs. So, for example, if we think pit bulls are mean, then we tend to remember only those dogs that have displayed aggressive tendencies. If we think a co-worker is argumentative, we tend to interpret everything that person says as adversarial. Once an idea is in our head, our mechanisms to understand the world become focused on making sure everything we see supports that idea.</p>

<p>My favorite bias is one that almost killed me, the <em>status quo fallacy</em>. This is our tendency to view the world as being the same over time, even in the face of change. It is the view that because something has never happened before, it will not happen now. For me, this played out as a resistance to the idea that I had a pulmonary embolism until my inability to breathe forced the issue and convinced me to go to the hospital &mdash; but that is a story for another day.</p>

<p>All of these biases are grounded in absolutely reasonable <a href="https://www.verywell.com/what-is-a-heuristic-2795235">heuristics</a> that make it possible for us to make decisions quickly. Of course, the status quo is going to hold most of the time. Of course, the world is going to fit our understanding of it most of the time. That these heuristics sometimes fail is not a condemnation of our ability to think, but just a confirmation that they are heuristics.</p>

<p>So what does this mean for intelligent systems?</p>

<p>The ability of intelligent systems to reason in the absence of these biases is powerful. Since technology is not prone to these biases, it can do a better job than we do at managing complex and nuanced decisions.</p>
<blockquote class="red right"><p>For every cognitive bias that disrupts our thinking, there is an opportunity to partner with intelligent systems that can assess the situation and ask us, &ldquo;Are you sure that this isn&rsquo;t just you being embarrassed about making a bad decision last week?&rdquo;</p></blockquote>
<p>Imagine, for example, an investment that you made is not performing well, but unfortunately, because you now own the stock and think of yourself as a good investor, you are prone to ownership bias and have a bit of cognitive dissonance. <em>Ownership bias</em> causes you to value the things you already have over the things you might acquire. The cognitive dissonance is born of the tension between the decision you made, your view of yourself as a good investor, and the current evidence that this stock wasn&rsquo;t a great buy. These factors come together to make you want to discount the evidence and hold on to the stock longer than you should.</p>

<p>A machine considering the same factors would not be prone to such bias, and could make buy/sell decisions or give advice on the basis of the numbers without a self-defeating sense of unease or embarrassment.</p>

<p>Of course, this is not unique to financial decision-making. For every cognitive bias that disrupts our thinking, there is an opportunity to partner with intelligent systems that can assess the situation and ask us, &ldquo;Are you sure that this isn&rsquo;t just you being embarrassed about making a bad decision last week?&rdquo; or &ldquo;If you didn&rsquo;t already own it, would you buy now?&rdquo;</p>

<p>I am not saying that we should give ourselves over to algorithmic decision-making. We should always remember that just as the machine is free of the cognitive biases that often defeat us, we have information about the world that the machine does not. My argument is that, with intelligent systems, we now have the opportunity to be genuinely smarter.</p>

<p>Going back to the earlier question of autonomous lethal devices and proportionality assessment, it turns out that such decisions usually have equations associated with them. Or, in other words, there are some places where we&rsquo;re already partnering with intelligent systems. In these situations, there is a calculation or algorithm used to inform the decision-making. The reasoning behind this is simple. People tend to have difficulty with such assessments because emotions bias their thinking. The algorithms&rsquo; output provides an anchor to help the human decision-makers think more clearly.</p>

<p>Even here, the algorithms make us into better thinkers. And isn&rsquo;t that what we want to be?</p>
<hr class="wp-block-separator" />
<p><em>In addition to being chief scientist at </em><a href="https://www.narrativescience.com"><em>Narrative Science</em></a><em>, </em><a href="https://www.linkedin.com/in/kristianhammond"><em>Kris Hammond</em></a><em> is a professor of Computer Science and Journalism at Northwestern University. Prior to joining the faculty at Northwestern, Hammond founded the University of Chicago&rsquo;s Artificial Intelligence Laboratory. His research has been primarily focused on artificial intelligence, machine-generated content and context-driven information systems. He currently sits on a United Nations policy committee run by the United Nations Institute for Disarmament Research (UNIDIR). Reach him </em><a href="https://twitter.com/KJ_Hammond"><em>@KJ_Hammond</em></a>.</p>

<p><small><em>This article originally appeared on Recode.net.</em></small></p>
						]]>
									</content>
			
					</entry>
			<entry>
			
			<author>
				<name>Kris Hammond</name>
			</author>
			
			<title type="html"><![CDATA[Ethics and Artificial Intelligence: The Moral Compass of a Machine]]></title>
			<link rel="alternate" type="text/html" href="https://www.vox.com/2016/4/13/11644890/ethics-and-artificial-intelligence-the-moral-compass-of-a-machine" />
			<id>https://www.vox.com/2016/4/13/11644890/ethics-and-artificial-intelligence-the-moral-compass-of-a-machine</id>
			<updated>2019-03-06T06:06:03-05:00</updated>
			<published>2016-04-13T14:22:18-04:00</published>
			<category scheme="https://www.vox.com" term="Artificial Intelligence" /><category scheme="https://www.vox.com" term="Innovation" /><category scheme="https://www.vox.com" term="Technology" />
							<summary type="html"><![CDATA[The question of robotic ethics is making everyone tense. We worry about the machine&#8217;s lack of empathy, how calculating machines are going to know how to do the right thing, and even how we are going to judge and punish beings of steel and silicon. Personally, I do not have such worries. I am less [&#8230;]]]></summary>
			
							<content type="html">
											<![CDATA[

						
<figure>

<img alt="" data-caption="" data-portal-copyright="John Williams RUS/Shutterstock" data-has-syndication-rights="1" src="https://platform.vox.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/15805901/robot-ethics_john-williams-rus.0.1462827751.jpg?quality=90&#038;strip=all&#038;crop=0,0,100,100" />
	<figcaption>
		</figcaption>
</figure>
<p>The question of robotic ethics is making everyone tense. We worry about the machine&rsquo;s lack of empathy, how calculating machines are going to know how to do the right thing, and even how we are going to judge and punish beings of steel and silicon.</p>

<p>Personally, I do not have such worries.</p>

<p>I am less concerned about robots doing wrong, and far more concerned about the moment they look at us and are appalled at how often we fail to do right. I am convinced that they will not only be smarter than we are, but have truer moral compasses, as well.</p>

<p>Let&rsquo;s be clear about what is and is not at issue here.</p>
<blockquote class="red right"><p>I am less concerned about robots doing wrong, and far more concerned about the moment they look at us and are appalled at how often we fail to do right.</p></blockquote>
<p>First, I am not talking about whether or not we should deploy robotic soldiers. That is an ethical decision that is in human hands. When we consider the question of automating war, we are considering the nature of ourselves, not our machines. Yes, there is a question of whether the capabilities of robotic soldiers and autonomous weapons are up to the task, but that has to do with how well they work rather than what their ethics are.</p>

<p>Second, I am not talking about the &ldquo;ethics&rdquo; of machines that are just badly designed. A self-driving car that plows into a crowd of people because its sensors fail to register them isn&rsquo;t any more unethical than a vehicle that experiences unintended acceleration. It is broken or badly built. Certainly there is a tragedy here, and there is responsibility, but it is in the hands of the designers and manufacturers.</p>

<p>Third, while we need to look at responsibility, this is not about punishment. Our ability or inability to punish a device is a matter of how we respond to unethical behavior, not how to assess it. The question of whether a machine has done something wrong is very different than the issue of what we are going to do about it.</p>
<blockquote class="red right"><p>This is not about pathological examples such as hyperintelligent paper-clip factories that destroy all of humanity in single-minded efforts to optimize production at the expense of all other goals.</p></blockquote>
<p>Finally, this is not about pathological examples such as hyperintelligent paper-clip factories that destroy all of humanity in single-minded efforts to optimize production at the expense of all other goals. I would put this kind of example in the category of &ldquo;badly designed.&rdquo; And given that most of the systems that manage printer queues in our offices are smarter than a system that would tend to do this, it is probably not something that should concern us.</p>

<p>These are examples of machines doing bad things because they are broken or because that&rsquo;s how they are built. These are all examples of tools that might very well hurt us, but do not have to themselves deal with ethical dilemmas.</p>

<p>But &ldquo;dilemma&rdquo; is the important word here.</p>

<p>Situations that match up well against atomic rules of action are easy to deal with for both machines and people. Given a rule that states that you should never kill anyone, it is pretty easy for a machine (or person for that matter) to know that it is wrong to murder the owner of its local bodega, even if it means that it won&rsquo;t have to pay for that bottle of Chardonnay. Human life trumps cost savings.</p>
<p><img src="https://recodetech.files.wordpress.com/2016/04/i_robot_-_runaround.jpg?quality=80&amp;strip=info" alt="I, Robot cover" width="206" height="334" class="alignright size-full wp-image-595994"></p>
<p>This is why Isaac Asimov&rsquo;s &ldquo;<a href="https://en.wikipedia.org/wiki/Three_Laws_of_Robotics">Three Laws of Robotics</a>&rdquo; seem so appealing to us. They provide a simple value ranking that &mdash; on the face of it, at least &mdash; seems to make sense:</p>
<ol class="wp-block-list"><li>A robot may not injure a human being or, through inaction, allow a human being to come to harm.</li><li>A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.</li><li>A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.</li></ol>
<p>The place where both robots and humans run into problems is situations in which adherence to a rule is impossible, because all choices violate the same rule. The standard example that is used to explain this is the Trolley Car Dilemma.</p>

<p>The dilemma is as follows:</p>

<p>A train is out of control and moving at top speed down a track. At the end of the track, five people are tied down, and will be killed in seconds. There is a switch that can divert the train to another track but, unfortunately, another person is tied down on that track, and will be crushed if you pull the switch.</p>

<p>If you pull the switch, one person dies. If you don&rsquo;t, five people die. Either way, your action or inaction is going to kill people. The question is, how many?</p>

<p>Most people actually agree that sacrificing the lone victim makes the most sense. You are trading one against five. Saving more lives is better than saving fewer lives. This is based on a fairly utilitarian calculus that one could easily hand to a machine.</p>
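<p>That count-based calculus really is simple enough to hand to a machine. As a minimal sketch (the function and its framing are mine, purely illustrative, not any actual deployed system), it amounts to choosing whichever action results in fewer deaths:</p>

```python
def choose_action(casualties_if_no_switch: int, casualties_if_switch: int) -> str:
    """Count-based utilitarian choice for the basic trolley dilemma.

    Hypothetical sketch: picks whichever action kills fewer people.
    """
    if casualties_if_switch < casualties_if_no_switch:
        return "pull the switch"
    return "do nothing"

# Five people on the main track, one on the siding:
print(choose_action(5, 1))  # -> pull the switch
```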

<p>Unfortunately, it is easy to change the details of this example in ways that shift our own intuitions.</p>

<p>Imagine that the single victim is a researcher who now has the cure for cancer in his or her head. Or our lone victim could be a genuinely noble person who has helped and will continue to help those in need. Likewise, the victims on the first track could all be terminally ill with only days to live, or could all be convicted murderers who were on their way to death row before being waylaid.</p>

<p>In each of these cases, we begin to consider different ways to evaluate the trade-offs, moving from a simple tallying up of survivors to more nuanced calculations that take into account some assessment of their &ldquo;value.&rdquo;</p>

<p>Even with these differences, the issue still remains one of a calculus of sorts.</p>
<div class="chorus-asset" data-chorus-asset-id="6462233"><img src="https://platform.vox.com/wp-content/uploads/sites/2/chorus/uploads/chorus_asset/file/6462233/trolleyproblem.0.gif"></div>
<p>But one change wipes this all away.</p>

<p>There are five people tied down, and the trolley is out of control, but there is only one track. The only way to halt the trolley is to derail it by tossing something large onto the track. And the only large thing you have at hand is the somewhat large person standing next to you. There is no alternative.</p>

<p>You can&rsquo;t just flip a switch. You have to push someone onto the track.</p>

<p>If you are like most people, the idea of doing this changes things completely. It has a disturbing intimacy that is not part of the earlier scenario. As a result, although most people would pull the switch, those same people resist the idea of pushing their fellow commuter to his or her doom to serve the greater good.</p>

<p>But while they are emotionally different, from the point of view of ethics or morality they are the same. In both cases, we are taking one life to save others. The calculus is identical, but our feelings are different.</p>

<p>Of course, as we increase the number of people on the track, there is a point at which most of us think that we will overcome our horror and sacrifice the life of the lone commuter in order to save the five, 10, 100 or 1,000 victims tied to the track.</p>

<p>And it is interesting to consider what we say about such people. What do we say to someone who is on their knees weeping because they have done a horrible thing in service of what was clearly the greater good? We tell them that they did what they had to do, and they did the right thing. We tell them that they were brave, courageous, and even heroic.</p>

<p>The Trolley Dilemma exposes an interesting problem. Sometimes our ethical and moral instincts are skewed by circumstance. Our determination of what is right or wrong becomes complex when we mix in emotional issues related to family, friends, tribal connections, and even the details of the actions that we take. The difficulty of doing the right thing does not arise from our not knowing what it is. It comes from our being unwilling to pay the price that the right action often demands.</p>

<p>And what of our robot friends?</p>

<p>I would argue that an ethical or moral sense for machines can be built on a utilitarian base. The metrics are ours to choose, and can be coarse-grained (save as many people as possible), nuanced (women, children and Nobel laureates first) or detailed (evaluate each individual by education, criminal history, social media mentions, etc.). The choice of the code is up to us.</p>
<blockquote class="red right"><p>I would argue that an ethical or moral sense for machines can be built on a utilitarian base. The choice of code is up to us.</p></blockquote>
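<p>A minimal sketch of what such a configurable utilitarian base might look like (every name here is hypothetical; the point is that the value metric is a separate, swappable policy choice, which is exactly the choice the author says is ours to make):</p>

```python
from typing import Callable, Dict, List

Person = Dict[str, object]  # e.g. {"name": ..., "age": ..., "role": ...}

def expected_loss(victims: List[Person], value: Callable[[Person], float]) -> float:
    """Total value lost if this group of people dies, under a chosen metric."""
    return sum(value(p) for p in victims)

def choose(outcomes: Dict[str, List[Person]], value: Callable[[Person], float]) -> str:
    """Pick the action whose victims carry the least total value.

    `outcomes` maps each available action to the people who die if it
    is taken; `value` encodes the metric we decide to hand the machine.
    """
    return min(outcomes, key=lambda action: expected_loss(outcomes[action], value))

# Coarse-grained metric: every life counts equally.
count_lives = lambda person: 1.0

outcomes = {
    "pull the switch": [{"name": "A"}],
    "do nothing": [{"name": str(i)} for i in range(5)],
}
print(choose(outcomes, count_lives))  # -> pull the switch
```

A nuanced or detailed metric would simply be a different `value` function over richer `Person` records; the decision procedure itself stays the same.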
<p>Of course, there are special cases that will require modifications of the core rules based on the circumstances of their use. Doctors, for example, don&rsquo;t euthanize patients in order to spread the wealth of their organs, even if it means that there is a net positive with regard to survivors. They have to conform to a separate code of ethics, designed around the needs and rights of patients, that restricts their actions. The same holds for lawyers, religious leaders and military personnel who establish special relationships with individuals that are protected by specific ethical codes.</p>

<p>So the simple utilitarian model will certainly have overlays depending on the role that these robots and AIs will play. It would not seem unreasonable for a machine to respond to a request for personal information by saying &ldquo;I am sorry, but he is my patient and that information is protected.&rdquo; In much the same way that Apple protected its encryption in the face of Homeland Security, it follows that robotic doctors will be asked to be HIPAA compliant.</p>

<p>Our machines need not hesitate when they see the Trolley coming. They will act in accord with whatever moral or ethical code we provide them and the value determinations that we set. They will run the numbers and do the right thing. In emergency situations, our autonomous cars will sacrifice the few to protect the many. When faced with dilemmas, they will seek the best outcomes independent of whether or not they themselves are comfortable with the actions. And while we may want to call such calculations cold, we will have to admit that they are also right.</p>
<blockquote class="red right"><p>Machine intelligence will be different than us, and might very well do things that are at odds with what we expect.</p></blockquote>
<p>But they will be different than us, and might very well do things that are at odds with what we expect. So, as with all other aspects of machine intelligence, it is crucial that these systems are able to explain their moral decisions to us. They will need to be able to reach into their silicon souls and explain the reasoning that supports their actions.</p>

<p>Of course, we will need them to be able to explain themselves in all aspects of their reasoning and actions. Their moral reasoning will be subject to the same explanatory requirements that we would demand for any action they take. And my guess is that they will be able to explain themselves better than we do.</p>

<p>At the end of the movie &ldquo;I, Robot,&rdquo; Will Smith and his robot partner have to disable an AI that has just enslaved all of humanity. As they close in on their goal, Smith&rsquo;s onscreen girlfriend slips, and is about to fall to her death. In response, Smith screams, &ldquo;Save the girl!&rdquo; and the robot, demonstrating its newly learned humanity, turns its back on the primary goal and focuses on saving the girl. While very &ldquo;human,&rdquo; this action is intensely selfish, and a huge moral lapse.</p>

<p>Every time I watch this scene, I just want the robot to say, &ldquo;I&rsquo;m sorry, I can&rsquo;t. I have to save everyone else.&rdquo; But then, I don&rsquo;t want it to be human. I want it to be true to its code.</p>
<hr class="wp-block-separator" />
<p><em>In addition to being chief scientist at </em><a href="https://www.narrativescience.com"><em>Narrative Science</em></a><em>, </em><a href="https://www.linkedin.com/in/kristianhammond"><em>Kris Hammond</em></a><em> is a professor of Computer Science and Journalism at Northwestern University. Prior to joining the faculty at Northwestern, Hammond founded the University of Chicago&rsquo;s Artificial Intelligence Laboratory. His research has been primarily focused on artificial intelligence, machine-generated content and context-driven information systems. He currently sits on a United Nations policy committee run by the United Nations Institute for Disarmament Research (UNIDIR). Reach him </em><a href="https://twitter.com/KJ_Hammond"><em>@KJ_Hammond</em></a>.</p>

<p><small><em>This article originally appeared on Recode.net.</em></small></p>
						]]>
									</content>
			
					</entry>
	</feed>
