Peter Norvig

    Ask Not Can Machines Think, Ask How Machines Fit Into The Mechanisms We Design

    2015: WHAT DO YOU THINK ABOUT MACHINES THAT THINK?

    by Peter Norvig

    https://www.edge.org/response-detail/26055

    In 1950, Alan Turing suggested we should ask not “Can Machines Think” but rather “What Can Machines Do?” Edsger Dijkstra got it right in 1984 when he said the question of Can Machines Think “is about as relevant as the question of whether Submarines Can Swim.” By that he meant that both are questions in sociolinguistics: how do we choose to use words such as “think”? In English, submarines do not swim, but in Russian, they do. This is irrelevant to the capabilities of submarines. So let’s explore what it is that machines can do, and whether we should fear their capabilities.

    Pessimists warn that we don’t know how to safely and reliably build large complex AI systems. They have a valid point. We also don’t know how to safely and reliably build large complex non-AI systems. For example, we invented the internal combustion engine 150 years ago, and in many ways it has served humanity well, but it also has led to widespread pollution, political instability over access to oil, a million deaths per year, and other problems.

    Any complex system will have a mix of positive outcomes and unintended consequences, but are there worrisome issues that are unique to systems built with AI? I think the interesting issues are Adaptability, Autonomy, and Universality.

    Systems that use machine learning are adaptable. They change over time, based on what they learn from examples. Adaptability is useful. We want, say, our automated spelling correction programs to quickly learn new terms such as “bitcoin”, rather than waiting for the next edition of a published dictionary to list them. A non-adaptable program will repeat the same mistakes. But an adaptable program can make new mistakes, which may be harder to predict and deal with. We have tools for dealing with these problems, but just as the designers of bridges must learn to deal with crosswinds, so the designers of AI systems must learn to deal with adaptability.
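
    To make “adaptable” concrete, here is a minimal frequency-based sketch of how such a corrector might work. It is an illustrative toy, not code from the essay: the function names, the one-edit candidate generation, and the example sentences are all assumptions made up for this sketch.

```python
# Toy adaptable spelling corrector: it prefers the most frequent known
# word within one edit of the input, and it "adapts" simply by updating
# its word frequencies from whatever new text it is shown.
import re
from collections import Counter

counts = Counter()  # learned vocabulary: word -> observed frequency

def learn(text):
    """Adapt by updating word frequencies from new text."""
    counts.update(re.findall(r"[a-z]+", text.lower()))

def edits1(word):
    """All strings one delete, transpose, replace, or insert away."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces = [a + c + b[1:] for a, b in splits if b for c in letters]
    inserts = [a + c + b for a, b in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def correct(word):
    """Return the most frequent known word near the input, else the input."""
    if word in counts:
        return word
    candidates = [w for w in edits1(word) if w in counts]
    return max(candidates, key=counts.get) if candidates else word

learn("we want our spelling programs to learn new terms quickly")
print(correct("bitcoim"))   # still unknown, so it comes back unchanged
learn("bitcoin traded higher today and bitcoin was in the news")
print(correct("bitcoim"))   # -> "bitcoin": the program has adapted
```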

    Some critics are worried about AI systems that are built with a framework that maximizes expected utility. Such an AI system estimates the current state of the world, considers all the possible actions it can take, simulates the possible outcomes of those actions, and then chooses the action that leads to the best possible distribution of outcomes. Errors can occur at any point along the way, but the concern here is in determining what is the “best outcome”—in other words, what is it that we desire? If we describe the wrong desires, or allow a system to adapt its desires in a wrong direction, we get the wrong results.
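
    As a sketch of where “what we desire” enters that loop, consider the following illustrative fragment. The function names and the outcome model are assumptions made up for this example, not part of the essay; the point is that the utility function is the place where desires get specified.

```python
# Illustrative expected-utility action selection. simulate() stands in for
# the system's model of how the world responds to an action; utility() is
# where "what we desire" is encoded. If utility() encodes the wrong desires,
# the rest of the machinery faithfully optimizes for the wrong thing.

def choose_action(state, actions, simulate, utility):
    """simulate(state, action) -> list of (probability, outcome) pairs.
    utility(outcome) -> a number scoring how desirable that outcome is."""
    def expected_utility(action):
        return sum(p * utility(outcome) for p, outcome in simulate(state, action))
    return max(actions, key=expected_utility)
```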

    History shows that we often get this wrong, in all kinds of systems that we build, not just in AI systems. The US Constitution is a document that specifies our desires; the original framers made what we now recognize as an error in this specification, and correcting that error with the 13th Amendment cost over 600,000 lives. Similarly, we designed stock-trading systems that allowed speculators to create bubbles that led to busts. These are important issues for system design (and what is known as “mechanism design”), and are not specific to AI systems. The world is complicated, so acting correctly in the world is complicated.

    The second concern is autonomy. If AI systems act on their own, they can make errors that perhaps would not be made by a system with a human in the loop. This too is a valid concern, and again one that is not unique to AI systems. Consider our system of automated traffic lights, which replaced a system of human policemen directing traffic. The automated system leads to some errors, but it is a tradeoff that we have decided is worthwhile. We will continue to make tradeoffs in where we deploy autonomous systems.

    There is a possibility that we will soon see a widespread increase in the capabilities of autonomous systems, and thus more displacement of people. This could lead to a societal problem of increased unemployment and income inequality. To me, this is the most serious concern about future AI systems. In past technological revolutions (agricultural and industrial) the notion of work changed, but the changes happened over generations, not years, and the changes always led to new jobs. We may be in for a period of change that is much more rapid and disruptive; we will need some social conventions and safety nets to restore stability.

    The third concern is the universality of intelligent machines. In 1965 I. J. Good wrote “an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.” I think this fetishizes “intelligence” as a monolithic superpower, and I think reality is more nuanced. The smartest person is not always the most successful; the wisest policies are not always the ones adopted. Recently I spent an hour reading the news about the Middle East, and thinking. I didn’t come up with a solution. Now imagine a hypothetical “Speed Superintelligence” (as described by Nick Bostrom) that could think as well as any human but a thousand times faster. I’m pretty sure it also would have been unable to come up with a solution. I also know from computational complexity theory that there is a wide class of problems that are completely resistant to intelligence, in the sense that, no matter how clever you are, you won’t have enough computing power. So there are some problems where intelligence (or computing power) just doesn’t help.
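
    A back-of-the-envelope calculation makes the point concrete; the machine speed and time budget below are made-up assumptions. For a problem whose best known attack is brute-force search over 2^n candidates, running a thousand times faster raises the largest solvable n by only about log2(1000), roughly ten.

```python
# How much does "a thousand times faster" help against exponential search?
import math

checks_per_second = 10**9        # assumed speed of the baseline machine
budget_seconds = 3600            # one hour of thinking
baseline_checks = checks_per_second * budget_seconds

n_slow = int(math.log2(baseline_checks))          # largest n searchable at 1x
n_fast = int(math.log2(baseline_checks * 1000))   # largest n at 1000x speed
print(n_slow, n_fast)            # 41 51: the speedup adds only about 10 to n
```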

    But of course, there are many problems where intelligence does help. If I want to predict the motions of a billion stars in a galaxy, I would certainly appreciate the help of a computer. Computers are tools. They are tools of our design that fit into niches to solve problems in societal mechanisms of our design. Getting this right is difficult, but it is difficult mostly because the world is complex; adding AI to the mix doesn’t fundamentally change things. I suggest being careful with our mechanism design and using the best tools for the job regardless of whether the tool has the label “AI” on it or not.
