Philosophy of artificial intelligence

The philosophy of artificial intelligence concerns such questions as:

  • What is intelligence? How can one recognize its presence and applications?
  • Is it possible for machines to exhibit intelligence?
  • Is the human brain essentially a computer?
  • Can a machine have a mind, mental states and consciousness in the same sense that we do?
  • Is it moral to create human-like artificial intelligence? What ethical stances should such machines take, and what ethical stances should humans take toward them?

The first question defines the terms of the debate. The next three questions reflect the divergent interests of AI researchers, cognitive scientists and philosophers, respectively. The last question is discussed in a sister article on the ethics of artificial intelligence.

Important propositions in the philosophy of AI include the following.

Turing's "polite convention":

  • If a machine acts as intelligently as a human being, then it is as intelligent as a human being.[1]

The basic premise of AI:

  • Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it.[2]

The physical symbol system hypothesis:

  • A physical symbol system has the necessary and sufficient means for general intelligent action.[3]

Hobbes' mechanism:

  • Reason is nothing but reckoning.[4]

Searle's Weak AI Hypothesis:

  • A physical symbol system can act intelligently.[5]

Searle's Strong AI Hypothesis:

  • A physical symbol system can have a mind and mental states.[5]

These positions are concerned with the relationship between five concepts: intelligence, minds (and mental states), brains, machines and physical symbol systems. Defining each of these terms is part of understanding their relationships.

Intelligence

Consider these questions:

  • "Can machines fly?" This is true, since airplanes fly.
  • "Can machines swim?" This is false, because submarines don't swim.
  • "Can machines think?" This is the question we need to answer. Is it like the first or like the second?

The difference lies in how we understand the words: is "thinking", like "swimming", something that human beings do by definition? Or is it possible to define "thinking" without reference to human beings, so that we can determine whether our machines are doing it?[6] Unfortunately, there is no standard definition of intelligence.[7]

Turing Test

Alan Turing attempted to answer the question "Can machines think?" in his famous and seminal paper "Computing Machinery and Intelligence".[8] The paper reduced the problem of defining intelligence to a simple question about conversation. He suggested that if a machine can answer any question put to it, using the same words that an ordinary person would, then we may call that machine intelligent. A modern version of his experimental design would use an online chat room, where one of the participants is a real person and one of the participants is a computer program. The program passes the test if no one can tell which of the two participants is human.[1]
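
The procedure can be sketched as a small program. The following Python fragment is only an illustrative outline, not anything from Turing's paper: the judge, the human respondent, and the candidate program are hypothetical callables supplied by whoever runs the test.

    import random

    def imitation_game(judge, human_reply, machine_reply, questions):
        """Minimal sketch of the imitation game: a judge questions two unlabeled
        respondents and must guess which label hides the machine."""
        # Randomly assign the two respondents to the labels "X" and "Y".
        respondents = {"X": machine_reply, "Y": human_reply}
        if random.random() < 0.5:
            respondents = {"X": human_reply, "Y": machine_reply}

        # The judge sees only labels and answers, never the respondents themselves.
        transcript = [(q, {label: reply(q) for label, reply in respondents.items()})
                      for q in questions]

        guess = judge(transcript)   # the judge returns "X" or "Y"
        machine_label = next(label for label, reply in respondents.items()
                             if reply is machine_reply)
        return guess != machine_label   # True: the machine went undetected

The point of the sketch is that the test is purely behavioral: the judge's verdict depends on nothing but the transcript.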

Turing notes that no one (except philosophers) ever asks the question "can people think?" He writes "instead of arguing continually over this point, it is usual to have a polite convention that everyone thinks."[9] Turing's test extends this polite convention to machines:

  • If a machine acts as intelligently as a human being, then it is as intelligent as a human being.

The power of the Turing test derives from the fact that it is possible to talk about anything. Turing wrote "the question and answer method seems to be suitable for introducing almost any one of the fields of human endeavor that we wish to include."[10] John Haugeland adds that "understanding the words is not enough; you have to understand the topic as well."[11] To pass a well-designed Turing test, the machine would have to use natural language, reason, have knowledge and learn. The test can be extended to include video input, as well as a "hatch" through which objects can be passed, which would force the machine to demonstrate the skills of vision and robotics as well. Together these represent almost all of the major problems of artificial intelligence.[12]

Russell and Norvig note that "AI researchers have devoted little attention to passing the Turing Test",[13] since there are easier ways to test their programs: by giving them a task directly, rather than through the roundabout method of first posing a question in a chat room populated with machines and people. Turing never intended his test to be used as a real, day-to-day measure of the intelligence of AI programs; he wanted to provide a clear and understandable example to help us discuss the philosophy of artificial intelligence.[14] Real Turing tests, such as the Loebner Prize, don't usually force programs to demonstrate the full range of intelligence and are reserved for testing chatterbot programs.

Human intelligence vs. intelligence in general

One criticism of the Turing test is that it is explicitly anthropomorphic. If our ultimate goal is to create machines that are more intelligent than people, why should we insist that our machines must closely resemble people? Russell and Norvig write that "aeronautical engineering texts do not define the goal of their field as 'making machines that fly so exactly like pigeons that they can fool other pigeons.'"[15] AI founder John McCarthy has always argued against human measures of intelligence, saying in a recent speech that "artificial intelligence is not, by definition, simulation of human intelligence."[16]

Recent AI research defines intelligence in terms of rational agents or intelligent agents. An "agent" is something which perceives and acts in an environment. A "performance measure" defines what counts as success for the agent.[17]

  • If an agent acts so as to maximize the expected value of a performance measure based on past experience and knowledge, then it is intelligent.[18]

This definition has the advantage that it does not distinguish between humans and machines.
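
The definition can be made concrete with a short sketch. The Python below is a minimal illustration rather than anything from Russell and Norvig; the actions, outcome probabilities and scores are invented for the example.

    def expected_value(action, outcome_probabilities, performance_measure):
        """Expected performance of `action`, given beliefs (outcome probabilities)
        estimated from the agent's past experience and knowledge."""
        return sum(p * performance_measure(outcome)
                   for outcome, p in outcome_probabilities[action].items())

    def rational_agent(actions, outcome_probabilities, performance_measure):
        """Pick the action that maximizes the expected value of the performance
        measure: the definition of an intelligent (rational) agent given above."""
        return max(actions, key=lambda a: expected_value(a, outcome_probabilities,
                                                         performance_measure))

    # Hypothetical example: a thermostat-like agent deciding whether to heat a room.
    beliefs = {
        "heat": {"comfortable": 0.8, "too_hot": 0.2},
        "idle": {"comfortable": 0.3, "too_cold": 0.7},
    }
    scores = {"comfortable": 1.0, "too_hot": -0.5, "too_cold": -1.0}
    print(rational_agent(["heat", "idle"], beliefs, scores.get))   # prints: heat

Nothing in the sketch cares whether the decision-maker is a human, a thermostat or a program, which is exactly the advantage claimed for the definition.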

The basic premise of AI

  • Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it.[2]

Working AI researchers are primarily interested in this question: is it possible to create a machine that can solve all the problems we solve using our intelligence? This question defines the scope of what machines will be able to do in the future and guides the direction of AI research. AI researchers are far less concerned with the issues raised by computationalism or Searle's strong AI.

Arguments against the basic premise must show that building a working AI system is impossible: either because there is some practical limit to the abilities of computers, or because there is some special quality of the human mind that is necessary for thinking and yet cannot be duplicated by a machine (or by the methods of current AI research).

Arguments in favor of the basic premise must show that such a system is possible. The most convincing demonstration would be to build one. In this way, while attacking the basic premise is a philosophical problem, defending it is an engineering problem.

Symbol systems vs. machines

An important issue is the distinction between symbol systems and machines. A physical symbol system (also called a formal system) takes physical objects (symbols), combines them into structures (expressions) and manipulates them (using processes) to produce new expressions.
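
As a concrete illustration, the toy Python fragment below treats strings as symbols, tuples as expressions, and a single function as a process that derives new expressions from old ones. The facts and the rule are invented for the example; nothing here is taken from Newell and Simon.

    # Symbols are discrete tokens; expressions are structures built from symbols;
    # a "process" is a function that produces new expressions from old ones.
    facts = {
        ("is-a", "Socrates", "human"),
        ("implies", ("is-a", "?x", "human"), ("is-a", "?x", "mortal")),
    }

    def apply_rule(expressions):
        """Match each implication against the known facts and add the
        instantiated conclusion as a new expression."""
        new = set()
        for expr in expressions:
            if expr[0] == "implies":
                _, (rel, _x, kind), (rel2, _x2, kind2) = expr
                for fact in expressions:
                    if fact[0] == rel and fact[2] == kind:
                        new.add((rel2, fact[1], kind2))
        return expressions | new

    print(apply_rule(facts))   # now includes ("is-a", "Socrates", "mortal")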

The basic premise of AI was restated in terms of physical symbol systems by Allen Newell and Herbert Simon in a 1963 paper:

  • A physical symbol system has the sufficient means for general intelligent action.[19]

In the simplest possible sense, all computer programs are symbol systems since they manipulate the binary symbols of one and zero. In fact, the Church-Turing thesis implies:

  • A (sufficiently complex) physical symbol system can accurately duplicate the behavior of any other physical symbol system.[20]
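
A short sketch may make the claim less abstract. The Python function below is one symbol system; the transition table passed to it describes another (a tiny, invented Turing-style machine), and the function reproduces that machine's behavior step by step.

    def run_machine(transitions, state, tape, max_steps=1000):
        """Simulate a machine that is given purely as data:
        transitions[(state, symbol)] = (symbol_to_write, move, next_state)."""
        cells, head = dict(enumerate(tape)), 0
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = cells.get(head, "_")              # "_" is the blank symbol
            write, move, state = transitions[(state, symbol)]
            cells[head] = write
            head += 1 if move == "R" else -1
        return [cells[i] for i in sorted(cells)]

    # Hypothetical machine: move right, flipping each bit, and halt at the first blank.
    flip = {
        ("flip", 1):   (0, "R", "flip"),
        ("flip", 0):   (1, "R", "flip"),
        ("flip", "_"): ("_", "R", "halt"),
    }
    print(run_machine(flip, "flip", [1, 0, 1]))   # [0, 1, 0, '_']

Because the simulated machine is given purely as data, the same simulator could be handed a description of any other (sufficiently small) machine and reproduce its behavior, which is the sense in which one symbol system can duplicate another.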

However, a distinction is usually made between the kind of high-level symbols that directly correspond with objects in the world, such as <dog> and <tail>, and the more complex "symbols" that are present in a machine like a neural network. This distinction between high-level symbol-manipulating programs and general "machines" like neural networks is important, because many of the critiques of AI apply only to this kind of high-level symbol manipulation.
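
The contrast can be seen in miniature below. The symbolic fact is an explicit, human-readable structure of tokens; the two vectors are invented stand-ins for the pattern of weights or activations a neural network might use, where relatedness shows up only as a graded numerical similarity rather than as a symbol anyone wrote down.

    import math

    # High-level, explicit symbols with an explicit relation between them:
    symbolic_fact = ("has-part", "<dog>", "<tail>")

    # Sub-symbolic stand-ins: no single number "means" dog or tail.
    dog  = [0.91, -0.32, 0.07, 0.55]
    tail = [0.88, -0.29, 0.12, 0.49]

    def cosine_similarity(u, v):
        norm = lambda w: math.sqrt(sum(a * a for a in w))
        return sum(a * b for a, b in zip(u, v)) / (norm(u) * norm(v))

    print(symbolic_fact)                            # an explicit, readable proposition
    print(round(cosine_similarity(dog, tail), 3))   # a bare number (about 0.998)

Many of the critiques discussed below are aimed at the first kind of representation rather than at machines in general.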

Arguments against the basic premise

Lucas, Penrose and Gödel

  • There are statements that no physical symbol system can prove.[21]

Gödel's incompleteness theorems imply that any consistent formal system powerful enough to express arithmetic leaves some true propositions forever beyond its reach. A human being, however, can (with some thought) see the truth of these "Gödel statements". In 1961 John Lucas argued that this showed that human reason would always be superior to machines.[22] He wrote "Gödel's theorem seems to me to prove that mechanism is false, that is, that minds cannot be explained as machines."[23]
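
The shape of such a "Gödel statement" can be written down compactly. The LaTeX fragment below gives the standard fixed-point formulation for a consistent formal system F strong enough to express arithmetic; it is a sketch of the usual textbook statement, not a quotation from Gödel or Lucas.

    % G is constructed so that, provably within F, it is equivalent to the
    % claim that G itself is not provable in F:
    \[
      F \vdash \; G \;\leftrightarrow\; \neg\,\mathrm{Prov}_F(\ulcorner G \urcorner)
    \]
    % If F is consistent, F cannot prove G; and on that same assumption G is
    % true. Lucas's claim is that a human can see this truth, while the machine
    % (the system F) cannot prove it.

Whether human beings really escape this limitation is precisely what the responses below dispute.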

Roger Penrose expanded on this argument in his 1989 book The Emperor's New Mind, where he speculated that quantum mechanical processes inside individual neurons gave humans this special advantage over machines.[24]

Responses to Lucas and Penrose

Russell and Norvig note that the Gödelian argument applies only to idealized machines, such as Turing machines, which have an unlimited amount of memory. Real machines are always finite, and can therefore be described as (very large) systems in propositional logic, which is decidable, so Gödel's incompleteness theorems do not apply to them.[25]

Douglas Hofstadter, in his Pulitzer Prize-winning book Gödel, Escher, Bach, explains that these "Gödel statements" always refer to the system itself, much as the Epimenides paradox uses statements that refer to themselves, such as "this statement is false" or "I am lying".[26] But, of course, the Epimenides paradox applies to anything that makes statements, whether machine or human, even Lucas himself. Consider:

  • Lucas can't assert the truth of this statement.[27]

This statement is true but can't be asserted by Lucas. This shows that Lucas is subject to the same limits that he describes for machines, as are all people, and so Lucas's argument is pointless.[28]

Dreyfus and Heidegger: The primacy of unconscious skills

Hubert Dreyfus argued that human intelligence and expertise depended primarily on unconscious instincts rather than conscious symbolic manipulation.[29]

Dreyfus identified two different kinds of skills, which he called "knowing-that" and "knowing-how" (based on Heidegger's distinction between present-at-hand and ready-to-hand). Knowing-that uses logic, language and symbols, and Dreyfus agreed that a physical symbol system may be able to imitate it. Knowing-how is a form of contextually circumscribed guessing that allows us to arrive at an answer or take an action without using conscious symbolic reasoning at all, as when we recognize a face, drive ourselves to work or find the right thing to say to a troubled friend.[30] (Malcolm Gladwell would later call this "fast" process of thinking a "blink" in his bestseller of the same name.[31])

"Knowing-how" requires that we use all of our unconscious intuitions, attitudes and knowledge about the world. This context or "background" (related to Heidegger's Dasein) is a form of knowledge that is not stored in our brains symbolically, but intuitively. It affects what we notice and what we don't notice, what we expect and what possibilities we don't consider.[29] (Gladwell calls this "thin-slicing").[31]

Dreyfus claimed that no physical symbol system, as they were implemented in the 1970s and 1980s, could capture this background or do the kind of fast problem solving, or blinking, that it allows. This, he argued, showed that some aspects of human intelligence don't depend on symbol manipulation, refuting the physical symbol system hypothesis.[32]

Responses to Dreyfus

Dreyfus's argument had been anticipated by Turing in his 1950 paper "Computing Machinery and Intelligence", where he classified it as the "argument from the informality of behavior."[33] Turing argued in response that, just because we don't know the rules that govern a complex behavior, this does not mean that no such rules exist. He wrote: "we cannot so easily convince ourselves of the absence of complete laws of behaviour ... The only way we know of for finding such laws is scientific observation, and we certainly know of no circumstances under which we could say, 'We have searched enough. There are no such laws.'"[34]

Russell and Norvig point out that, in the years since Dreyfus published his critique, progress has been made towards discovering the "rules" that govern Dreyfus's background. For example, active vision addresses the problem of directing sensors towards those aspects of the environment that are most "interesting" or "useful", using a theory of "information value".[35]

The situated movement in robotics research also attempts to capture our unconscious skills at perception and attention.

In fact, since Dreyfus first published his critiques in the 1960s, AI research in general has moved away from high-level symbol manipulation, or "GOFAI", towards new models that are intended to capture more of our unconscious reasoning. Historian and AI researcher Daniel Crevier wrote that "time has proven the accuracy and perceptiveness of some of Dreyfus's comments. Had he formulated them less aggressively, constructive actions they suggested might have been taken much earlier."[36]

Computationalism: brains are computers

  • Reason is nothing but reckoning.[4]

Computationalism asserts that, at some level, human brains are computers. This issue is of primary importance to cognitive scientists, who study the nature of human thinking and problem solving.

For AI researchers, if computationalism can be shown to be true, then it provides strong evidence that the basic premise of AI is true and suggests that research should focus on duplicating human brain functions.

Strong AI vs. weak AI

See also: Strong AI, where the term "strong AI" is used to describe a system with artificial general intelligence.

  • A physical symbol system can have a mind and mental states.[5]

The "strong AI hypothesis" and "weak AI hypothesis" are the names of two contrasting philosophical interpretations of a what a successful artificial intelligence program really represents. Weak AI claims only that it is possible (and useful) to build a system with intelligence. Strong AI agrees, but goes on to claim that such a system would actually have a mind, mental states or consciousness in the same way people do.[5]

The terms were introduced by philosopher John Searle in his 1980 paper "Minds, Brains, and Programs", where he wrote:

I find it useful to distinguish what I will call "strong" AI from "weak" or "cautious" AI (artificial intelligence). According to weak AI, the principal value of the computer in the study of the mind is that it gives us a very powerful tool. For example, it enables us to formulate and test hypotheses in a more rigorous and precise fashion. But according to strong AI, the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states.[37]

Searle's weak AI hypothesis is a version of the basic premise of AI, the only significant difference being that Searle's weak AI claims only that machines can perform some intelligent behaviors, whereas the basic premise of AI claims that machines can perform any intelligent behavior. The weak AI claim is almost trivially true: machines have been demonstrating some intelligent behavior since at least 1956, when Newell and Simon wrote the Logic Theorist.

Searle introduced the terms to isolate strong AI from weak AI so he could focus on what he thought was the more interesting and debatable issue. He wanted to say that even if we assume that we had a computer program that acted exactly like a human mind, there would still be a difficult philosophical question to answer.[37] Strong AI is related to the hard problem of consciousness, the mind-body problem, the problem of other minds and other difficult questions in the philosophy of mind. Strong AI is primarily of concern to philosophers.

Many AI researchers dismiss strong AI as being uninteresting or perhaps even meaningless. Russell and Norvig write: "most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis."[38] AI founder Marvin Minsky said that Searle “misunderstands, and should be ignored.”[39]

Arguments against the strong AI hypothesis

Searle's Chinese Room

John Searle asks us to consider a thought experiment: suppose we have written a computer program that passes the Turing Test and demonstrates "general intelligent action." Suppose, specifically, that the program can converse in fluent Chinese. Write the program on 3x5 cards and give them to an ordinary person who does not speak Chinese. Lock the person into a room and have him follow the instructions on the cards. He will copy out Chinese characters and pass them in and out of the room through a slot. From the outside, it will appear that the Chinese room contains a fully intelligent person who speaks Chinese. The question is this: is there anyone (or anything) in the room that understands Chinese? That is, is there anything that has the mental state of understanding, or which has conscious awareness of what is being discussed in Chinese? The man is clearly not aware. The room can't be aware. The cards certainly aren't aware. Searle concludes that the Chinese room, or any other physical symbol system, cannot have a mind.[37]

Searle goes on to argue that actual mental states and consciousness require specific "causal properties of the brain." He is not a dualist, rather he believes that there is something special about brains and neurons that gives rise to minds: in his words "brains cause minds."[37]

Responses to Searle

Arguments that the strong AI hypothesis is meaningless

Notes

  1. Turing 1950, Haugeland 1985, pp. 6-9, Crevier 1993, p. 24, Russell & Norvig 2003, pp. 2-3 and 948
  2. This assertion was printed in the program for the Dartmouth Conference of 1956, widely considered the "birth of AI." McCarthy et al. 1955. See also Crevier 1993, p. 28
  3. Newell & Simon 1963 and Russell & Norvig 2003, p. 18
  4. Hobbes 1651, chapter 5
  5. Searle 1980. See also Russell & Norvig 2003, p. 947, where they write "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis." Note, however, that Searle's arguments, such as the Chinese Room, apply only to physical symbol systems, not to machines in general (he would consider the brain a machine). Also, notice that the positions as Searle states them don't make any commitment to how much intelligence the system has: it is one thing to say a machine can act intelligently, it is another to say it can act as intelligently as a human being.
  6. Russell & Norvig, p. 948, Drew McDermott made a similar point about Deep Blue: :"Saying Deep Blue doesn't really think about chess is like saying an airplane doesn't really fly because it doesn't flap its wings." (McDermott 1997).
  7. AI founder John McCarthy writes "we cannot yet characterize in general what kinds of computational procedures we want to call intelligent." See What is Artificial Intelligence?
  8. Turing 1950 and see Russell & Norvig 2003, p. 948, where they call his paper "famous" and write "Turing examined a wide variety of possible objections to the possibility of intelligent machines, including virtually all of those that have been raised in the half century since his paper appeared."
  9. Turing 1950 under "The Argument from Consciousness"
  10. Turing 1950 under "Critique of the New Problem"
  11. Haugeland 1985, p. 8
  12. "These six disciplines represent most of AI". Russell & Norvig 2003, p. 3
  13. Russell & Norvig 2003, p. 3
  14. Turing 1950 under The Imitation Game, where he writes "Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words."
  15. Russell & Norvig 2003, p. 3
  16. See McCarthy's presentation at AI@50
  17. Russell & Norvig 2003, p. 4-5, 32, 35, 36 and 56
  18. Russell and Norvig would prefer the word "rational" to "intelligent".
  19. Newell & Simon 1963 and Russell & Norvig 2003, p. 18. This is the "sufficient condition" of their physical symbol systems hypothesis. They also made the stronger claim that "a physical symbol system has the necessary means of general intelligent action", which implies computationalism. See below).
  20. Turing 1950 under "Universality of Digital Computers"
  21. Russell & Norvig 2003, p. 949, Hofstadter 1979
  22. Lucas 1961, Russell & Norvig 2003, pp. 949-950, Hofstadter 1979, pp. 471-473,476-477
  23. Lucas 1961, p. 57-9
  24. Penrose 1989
  25. Russell & Norvig 2003, p. 950
  26. Hofstadter 1979
  27. According to Hofstadter 1979, p. 476-477, this statement was first proposed by C. H. Whitely
  28. Hofstadter 1979, pp. 476-477, Russell & Norvig 2003, p. 950
  29. Dreyfus & Dreyfus 1986
  30. Dreyfus & Dreyfus 1986 (with brother Stuart Dreyfus) and see From Socrates to Expert Systems. The "knowing-how"/"knowing-that" terminology was introduced in the 1950s by philosopher Gilbert Ryle.
  31. Gladwell 2005
  32. Dreyfus 1972, Russell & Norvig 2003, pp. 951-952
  33. Russell & Norvig 2003, p. 950-51
  34. Turing 1950 under "The Argument from the Informality of Behavior"
  35. Russell & Norvig 2003, p. 52
  36. Crevier 1993, p. 125
  37. Searle 1980
  38. Russell & Norvig, p. 947
  39. Crevier 1993, p. 143
