LATEST UPDATES
Showing posts with label Artificial-Intelligence. Show all posts

Saturday, 14 May 2016

Computer Engineering

How can we build an efficient computer?

For artificial intelligence to succeed, we need two things: intelligence and an artifact. The computer has been the artifact of choice. The modern digital electronic computer was invented independently and almost simultaneously by scientists in three countries embattled in World War II. The first operational computer was the electromechanical Heath Robinson, built in 1940 by Alan Turing's team for a single purpose: deciphering German messages.
In 1943, the same group developed the Colossus, a powerful general-purpose machine based on vacuum tubes. The first operational programmable computer was the Z-3, the invention of Konrad Zuse in Germany in 1941. Zuse also invented floating-point numbers and the first high-level programming language, Plankalkül. The first electronic computer, the ABC, was assembled by John Atanasoff and his student Clifford Berry between 1940 and 1942 at Iowa State University. Atanasoff's research received little support or recognition; it was the ENIAC, developed as part of a secret military project at the University of Pennsylvania by a team including John Mauchly and John Eckert, that proved to be the most influential forerunner of modern computers.



In the half-century since then, each generation of computer hardware has brought an increase in speed and capacity and a decrease in price. Performance doubles every 18 months or so, with a decade or two to go at this rate of increase. After that, we will need molecular engineering or some other new technology.
Of course, there were calculating devices before the electronic computer. The earliest automated machines date from the 17th century. The first programmable machine was a loom devised in 1805 by Joseph Marie Jacquard (1752-1834) that



used punched cards to store instructions for the pattern to be woven. In the mid-19th century, Charles Babbage (1792-1871) designed two machines, neither of which he completed. The "Difference Engine," which appears on the cover of this book, was intended to compute mathematical tables for engineering and scientific projects. It was finally built and shown to work in 1991 at the Science Museum in London (Swade, 1993).





Babbage's "Analytical Engine" was far more ambitious: it included addressable memory, stored programs, and conditional jumps and was the first artifact capable of universal computation. Babbage's colleague Ada Lovelace, daughter of the poet Lord Byron, was perhaps the world's first programmer. (The programming language Ada is named after her.) She wrote programs for the unfinished Analytical Engine and even speculated that the machine could play chess or compose music.


AI also owes a debt to the software side of computer science, which has supplied the operating systems, programming languages, and tools needed to write modern programs (and papers about them). But this is one area where the debt has been repaid: work in AI has pioneered many ideas that have made their way back to mainstream computer science, including time sharing, interactive interpreters, personal computers with windows and mice, rapid development environments, the linked list data type, automatic storage management, and key concepts of symbolic, functional, dynamic, and object-oriented programming.

How do humans and animals think and act? (Psychology)

The origins of scientific psychology are usually traced to the work of Hermann von Helmholtz (1821-1894) and his student Wilhelm Wundt (1832-1920). Helmholtz applied the scientific method to the study of human vision, and his Handbook of Physiological Optics is even now described as "the single most important treatise on the physics and physiology of human vision" (Nalwa, 1993, p. 15). In 1879, Wundt opened the first laboratory of experimental psychology at the University of Leipzig. Wundt insisted on carefully controlled experiments in which his workers would perform a perceptual or associative task while introspecting on their thought processes. The careful controls went a long way toward making psychology a science, but the subjective nature of the data made it unlikely that an experimenter would ever disconfirm his or her own theories.

Biologists studying animal behavior, on the other hand, lacked introspective data and developed an objective methodology, as described by H. S. Jennings (1906) in his influential work Behavior of the Lower Organisms. Applying this viewpoint to humans, the behaviorism movement, led by John Watson (1878-1958), rejected any theory involving mental processes on the grounds that introspection could not provide reliable evidence. Behaviorists insisted on studying only objective measures of the percepts (or stimulus) given to an animal and its resulting actions (or response). Mental constructs such as knowledge, beliefs, goals, and reasoning steps were dismissed as unscientific "folk psychology." Behaviorism discovered a lot about rats and pigeons, but had less success at understanding humans. Nevertheless, it exerted a strong hold on psychology (especially in the United States) from about 1920 to 1960.

The view of the brain as an information-processing device, which is a principal characteristic of cognitive psychology, can be traced back at least to the works of William James (1842-1910). Helmholtz also insisted that perception involved a form of unconscious logical inference. The cognitive viewpoint was largely eclipsed by behaviorism in the United States, but at Cambridge's Applied Psychology Unit, directed by Frederic Bartlett (1886-1969), cognitive modeling was able to flourish. The Nature of Explanation, by Bartlett's student and successor Kenneth Craik (1943), forcefully reestablished the legitimacy of such "mental" terms as beliefs and goals, arguing that they are just as scientific as, say, using pressure and temperature to talk about gases, despite gases being made of molecules that have neither. Craik specified the three key steps of a knowledge-based agent:

(1) the stimulus must be translated into an internal representation,

(2) the representation is manipulated by cognitive processes to derive new internal representations, and

(3) these are in turn retranslated back into action.

He clearly explained why this was a good design for an agent:

If the organism carries a "small-scale model" of external reality and of its own possible actions within its head, it is able to try out various alternatives, conclude which is the best of them, react to future situations before they arise, utilize the knowledge of past events in dealing with the present and future, and in every way to react in a much fuller, safer, and more competent manner to the emergencies which face it. (Craik, 1943)

After Craik's death in a bicycle accident in 1945, his work was continued by Donald Broadbent, whose book Perception and Communication (1958) included some of the first information-processing models of psychological phenomena. Meanwhile, in the United States, the development of computer modeling led to the creation of the field of cognitive science. The field can be said to have started at a workshop in September 1956 at MIT. (We shall see that this is just two months after the conference at which AI itself was "born.") At the workshop, George Miller presented The Magic Number Seven, Noam Chomsky presented Three Models of Language, and Allen Newell and Herbert Simon presented The Logic Theory Machine. These three influential papers showed how computer models could be used to address the psychology of memory, language, and logical thinking, respectively.

It is now a common view among psychologists that "a cognitive theory should be like a computer program" (Anderson, 1980), that is, it should describe a detailed information-processing mechanism whereby some cognitive function might be implemented.

Neuroscience



  • How do brains process information?
Neuroscience is the study of the nervous system, particularly the brain. The exact way in which the brain enables thought is one of the great mysteries of science. It has been appreciated for thousands of years that the brain is somehow involved in thought, because of the evidence that strong blows to the head can lead to mental incapacitation. It has also long been known that human brains are somehow different; in about 335  B.C.  Aristotle wrote,  " Of all the animals, man has the largest brain in proportion to his size. "  Still, it was not until the middle of the 18th century that the brain was widely recognized as the seat of consciousness. Before then, candidate locations included the heart, the spleen, and the pineal gland.

Paul Broca's (1824-1880) study of aphasia (speech deficit) in brain-damaged patients in 1861 reinvigorated the field and persuaded the medical establishment of the existence of localized areas of the brain responsible for specific cognitive functions. In particular, he showed that speech production was localized to a portion of the left hemisphere now called Broca's area. By that time, it was known that the brain consisted of nerve cells, or neurons, but it was not until 1873 that Camillo Golgi (1843-1926) developed a staining technique allowing the observation of individual neurons in the brain (see Figure 1).

Figure 1 shows the parts of a nerve cell, or neuron. Each neuron consists of a cell body, or soma, that contains a cell nucleus. Branching out from the cell body are a number of fibers called dendrites and a single long fiber called the axon. The axon stretches out for a long distance, much longer than the scale of the diagram indicates. Typically an axon is 1 cm long (100 times the diameter of the cell body), but can reach up to 1 meter. A neuron makes connections with 10 to 100,000 other neurons at junctions called synapses. Signals are propagated from neuron to neuron by a complicated electrochemical reaction. The signals control brain activity in the short term and also enable long-term changes in the position and connectivity of neurons. These mechanisms are thought to form the basis for learning in the brain. Most information processing goes on in the cerebral cortex, the outer layer of the brain. The basic organizational unit appears to be a column of tissue about 0.5 mm in diameter, extending the full depth of the cortex, which is about 4 mm in humans. A column contains about 20,000 neurons.

This technique was used by Santiago Ramon y Cajal (1852 - 1934) in his pioneering studies of the brain's neuronal structures.
We now have some data on the mapping between areas  of  the brain and the parts of the body that they control or from which they receive sensory input. Such mappings are able to change radically over the course of a few weeks, and some animals seem to have multiple maps. Moreover, we do not fully understand how other areas can take over functions when one area is damaged. There  is  almost no theory on how an individual memory is stored.
The measurement of intact brain activity began in 1929 with the invention by Hans Berger of the electroencephalograph (EEG). The recent development of functional magnetic resonance imaging (fMRI) (Ogawa et al., 1990) is giving neuroscientists unprecedentedly detailed images of brain activity, enabling measurements that correspond in interesting ways to ongoing cognitive processes. These are augmented by advances in single-cell recording of neuron activity. Despite these advances, we are still a long way from understanding how any of these cognitive processes actually work.

Figure 2 gives a crude comparison of the raw computational resources available to computers (circa 2003) and brains. The computer's numbers have all increased by at least a factor of 10 since the first edition of this book, and are expected to do so again this decade. The brain's numbers have not changed in the last 10,000 years.

Brains and digital computers perform quite different tasks and have different properties. Figure 2 shows that there are 1000 times more neurons in the typical human brain than there are gates in the CPU of a typical high-end computer. Moore's Law predicts that the CPU's gate count will equal the brain's neuron count around 2020. Of course, little can be inferred from such predictions; moreover, the difference in storage capacity is minor compared to the difference in switching speed and in parallelism. Computer chips can execute an instruction in a nanosecond, whereas neurons are millions of times slower. Brains more than make up for this, however, because all the neurons and synapses are active simultaneously, whereas most current computers have only one or at most a few CPUs. Thus, even though a computer is a million times faster in raw switching speed, the brain ends up being 100,000 times faster at what it does.

Economics

  • How should we make decisions so as to maximize payoff?
  • How should we do this when others may not go along?
  • How should we do this when the payoff may be far in the future?

The science of economics got its start in 1776, when Scottish philosopher Adam Smith (1723-1790) published An Inquiry into the Nature and Causes of the Wealth of Nations. While the ancient Greeks and others had made contributions to economic thought, Smith was the first to treat it as a science, using the idea that economies can be thought of as consisting of individual agents maximizing their own economic well-being. Most people think of economics as being about money, but economists will say that they are really studying how people make choices that lead to preferred outcomes. The mathematical treatment of "preferred outcomes," or utility, was first formalized by Léon Walras (pronounced "Valrasse") (1834-1910) and was improved by Frank Ramsey (1931) and later by John von Neumann and Oskar Morgenstern in their book The Theory of Games and Economic Behavior (1944).

Decision Theory

Decision theory, which combines probability theory with utility theory, provides a formal and complete framework for decisions (economic or otherwise) made under uncertainty, that is, in cases where probabilistic descriptions appropriately capture the decision-maker's environment. This is suitable for "large" economies where each agent need pay no attention to the actions of other agents as individuals. For "small" economies, the situation is much more like a game: the actions of one player can significantly affect the utility of another (either positively or negatively). Von Neumann and Morgenstern's development of game theory (see also Luce and Raiffa, 1957) included the surprising result that, for some games, a rational agent should act in a random fashion, or at least in a way that appears random to the adversaries.
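As a small, hypothetical illustration (the actions, probabilities, and utilities below are invented, not drawn from the text), choosing among uncertain actions by maximum expected utility can be sketched as:

```python
# Maximum expected utility: pick the action whose probability-weighted
# utility is highest. All numbers here are made up for illustration.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def best_action(actions):
    """actions: dict mapping an action name to its (probability, utility) list."""
    return max(actions, key=lambda name: expected_utility(actions[name]))

actions = {
    "insure":      [(1.0, 90)],               # a certain, modest payoff
    "dont_insure": [(0.95, 100), (0.05, 0)],  # risky: small chance of losing all
}

print(best_action(actions))  # → dont_insure (expected utility 95 vs. 90)
```

A genuinely risk-averse agent would encode the aversion in the utility numbers themselves; the framework stays the same.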

For the most part, economists did not address the third question listed above, namely, how to make rational decisions when payoffs from actions are not immediate but instead result from several actions taken in sequence. This topic was pursued in the field of operations research, which emerged in World War II from efforts in Britain to optimize radar installations, and later found civilian applications in complex management decisions. The work of Richard Bellman (1957) formalized a class of sequential decision problems called Markov decision processes.
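Bellman's idea can be sketched with value iteration on a toy Markov decision process; the two-state model, rewards, and discount factor below are invented for illustration and are not Bellman's original formulation:

```python
# Value iteration on a tiny Markov decision process (a sketch).
# transitions[state][action] = list of (probability, next_state, reward).
# The states, rewards, and discount factor are invented for illustration.

GAMMA = 0.9  # discount factor: future rewards count less than immediate ones

transitions = {
    "s0": {"stay": [(1.0, "s0", 0.0)],
           "go":   [(0.8, "s1", 1.0), (0.2, "s0", 0.0)]},
    "s1": {"stay": [(1.0, "s1", 2.0)],
           "go":   [(1.0, "s0", 0.0)]},
}

def value_iteration(trans, gamma=GAMMA, iters=200):
    """Repeatedly apply the Bellman optimality update until nearly converged."""
    V = {s: 0.0 for s in trans}
    for _ in range(iters):
        V = {s: max(sum(p * (r + gamma * V[nxt]) for p, nxt, r in outcomes)
                    for outcomes in trans[s].values())
             for s in trans}
    return V

V = value_iteration(transitions)
print(V)  # staying in s1 forever is worth about 2 / (1 - 0.9) = 20
```

The point of the formalism is exactly the third question above: the value of a state accounts for rewards earned many steps in the future, discounted by how far off they are.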

Work in economics and operations research has contributed much to our notion of rational agents, yet for many years Artificial Intelligence research developed along entirely separate paths. One reason was the apparent complexity of making rational decisions. Herbert Simon (1916-2001), the pioneering Artificial Intelligence researcher, won the Nobel prize in economics in 1978 for his early work showing that models based on satisficing, making decisions that are "good enough" rather than laboriously calculating an optimal decision, gave a better description of actual human behavior (Simon, 1947). Since the 1990s, there has been a resurgence of interest in decision-theoretic techniques for agent systems (Wellman, 1995).


Goal-based analysis is useful, but does not say what to do when several actions will achieve the goal, or when no action will achieve it completely. Antoine Arnauld (1612-1694) correctly described a quantitative formula for deciding what action to take in cases like this. John Stuart Mill's (1806-1873) book Utilitarianism (Mill, 1863) promoted the idea of rational decision criteria in all spheres of human activity. The more formal theory of decisions is discussed in the following section.


Mathematics


  • What are the formal rules to draw valid conclusions?
  • What can be computed?
  • How do we reason with uncertain information?

Philosophers staked out most of the important ideas of AI, but the leap to a formal science required a level of mathematical formalization in three fundamental areas: logic, computation, and probability.
The idea of formal logic can be traced back to the philosophers of ancient Greece, but its mathematical development really began with the work of George Boole (1815-1864), who worked out the details of propositional, or Boolean, logic (Boole, 1847).

In 1879, Gottlob Frege (1848 - 1925) extended Boole's logic to include objects and relations, creating the first - order logic that is used today as the most basic knowledge representation system. Alfred Tarski (1902 - 1983) introduced a theory of reference that shows how to relate the objects in a logic to objects in the real world. The next step was to determine the limits of what could be done with logic and computation.
The first nontrivial algorithm is thought to be Euclid's algorithm for computing greatest common divisors. The study of algorithms as objects in themselves goes back to al-Khowarazmi, a Persian mathematician of the 9th century, whose writings also introduced Arabic numerals and algebra to Europe. Boole and others discussed algorithms for logical deduction, and, by the late 19th century, efforts were under way to formalize general mathematical reasoning as logical deduction.

In 1900, David Hilbert (1862-1943) presented a list of 23 problems that he correctly predicted would occupy mathematicians for the bulk of the century. The final problem asks whether there is an algorithm for deciding the truth of any logical proposition involving the natural numbers, the famous Entscheidungsproblem, or decision problem. Essentially, Hilbert was asking whether there were fundamental limits to the power of effective proof procedures. In 1930, Kurt Gödel (1906-1978) showed that there exists an effective procedure to prove any true statement in the first-order logic of Frege and Russell, but that first-order logic could not capture the principle of mathematical induction needed to characterize the natural numbers. In 1931, he showed that real limits do exist. His incompleteness theorem showed that in any language expressive enough to describe the properties of the natural numbers, there are true statements that are undecidable in the sense that their truth cannot be established by any algorithm.
This fundamental result can also be interpreted as showing that there are some functions on the integers that cannot be represented by an algorithm - that is, they cannot be computed. This motivated Alan Turing (1912 - 1954) to try to characterize exactly which functions are capable of being computed. This notion is actually slightly problematic, because the notion of a computation or effective procedure really cannot be given a formal definition. However, the Church - Turing thesis, which states that the Turing machine (Turing, 1936) is capable of computing any computable function, is generally accepted as providing a sufficient definition. Turing also showed that there were some functions that no Turing machine can compute. For example, no machine can tell in general whether a given program will return an answer on a given input or run forever.
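To make "algorithm" concrete: Euclid's procedure, mentioned above as perhaps the first nontrivial algorithm, fits in a few lines in its standard modern form, whereas Turing's result says that no comparable finite procedure can decide, for every program and input, whether the program halts.

```python
def gcd(a, b):
    """Euclid's algorithm: the gcd of a and b equals the gcd of b and a mod b."""
    while b:
        a, b = b, a % b
    return a

print(gcd(1071, 462))  # → 21
```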
Although undecidability and noncomputability are important to an understanding of computation, the notion of intractability has had a much greater impact. Roughly speaking, a problem is called intractable if the time required to solve instances of the problem grows exponentially with the size of the instances. The distinction between polynomial and exponential growth in complexity was first emphasized in the mid-1960s (Cobham, 1964; Edmonds, 1965). It is important because exponential growth means that even moderately large instances cannot be solved in any reasonable time. Therefore, one should strive to divide the overall problem of generating intelligent behavior into tractable subproblems rather than intractable ones.
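The gap between polynomial and exponential growth is easy to see numerically; the instance sizes and the nominal machine speed below are arbitrary choices for illustration:

```python
# Rough running times for an n^3 algorithm vs. a 2^n algorithm on a
# machine doing a (nominal) billion operations per second.

OPS_PER_SECOND = 1e9

for n in (10, 30, 50, 70):
    poly_seconds = n ** 3 / OPS_PER_SECOND
    expo_seconds = 2 ** n / OPS_PER_SECOND
    print(f"n={n:2d}  n^3: {poly_seconds:9.2e} s   2^n: {expo_seconds:9.2e} s")

# Even at n=70 the polynomial algorithm finishes in well under a millisecond,
# while the exponential one would need tens of thousands of years.
```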
How can one recognize an intractable problem? The theory of NP-completeness, pioneered by Stephen Cook (1971) and Richard Karp (1972), provides a method. Cook and Karp showed the existence of large classes of canonical combinatorial search and reasoning problems that are NP-complete. Any problem class to which the class of NP-complete problems can be reduced is likely to be intractable. (Although it has not been proved that NP-complete problems are necessarily intractable, most theoreticians believe it.) These results contrast with the optimism with which the popular press greeted the first computers: "Electronic Super-Brains" that were "Faster than Einstein!" Despite the increasing speed of computers, careful use of resources will characterize intelligent systems. Put crudely, the world is an extremely large problem instance! In recent years, AI has helped explain why some instances of NP-complete problems are hard, yet others are easy (Cheeseman et al., 1991).
Besides logic and computation, the third great contribution of mathematics to AI is the theory of probability. The Italian Gerolamo Cardano (1501-1576) first framed the idea of probability, describing it in terms of the possible outcomes of gambling events. Probability quickly became an invaluable part of all the quantitative sciences, helping to deal with uncertain measurements and incomplete theories. Pierre Fermat (1601-1665), Blaise Pascal (1623-1662), James Bernoulli (1654-1705), Pierre Laplace (1749-1827), and others advanced the theory and introduced new statistical methods. Thomas Bayes (1702-1761) proposed a rule for updating probabilities in the light of new evidence. Bayes' rule and the resulting field called Bayesian analysis form the basis of most modern approaches to uncertain reasoning in AI systems.
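Bayes' rule itself is one line of arithmetic. As a standard textbook-style illustration (the disease-test numbers are invented), updating a prior in light of a positive test:

```python
# Bayes' rule for a binary hypothesis H given evidence E:
#   P(H | E) = P(E | H) P(H) / [ P(E | H) P(H) + P(E | not H) P(not H) ]
# The prior and test characteristics below are invented for illustration.

def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Posterior probability of H after observing the evidence once."""
    evidence = (p_evidence_given_h * prior
                + p_evidence_given_not_h * (1 - prior))
    return p_evidence_given_h * prior / evidence

# A condition with 1% prevalence; a test that is positive for 99% of carriers
# but also for 5% of non-carriers:
p = posterior(prior=0.01, p_evidence_given_h=0.99, p_evidence_given_not_h=0.05)
print(round(p, 3))  # → 0.167: one positive test leaves the hypothesis unlikely
```

The counterintuitive smallness of the posterior is exactly the kind of uncertain reasoning the rule makes mechanical.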

Wednesday, 27 April 2016

The Foundations of Artificial Intelligence


In this post, we provide a brief history of the disciplines that contributed ideas, viewpoints, and techniques to AI. Like any history, this one is forced to concentrate on a small number of people, events, and ideas and to ignore others that also were important. We organize the history around a series of questions. We certainly would not wish to give the impression that these questions are the only ones the disciplines address or that the disciplines have all been working toward AI as their ultimate fruition.

Philosophy

  • Can formal rules be used to draw valid conclusions?
  • How does the mind arise from a physical brain?
  • Where does knowledge come from?
  • How does knowledge lead to action?

Developments:

Aristotle (384-322 B.C.) was the first to formulate a precise set of laws governing the rational part of the mind. He developed an informal system of syllogisms for proper reasoning, which in principle allowed one to generate conclusions mechanically, given initial premises. Much later, Ramon Lull (d. 1315) had the idea that useful reasoning could actually be carried out by a mechanical artifact. His "concept wheels" are on the cover of this book. Thomas Hobbes (1588-1679) proposed that reasoning was like numerical computation, that "we add and subtract in our silent thoughts." The automation of computation itself was already well under way; around 1500, Leonardo da Vinci (1452-1519) designed but did not build a mechanical calculator; recent reconstructions have shown the design to be functional. The first known calculating machine was constructed around 1623 by the German scientist Wilhelm Schickard (1592-1635), although the Pascaline, built in 1642 by Blaise Pascal (1623-1662), is more famous. Pascal wrote that "the arithmetical machine produces effects which appear nearer to thought than all the actions of animals." Gottfried Wilhelm Leibniz (1646-1716) built a mechanical device intended to carry out operations on concepts rather than numbers,
but its scope was rather limited.
Now that we have the idea of a set of rules that can describe the formal, rational part of the mind, the next step is to consider the mind as a physical system. René Descartes (1596-1650) gave the first clear discussion of the distinction between mind and matter and of the problems that arise. One problem with a purely physical conception of the mind is that it seems to leave little room for free will: if the mind is governed entirely by physical laws, then it has no more free will than a rock "deciding" to fall toward the center of the earth. Although a strong advocate of the power of reasoning, Descartes was also a proponent of dualism. He held that there is a part of the human mind (or soul or spirit) that is outside of nature, exempt from physical laws. Animals, on the other hand, did not possess this dual quality; they could be treated as machines. An alternative to dualism is materialism, which holds that the brain's operation according to the laws of physics constitutes the mind. Free will is simply the way that the perception of available choices appears to the choice process.
Given a physical mind that manipulates knowledge, the next problem is to establish the source of knowledge. The empiricism movement, starting with Francis Bacon's (1561-1626) Novum Organum, is characterized by a dictum of John Locke (1632-1704): "Nothing is in the understanding, which was not first in the senses." David Hume's (1711-1776) A Treatise of Human Nature (Hume, 1739) proposed what is now known as the principle of induction: that general rules are acquired by exposure to repeated associations between their elements. Building on the work of Ludwig Wittgenstein (1889-1951) and Bertrand Russell (1872-1970), the famous Vienna Circle, led by Rudolf Carnap (1891-1970), developed the doctrine of logical positivism. This doctrine holds that all knowledge can be characterized by logical theories connected, ultimately, to observation sentences that correspond to sensory inputs. The confirmation theory of Carnap and Carl Hempel (1905-1997) attempted to understand how knowledge can be acquired from experience. Carnap's book The Logical Structure of the World (1928) defined an explicit computational procedure for extracting knowledge from elementary experiences. It was probably the first theory of mind as a computational process.
The final element in the philosophical picture of the mind is the connection between knowledge and action. This question is vital to AI, because intelligence requires action as well as reasoning. Moreover, only by understanding how actions are justified can we understand how to build  an  agent whose actions are justifiable (or rational). Aristotle argued that actions are justified by a logical connection between goals and knowledge of the action's outcome (the last part of this extract also appears on the front cover of this book).

Thinking humanly: The cognitive modeling approach


If we are going to say that a given program thinks like a human, we must have some way of determining how humans think. We need to get inside the actual workings of human minds. There are two ways to do this: through introspection, trying to catch our own thoughts as they go by, and through psychological experiments. Once we have a sufficiently precise theory of the mind, it becomes possible to express the theory as a computer program. If the program's input/output and timing behaviors match corresponding human behaviors, that is evidence that some of the program's mechanisms could also be operating in humans. For example, Allen Newell and Herbert Simon, who developed GPS, the "General Problem Solver" (Newell and Simon, 1961), were not content to have their program solve problems correctly. They were more concerned with comparing the trace of its reasoning steps to traces of human subjects solving the same problems. The interdisciplinary field of cognitive science brings together computer models from AI and experimental techniques from psychology to try to construct precise and testable theories of the workings of the human mind.
Cognitive science is a fascinating field, worthy of an encyclopedia in itself (Wilson and Keil, 1999). We will not attempt to describe what is known of human cognition in this book. We will occasionally comment on similarities or differences between AI techniques and human cognition. Real cognitive science, however, is necessarily based on experimental investigation of actual humans or animals, and we assume that the reader has access only to a computer for experimentation.
In the early days of AI there was often confusion between the approaches: an author would argue that an algorithm performs well on a task and that it is  therefore  a good model of human performance, or vice versa. Modern authors separate the two kinds of claims; this distinction has allowed both AI and cognitive science to develop more rapidly. The two fields continue to fertilize each other, especially in the areas of vision and natural language. Vision in particular has recently made advances via an integrated approach that considers neurophysiological evidence and computational models.

Thinking rationally: The  " laws of thought "  approach

The Greek philosopher Aristotle was one of the first to attempt to codify "right thinking," that is, irrefutable reasoning processes. His syllogisms provided patterns for argument structures that always yielded correct conclusions when given correct premises, for example, "Socrates is a man; all men are mortal; therefore, Socrates is mortal." These laws of thought were supposed to govern the operation of the mind; their study initiated the field called logic.
Logicians in the 19th century developed a precise notation for statements about all kinds of things in the world and about the relations among them. (Contrast this with ordinary arithmetic notation, which provides mainly for equality and inequality statements about numbers.) By 1965, programs existed that could, in principle, solve  any  solvable problem described in logical notation. The so - called  logicist  tradition within artificial intelligence hopes to build on such programs to create intelligent systems.
There are two main obstacles to this approach. First, it is not easy to take informal knowledge and state it in the formal terms required by logical notation, particularly when the knowledge is less than  100%  certain. Second, there is a big difference between being able to solve a problem  " in principle "  and doing so in practice. Even problems with just a few dozen facts can exhaust the computational resources of any computer unless it has some guidance as to which reasoning steps to try first. Although both of these obstacles apply to  any  attempt to build computational reasoning systems, they appeared first in the logicist tradition.
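The classic Socrates syllogism above is simple enough to mechanize. The toy forward-chainer below handles only rules of the form "all P are Q" and is a far cry from the general logical-notation solvers the logicist tradition builds on, but it shows the principle of deriving conclusions mechanically from premises:

```python
# A toy forward-chaining reasoner: derives mortal(Socrates) from
# man(Socrates) and the rule "all men are mortal" (man(x) -> mortal(x)).

facts = {("man", "Socrates")}
rules = [("man", "mortal")]  # each rule: (premise predicate, conclusion predicate)

changed = True
while changed:
    changed = False
    for premise, conclusion in rules:
        for predicate, subject in list(facts):
            if predicate == premise and (conclusion, subject) not in facts:
                facts.add((conclusion, subject))
                changed = True

print(("mortal", "Socrates") in facts)  # → True
```

Both obstacles named above show up even here: the premises had to be hand-encoded as tuples, and a realistic rule base would force the loop to choose carefully which inferences to try first.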

Acting rationally: The rational agent approach

An agent is just something that acts (agent comes from the Latin agere, to do). But computer agents are expected to have other attributes that distinguish them from mere "programs," such as operating under autonomous control, perceiving their environment, persisting over a prolonged time period, adapting to change, and being capable of taking on another's goals.

RATIONAL AGENT: 

A rational agent  is one that acts so as to achieve the best outcome or, when there is uncertainty, the best expected outcome.
In the  " laws of thought "  approach to  AI,  the emphasis was on correct inferences. Making correct inferences is sometimes  part  of being a rational agent, because one way to act rationally is to reason logically to the conclusion that a given action will achieve ones goals and then to act on that conclusion. On the other hand, correct inference is not  all  of rationality, because there are often situations where there is no provably correct thing to do, yet something must still be done. There are also ways of acting rationally that cannot be said to involve inference. For example, recoiling from a hot stove is a reflex action that is usually more successful than a slower action taken after careful deliberation.
All the skills needed for the Turing Test are there to allow rational actions. Thus, we need the ability to represent knowledge and reason with it because this enables us to reach good decisions in a wide variety of situations. We need to be able to generate comprehensible sentences in natural language because saying those sentences helps us get by in a complex society. We need learning not just for erudition, but because having a better idea of how the world works enables us to generate more effective strategies for dealing with it. We need visual perception not just because seeing is fun, but to get a better idea of what an action might achieve; for example, being able to see a tasty morsel helps one to move toward it.
For these reasons, the study of AI as rational agent design has at least two advantages. First, it is more general than the "laws of thought" approach, because correct inference is just one of several possible mechanisms for achieving rationality. Second, it is more amenable to scientific development than are approaches based on human behavior or human thought, because the standard of rationality is clearly defined and completely general. Human behavior, on the other hand, is well-adapted for one specific environment and is the product, in part, of a complicated and largely unknown evolutionary process that still is far from producing perfection. This book will therefore concentrate on general principles of rational agents and on components for constructing them. We will see that despite the apparent simplicity with which the problem can be stated, an enormous variety of issues come up when we try to solve it.
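The rational-agent idea can be made concrete with a minimal sketch: an agent that perceives a temperature and picks whichever action maximizes its expected utility. The action set, the predicted effects, and the utility numbers below are all invented for illustration; real agents face far richer and more uncertain environments:

```python
# A minimal rational-agent sketch: at each step the agent perceives its
# environment (a temperature reading) and chooses the action with the
# best expected outcome, measured as closeness to a target temperature.

ACTIONS = ["heat", "cool", "idle"]

def expected_utility(action, perceived_temp, target=21.0):
    """Expected closeness to the target temperature after acting."""
    effect = {"heat": +2.0, "cool": -2.0, "idle": 0.0}[action]
    predicted = perceived_temp + effect
    return -abs(predicted - target)   # higher (less negative) is better

def rational_agent(perceived_temp):
    """Choose the action that maximizes expected utility."""
    return max(ACTIONS, key=lambda a: expected_utility(a, perceived_temp))

print(rational_agent(17.0))  # well below target: "heat"
print(rational_agent(25.0))  # above target: "cool"
print(rational_agent(21.0))  # at target: "idle"
```

Note that nothing here is logical inference: the agent acts rationally simply by comparing expected outcomes, which echoes the point that correct inference is one mechanism for rationality, not the whole of it.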

What is Artificial Intelligence?


We have claimed that AI is exciting, but we have not yet said what it is.
Definitions of artificial intelligence fall into four basic approaches, which vary along two major dimensions: the first concerns thought processes and reasoning versus behavior; the second measures success against human performance versus rationality. The four approaches are:

Systems that think like humans
Systems that think rationally
Systems that act like humans
Systems that act rationally
Systems that think like humans

The first approach defines AI as "The exciting new effort to make computers think ... machines with minds, in the full and literal sense" (Haugeland, 1985). A second definition is "The automation of activities that we associate with human thinking, activities such as decision making, problem solving, learning" (Bellman, 1978). The question this approach raises is: can a system really think like a human?

Systems that think rationally

The second approach defines AI as "The study of mental faculties through the use of computational models" (Charniak and McDermott, 1985), and as "The study of the computations that make it possible to perceive, reason, and act" (Winston, 1992).

Systems that act like humans

This approach differs from the others: here, AI researchers hold that a computer can act like a human, that is, think, speak, and learn. Two definitions capture the idea. The first is "The art of creating machines that perform functions that require intelligence when performed by people" (Kurzweil, 1990). The second is "The study of how to make computers do things at which, at the moment, people are better" (Rich and Knight, 1991).

Systems that act rationally

The fourth approach defines AI as "Computational Intelligence is the study of the design of intelligent agents" (Poole et al., 1998), and as "AI ... is concerned with intelligent behavior in artifacts" (Nilsson, 1998).

Historically, all four approaches to AI have been followed. As one might expect, a tension exists between approaches centered around humans and approaches centered around rationality.
A human-centered approach must be an empirical science, involving hypotheses and experimental confirmation. A rationalist approach involves a combination of mathematics and engineering. Each group has both disparaged and helped the other. Let us look at the four approaches in more detail.

Acting humanly: The Turing Test approach

The Turing Test, proposed by Alan Turing in 1950, was designed to provide a satisfactory operational definition of intelligence. Rather than proposing a long and perhaps controversial list of qualifications required for intelligence, he suggested a test based on indistinguishability from undeniably intelligent entities: human beings. The computer passes the test if a human interrogator, after posing some written questions, cannot tell whether the written responses come from a person or not. The details of the test, and whether a computer that passes it is really intelligent, are discussed further elsewhere. For now, we note that programming a computer to pass the test provides plenty to work on. The computer would need to possess the following capabilities:

Natural language processing  to enable it to communicate successfully in English.
Knowledge representation  to store what it knows or hears.
Automated reasoning  to use the stored information to answer questions and to draw new conclusions.
Machine learning  to adapt to new circumstances and to detect and extrapolate patterns.

Turing's test deliberately avoided direct physical interaction between the interrogator and the computer, because physical simulation of a person is unnecessary for intelligence. However, the so-called total Turing Test includes a video signal so that the interrogator can test the subject's perceptual abilities, as well as the opportunity for the interrogator to pass physical objects "through the hatch." To pass the total Turing Test, the computer will also need
Computer vision  to perceive objects, and Robotics  to manipulate objects and move about.


These six disciplines compose most of AI, and Turing deserves credit for designing a test that remains relevant 50 years later. Yet AI researchers have devoted little effort to passing the Turing Test, believing that it is more important to study the underlying principles of intelligence than to duplicate an exemplar. The quest for "artificial flight" succeeded when the Wright brothers and others stopped imitating birds and learned about aerodynamics. Aeronautical engineering texts do not define the goal of their field as making "machines that fly so exactly like pigeons that they can fool even other pigeons."
 
Copyright © 2014 Blue Coderz