
Saturday 14 May 2016


Goal-based analysis is useful, but does not say what to do when several actions will achieve the goal, or when no action will achieve it completely. Antoine Arnauld (1612-1694) correctly described a quantitative formula for deciding what action to take in cases like this. John Stuart Mill's (1806-1873) book Utilitarianism (Mill, 1863) promoted the idea of rational decision criteria in all spheres of human activity. The more formal theory of decisions is discussed in the following section.


Mathematics


  • What are the formal rules to draw valid conclusions?
  • What can be computed?
  • How do we reason with uncertain information?

Philosophers staked out most of the important ideas of AI, but the leap to a formal science required a level of mathematical formalization in three fundamental areas: logic, computation, and probability.
The idea of formal logic can be traced back to the philosophers of ancient Greece, but its mathematical development really began with the work of George Boole (1815-1864), who worked out the details of propositional, or Boolean, logic (Boole, 1847).

In 1879, Gottlob Frege (1848-1925) extended Boole's logic to include objects and relations, creating the first-order logic that is used today as the most basic knowledge representation system. Alfred Tarski (1902-1983) introduced a theory of reference that shows how to relate the objects in a logic to objects in the real world. The next step was to determine the limits of what could be done with logic and computation.
The first nontrivial algorithm is thought to be Euclid's algorithm for computing greatest common divisors. The study of algorithms as objects in themselves goes back to al-Khowarazmi, a Persian mathematician of the 9th century, whose writings also introduced Arabic numerals and algebra to Europe. Boole and others discussed algorithms for logical deduction, and, by the late 19th century, efforts were under way to formalize general mathematical reasoning as logical deduction. In 1900, David Hilbert (1862-1943) presented a list of 23 problems that he correctly predicted would occupy mathematicians for the bulk of the century. The final problem asks whether there is an algorithm for deciding the truth of any logical proposition involving the natural numbers: the famous Entscheidungsproblem, or decision problem. Essentially, Hilbert was asking whether there were fundamental limits to the power of effective proof procedures. In 1930, Kurt Gödel (1906-1978) showed that there exists an effective procedure to prove any true statement in the first-order logic of Frege and Russell, but that first-order logic could not capture the principle of mathematical induction needed to characterize the natural numbers. In 1931, he showed that real limits do exist. His incompleteness theorem showed that in any language expressive enough to describe the properties of the natural numbers, there are true statements that are undecidable in the sense that their truth cannot be established by any algorithm.
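
As a concrete illustration of the algorithmic viewpoint, here is a minimal sketch of Euclid's algorithm mentioned above (written in Python purely for readability; the code is not part of the original text). It repeatedly replaces the pair of numbers with the smaller number and the remainder of their division until the remainder is zero.

    def gcd(a: int, b: int) -> int:
        """Euclid's algorithm: greatest common divisor of a and b."""
        while b != 0:
            a, b = b, a % b   # replace (a, b) with (b, a mod b)
        return abs(a)

    print(gcd(252, 105))  # 21, since 252 = 2^2 * 3^2 * 7 and 105 = 3 * 5 * 7
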
This fundamental result can also be interpreted as showing that there are some functions on the integers that cannot be represented by an algorithm; that is, they cannot be computed. This motivated Alan Turing (1912-1954) to try to characterize exactly which functions are capable of being computed. This notion is actually slightly problematic, because the notion of a computation or effective procedure really cannot be given a formal definition. However, the Church-Turing thesis, which states that the Turing machine (Turing, 1936) is capable of computing any computable function, is generally accepted as providing a sufficient definition. Turing also showed that there were some functions that no Turing machine can compute. For example, no machine can tell in general whether a given program will return an answer on a given input or run forever.
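
The reasoning behind that last claim can be sketched in code. The following is only an illustration of Turing's diagonal argument, not anything from the original text; the names halts and trouble are hypothetical, and halts is deliberately left unimplemented because no such function can exist.

    def halts(program_source: str, argument: str) -> bool:
        """Hypothetical: True iff the given program halts on the given input.
        Turing proved no program can decide this for all cases."""
        raise NotImplementedError("impossible in general")

    def trouble(program_source: str) -> None:
        """Do the opposite of whatever halts() predicts for a program run on itself."""
        if halts(program_source, program_source):
            while True:       # predicted to halt, so loop forever
                pass
        # predicted to loop forever, so halt immediately

    # Asking halts() about trouble applied to its own source yields a
    # contradiction either way, so a general halts() cannot exist.
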
Although undecidability and non-computability are important to an understanding of computation, the notion of intractability has had a much greater impact. Roughly speaking, a problem is called intractable if the time required to solve instances of the problem grows exponentially with the size of the instances. The distinction between polynomial and exponential growth in complexity was first emphasized in the mid-1960s (Cobham, 1964; Edmonds, 1965). It is important because exponential growth means that even moderately large instances cannot be solved in any reasonable time. Therefore, one should strive to divide the overall problem of generating intelligent behavior into tractable subproblems rather than intractable ones.
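
A back-of-the-envelope comparison makes the point. The figure of one billion elementary steps per second below is an assumption chosen for illustration, not something from the original text.

    # Rough running times, assuming 10**9 elementary steps per second.
    STEPS_PER_SECOND = 1e9

    for n in (10, 30, 50, 70):
        poly = n**3 / STEPS_PER_SECOND   # a polynomial algorithm taking n^3 steps
        expo = 2**n / STEPS_PER_SECOND   # an exponential algorithm taking 2^n steps
        print(f"n={n:3d}  n^3: {poly:.2e} s   2^n: {expo:.2e} s")

    # At n = 70 the polynomial algorithm needs about 3.4e-4 seconds, while the
    # exponential one needs roughly 1.2e12 seconds (tens of thousands of years).
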
How can one recognize an intractable problem? The theory of NP-completeness, pioneered by Stephen Cook (1971) and Richard Karp (1972), provides a method. Cook and Karp showed the existence of large classes of canonical combinatorial search and reasoning problems that are NP-complete. Any problem class to which the class of NP-complete problems can be reduced is likely to be intractable. (Although it has not been proved that NP-complete problems are necessarily intractable, most theoreticians believe it.) These results contrast with the optimism with which the popular press greeted the first computers: "Electronic Super-Brains" that were "Faster than Einstein!" Despite the increasing speed of computers, careful use of resources will characterize intelligent systems. Put crudely, the world is an extremely large problem instance! In recent years, AI has helped explain why some instances of NP-complete problems are hard, yet others are easy (Cheeseman et al., 1991).
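
Propositional satisfiability (SAT), the problem at the heart of Cook's paper, gives a feel for why these problems look intractable: the obvious method tries all 2^n truth assignments. The sketch below is an illustrative brute-force checker written for this article, not an algorithm from the original text.

    from itertools import product

    def brute_force_sat(n_vars, clauses):
        """Try all 2^n truth assignments for a CNF formula.
        Clauses are lists of nonzero integers: literal +i means variable i is
        true, -i means variable i is false, variables numbered 1..n."""
        for assignment in product([False, True], repeat=n_vars):
            if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
                   for clause in clauses):
                return assignment
        return None

    # (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
    print(brute_force_sat(3, [[1, 2], [-1, 3], [-2, -3]]))
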
Besides logic and computation, the third great contribution of mathematics to AI is the theory of probability. The Italian Gerolamo Cardano (1501-1576) first framed the idea of probability, describing it in terms of the possible outcomes of gambling events. Probability quickly became an invaluable part of all the quantitative sciences, helping to deal with uncertain measurements and incomplete theories. Pierre Fermat (1601-1665), Blaise Pascal (1623-1662), James Bernoulli (1654-1705), Pierre Laplace (1749-1827), and others advanced the theory and introduced new statistical methods. Thomas Bayes (1702-1761) proposed a rule for updating probabilities in the light of new evidence. Bayes' rule and the resulting field called Bayesian analysis form the basis of most modern approaches to uncertain reasoning in AI systems.
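
Bayes' rule states that P(H | E) = P(E | H) P(H) / P(E): the probability of a hypothesis H after seeing evidence E is the prior probability of H rescaled by how strongly H predicts E. The small worked example below uses made-up numbers for a diagnostic test (the prior, sensitivity, and false-positive rate are assumptions for illustration, not figures from the original text).

    # Bayes' rule applied to a diagnostic test with assumed numbers.
    p_disease = 0.01              # prior P(H): 1% of the population has the disease
    p_pos_given_disease = 0.95    # P(E | H): test sensitivity
    p_pos_given_healthy = 0.05    # P(E | not H): false-positive rate

    # Total probability of a positive test, P(E).
    p_pos = (p_pos_given_disease * p_disease
             + p_pos_given_healthy * (1 - p_disease))

    # Posterior P(H | E): probability of disease given a positive test.
    p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
    print(f"P(disease | positive test) = {p_disease_given_pos:.3f}")  # about 0.161

Even with a fairly accurate test, the posterior is only about 16%, which is exactly the kind of counterintuitive updating that Bayesian analysis makes systematic.
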
