Wednesday, September 3, 2008

Artificial Intelligence

AI (Artificial Intelligence)

1. Introduction

AI is the study and design of intelligent systems that perceive their environment and take actions that maximize their chances of success. AI can be seen as the realization of an abstract intelligent agent (AIA), which exhibits the functional essence of intelligence. John McCarthy, who coined the term, defines it as "the science and engineering of making intelligent machines". Among the distinguishing features that researchers hope machines will exhibit are reasoning, knowledge, planning, learning, communication, understanding, and the ability to move and manipulate objects.

General intelligence (or "strong AI") has not yet been achieved and remains a long-term goal of AI research. AI research uses tools and insights from many fields, including computer science, psychology, philosophy, neuroscience, cognitive science, linguistics, ontology, operations research, economics, control theory, probability, optimization and logic. AI research also overlaps with tasks such as robotics, control systems, scheduling, data mining, logistics, speech recognition, facial recognition and many others.

Other names for the field have been proposed, such as computational intelligence, synthetic intelligence, intelligent systems, or computational rationality. There have been a variety of AI programs, and they have impacted other technological advancements. In 1949, the stored-program computer made the job of entering a program easier, and advances in computer theory led to computer science and, eventually, artificial intelligence. With the invention of an electronic means of processing data came a medium that made AI possible.

2. History

In the middle of the 20th century, a handful of scientists began a new approach to building intelligent machines, based on recent discoveries in neurology, a new mathematical theory of information, an understanding of control and stability called cybernetics, and above all, the invention of the digital computer, a machine based on the abstract essence of mathematical reasoning. The field of modern AI research was founded at a conference on the campus of Dartmouth College in the summer of 1956.

Those who attended would become the leaders of AI research for many decades, especially John McCarthy, Marvin Minsky, Allen Newell and Herbert Simon, who founded AI laboratories at MIT, CMU and Stanford.

They and their students wrote programs that were, to most people, simply astonishing: computers were solving word problems in algebra, proving logical theorems and speaking English. By the mid-1960s their research was heavily funded by the U.S. Department of Defense, and they were optimistic about the future of the new field:

In 1965, H. A. Simon: "Machines will be capable, within twenty years, of doing any work a man can do."

In 1967, Marvin Minsky: "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved." These predictions, and many like them, would not come true. They had failed to recognize the difficulty of some of the problems they faced. In 1974, in response to the criticism of England's Sir James Lighthill and ongoing pressure from Congress to fund more productive projects, the U.S. and British governments cut off all undirected, exploratory research in AI.

This was the first AI Winter. In the early 80s, AI research was revived by the commercial success of expert systems (a form of AI program that simulated the knowledge and analytical skills of one or more human experts) and by 1985 the market for AI had reached more than a billion dollars.

Minsky and others warned the community that enthusiasm for AI had spiraled out of control and that disappointment was sure to follow. Beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, more lasting AI Winter began. In the 90s and early 21st century AI achieved its greatest successes, albeit somewhat behind the scenes. Artificial intelligence was adopted throughout the technology industry, providing the heavy lifting for logistics, data mining, medical diagnosis and many other areas.

The success was due to several factors: the incredible power of computers today, a greater emphasis on solving specific subproblems, the creation of new ties between AI and other fields working on similar problems, and above all a new commitment by researchers to solid mathematical methods and rigorous scientific standards.

3. Tools of AI Research

In the course of 50 years of research, AI has developed a large number of tools to solve the most difficult problems in computer science. A few of the most general of these methods are discussed below:


Poplog is a sophisticated, highly extendable collection of tools for teaching, research and development, providing three powerful AI programming languages (Pop11, the core language, Common Lisp, and Prolog) as well as Standard ML.

The teaching materials allow students to learn about many aspects of Artificial Intelligence and Cognitive Science by designing, implementing, testing, extending, and analysing working programs. The 'TEACH' files are accessible through an integrated visual editor, which 'knows about' the compilers, the program libraries and the documentation libraries. E.g. a tutorial file can have a hypertext link to other documentation or to program libraries.

Programming examples can be run from inside the editor, then modified and run again, without restarting the editor or Poplog. The environment has many of the interactive features of interpreted languages, though the Poplog languages are all incrementally compiled to machine code for efficiency.

There are many libraries extending the core system to support various kinds of programming tasks. For example the SIM Agent toolkit supports exploration of architectures for more or less intelligent agents.

There is also a collection of libraries and tutorials related to computer vision, i.e. image analysis and interpretation, in David Young's "POP vision" Library including neural net, array manipulation, and linear algebra utilities.

For more information on Poplog and its history see

4. Examples of AI Research Areas

Many AI problems can be solved by searching through trees of goals and subgoals, attempting to find a path to a target goal. There are several types of search algorithms designed to reach the goal in minimal time.

"Uninformed" search algorithms eventually search through every possible answer until they locate their goal. Naive algorithms quickly run into problems when they expand the size of their search space to astronomical numbers. The result is a search that is too slow or never completes.
Heuristic or "informed" searches use heuristic methods to eliminate choices that are unlikely to lead to their goal, thus drastically reducing the number of possibilities they must explore. The elimination of choices that are certain not to lead to the goal is called pruning.
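As an illustration, a minimal greedy best-first search can be sketched in Python. The graph (moves on a number line) and the heuristic below are invented purely for the example:

```python
import heapq

def best_first_search(start, goal, neighbors, h):
    """Greedy best-first search: always expand the node whose heuristic
    value h(node) suggests it is closest to the goal."""
    frontier = [(h(start), start)]
    came_from = {start: None}
    while frontier:
        _, node = heapq.heappop(frontier)
        if node == goal:
            # Reconstruct the path from start to goal.
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for nxt in neighbors(node):
            if nxt not in came_from:  # skip already-seen states
                came_from[nxt] = node
                heapq.heappush(frontier, (h(nxt), nxt))
    return None  # goal unreachable

# Toy example: move from 0 to 7 by steps of +1 or +2,
# guided by the heuristic "distance remaining to 7".
path = best_first_search(0, 7, lambda n: [n + 1, n + 2], h=lambda n: abs(7 - n))
print(path)  # [0, 2, 4, 6, 7]
```

The heuristic steers the search straight toward the goal; an "uninformed" search would instead enumerate states until it stumbled on 7.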

Local searches, such as hill climbing, simulated annealing and beam search, use techniques borrowed from optimization theory.
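As a sketch of the simplest of these local techniques, hill climbing repeatedly moves to the best neighboring state until no neighbor improves the objective. The one-dimensional objective below is invented for illustration:

```python
def hill_climb(x, f, neighbors, max_steps=1000):
    """Hill climbing: move to the best neighbor of x until no neighbor
    improves the objective f, i.e. until a local optimum is reached."""
    for _ in range(max_steps):
        best = max(neighbors(x), key=f)
        if f(best) <= f(x):
            return x  # local optimum: no neighbor is better
        x = best
    return x

# Maximize f(x) = -(x - 3)^2 over the integers, stepping by +/-1 from 0.
result = hill_climb(0, lambda x: -(x - 3) ** 2, lambda x: [x - 1, x + 1])
print(result)  # climbs 0 -> 1 -> 2 -> 3 and stops at the peak
```

On a multi-peaked objective this same loop can get stuck at a local optimum, which is exactly the weakness that simulated annealing and the global methods below address.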

Global searches are more robust in the presence of local optima. Techniques include evolutionary algorithms, swarm intelligence and random optimization algorithms. (Algorithms written in the Java language will be provided in another edition.)

Knowledge capture, representation and reasoning

In order to guide search or even to describe problems, actions, and solutions the relevant domain knowledge must be encoded in a form that can be effectively manipulated by a program.

More generally, the usefulness of any reasoning process depends not only on the reasoning process itself, but also on having the right knowledge and representing it in a form the program can use. In the logicist approach to knowledge representation and reasoning, information is encoded as assertions in logic, and the system draws conclusions by deduction from those assertions. Other research studies non-deductive forms of reasoning, such as reasoning by analogy and abduction, the process of inferring the best explanation for a set of facts.

Abduction does not guarantee sound conclusions, but it is enormously useful for tasks such as medical diagnosis, in which a reasoner must hypothesize causes for a set of symptoms. Capturing the knowledge needed by AI systems has proven to be a challenging task.

The knowledge in rule-based expert systems, for example, is represented in the form of rules listing conditions to check for, and conclusions to be drawn if those conditions are satisfied. For example, a rule might state that IF certain conditions hold (e.g., the patient has certain symptoms), THEN certain conclusions should be drawn (e.g., that the patient has a particular condition or disease).
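A minimal sketch of such a rule-based system is forward chaining: fire any rule whose conditions hold and add its conclusion, repeating until nothing new can be derived. The medical rules and symptom names below are invented for illustration, not clinical knowledge:

```python
# Each rule pairs a set of IF-conditions with a THEN-conclusion.
rules = [
    ({"fever", "cough"}, "possible flu"),
    ({"possible flu", "body aches"}, "likely flu"),
]

def forward_chain(facts, rules):
    """Forward chaining: repeatedly fire every rule whose conditions
    are all satisfied, adding its conclusion as a new fact, until the
    set of facts stops growing."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"fever", "cough", "body aches"}, rules)
print(derived)  # includes "possible flu" and "likely flu"
```

Note how the second rule only becomes applicable after the first has fired, which is what lets chains of rules mimic an expert's multi-step reasoning.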

A natural way to generate these rules is to interview experts. Unfortunately, the experts may not be able to adequately explain their decisions in a rule-based way, resulting in a "knowledge-acquisition bottleneck" impeding system development.

One approach to alleviating the knowledge acquisition problem is to develop sharable knowledge sources that represent knowledge in a form that can be re-used across multiple tasks. The CYC project, for example, is a massive ongoing effort to encode the "consensus knowledge" that underlies much commonsense reasoning [Lenat, 1995]. Much current knowledge representation research develops sharable ontologies that represent particular domains.

Ontologies provide a formal specification of the concepts in the domain and their relationships, to use as a foundation for developing knowledge bases and facilitating knowledge sharing [Chandrasekaran et al., 1999].

Reasoning under uncertainty

AI systems, like people, must often act despite partial and uncertain information. First, the information received may be unreliable (e.g., a patient may not remember when a disease started, or may not have noticed a symptom that is important to a diagnosis).

In addition, rules connecting real-world events can never include all the factors that might determine whether their conclusions really apply (e.g., the correctness of basing a diagnosis on a lab test depends on whether there were conditions that might have caused a false positive, on the test being done correctly, on the results being associated with the right patient, etc.). Thus, in order to draw useful conclusions, AI systems must be able to reason about the probability of events, given their current knowledge (see PROBABILITY).

Research on Bayesian reasoning provides methods for calculating these probabilities. Bayesian networks, graphical models of the relationships between variables of interest, have been applied to a wide range of tasks, including natural language understanding, user modeling, and medical diagnosis. For example, Intellipath, a commercial system for pathology diagnosis, was approved by the AMA and has been implemented in hundreds of hospitals worldwide. Diagnostic reasoning may also be combined with reasoning about the value of alternative actions, in order to select the course of action with the greatest expected utility.

For example, a medical decision-making system might make decisions by considering the probability of a patient having a particular condition, the probability of bad side-effects of a treatment and their severity, and the probability and severity of bad effects if the treatment is not performed.
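A minimal sketch of this combination, with all probabilities and utilities invented for illustration: Bayes' rule turns a positive test into a posterior probability of disease, and expected utility then compares treating against waiting.

```python
def posterior(prior, sensitivity, false_positive_rate):
    """Bayes' rule: P(disease | positive test)."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# Illustrative numbers: a rare disease and a fairly accurate test.
p = posterior(prior=0.01, sensitivity=0.95, false_positive_rate=0.05)

def expected_utility(p_sick, u_if_sick, u_if_healthy):
    """Expected utility of an action, weighting outcomes by probability."""
    return p_sick * u_if_sick + (1 - p_sick) * u_if_healthy

# Treating a sick patient helps a lot; treating a healthy one carries
# a small side-effect cost; leaving a sick patient untreated is costly.
eu_treat = expected_utility(p, u_if_sick=100, u_if_healthy=-5)
eu_wait = expected_utility(p, u_if_sick=-50, u_if_healthy=0)
print(round(p, 3), eu_treat > eu_wait)  # 0.161 True
```

Even though the posterior probability of disease is only about 16%, the asymmetric utilities make treatment the action with the greatest expected utility here.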

In addition to dealing with uncertain information, everyday reasoners must be able to deal with vague descriptions, such as those provided in natural language. For example, a doctor who is told that a patient has a "high fever" must be able to reason about the fuzzy concept of a "high fever".

Whether a particular fever is "high" is not simply a true-or-false decision made at a cutoff point, but rather a matter of degree. Fuzzy reasoning provides methods for reasoning about vague knowledge.
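A fuzzy membership function makes this concrete: instead of a yes/no cutoff, it assigns each temperature a degree of membership between 0 and 1. The thresholds below are illustrative, not clinical:

```python
def high_fever_degree(temp_c):
    """Fuzzy membership: the degree (0..1) to which a temperature
    counts as a 'high fever'. Below 38 C it is not high at all;
    above 40 C it is fully high; in between, partially so."""
    if temp_c <= 38.0:
        return 0.0
    if temp_c >= 40.0:
        return 1.0
    return (temp_c - 38.0) / 2.0  # linear ramp between the thresholds

print(high_fever_degree(37.0), high_fever_degree(39.0), high_fever_degree(40.5))
# 0.0 0.5 1.0
```

A fuzzy rule system would then combine such degrees (e.g., taking minimums for AND) rather than branching on a hard true/false test.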

Planning, Vision, and Robotics

The conclusions of the reasoning process can determine goals to be achieved. Planning addresses the question of how to determine a sequence of actions to achieve those goals. The resulting action sequences may be designed to be applied in many ways, such as by robots in the world, by intelligent agents on the Internet, or even by humans.

Planning systems may use a number of techniques to make the planning process practical, such as hierarchical planning, first planning at higher levels of abstraction and then elaborating details within the high-level framework, and partial-order planning, enabling actions to be inserted in the plan in any order, rather than chronologically, and sub-plans to be merged. Dean and Kambhampati (1997) provide an extensive survey of this area.
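Hierarchical and partial-order planners are substantial systems, but the underlying idea of searching for an action sequence can be sketched with plain state-space planning: breadth-first search over world states, where each action has preconditions and effects. The key/door domain below is a toy invented for the example:

```python
from collections import deque

def plan(start, goal, actions):
    """Breadth-first search over world states: returns the shortest
    sequence of action names turning start into a goal-satisfying state."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps
        for name, pre, add, delete in actions:
            if pre <= state:  # action applicable in this state
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None  # no plan exists

# Toy domain: grab a key, then open a door.
# Each action is (name, preconditions, add-effects, delete-effects).
actions = [
    ("pick-up-key", {"key-on-table"}, {"holding-key"}, {"key-on-table"}),
    ("open-door", {"holding-key", "door-closed"}, {"door-open"}, {"door-closed"}),
]
steps = plan(frozenset({"key-on-table", "door-closed"}), {"door-open"}, actions)
print(steps)  # ['pick-up-key', 'open-door']
```

Real planners avoid this exhaustive enumeration of states; hierarchy and partial ordering exist precisely because blind state-space search blows up on non-toy domains.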

In real-world situations, it is seldom possible to generate a complete plan in advance and then execute it without changes. The state of the world may be imperfectly known, the effects of actions may be uncertain, the world may change while the plan is being generated or executed, and the plan may require the coordination of multiple cooperating agents, or counter-planning to neutralize the interference of agents with opposing goals.

Determining the state of the world and guiding action requires the ability to gather information about the world, through sensors such as sonar or cameras, and to interpret that information to draw conclusions (see MACHINE VISION). In addition, carrying out actions in a messy and changing world may require rapid responses to important events (e.g., for a robot-guided vehicle to correct a skid), or an ongoing process of rapidly selecting actions based on the current context (for example, when a basketball player must avoid an opponent).

Such problems have led to research on reactive planning, as well as on how to integrate reactive methods with the deliberative methods providing long-term guidance (see ROBOTICS). The RoboCup Federation sponsors an annual series of competitions between robot soccer teams as a testbed for demonstrating new methods and extending the state of the art in robotics.

5. Practical Impact of AI

AI technology has had broad impact. AI components are embedded in numerous devices, such as copy machines that combine case-based reasoning and fuzzy reasoning to automatically adjust the copier to maintain copy quality. AI systems are also in everyday use for tasks such as identifying credit card fraud, configuring products, aiding complex planning tasks, and advising physicians.

AI is also playing an increasing role in corporate knowledge management, facilitating the capture and reuse of expert knowledge. Intelligent tutoring systems make it possible to provide students with more personalized attention, and even for the computer to listen to what children say and respond to it. Cognitive models developed by AI can also suggest principles for effective support for human learning, guiding the design of educational systems [Leake and Kolodner, 2001].

AI technology is being used in autonomous agents that independently monitor their surroundings, make decisions and act to achieve their goals without human intervention. For example, in space exploration, the lag times for communications between earth and probes make it essential for robotic space probes to be able to perform their own decision-making.

Depending on the relative locations of the earth and Mars, one-way communication can take over 20 minutes. In a 1999 experiment, an AI system was given primary control of a spacecraft, NASA's Deep Space 1, 60,000,000 miles from earth, as a step towards autonomous robotic exploration of space. Methods from autonomous systems also promise to provide important technologies to aid humans.

For example, in a 1996 experiment called "No Hands Across America", the RALPH system [Pomerleau and Jochem, 1996], a vision-based adaptive system that learns road features, was used to drive a vehicle for 98 percent of a trip from Washington, D.C., to San Diego, maintaining an average speed of 63 mph in daytime, dusk and night driving conditions. Such systems could be used not only for autonomous vehicles, but also for safety systems that warn drivers if their vehicles deviate from a safe path.

In electronic commerce, AI is providing methods for determining which products buyers
want and configuring them to suit buyers' needs. The explosive growth of the internet has also led to growing interest in internet agents to monitor users' tasks, seek needed information, and learn which information is most useful [Hendler, 1999].

For example, the Watson system monitors users as they perform tasks using standard software tools such as word processors, and uses the task context to focus search for useful information to provide to them as they work [Budzik and Hammond, 2000]. Continuing investigation of fundamental aspects of intelligence promises broad impact as well. For example, researchers are studying the nature of creativity and how to achieve creative computer systems, providing strong arguments that creativity can be realized by
artificial systems [Hofstadter, 1985].

Numerous programs have been developed for tasks that would be considered creative in humans, such as discovering interesting mathematical concepts, in the program AM [Lenat, 1979], making paintings, in Aaron [Cohen, 1995], and performing creative explanation, in SWALE [Schank and Leake, 1989]. The task of AM, for example, was not to prove mathematical theorems, but to discover interesting concepts.

The program was provided only with basic background knowledge from number theory, and with heuristics for revising existing concepts and selecting promising concepts to explore. Starting from this knowledge, it discovered fundamental concepts such as addition, multiplication, and prime numbers.

It even rediscovered a famous mathematical conjecture that was not known to its programmer: Goldbach's conjecture, the conjecture that every even integer greater than 2 can be written as the sum of two primes. Buchanan (2001) surveys some significant projects in machine creativity and argues for its potential impact
on the future of artificial intelligence.

6. Evaluating Artificial Intelligence

Artificial intelligence can also be evaluated on specific problems, such as small problems in chemistry, handwriting recognition and game-playing. Such tests have been termed subject-matter-expert Turing tests. Smaller problems provide more achievable goals, and there is an ever-increasing number of positive results.

The broad classes of outcome for an AI test are:
Optimal: it is not possible to perform better
Strong super-human: performs better than all humans
Super-human: performs better than most humans
Sub-human: performs worse than most humans

For example, performance at checkers is optimal, performance at chess is super-human and nearing strong super-human and performance at many everyday tasks performed by humans is sub-human.


The military put AI-based hardware to the test of war during Desert Storm. AI-based technologies were used in missile systems, heads-up displays, and other advanced systems. AI has also made the transition to the home.

With the growing popularity of AI on personal computers, public interest has also grown. Applications for the Apple Macintosh and IBM-compatible computers, such as voice and character recognition, have become available, some as open source. AI technology has also made steadying camcorder images simple using fuzzy logic. With greater demand for AI-related technology, new advancements are becoming available.

Inevitably, artificial intelligence has affected, and will continue to affect, our lives. Shortest-path finding, games, GPS systems, robotic engineering and other electronics-based systems now use AI, and its efficiency is growing day by day.

AI has increased understanding of the nature of intelligence and provided an impressive array of applications in a wide range of areas. It has sharpened understanding of human reasoning, and of the nature of intelligence in general. At the same time, it has revealed the complexity of modeling human reasoning, providing new areas and rich challenges for the future.
