Definition: Artificial intelligence (AI) is the intelligence of machines and the branch of computer science that aims to create it. The field can also be defined as "the study and design of intelligent agents," where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. John McCarthy, who coined the term in 1956, defined it as "the science and engineering of making intelligent machines."
Problems of AI: As we attempt to implement a model by which a machine could equal a human in intelligence, several problems become apparent. The most prominent of them include:
1. Deduction, reasoning, problem solving
2. Knowledge representation
a. Default reasoning and the qualification problem
b. The breadth of commonsense knowledge
3. Planning
4. Learning
5. Perception
6. Creativity
Cybernetics and brain simulation: The human brain provides inspiration for artificial intelligence researchers; however, there is no consensus on how closely it should be simulated. In the 1940s and 1950s, a number of researchers explored the connection between neurology, information theory, and cybernetics. Some of them built machines that used electronic networks to exhibit rudimentary intelligence, such as W. Grey Walter's turtles and the Johns Hopkins Beast. Many of these researchers gathered for meetings of the Teleological Society at Princeton University and the Ratio Club in England.
Uses of AI:
1. AI research has developed highly successful methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.
2. Brandeis University researchers Hod Lipson and Jordan Pollack have developed a computer that can build robots.
3. Producing advanced computer games.
4. Performing complex calculations, scientific and non-scientific.
5. Assisting with many of the day-to-day jobs that a human has to perform.
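Point 1 above, reasoning with uncertain or incomplete information using probability, can be sketched with a simple Bayes-rule update. The fault-diagnosis scenario and all of its numbers below are invented purely for illustration.

```java
// Illustration of reasoning under uncertainty with Bayes' rule:
// P(H|E) = P(E|H) * P(H) / P(E).
// The prior and likelihood values here are made-up example numbers.
public class BayesExample {
    // Posterior probability of hypothesis H given evidence E.
    static double posterior(double priorH, double likelihoodEGivenH, double likelihoodEGivenNotH) {
        // Total probability of the evidence under both hypotheses.
        double evidence = likelihoodEGivenH * priorH + likelihoodEGivenNotH * (1 - priorH);
        return likelihoodEGivenH * priorH / evidence;
    }

    public static void main(String[] args) {
        // Hypothetical fault diagnosis: prior belief in a fault is 1%,
        // the sensor fires 90% of the time when the fault is present,
        // and 5% of the time when it is not.
        double p = posterior(0.01, 0.90, 0.05);
        System.out.printf("Updated belief after evidence: %.3f%n", p);
    }
}
```

Even weak evidence shifts the 1% prior to roughly a 15% belief, which is the kind of incremental update such systems perform.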
TESTING OF AI: In 1950, Alan Turing proposed a general procedure to test the intelligence of an agent, now known as the Turing test. This procedure allows almost all the major problems of artificial intelligence to be tested. However, it is a very difficult challenge, and at present all agents fail. Artificial intelligence can also be evaluated on specific problems, such as small problems in chemistry, handwriting recognition, and game playing. Such tests have been termed subject-matter-expert Turing tests. Smaller problems provide more achievable goals, and there are an ever-increasing number of positive results.
The broad classes of outcome for an AI test are:
1. Optimal: it is not possible to perform better
2. Strong super-human: performs better than all humans
3. Super-human: performs better than most humans
4. Sub-human: performs worse than most humans
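The four outcome classes above can be expressed mechanically. The idea of a single numeric "score" and the benchmark parameters below are hypothetical stand-ins for whatever metric a given test actually uses; this is only a sketch of the classification logic.

```java
// Classifies a test score against human benchmarks, following the
// outcome classes listed above. "Score" is a hypothetical stand-in
// for whatever metric a particular AI test uses.
public class OutcomeClass {
    enum Outcome { OPTIMAL, STRONG_SUPER_HUMAN, SUPER_HUMAN, SUB_HUMAN }

    static Outcome classify(double score, double bestPossible, double bestHuman, double medianHuman) {
        if (score >= bestPossible) return Outcome.OPTIMAL;            // not possible to perform better
        if (score > bestHuman)     return Outcome.STRONG_SUPER_HUMAN; // better than all humans
        if (score > medianHuman)   return Outcome.SUPER_HUMAN;        // better than most humans
        return Outcome.SUB_HUMAN;                                     // worse than most humans
    }

    public static void main(String[] args) {
        // Example: a game-playing agent scoring 0.95 where the best human scores 0.9.
        System.out.println(classify(0.95, 1.0, 0.9, 0.5)); // STRONG_SUPER_HUMAN
    }
}
```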
The Loebner Prize for artificial intelligence (AI) is the first formal instantiation of a Turing test. The test is named after Alan Turing, the brilliant British mathematician.
A simple illustration of the complexity of the procedure:
How complex could agent-based AI really get? Consider dealing with 32,000 rows of data culled from a batch of over 400,000 items taken from multiple systems, through multiple processes and multiple filters.
Let’s start with a simple hypothetical:
- 1 agent with 5 states
- 5 states with 1 transition each = 5 transitions (never mind that…)
- 5 states with 1 transition to each of the other 4 states = 5 * 4 = 20 transitions
- 5 interacting agents, each with 5 states = 3125 combinations of the agents' states
- 3125 agent-state combinations * 4 potential transitions for each of 5 agents (20) = 62,500 potential individual transitions at any given moment
So if we change that one parameter for that one transition threshold for that one agent by 0.5%, it's only a small change, right? If we wanted to test the ramifications of that parameter and how the 5 agents interact over time, we would only have to test… how many situations? 298 * 10^15? You know what? Never mind. My 32,000 rows of simple data start to look attractive.
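The arithmetic in the hypothetical above can be checked with a few lines of code (the per-moment figure is the source's own 3125 * 4 * 5 calculation):

```java
// Reproduces the state-space arithmetic from the hypothetical above:
// 5 agents, each with 5 states and a transition to each of the other 4 states.
public class StateSpace {
    // Joint states of all agents: states^agents (5^5 = 3125).
    static long combinations(int agents, int states) {
        return (long) Math.pow(states, agents);
    }

    // Potential individual transitions at any given moment:
    // each of the joint states times 4 outgoing transitions per agent times 5 agents.
    static long transitionsPerMoment(int agents, int states) {
        long perAgent = states - 1; // 4 transitions out of each state
        return combinations(agents, states) * perAgent * agents; // 3125 * 4 * 5 = 62,500
    }

    public static void main(String[] args) {
        System.out.println(combinations(5, 5));        // 3125
        System.out.println(transitionsPerMoment(5, 5)); // 62500
    }
}
```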
Concept of neural networks:
Traditionally, the term neural network referred to a network or circuit of biological neurons. In modern usage, the term often refers to artificial neural networks, which are composed of artificial neurons or nodes.
1. Biological neural networks are made up of real biological neurons that are connected or functionally related in the peripheral nervous system or the central nervous system. In the field of neuroscience, they are often identified as groups of neurons that perform a specific physiological function in laboratory analysis.
2. Artificial neural networks are made up of interconnecting artificial neurons (programming constructs that mimic the properties of biological neurons). Artificial neural networks may either be used to gain an understanding of biological neural networks, or for solving artificial intelligence problems without necessarily creating a model of a real biological system. The real, biological nervous system is highly complex and includes some features that may seem superfluous based on an understanding of artificial networks.
An artificial neural network (ANN), also called a simulated neural network (SNN) or commonly just a neural network (NN), is an interconnected group of artificial neurons that uses a mathematical or computational model for information processing based on a connectionist approach to computation. In most cases an ANN is an adaptive system that changes its structure based on external or internal information that flows through the network.
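A minimal sketch of a single artificial neuron of the kind an ANN is built from. The weights and step activation below are illustrative choices, not tied to any particular network described above; real networks connect many such units and adapt the weights through a learning rule.

```java
// A single artificial neuron: a weighted sum of inputs passed through
// a step activation. The hand-picked weights below make the neuron
// compute a logical AND of its two inputs, purely for illustration.
public class Neuron {
    double[] weights;
    double bias;

    Neuron(double[] weights, double bias) {
        this.weights = weights;
        this.bias = bias;
    }

    // Fires (returns 1) when the weighted sum of inputs plus bias exceeds 0.
    int fire(double[] inputs) {
        double sum = bias;
        for (int i = 0; i < weights.length; i++) {
            sum += weights[i] * inputs[i];
        }
        return sum > 0 ? 1 : 0; // step activation
    }

    public static void main(String[] args) {
        // Weights 1.0, 1.0 with bias -1.5: only fires when both inputs are 1.
        Neuron and = new Neuron(new double[] {1.0, 1.0}, -1.5);
        System.out.println(and.fire(new double[] {1, 1})); // 1
        System.out.println(and.fire(new double[] {1, 0})); // 0
    }
}
```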
Implementation of AI:
AI paradigms and techniques have, by and large, been developed under the influence of sequential computational models targeted at the von Neumann processor. This has, of necessity, determined the common languages and data structures employed in AI-related problem-solving. An exception is the cognitive modelling wing of AI, which has consistently looked forward to a connectionist, or so-called parallel distributed processing, environment. But here again, AI researchers have usually been obliged to simulate their networks on sequential processors. Now that the technology exists to implement affordable parallelism, designers of novel hardware should seek to identify AI primitives at the level of declarative and representational formalisms, rather than at the level of (sequential) programming languages and data structures. The potential gain in taking the higher view is that not only are execution speeds improved, but software becomes less complex.
It involves:
1. The Inference Engine (Hardware Parser)
2. The Unification Mechanism (Attribute Evaluator)
A sample code:
//This class represents a node which will contain a card plus pointers to parent and child nodes.
//The root node is special: it has no parent.
//This program illustrates a simple way to implement a tree node for AI search in Java.
//The Card class is assumed to be defined elsewhere; a minimal stub is included so the code compiles.
import java.util.ArrayList;
class Card { } //minimal stub; the real Card class would carry the game data
public class Node
{
//Object attributes
//Declaring instance variables.
int nodeID;
int depth;
Card nodeCard;
Node parent;
ArrayList<Node> children;
public Node(Card rootCard, ArrayList<Node> rootChildren) //constructor for the root node
{
nodeID = 1;
depth = 1;
nodeCard = rootCard;
parent = null; //the root has no parent
children = rootChildren;
}
public Node(int nodeID, int depth, Card nodeCard, Node parent, ArrayList<Node> children) //constructor for the other, child nodes
{
this.nodeID = nodeID;
this.depth = depth;
this.nodeCard = nodeCard;
this.parent = parent;
this.children = children;
}
public Card getData()
{
return nodeCard; //was "return Card;", which named the type rather than the field
}
public int getDepth()
{
return depth;
}
public Node getParent()
{
return parent;
}
public ArrayList<Node> getChildren()
{
return children;
}
}