
Seminar

On

Distributed Intelligence

By:
Bandeep Singh (047/CSE)

Department of Computer Science & Engineering


Guru Tegh Bahadur Institute of Technology

Guru Gobind Singh Indraprastha University


Kashmere Gate, New Delhi, Year 2010-2011
ABSTRACT
Distributed Intelligence
The history of the human race is one of increasing intellectual capability. Since the time
of our early ancestors, our brains have gotten no bigger; nevertheless, there has been a
steady accretion of new tools for intellectual work (including advanced visual interfaces)
and an increasing distribution of complex activities among many minds. Despite this
transcendence of human cognition beyond what is "inside" a person's head, most studies
and frameworks on cognition have disregarded the social, physical, and artifactual
surroundings in which cognition and human activity take place. Distributed intelligence
provides an effective theoretical framework for understanding what humans can achieve
and how artifacts and tools can be designed and evaluated to empower human beings and
to change tasks. This paper presents and discusses the conceptual frameworks and
systems that we have developed over the last decade to create effective socio-technical
environments supporting distributed intelligence.

Distributed artificial intelligence (DAI) is a subfield of artificial intelligence research
dedicated to the development of distributed solutions for complex problems regarded as
requiring intelligence. DAI is closely related to, and a predecessor of, the field of Multi-
Agent Systems.

Background of the project


There are many reasons for wanting to distribute intelligence or to cope with multi-agent
systems.

Mainstreams in DAI research included the following:


(1) Parallel problem solving: mainly deals with how classic AI concepts can be modified
so that multiprocessor systems and clusters of computers can be used to speed up
calculation.

(2) Distributed problem solving (DPS): the concept of the agent, an autonomous entity
that can communicate with other agents, was developed to serve as an abstraction for
developing DPS systems. See below for further details.

(3) Multi-Agent Based Simulation (MABS): a branch of DAI that builds the foundation for
simulations that need to analyze phenomena not only at the macro level but also at the
micro level, as in many social simulation scenarios.

The key concept used in DPS and MABS is the abstraction called software agents. An
agent is a virtual (or physical) autonomous entity that has an understanding of its
environment and acts upon it. An agent is usually able to communicate with other agents
in the same system to achieve a common goal that no single agent could achieve alone.

A first classification that is useful is to divide agents into:


Reactive agent - A reactive agent is not much more than an automaton that receives input,
processes it and produces an output.
Deliberative agent - A deliberative agent in contrast should have an internal view of its
environment and is able to follow its own plans.
Hybrid agent - A hybrid agent is a mixture of the reactive and deliberative types: it
follows its own plans, but also sometimes directly reacts to external events without
deliberation.
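As an informal illustration, the three agent types above can be sketched in Python. The percepts, rules, and plans below are hypothetical examples, not part of any standard agent framework:

```python
class ReactiveAgent:
    """Maps each percept directly to an action via fixed perception-action rules."""
    def __init__(self, rules):
        self.rules = rules  # percept -> action

    def act(self, percept):
        return self.rules.get(percept, "idle")


class DeliberativeAgent:
    """Keeps an internal view of its environment and follows its own plan."""
    def __init__(self, plan):
        self.world_model = {}   # internal representation of the environment
        self.plan = list(plan)  # queue of intended actions

    def act(self, percept):
        self.world_model["last_percept"] = percept  # update internal view
        return self.plan.pop(0) if self.plan else "idle"


class HybridAgent(DeliberativeAgent):
    """Follows its own plans, but reacts directly to urgent events."""
    def __init__(self, plan, reflexes):
        super().__init__(plan)
        self.reflexes = reflexes  # urgent percept -> immediate action

    def act(self, percept):
        if percept in self.reflexes:      # react without deliberation
            return self.reflexes[percept]
        return super().act(percept)       # otherwise follow the plan
```

The hybrid agent inherits the deliberative machinery and only short-circuits it for percepts it treats as urgent.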

Well-recognized agent architectures that describe how an agent is internally structured


are:
Soar (a rule-based approach)
BDI (Belief-Desire-Intention, a general architecture that describes how plans are made)
InterRAP (a three-layer architecture, with a reactive, a deliberative, and a social layer)
PECS (Physics, Emotion, Cognition, Social; describes how those four parts influence the
agent's behavior).
Why Distributed Intelligence?

Centralized systems have disadvantages that make them unsuitable for large-scale
integration, including high reliance on centralized communication, high complexity, lack
of scalability, and high cost of integration. The use of distributed intelligence system
technologies avoids these weaknesses. Distributed intelligence systems are based on the
use of cooperative agents, organized in hardware or software components, that
independently handle specialized tasks and cooperate to achieve system-level goals and
achieve a high degree of flexibility. By distributing the logistic and strategic requirements
of a system, it is possible to achieve greatly improved robustness, reliability, scalability,
and security. Key to achieving these benefits is the use of holonic system technologies
that establish a peer-to-peer environment to enable coordination, collaboration, and
cooperation within the network. Such systems require both hardware and software
components.
TABLE OF CONTENTS

Chapter
1. INTRODUCTION
2. MULTI-AGENT SYSTEMS
3. AGENT’S INTELLIGENCE
4. ARCHITECTURE OF AN AGENT
5. CLASSIFICATION OF AGENTS
6. INTERACTION AMONG AGENTS
7. AGENT COMMUNICATION LANGUAGE
8. BASIC MODELS OF COMMUNICATION
9. DISTRIBUTED PROBLEM SOLVING AND PLANNING
10. APPLICATIONS
REFERENCES
1. INTRODUCTION

1.1 Artificial Intelligence


Artificial Intelligence (AI) is the area of computer science focusing on creating machines
that can engage in behaviors that humans consider intelligent.
"The science and engineering of making intelligent machines."
- John McCarthy

The ability to create intelligent machines has intrigued humans since ancient times, and
today with the advent of the computer and 50 years of research into AI
programming techniques, the dream of smart machines is becoming a reality. Researchers
are creating systems which can mimic human thought, understand speech, beat the best
human chess player, and perform countless other feats never before possible. The
military is applying AI logic to its high-tech systems, and in the near future Artificial
Intelligence may impact many areas of our lives. Artificial Intelligence is a branch of
science which deals with helping machines find solutions to complex problems in a more
human-like fashion. This generally involves borrowing characteristics from human
intelligence and applying them as algorithms in a computer-friendly way. A more or less
flexible or efficient approach can be taken depending on the requirements established,
which influences how artificial the intelligent behavior appears.

AI is generally associated with Computer Science, but it has many important links with
other fields such as Mathematics, Psychology, Cognition, Biology and Philosophy, among many
others. Our ability to combine knowledge from all these fields will ultimately benefit our
progress in the quest of creating an intelligent artificial being.

1.2 Technology
There are many different approaches to Artificial Intelligence, none of which are either
completely right or wrong. Some are obviously more suited than others in some cases,
but any working alternative can be defended. Over the years, trends have emerged based
on the state of mind of influential researchers, funding opportunities as well as available
computer hardware.

Over the past five decades, AI research has mostly focused on solving specific
problems. Numerous solutions have been devised and improved to do so efficiently and
reliably. This explains why the field of Artificial Intelligence is split into many branches,
ranging from Pattern Recognition to Artificial Life, including Evolutionary Computation
and Planning.

1.3 Distributed Artificial intelligence


Distributed artificial intelligence (DAI) is a subfield of artificial intelligence research
dedicated to the development of distributed solutions for complex problems regarded as
requiring intelligence. DAI is closely related to, and a predecessor of, the field of Multi-
Agent Systems.
Distributed intelligence refers to systems of entities working together to reason, plan, solve problems,
think abstractly, comprehend ideas and language, and learn. Here, we define an entity as
any type of intelligent process or system, including agents, humans, robots, smart sensors,
and so forth. In these systems, different entities commonly specialize in certain aspects of
the task at hand. As humans, we are all familiar with distributed intelligence in teams of
human entities. For example, corporate management teams consist of leaders with
particular specialties such as Chief Executive Officer (CEO), Chief Operating Officer
(COO), Chief Financial Officer (CFO), Chief Information Officer (CIO), and so forth.
Oncology patient care teams consist of doctors that specialize in various areas, such as
surgical oncology, medical oncology, plastic and reconstructive surgery, pathology, etc.
Distributed intelligence is also exhibited in military applications, such as special forces
A-Teams, where team members specialize in weapons, engineering, medicine,
communications, and so forth. Another military example is the personnel on an aircraft
carrier flight deck, who are organized into specialized crews, such as the catapult crew.
The objective of distributed intelligence in computer science (and related fields) is to
generate systems of software agents, robots, sensors, computer systems, and even people
and animals (such as search and rescue dogs) that can work together with the same level
of efficiency and expertise as human teams. Clearly, such systems could address many
important challenges, including not only urban search and rescue, but also military
network-centric operations, gaming technologies and simulation, computer security,
transportation and logistics, and many others.

What is the potential promise of distributed intelligence?


Certainly, some applications can be better solved using a distributed solution approach —
especially those tasks that are inherently distributed in space, time, or functionality.

Further, if a system is solving various subproblems in parallel, then it offers the potential
of reducing the overall task completion time. Any system consisting of multiple,
sometimes redundant entities, offers possibilities of increasing the robustness and
reliability of the solution, due to the ability for one entity to take over from another
failing entity. Finally, for many applications, creating a monolithic entity that can address
all aspects of a problem can be very expensive and complex; instead, creating multiple,
more specialized entities that can share the workload offers the possibility of reducing the
complexity of the individual entities.
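The robustness argument above can be illustrated with a small sketch, in which a redundant entity takes over from a failing one. The entities and the failure signal here are hypothetical:

```python
def execute_with_failover(task, entities):
    """Try each entity in turn until one completes the task.

    If an entity fails (raises RuntimeError in this sketch), a redundant
    peer takes over, so the system as a whole still produces a solution.
    """
    for entity in entities:
        try:
            return entity(task)
        except RuntimeError:  # this entity failed; let a peer take over
            continue
    raise RuntimeError("all entities failed")
```

A system of several cheap, specialized entities with this failover behavior can be more reliable than one monolithic entity that is a single point of failure.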

Common systems of distributed intelligence are classified based upon the types of
interactions exhibited, since the type of interaction has relevance to the solution paradigm
to be used. Three common paradigms for distributed intelligence are the bio-inspired
paradigm, the organizational and social paradigm, and the knowledge-based, ontological
paradigm; each of these paradigms can be used in multi-robot systems. The solution to a
given problem is very different depending upon the paradigm chosen for abstracting
the problem. Further work is needed to provide guidance to the system designer on
selecting the proper abstraction, or paradigm, for a given problem.
2. MULTI-AGENT SYSTEMS

A multi-agent system (MAS) is a system composed of multiple interacting intelligent


agents. Multi-agent systems can be used to solve problems which are difficult or
impossible for an individual agent or monolithic system to solve.

2.1 Agent
Jennings: "An agent is a computational system, situated in some environment that is
capable of intelligent, autonomous action in order to meet its design objectives."

[Figure: a typology of intelligent agents, spanning the properties cooperate, learn, and
autonomous: collaborative agents, collaborative learning agents, and interface agents.]
2.2 Agent is not just a Program
An agent in the context of Distributed Artificial Intelligence is a member of a multi-agent
community, where its behavior and the logic behind its reasoning have to be seen from the
multi-agent perspective:
 freely interact, interaction among agents is emergent
 can group into coalitions, teams, they can benefit from this
 do not have to be benevolent, have free will, can cheat
 can leave/join the community
 can adapt and improve their social role

However, there are also other agents, such as migrating agents, viruses, and information
seekers, which are not members of a multi-agent community in the above sense.
3. AGENT’S INTELLIGENCE
In artificial intelligence, an intelligent agent (IA) is an autonomous entity which observes
and acts upon an environment (i.e. it is an agent) and directs its activity towards
achieving goals (i.e. it is rational) [1]. Intelligent agents may also learn or use knowledge
to achieve their goals. They may be very simple or very complex: a reflex machine such
as a thermostat is an intelligent agent, as is a human being, as is a community of human
beings working together towards a goal.
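The thermostat mentioned above can be written as a minimal reflex agent: it observes its environment (the temperature) and acts to move it toward a goal (the setpoint). The setpoint and hysteresis values below are purely illustrative:

```python
def thermostat(temperature, setpoint=20.0, hysteresis=1.0):
    """A reflex agent: map the observed temperature to an action.

    The hysteresis band prevents rapid switching near the setpoint.
    Returns 'heat', 'cool', or 'off'.
    """
    if temperature < setpoint - hysteresis:
        return "heat"
    if temperature > setpoint + hysteresis:
        return "cool"
    return "off"
```

Despite its simplicity, this satisfies the definition above: it perceives an environment and directs its action toward a goal.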

Intelligent agents are often described schematically as an abstract functional system


similar to a computer program. For this reason, intelligent agents are sometimes called
abstract intelligent agents (AIA) to distinguish them from their real world
implementations as computer systems, biological systems, or organizations. Some
definitions of intelligent agents emphasize their autonomy, and so prefer the term
autonomous intelligent agents. Still others (notably Russell & Norvig (2003)) considered
goal-directed behavior as the essence of intelligence and so prefer a term borrowed from
economics, "rational agent".

Intelligent agents in artificial intelligence are closely related to agents in economics, and
versions of the intelligent agent paradigm are studied in cognitive science, ethics, the
philosophy of practical reason, as well as in many interdisciplinary socio-cognitive
modeling and computer social simulations.

Reactivity – the ability to provide intelligent responses to percepts the agent senses from
the environment (user interface)

Proactivity – the ability to maintain the agent's long-term intentions and organize its
behavior in order to meet targeted goals

Social Intelligence – the ability to reason about other agents' abilities, intentions,
current status, and possible future courses of action
4. ARCHITECTURE OF AN AGENT

4.1 Agent’s Abstract Architecture


Agent’s Communication Wrapper
Translation to and from ACL (Agent Communication Language)
Physical connection and responsibility delegation
Perception × action
Social model

[Figure: abstract agent architecture, showing the agent's wrapper and the agent's body.]

4.2 Soar
Soar is a symbolic cognitive architecture, created by John Laird, Allen Newell, and Paul
Rosenbloom at Carnegie Mellon University, now maintained by John Laird's research
group at the University of Michigan. It is both a view of what cognition is and an
implementation of that view through a computer programming architecture for Artificial
Intelligence (AI). Since its beginnings in 1983 and its presentation in a paper in 1987, it
has been widely used by AI researchers to model different aspects of human behavior.
4.3 Belief-Desire-Intention software model
The Belief-Desire-Intention (BDI) software model (usually referred to simply, but
ambiguously, as BDI) is a software model developed for programming intelligent agents.
Superficially characterized by the implementation of an agent's beliefs, desires and
intentions, it actually uses these concepts to solve a particular problem in agent
programming. In essence, it provides a mechanism for separating the activity of selecting
a plan (from a plan library) from the execution of currently active plans. Consequently,
BDI agents are able to balance the time spent on deliberating about plans (choosing what
to do) and executing those plans (doing it).
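A toy sketch of this separation between selecting a plan from a plan library and executing the currently active plan might look as follows. The beliefs, desires, and plan library shown are invented for illustration and are not from any particular BDI implementation:

```python
class BDIAgent:
    """A minimal BDI-style loop: deliberation is separate from execution."""

    def __init__(self, plan_library):
        self.beliefs = set()
        # desire -> (preconditions that must hold, list of actions)
        self.plan_library = plan_library
        self.intention = []  # the currently active plan

    def deliberate(self, desire):
        """Select a plan for the desire if its preconditions match our beliefs."""
        preconditions, actions = self.plan_library[desire]
        if preconditions <= self.beliefs:   # all preconditions believed true
            self.intention = list(actions)  # commit: the plan becomes intention
            return True
        return False

    def step(self):
        """Execute one action of the current intention (or None if done)."""
        return self.intention.pop(0) if self.intention else None
```

Deliberation (`deliberate`) is invoked only when a new commitment is needed, while `step` cheaply advances the committed plan, which is exactly the balance described above.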

4.4 InterRAP
A three-layer architecture, with a reactive, a deliberative, and a social layer.
5. CLASSIFICATION OF AGENTS

Reactive agent - A reactive agent is not much more than an automaton that receives input,
processes it and produces an output.

Deliberative agent - A deliberative agent in contrast should have an internal view of its
environment and is able to follow its own plans.

Hybrid agent - A hybrid agent is a mixture of the reactive and deliberative types: it
follows its own plans, but also sometimes directly reacts to external events without
deliberation.

Reactive agents are agents that do not contain any symbolic knowledge representation
(i.e.: no state, no representation of the environment, no representation of the other
agents, ...). Their behaviour is simply defined by a set of perception-action rules.

Cognitive Agents are agents with an explicit knowledge representation of own capability,
other agents, the environment, etc.

There are various models of agents' cognitive states (differing in purpose, generality, etc.):
BDI (Belief Desire Intention)
Joint Intentions Theory
3bA (Tri-Base Acquaintance Model)
6. INTERACTION AMONG AGENTS
One of the defining characteristics of an information agent is its ability for flexible
interaction and interoperation with other, similar software agents. This focus on
interoperability has been the foundation of the approach of the Knowledge Sharing Effort
(KSE) in developing a basic framework for intelligent systems. We present the KSE
approach and the solutions suggested for the subproblems identified by the consortium,
emphasizing the KSE's communication language and protocol, KQML (Knowledge Query
and Manipulation Language). In addition to presenting specific solutions, we are
interested in demonstrating the conceptual decomposition of the problem of knowledge
sharing into smaller more manageable problems, and in arguing that there is merit to
those concepts independent of the success of individual solutions.

It is doubtful that any conversation about agents will result in a consensus on the
definition of an agent or of agency. From personal assistants and "smart" interfaces to
powerful applications, and from autonomous, intelligent entities to information retrieval
systems, anything might qualify as an agent these days. But, despite these different
viewpoints, most would agree that the ability for interaction and interoperation is
desirable. The building block for intelligent interaction is knowledge sharing that
includes both mutual understanding of knowledge and the communication of that
knowledge. The importance of such communication is emphasized by Genesereth, who
goes so far as to suggest that an entity is a software agent if and only if it communicates
correctly in an agent communication language [13]. After all, it is hard to picture
cyberspace with entities that exist only in isolation; it would go against our perception of
a decentralized, interconnected electronic universe. How might meaningful, constructive
and intelligent interaction among software agents be provided? The same problem for
humans requires more than the knowledge of a common language.
6.1 Ways in which intelligent agents interact

Organization
An arrangement of relationships between individuals or components: division of tasks,
distribution of roles, and contribution-reward arrangements

Cooperation
Sharing responsibilities in satisfying a shared goal and generating mutually dependent roles
in joint activities

Coordination
Management of agents' activities so that they coordinate their actions with each other in
order to share resources and meet their own interests

Negotiation
Information exchange aimed at resolving conflict of access to resources, different
solutions to the same problem or goal conflicts

Communication
Information, knowledge and request exchange via mutually agreed ACL

Benevolence
Agents are benevolent if they agree to cooperate when asked or required.
7. AGENT COMMUNICATION LANGUAGE
Agent Communication Language (ACL), proposed by the Foundation for Intelligent
Physical Agents (FIPA), is a proposed standard language for agent communications.
Knowledge Query and Manipulation Language (KQML) is another proposed standard. In
order to ensure agent interoperability, a mutually agreed communication protocol (ACL)
must be provided. For agents to understand each other, they must not only speak the
same language but also share a common ontology. An ontology is the part of an agent's
knowledge base that describes what kinds of things the agent can deal with and how they
are related to each other.
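As a small illustration of the idea, a shared ontology can be modeled as a vocabulary of concepts and the relations between them that both agents agree on before exchanging messages. The concept names below are hypothetical:

```python
# A toy shared ontology: each concept names its parent concept and the
# attributes it adds. Both agents must use this same table for the
# tokens in their messages to keep their meaning.
ONTOLOGY = {
    "Vehicle": {"parent": None,      "attributes": {"wheels"}},
    "Car":     {"parent": "Vehicle", "attributes": {"engine"}},
    "Truck":   {"parent": "Vehicle", "attributes": {"cargo_bed"}},
}

def is_a(concept, ancestor, ontology=ONTOLOGY):
    """True if `concept` is `ancestor` or a descendant of it in the ontology."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = ontology[concept]["parent"]
    return False
```

If one agent asserts something about a "Car" and the other only knows "Vehicle", the shared hierarchy lets the receiver still interpret the message correctly.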

7.1 The approach of the Knowledge Sharing Effort (KSE)


This perspective on interoperability in today's computing environment has been the
foundation of the approach of the Knowledge Sharing Effort (KSE) consortium.

Mutual understanding of what is represented may be divided into two subproblems:
1) translation from one representation language to another (or from one family of
representation languages to another); and 2) sharing the semantic content (and often the
pragmatic content) of the represented knowledge among different applications.
Translation alone is not sufficient because each knowledge base holds implicit
assumptions about the meaning of what is represented. If two applications are to
understand each other's knowledge, such assumptions must also be shared. That is, the
semantic content of the various tokens must be preserved.

Communication is a threefold problem involving knowledge of (i) interaction protocol;


(ii) communication language; and (iii) transport protocol. The interaction protocol refers
to the high level strategy pursued by the software agent that governs its interaction with
other agents.
7.2 Knowledge Query Manipulation Language (KQML)
KQML was conceived as both a message format and a message-handling protocol to
support run-time knowledge sharing among agents.

The key features of KQML may be summarized as follows:

• KQML messages are opaque to the content they carry. KQML messages do not
merely communicate sentences in some language, but they rather communicate an
attitude about the content (assertion, request, and query).
• The language's primitives are called performatives. As the term suggests, the
concept is related to speech act theory. Performatives define the permissible actions
(operations) that agents may attempt in communicating with each other.
• An environment of KQML-speaking agents may be enriched with special agents,
called facilitators, that provide such functions as: association of physical addresses with
symbolic names; registration of databases and/or services ordered and sought by agents;
and communication services (forwarding, brokering etc.). To use a metaphor, facilitators
act as efficient secretaries for the agents in their domain.
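A sketch of what a KQML-style message might look like as a simple data structure. The performative `ask-one` and the parameters (sender, receiver, language, ontology, content) follow the KQML conventions described above; the agent names and the content string are invented for illustration:

```python
def make_message(performative, sender, receiver, content,
                 language="Prolog", ontology="blocks-world"):
    """Build a KQML-style message.

    The performative carries the sender's attitude toward the content
    (assertion, request, query, ...); the content itself stays opaque
    to the message layer, as described above.
    """
    return {
        "performative": performative,
        "sender": sender,
        "receiver": receiver,
        "language": language,   # language of the content, not of KQML
        "ontology": ontology,   # shared vocabulary the content assumes
        "content": content,     # opaque to KQML itself
    }

# Agent A asks agent B one answer to a (hypothetical) query.
msg = make_message("ask-one", "agent-A", "agent-B", "on(X, table)")
```

Note that nothing in the message layer inspects `content`: only the receiving agent, using the shared ontology, interprets it.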

Intelligent interaction is more than an exchange of messages. KQML is an attempt to
dissociate these broader issues from the communication language, which
should define a set of standard message types that are to be interpreted identically by all
interacting parties. A universal communication language is of interest to a wide range of
applications that need to communicate something more than predefined or fixed
statements of facts.


Mutual knowledge understanding


Translation from one knowledge representation language into another
Sharing of semantics (and often pragmatics)

Inter Agent Communication


transport protocol (e.g. TCP/IP, SMTP, HTTP, …)
communication language
interaction protocol
8. Basic Models of Communication

Broadcasting of a task announcement


Autonomous communication
Communication intensive

Central Communication Agent


Well organized, saves communication
Central, fragile, communication bottleneck

Acquaintance Models
Model of the environment in an abstract sense

[Figure: broadcasting a task announcement. Agent 1 announces the task directly to every
other agent.]

[Figure: central communication agent. Agent 1 reaches the other agents through a
facilitator.]
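The trade-off between the two models above can be sketched by counting messages and links; the agent names and task are illustrative. Broadcasting contacts every peer directly, while a facilitator needs only one link per agent but becomes a central bottleneck:

```python
def broadcast(sender, agents, task):
    """Broadcast model: every other agent receives the announcement directly."""
    return [(sender, receiver, task) for receiver in agents if receiver != sender]

def direct_links(n):
    """Fully connected peers need n*(n-1)/2 communication links."""
    return n * (n - 1) // 2

def facilitator_links(n):
    """With a central facilitator, each of the n agents needs one link to the hub."""
    return n
```

For ten agents, direct peer-to-peer connectivity needs 45 links while the facilitator model needs only 10, which is why the central model "saves communication" yet is fragile: all traffic funnels through one agent.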
9. Distributed Problem Solving and Planning
Distributed problem solving is the name applied to a subfield of distributed artificial
intelligence (AI) in which the emphasis is on getting agents to work together well to solve
problems that require collective effort. Due to an inherent distribution of resources such
as knowledge, capability, information, and expertise among the agents, an agent in a
distributed problem-solving system is unable to accomplish its own tasks alone, or at
least can accomplish its tasks better (more quickly, completely, precisely, or certainly)
when working with others. Solving distributed problems well demands both group
coherence (that is, agents need to want to work together) and competence (that is, agents
need to know how to work together well). As the reader by now recognizes, group
coherence is hard to realize among individually-motivated agents. In distributed problem solving,
we typically assume a fair degree of coherence is already present: the agents have been
designed to work together; or the payoffs to self-interested agents are only accrued
through collective efforts; or social engineering has introduced disincentives for agent
individualism; etc.

Distributed problem solving thus concentrates on competence; as anyone who has played
on a team, worked on a group project, or performed in an orchestra can tell you, simply
having the desire to work together by no means ensures a competent collective outcome!

Distributed problem solving presumes the existence of problems that need to be solved
and expectations about what constitute solutions. For example, a problem to solve might
be for a team of (computational) agents to design an artefact (say, a car). The solution
they formulate must satisfy overall requirements (it should have four wheels, the engine
should fit within the engine compartment and be powerful enough to move the car, etc.),
and must exist in a particular form (a specification document for the assembly plant). The
teamed agents formulate solutions by each tackling (one or more) subproblems and
synthesizing these subproblem solutions into overall solutions.
Sometimes the problem the agents are solving is to construct a plan. And often, even if
the agents are solving other kinds of problems, they also have to solve
planning problems as well. That is, how the agents should plan to work together—
decompose problems into subproblems, allocate these subproblems, exchange
subproblem solutions, and synthesize overall solutions—is itself a problem the agents need
to solve. Distributed planning is thus tightly intertwined with distributed problem solving,
being both a problem in itself and a means to solving a problem.
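The decompose-allocate-synthesize cycle described above can be sketched with a toy problem, summing a range of numbers. The splitting strategy and the "agents" (plain functions here) are purely illustrative:

```python
def decompose(numbers, n_agents):
    """Split the overall problem into one subproblem per agent (round-robin)."""
    return [numbers[i::n_agents] for i in range(n_agents)]

def solve_subproblem(chunk):
    """Each agent solves its allocated subproblem independently."""
    return sum(chunk)

def synthesize(partial_results):
    """Combine the subproblem solutions into the overall solution."""
    return sum(partial_results)

numbers = list(range(1, 101))
subproblems = decompose(numbers, 4)                       # decompose + allocate
overall = synthesize(solve_subproblem(c) for c in subproblems)  # -> 5050
```

In a real system each `solve_subproblem` call would run on a different agent, and the exchange of partial results would itself go through the communication models of the previous chapter.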
10. Applications
Successful application of agents (as of any technology) must reconcile two perspectives.
The researcher (exemplified in the preceding chapters) focuses on a particular capability
(e.g., communication, planning, learning), and seeks practical problems to demonstrate
the usefulness of this capability (and justify further funding). The industrial practitioner
has a practical problem to solve, and cares much more about the speed and cost-
effectiveness of the solution than about its elegance or sophistication. This chapter
attempts to bridge these perspectives. To the agent researcher, it offers an overview of the
kinds of problems that industrialists face, and some examples of agent technologies that
have made their way into practical application. To the industrialist, it explains why agents
are not just the latest technical fad, but a natural match to the characteristics of a broad
class of real problems.

10.1 Why Use DAI in Industry?

Agents are not a panacea for industrial software. Like any other technology, they are best
used for problems whose characteristics require their particular capabilities. Agents are
appropriate for applications that are modular, decentralized, changeable, ill-structured,
and complex [44]. In some cases, a problem may naturally exhibit or lack these
characteristics, but many industrial problems can be formulated in different ways. In
these cases, attention to these characteristics during problem formulation and analysis can
yield a solution that is more robust and adaptable than one supported by other
technologies.

10.2 Distributed Intelligence for Smart Home Appliances

Automation systems in smart homes are concerned with sensors and actuators that
monitor the occupants, communicate with each other, and intelligently support the
occupants in their daily activities. Collective intelligence technology will be essential to
analyze data from these distributed sensors. Research in this area focuses on adapting
soft-computing algorithms, usually developed as software modules on conventional
computers, for implementation in dedicated hardware so that they adapt to the users. The
introduced approach divides control of the complex system among agents at a much
lower level of complexity than is usual in MAS, allowing a tighter coupling between the
physical and computing components than in the standard literature.

10.3 DIDABOTS (Didactic Robots)


REFERENCES

Books
1. Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence, Gerhard
Weiss, The MIT Press
2. Distributed Artificial Intelligence, Agent Technology, and Collaborative Applications,
Vijayan Sugumaran, Information Science Reference
3. Foundations of Distributed Artificial Intelligence, Greg M. P. O'Hare, Nick Jennings,
Wiley-Interscience

Web Sites
1. http://en.wikipedia.org/wiki/Distributed_artificial_intelligence
2. http://www.scribd.com/Distributed-Artificial-Intelligence-Agent-
Technology-and-Collaborative-Applications/d/20075007
