Christopher Lueg and Ralf Salomon (1997).
A New AI Perspective on Software Agents.
In: Proceedings of the Second German Workshop on Artificial Life (GWAL 97), pp. 59-60. Dortmund, Germany, April 17-18, 1997.
Artificial life (Alife) deals with the synthesis and simulation
of life-like behavior, whereas biology mainly focuses on the analysis of
living organisms [Langton87]. The synthesis of life-like behavior can be done
in hardware (e.g., robotics), software (e.g., evolving computer programs),
and wetware [Ray94].
Alife has recently gained much interest, since it complements (classical)
artificial intelligence (AI) by providing a bottom-up approach
to designing artificial systems that exhibit intelligent behavior.
By contrast, classical AI approaches intelligence top-down
by focusing on abstract problem-solving.
Despite their successes, existing AI techniques often yield insufficient
results in real-world and highly dynamic environments. Consequently,
Brooks (1985) [Brooks85] initiated the development of a new research
direction, called nouvelle AI, new AI, or behavior-based AI.
Due to the constraint that an agent has to be situated in the real world, new
AI has so far focused on mobile robots. In the real world, an agent is exposed
to a continuously changing stream of sensory patterns, and therefore an agent
has to deal with a high degree of uncertainty. As a consequence, surprisingly
little attention has been devoted to software agents. In what follows,
we first briefly review mobile robots and then show that software agents are
also situated in their real world, which is cyberspace.
The size of typical mobile robots ranges from approximately 6 cm to about
1 meter. In order to allow for an appropriate
agent-environment interaction, such an agent is typically equipped with
(1) sensors, such as infrared sensors, ambient light sensors, ultrasonic
sensors, or CCD cameras;
(2) effectors, such as motors or grippers; and
(3) a control architecture that allows for the coordination of all of these components.
In order to pursue the new AI's goal of understanding intelligent behavior on
an agent-environment-interaction level, 10 design principles have been
developed [Pfeifer96] that provide a compact description of many aspects
that are considered important for autonomous agent design. These design
principles can be grouped into three classes. The first class demands
that agents have to be ``complete systems'', i.e., self-sufficient embodied
systems that are situated in their particular environment
and capable of interacting with the real world without human intervention
(also called autonomy). The second class of design principles concerns
the agent's morphology, its control architecture, and the agent's ``ecological
niche''. An important point is that the agent's control architecture does
not contain a central decision unit, which is often called a
Homunculus, since, so far, no equivalent to a Homunculus has been found in
biological brains. Instead, the control architecture should consist of several
loosely-coupled processes [Brooks85], each of which is responsible for a
particular behavioral aspect. The demand of an ecological niche refers to the
observation that ``there is no universality in the real world''
[Brooks91b,Pfeifer96]. The third class of design principles
focuses on strategies and stances in research on autonomous agents.
A more elaborate form of these design principles can be found in [Pfeifer96].
Now, the questions are
(1) whether cyberspace provides a ``real-world'' environment for software
agents, and
(2) what the differences are between pure simulations and software agents.
Since the real world is permanently changing and indefinitely rich,
it is always subject to limited perception and knowledge [Brooks91b].
These properties also hold for cyberspace. The computer's internal state is
permanently changing and unpredictable, since, among other reasons,
process scheduling and the execution order of the program's instructions
are unknown in advance. The execution of a program depends on user actions
and other external sources, such as the mouse or the network.
Hence, the computer's internal state is inherently unpredictable,
since the sequence of external events is unforeseeable.
It should be noted that due to the dependence on external events,
the state of the machine partly reflects changes in the surrounding real world.
We therefore conclude that cyberspace is certainly a serious ``real world''
for software agents, even though cyberspace consists of the hardware and
software of a computer.
In cyberspace, the sensors of a software agent need to be implemented by
appropriate software components. In order to adopt the cheap-design principle,
these sensors should be implemented by low-level mechanisms,
such as primitive machine instructions or a few lines of code.
The information from these (software) sensors is fed into
several processes that constitute the agent's control architecture.
As in mobile robots, each of these processes is responsible for a
particular part of the overall behavior. For instance,
one process might send a signal indicating that the processor's load average
is above a certain threshold. Another process might indicate
that the network load is rather low and therefore,
both processes together may decide that the agent should move to a
different machine. It is important to note here that all processes of the
control architecture potentially operate in parallel, even though an actual
implementation might allow only for sequential operations. Recall,
that for the study of behavior-based intelligence, it is currently considered
essential that the agent's control architecture does not contain
a central decision unit or Homunculus [Pfeifer96].
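The load-average example above can be made concrete with a minimal Python sketch. The sensor functions, thresholds, and migration rule below are illustrative assumptions (the paper describes no concrete implementation); the point is that each process reads a low-level signal and contributes a local judgment, and the migration decision emerges from combining these signals rather than from a central decision unit.

```python
import os

# Hypothetical thresholds; in a real agent these would be tuned
# to the agent's ecological niche (the particular machine and network).
CPU_LOAD_THRESHOLD = 2.0
NET_LOAD_THRESHOLD = 0.2

# Shared signals written by the sensor processes.
signals = {"cpu_busy": False, "net_idle": False}

def cpu_sensor():
    """Low-level software sensor: reads the 1-minute load average
    via a cheap system call (Unix only)."""
    load1, _, _ = os.getloadavg()
    signals["cpu_busy"] = load1 > CPU_LOAD_THRESHOLD

def net_sensor(current_net_load):
    """Hypothetical network-load sensor; a real one might sample
    per-interface counters. Takes the measured load as an argument."""
    signals["net_idle"] = current_net_load < NET_LOAD_THRESHOLD

def migration_behavior():
    """Behavioral process: a purely local rule combining two sensor
    signals -- no central Homunculus arbitrates the decision."""
    return signals["cpu_busy"] and signals["net_idle"]
```

In an actual agent these functions would run as loosely-coupled concurrent processes that continuously update the shared signals; here they are shown as plain functions only to keep the sketch self-contained.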
The crucial distinction between software agents and pure simulation programs
is that the latter provide a well-defined context or environment
for the particular task or problem. Thus, the simulation environment does not
contain any uncertainties other than some potential random number generators
and unpredictable user inputs. It is important to note
that the result of the simulation is independent of the actual machine;
no matter which machine is performing the computation and
no matter what the actual usage of the machine is, the results remain
equivalent. Furthermore, the program can be run on a different machine
and the program will very likely yield the same result. In other words,
the internal state of the machine, such as its load, swapping activity,
or memory status, does not have any considerable impact on the behavior
of the simulation software
beyond technical aspects,
such as the actual run time or the particular random numbers drawn.
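This machine independence can be illustrated with a minimal sketch (a hypothetical simulation, not one discussed in the paper): once the context is fully defined by a seed, the result is the same on any machine, regardless of its load or memory status.

```python
import random

def simulate(seed, steps=1000):
    """A pure simulation: its entire context is the seed and the step
    count. Machine load, scheduling, and memory status cannot affect
    the result, only the wall-clock run time."""
    rng = random.Random(seed)  # private, seeded generator
    state = 0.0
    for _ in range(steps):
        state += rng.random() - 0.5  # deterministic random walk
    return state

# Two runs with the same seed are equivalent, no matter which
# machine executes them or how heavily it is loaded.
assert simulate(42) == simulate(42)
```

A software agent in the sense of this paper has no such closed context: its ``inputs'' include the machine state itself, so the same program on a different or differently loaded machine may behave differently.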
With this notion of software agents, it should become clear
why we do not consider the following approaches as software agents,
but rather as pure simulations or application programs.
For example, web robots [WebRobots] and personal assistants
[Maes94a] are pure application programs,
and softbots [EtzioniWeld94] lack situatedness.
A detailed discussion of these approaches is beyond the scope of this paper.
Ray's Tierra system [Ray94] is an approach to comparative biology and
simulates evolution in a digital soup inhabited by digital organisms
competing for memory and CPU cycles. The original Tierra program is a pure
simulation system, but the Internet version [NetTierra] offers an
interesting niche for creatures close to the real world.
Computer viruses [Spafford91] are yet another important area
related to software agents since they already inhabit the real world.
Viruses, however, are intended to harm other software,
whereas software agents are intended to gain insights into the nature of
interaction with the real world.
In this paper, we have shown that programs can, similarly to robots, be
considered software agents given that they are situated in cyberspace,
which is the world inside computers and networks. To this end,
it is most important to realize that the real world of mobile robots
and cyberspace of software agents have striking similarities.
Software agents inhabit an unpredictable, frequently
changing, and -- in a sense -- only partially knowable world.
We have then indicated that the design principles for mobile robots
can be fruitfully transferred to the design of software agents.
The main purpose of this paper is to give an impression as to what
constitutes an (autonomous) software agent and why there is a fundamental
difference between software agents and pure simulations.
We are currently investigating how to implement software agents,
since most other approaches are located on a higher level of abstraction.
Our approach is deliberately machine dependent,
similar to the approach pursued by computer viruses [Spafford91].
This work was supported in part by the Swiss National Energy Commission
(Nationaler Energieforschungsfond NEFF) and a Human Capital and
Mobility fellowship of the European Union, grant number ERBCHBICT941266. The
authors would like to thank Michelle Hoyle
for her help and are grateful to Rolf Pfeifer for fruitful discussions.
Christopher Lueg / revised 30/04/97