Newsgroups: rec.arts.int-fiction
Path: gmd.de!xlink.net!howland.reston.ans.net!sol.ctr.columbia.edu!news.kei.com!ub!acsu.buffalo.edu!goetz
From: goetz@cs.buffalo.edu (Phil Goetz)
Subject: Re: When is planning good?
Message-ID: <CLK17s.Gnt@acsu.buffalo.edu>
Sender: nntp@acsu.buffalo.edu
Nntp-Posting-Host: pegasus.cs.buffalo.edu
Organization: State University of New York at Buffalo/Comp Sci
References: <CL61nA.14s@faber.ka.sub.org> <whittenCLE8CF.B8H@netcom.com>
Date: Mon, 21 Feb 1994 03:19:04 GMT
Lines: 74

In article <whittenCLE8CF.B8H@netcom.com>,
David Whitten <whitten@netcom.com> wrote:
>I saw this in comp.ai, figured it was relevant to the reasoning agent
>discussion ongoing in r.a.if, so figured I'd respond to it in r.a.if
>
>Juerg Reinhart (jr@faber.ka.sub.org) wrote:
>: Traditional agent architectures are of the plan-then-execute type
>: augmented by a monitor to detect and handle unexpected situations. An
>: alternative way is to separate the agent's goals and to construct a
>: task-achieving subsystem for each of them (e.g., Brooks' subsumption
>: architecture).  Each subsystem is reactive, it is connected to the
>: sensors and actuators of the agent.
>
>How is the monitor programmed ? is it a task that runs along with the
>agent's code? or is it a filter executed by the agent?

Don't know what you mean by "the monitor".
In Brooks' subsumption architecture, a robot has a number of layers,
each responsible for a behavior (seek light, seek power, avoid collision,
etc.).  They each run on an independent processor, but they share inputs
and outputs.  A high-priority behavior can inhibit a low-priority one,
and it decides on its own when to do so.
Pretty cool, but don't get carried away with it like Brooks and some
others who think that people (or at least dogs) go through life with
no knowledge representation.  That's going back to behaviorism.
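To make the layering concrete, here's a toy sketch of the priority/inhibition
idea in Python.  The behavior names and the sequential arbitration loop are my
own simplifications -- Brooks' real layers run concurrently on separate
processors, and this isn't his code:

```python
# Toy sketch of subsumption-style arbitration: each layer is a behavior
# with a priority; a higher-priority behavior that wants control
# inhibits everything below it.  Names are illustrative only.

class Behavior:
    def __init__(self, name, priority):
        self.name = name
        self.priority = priority

    def wants_control(self, sensors):
        raise NotImplementedError

    def act(self, sensors):
        raise NotImplementedError

class AvoidCollision(Behavior):
    def wants_control(self, sensors):
        return sensors["range"] < 1.0      # obstacle close by

    def act(self, sensors):
        return "turn_away"

class SeekLight(Behavior):
    def wants_control(self, sensors):
        return True                        # always willing to run

    def act(self, sensors):
        return "move_toward_light"

def arbitrate(behaviors, sensors):
    """Highest-priority behavior that wants control inhibits the rest."""
    for b in sorted(behaviors, key=lambda b: -b.priority):
        if b.wants_control(sensors):
            return b.act(sensors)
    return "idle"

layers = [SeekLight("seek light", 1), AvoidCollision("avoid collision", 2)]
print(arbitrate(layers, {"range": 0.5}))   # -> turn_away
print(arbitrate(layers, {"range": 5.0}))   # -> move_toward_light
```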

>Would the sensors be events that 'feed' info to the agent?

In an IF equivalent, I think so.

>: It is quite astonishing how many demanding problems purely reactive
>: agents of the latter kind can solve without any planning though the
>: problem is to assure convergence. I wonder when the need for planning
>: arises given a basis of robust, reactive behaviors -- especially since
>: plans tend to be brittle in unstable environments. I would like to ask
>: YOU: When should an agent switch from pure reaction to anticipation?
>: In which situation is it useful to guide behavioral decisions by a
>: plan?
>
>This seems to be an essential question. If we program the I-F so the
>reactive behaviour is done by the general NPC archetype that the actor
>is an instance of, the actual planning must be done by the individual,
>not by the archetype.

Neat idea.  I envision agents in an inheritance hierarchy,
with reactive behavior at the abstract "agent" level.  Personal quirks,
e.g. fear of fire, could still be reactive, but at the instance level.
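A rough sketch of what I mean, with made-up names (not from any existing
IF system) -- archetype-level reactions on the class, a quirk like fear
of fire attached per instance:

```python
# Sketch of the inheritance idea: generic reactions live on the Agent
# archetype; a personal quirk is added to one instance.  All event and
# reaction names here are illustrative.

class Agent:
    def __init__(self, name):
        self.name = name
        # archetype-level reactive behavior, shared by all agents
        self.reactions = {"attacked": "flee"}

    def react(self, event):
        return self.reactions.get(event, "ignore")

guard = Agent("guard")
guard.reactions["fire_nearby"] = "panic"   # instance-level quirk

print(guard.react("attacked"))      # -> flee   (from the archetype)
print(guard.react("fire_nearby"))   # -> panic  (personal quirk)
print(guard.react("rain"))          # -> ignore
```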

Knowing when to react & when to anticipate is tricky.  Reaction
involves anticipation:  If you throw a ball at me, I react by catching
it, and I catch it by anticipating its path.  I think we need a good
hack for now; the basic problem is too hard to be solved soon.
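The ball-catching point can be shown in a few lines: the "reaction" is a
single rule, but the rule itself extrapolates the ball's path.  Purely a
toy illustration with made-up numbers:

```python
# Toy illustration that reaction embeds anticipation: the reactive
# catch rule moves the hand toward the ball's *predicted* position,
# found by linear extrapolation one tick ahead.

def anticipate(pos, vel, dt=1.0):
    """Predict where the ball will be next tick (linear extrapolation)."""
    return (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)

def catch_reaction(hand, ball_pos, ball_vel):
    """React by moving the hand to the anticipated ball position."""
    target = anticipate(ball_pos, ball_vel)
    return target   # instant move, for simplicity

print(catch_reaction((0, 0), (3, 4), (1, -1)))   # -> (4, 3)
```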

>: And if you find plans useful in one way or another: Should the plan be
>: detailed so that the agent can follow it word for word as long as
>: expectations meet with reality? Or should the plan be used as a
>: guideline, a rough sketch of what should be done until the agent gets
>: in the whirl of unpredictable circumstances?
>
>Since we as authors control the reality, can our NPC actors follow
>plans word for word ?

They can as long as there isn't a player or other NPCs around
to mess things up.
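One way to sketch the word-for-word case: the NPC walks a scripted plan,
checks each step's expectation against the world, and bails out the moment
reality diverges (say, because a player locked the door).  Step names are
made up:

```python
# Sketch of word-for-word plan following with an expectation check at
# each step; the NPC falls back to replanning when reality diverges.

def follow_plan(plan, world):
    """plan: list of (expectation, action); world: dict of current facts."""
    for expectation, action in plan:
        if not world.get(expectation, False):
            return f"replan at {action}"   # expectation failed
        world[action] = True               # pretend the action succeeds
    return "plan complete"

plan = [("door_unlocked", "open_door"),
        ("door_open", "walk_through")]

print(follow_plan(plan, {"door_unlocked": True, "door_open": True}))
# -> plan complete
print(follow_plan(plan, {"door_unlocked": False}))
# -> replan at open_door
```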

>I'd like to hear more discussion, Do you think he is using a system more
>powerful than SNePS ?

I don't know what he's using.  SNePS has basically the same power as most
logic systems.

>David (whitten@netcom.com) (214) 437-5255

Phil goetz@cs.buffalo.edu
