Learning to communicate internal states in a Multi-Agent Environment. A 'Beyond-Wittgenstein Learning Model'


Paper submitted to:

Computational Natural Language Learning [CoNLL97]


Madrid (Spain) 11-12th July 1997

by

Dr. Gerd Doeben-Henisch
INM - Institute for New Media
Daimlerstr. 32
D-60314 Frankfurt
Phone: ++49-(0)69-941963-10
Fax: ++49-(0)69-941963-22
email: doeb@inm.de

April 7 1997




CONTENT


I. Introduction
II. Normal Wittgenstein Agents [NWAs]
III. Non-Language Hypothesis
IV. Improved Wittgenstein Agents [IWA]
V. Possible Enhancements of Improved Wittgenstein Agents
VI. Wittgenstein Agents and the Paradigm of Multi-Agent Systems [MAS]
VII. Transcendentally or Type Induced Reference Principle
VIII. References











Learning to communicate internal states in a Multi-Agent Environment. A 'Beyond-Wittgenstein Learning Model'


I. Introduction

(1) In his Philosophical Investigations [PI] (1936-1949), Ludwig WITTGENSTEIN proposed the 'Non-Language Hypothesis', stating that it is not possible to communicate the internal states of a human agent successfully.

(2) This Non-Language Hypothesis contrasts with the growing interest in agents that are able to communicate with human users in natural language and that, moreover, should be able to learn any natural language [NL] without being specially programmed for a single language.

(3) In this paper we give a formal model of a 'Wittgenstein Agent' and show how a population of such agents is able to learn to communicate their internal states with arbitrary linguistic expressions. We call this ability to learn to communicate internal states a 'Beyond-Wittgenstein Learning Model'.

(4) The paradigm of Improved Wittgenstein Agents will also be related to the paradigm of Multi-Agent Systems [MAS], and it will be shown that Wittgenstein Agents represent a new agent category.

II. Normal Wittgenstein Agents [NWAs]

The construction of Wittgenstein Agents starts from remarks by Wittgenstein and then proceeds through successive refinements driven by the intention to solve the puzzle. First we introduce Normal Wittgenstein Agents, which are unable to communicate their internal states, and then we introduce Improved Wittgenstein Agents, which can.

Normal Wittgenstein Agents [NWA], basic assumptions according to Wittgenstein:

(NWA1) An NWA has internal states [IS] (cf. PI 246, 283, 304).

(NWA2) If A and B are two NWAs, then A does not directly know whether B actually has an internal state I_k or not (cf. PI 272).

(NWA3) If A,B are two NWAs, then the only way for A to know that B has an internal state I_k is by assuming that there exist relationships between the observable behavior of B and certain internal states of B (cf. PI 238).

(NWA4) The perceptual appearance ['das Bild vom inneren Vorgang'] of an internal state I_k does not give a direct hint as to how to use words to communicate this internal state (cf. PI 305).

In addition to these remarks of Wittgenstein, we have to assume some facts about the environment of these agents and about the kind of interactions with this environment.

(WORLD1) A World1 W contains Objects O with internal states IS, a collection of actions ACT, Symbolic Expressions S, some Relations R (see below; R is a sequence of relations and functions), and some Axioms A (see below). Normal Wittgenstein Agents NWA are a subset of the Wittgenstein Agents WA, which are part of O.

WORLD1(w) iff w = <<O,IS,ACT,S>, R, A>
NWA c WA c O & NWA in rn(R) & WA in rn(R)

(WAGENT1) A Wittgenstein Agent can be stimulated by the observable objects, actions, and symbolic expressions, resulting in sensory states SENS-OBJECTS [SO], SENS-ACTIONS [SACT], and SENS-SYMBOLS [SS] (the suffix operator '+' added to a set X represents the set of all n-tuples of elements of X with n > 0):

stim: O x STIMULI -----> O x SENSORY & stim in rn(R)
STIMULI c pow(O) x pow(O x ACT) x pow(O x S+) & STIMULI in rn(R)
SENSORY c pow(SO) x pow(SO x SACT) x pow(SO x SS+) & SENSORY in rn(R)
SO u SACT u SS+ u SENSORY c IS

(WAGENT2) Sensory states can be transformed by perception into 'conscious' internal states, and as part of the perception process we assume a simple temporal ordering:

perceive: O x SENSORY ----> O x PERCEPTION & perceive in rn(R)
PERCEPTION c pow(O') x pow(O' x ACT') x pow(O' x S'+) x TIME
PERCEPTION in rn(R)
O' u ACT' u S'+ u TIME u PERCEPTION c IS
(A:o,a,s,t)(causing_objs(o,a,s,t) = {x| <o,a,s,t> in PERCEPTION &
(x in dm(a) or x in dm(s))})
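
To make the intended reading of (WAGENT1) and (WAGENT2) concrete, the following Python sketch (not part of the formal model; the data representations are simplifying assumptions of the sketch) shows how external stimuli could be mapped onto sensory states and then onto time-stamped perceptions:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Stimulus:
        objects: frozenset    # observed objects (subset of O)
        actions: frozenset    # observed object/action pairs (subset of O x ACT)
        symbols: frozenset    # observed object/expression pairs (subset of O x S+)

    @dataclass(frozen=True)
    class Perception:
        objects: frozenset    # perceptual counterparts O'
        actions: frozenset    # perceptual counterparts of actions
        symbols: frozenset    # perceptual counterparts of expressions
        time: int             # simple temporal ordering

    def stim(stimulus: Stimulus) -> Stimulus:
        # Map external stimuli onto sensory states; here simply a tagging of the input.
        tag = lambda xs: frozenset(("SENS", x) for x in xs)
        return Stimulus(tag(stimulus.objects), tag(stimulus.actions), tag(stimulus.symbols))

    def perceive(sensory: Stimulus, clock: int) -> Perception:
        # Transform sensory states into a 'conscious', time-stamped perception.
        tag = lambda xs: frozenset(("PERC", x) for x in xs)
        return Perception(tag(sensory.objects), tag(sensory.actions), tag(sensory.symbols), clock)

    # Example: an agent is stimulated by a moving object D at time 0.
    s = Stimulus(frozenset({"D"}), frozenset({("D", "move")}), frozenset())
    print(perceive(stim(s), 0))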

(WAGENT3) A Normal Wittgenstein Agent has intentions about what he is planning to do, and he can realize these intentions as actions:

realize: O x INTENTION -----> O x ACTIONS & realize in rn(R)
INTENTION c ACT' x (pow(O') u S'+) & INTENTION in rn(R)
ACTIONS c ACT x (pow(O) u S+) & ACTIONS in rn(R)
INTENTION c IS

(WAGENT4a) A Normal Wittgenstein Agent can encode PERCEPTIONs and INTENTIONs into symbolic expressions:

encode: O x PERCEPTION x INTENTION (1:1)-----> O x S'+ & encode in rn(R)

(WAGENT4b) Perceived symbolic expressions can be decoded with the inverse of encode:
decode = inv (encode) & decode in rn(R)
(A:a,s)(a in WA & s in S'+ ==> meaning(a,s) = pr2 (rn (decode(a,s)))) & meaning in rn(R)

The encode-function can therefore serve as a bidirectional lexicon which maps PERCEPTIONs and INTENTIONs onto symbolic expressions and vice versa.
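
As an illustration of (WAGENT4a/b), a minimal Python sketch of such a bidirectional lexicon, under the simplifying assumption that perceptions and intentions are represented as hashable keys (the concrete entries are hypothetical):

    # encode: 1:1 mapping from (perception, intention) contents to symbolic expressions.
    encode = {}

    def add_entry(content, expression):
        encode[content] = expression

    def decode(expression):
        # decode = inv(encode): invert the dictionary and look the expression up.
        inverse = {s: c for c, s in encode.items()}
        return inverse.get(expression)          # the 'meaning' of the expression

    add_entry(("PERC:D", "INTENT:A"), "DUG")    # hypothetical entry
    print(decode("DUG"))                        # -> ('PERC:D', 'INTENT:A')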

(WAGENT5) Because the encode-function is a set of ordered pairs relating perceptions and intentions to symbolic expressions, one can define a learning-function 'n-learn' as a mapping of an 'old' encoding function into a 'new' encoding function:

n-learn: O x PERCEPTION x INTENTION x encode x TIME -----> encode x TIME
n-learn in rn(R)

At time t = 0 the n-learn function starts with an empty encode function, which can then be enlarged step by step. If there is a perception or intention x at time t_i which does not yet have an entry with a labelling expression s in the 'old' encode function, then the old encode function will be expanded by a new pair (x,s), and thus a new 'encode' function valid at t_i+1 is created.
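
A minimal sketch of one n-learn step, assuming that a 'new' labelling expression is simply drawn at random when a perception or intention has no entry yet (the labelling scheme is a hypothetical choice of this sketch, not prescribed by the model):

    import random, string

    def fresh_label(rng):
        return "".join(rng.choice(string.ascii_uppercase) for _ in range(4))

    def n_learn(content, old_encode, t, rng=random.Random(0)):
        # Map an 'old' encode function into a 'new' one, extended by at most one pair.
        new_encode = dict(old_encode)
        if content not in new_encode:           # no labelling expression yet at time t
            new_encode[content] = fresh_label(rng)
        return new_encode, t + 1

    encode, t = {}, 0                           # empty encode function at t = 0
    encode, t = n_learn(("PERC:D", "INTENT:A"), encode, t)
    print(encode, t)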

(WAGENT6) A Normal Wittgenstein Agent generates his actual intentions by a reaction function which is sensitive to sensory states, perceptions, intentions, and the decode relation:

reaction: O x SENSORY x PERCEPTION x INTENTION x decode -----> O x INTENTION
reaction in rn(R)

If we assume utter in ACT as an action which is accompanied by a symbolic expression, then we can define the special function articulate:

articulate: O x PERCEPTION x INTENTION -----> O x {utter} x S
articulate in rn(R)
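
A minimal sketch of (WAGENT6), under the simplifying assumption of a purely reactive policy: whenever an encoded content is contained in the current perception, the agent forms the intention to utter the corresponding expression (the lexicon entry and names are hypothetical):

    def reaction(perception, lexicon):
        # Generate the next intention from the current perception and the lexicon.
        for content, expression in lexicon.items():
            if content[0] in perception:        # the perceptual part of an entry is seen
                return ("utter", expression)
        return ("wait",)

    def articulate(agent, intention):
        # Realize an 'utter' intention as an observable utterance action.
        if intention[0] == "utter":
            return (agent, "utter", intention[1])
        return None

    lexicon = {("PERC:D", "INTENT:A"): "DUG"}               # hypothetical lexicon entry
    print(articulate("A", reaction({"PERC:D"}, lexicon)))   # -> ('A', 'utter', 'DUG')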

III. Non-Language Hypothesis

The normal Wittgenstein Agents outlined so far are not able to communicate their internal states.

If they have perceptions and they generate intentions, they can encode these perceptions and intentions into some 'locally valid' symbolic expressions. But there is no fixed scheme for how they encode states. Thus if two agents A and B were stimulated by an object O, then encode(A,perceive(A,stim(A,{O},0,0)),INTENTA) would normally be different from encode(B,perceive(B,stim(B,{O},0,0)),INTENTB), where 'INTENTA' and 'INTENTB' stand for the content of the intentions of each agent.

If, in another case, a Normal Wittgenstein Agent B receives a symbolic expression E through the action <A,utter,E> of a Normal Wittgenstein Agent A, then B could decode this expression only if he had chosen the same symbolic expression E for one of his perceptions. But because the encoding of perceptions is not correlated between agents at this point, the resulting meaning(B,E') has no defined correlation to the meaning(A,E').

To remedy this fault, one can try to exploit remark (NWA3), which states that one could improve communication if one assumes a correlation between observable actions and internal states. Incorporating this idea into a Normal Wittgenstein Agent would imply (i) that the encode functions of all agents are 'sufficiently similar' and (ii) that the 'input' of the encoding function is 'sufficiently similar' given the same triggering stimuli:

(WAGENT7) To realize claim (ii) we introduce axiom Ax-1 as part of the axioms A of WORLD1 (see above):
(A:a,b,s,p,i,d)(pr2 (reaction(a,s,p,i,d)) = pr2 (reaction(b,s,p,i,d))) &
(A:a,b,s)(pr2 (perceive(a,s)) = pr2 (perceive(b,s))) &
(A:a,b,s)( pr2 (stim(a,s)) = pr2 (stim(b,s)))

This is a very strong assumption and probably highly unrealistic for 'real' agents. But this assumption guarantees that the same observable events in world W have the same perceptual effects in different agents and that the reaction function generates the same actions.

Nevertheless, even this very strong assumption would not suffice to produce the same symbolic encoding in different agents.

An example is given by the following situation.

(SIT1) Two Normal Wittgenstein Agents A and B share a situation sit1 in which a dog D is passing by. We assume that both A and B are 'looking' at the dog and that they are therefore receiving visual stimuli from it. According to the assumptions, the passing dog produces 'sufficiently similar' sensory representations SO_D and perceptions O'_D in A and B. In A and B these perceptions can be encoded. Let us assume that agent A encodes the object representation O'_D of the dog D as 'DUG':

encode(A,perceive(A,stim(A,{D},0,0)),INTENTA) = <A,'DUG'>

and that A articulates this encoding through an utterance action:

articulate(A,{O'_D},0,0,t,INTENTA) = <A,utterance, 'DUG'>

If we assume the 'same processing' in B as in A and we let B encode the perceived object O'_D as 'GRID' then we get for B:

encode(B,perceive(B,stim(B,{D},0,0)),INTENTB) = <B,'GRID'>
articulate(B,{O'_D},0,0,t,INTENTB) = <B,utterance, 'GRID'>

On account of the utterance-actions there will be new stimulus-events for A and B:

A:
encode(A,perceive(A,stim(A,0,0,{<B,'GRID'>})),INTENTA) = <A,'B-GRID'>
articulate(A,0,0,{<B,'GRID'>},t+c,INTENTA) = <A,utterance, 'B-GRID'>

B:
encode(B,perceive(B,stim(B,0,0,{<A,'DUG'>})),INTENTB) = <B,'A-DUG'>
articulate(B,0,0,{<A,'DUG'>},t+c,INTENTB) = <B,utterance, 'A-DUG'>

This process can proceed indefinitely without any coordination of 'perceptions' and 'labeling linguistic expressions' arising, although the agents have the 'same' perceptions caused by the same observable stimuli.
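
The divergence can be simulated directly. The following sketch assumes that each Normal Wittgenstein Agent draws its labels from its own random source; identical perceptions then still receive uncoordinated names, and each agent goes on to label the other's utterance in turn (agent, object, and label representations are hypothetical):

    import random, string

    class NWA:
        def __init__(self, name, seed):
            self.name, self.encode = name, {}
            self.rng = random.Random(seed)

        def label(self, percept):
            # n-learn step: invent a fresh label for a not-yet-encoded perception.
            if percept not in self.encode:
                self.encode[percept] = "".join(self.rng.choice(string.ascii_uppercase) for _ in range(4))
            return self.encode[percept]

    A, B = NWA("A", 1), NWA("B", 2)
    print(A.label("PERC:D"), B.label("PERC:D"))      # different words for the 'same' perception
    print(A.label(("B", B.label("PERC:D"))),         # A labels B's utterance ...
          B.label(("A", A.label("PERC:D"))))         # ... B labels A's: the regress continues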

The possible alternative, to assume -according to postulate (i) above- that the encode functions are 'sufficiently similar', is completely implausible. Such a postulated similarity could only be established if there existed a 'pre-established' mapping between triggering stimuli, perceptual mappings, and symbolic encodings. All available empirical evidence speaks against such an assumption.

From these considerations it follows:

(LEMMA1) A Normal Wittgenstein Agent characterized within the structure WORLD1 is not able to communicate his internal states successfully.

IV. Improved Wittgenstein Agents [IWA]

An Improved Wittgenstein Agent has at least all the properties of a Normal Wittgenstein Agent.

An Improved Wittgenstein Agent A has to solve the following problem: A has an internal state i_A in a certain situation Sit in world W, and A wants to communicate the occurrence of this state i_A with a symbolic expression s_j in such a way that the 'hearer' B 'knows' what 'kind of' internal state A wants to communicate with s_j.

One possible solution is to exploit observations from ethology, according to which most groups of animals that show social hierarchies exhibit learning strategies in which 'younger' animals 'learn' from 'older' animals by 'imitation'. These two different roles we call here 'imitators' and 'teachers'. Under certain -socially defined- conditions, an imitator can become a teacher. This leads to the following assumption:

(WORLD2) w is a WORLD2 iff w is 'as strong as' WORLD1 and contains the following additional properties and axioms: Improved Wittgenstein Agents IWA are a subset of Wittgenstein Agents. Ax-2: The IWAs of WORLD2 are partitioned into agent groups. Within each group the agents are partitioned into teachers T and imitators I. Every agent group has exactly one teacher.

IWA c WA & IWA in rn(R)
T u I c IWA & T in rn(R) & I in rn(R)
(A:x)(x c IWA ==> (1!y)(y in x & T(y)))
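
A small sketch of axiom Ax-2, with a hypothetical group layout, checking that every agent group contains exactly one teacher:

    groups = [{"A": "teacher", "B": "imitator", "C": "imitator"},   # hypothetical groups
              {"D": "teacher", "E": "imitator"}]

    def satisfies_ax2(groups):
        # Every agent group must contain exactly one teacher.
        return all(sum(role == "teacher" for role in g.values()) == 1 for g in groups)

    print(satisfies_ax2(groups))   # -> True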

(WAGENT8) The encode- and decode-functions of Improved Wittgenstein Agents are slightly different compared to those of NWAs. encode2 allows for multiple 'meanings' related to a single symbolic expression, and the decode2 function computes, from the possibly multiple meanings, that variant which is 'most similar' to the actual perceptions and intentions:

encode2: O x pow(PERCEPTION x INTENTION) -----> O x S'+ & encode2 in rn(R)
decode2: O x PERCEPTION x INTENTION x encode2 x S'+ -----> O x PERCEPTION x INTENTION & decode2 in rn(R)
(A:a,s)(a in WA & s in S'+ ==> meaning2(a,s) = pr2 (rn (decode2(a,s)))_1 u pr2 (rn (decode2(a,s)))_2) & meaning2 in rn(R)
CONTEXT1 = pr1(decode2)_1 u pr1(decode2)_2
CONTEXT2 = pr1(decode2)_1
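
A minimal sketch of (WAGENT8), under the assumptions that contexts are represented as sets and that 'most similar' means largest overlap with the actual perceptions and intentions (the stored variants below are hypothetical):

    encode2 = {}                                # expression -> list of stored context variants

    def add_variant(expression, context):
        encode2.setdefault(expression, []).append(frozenset(context))

    def decode2(expression, current_context):
        # Select, among possibly many stored meanings, the variant most similar
        # to the actual context (here: largest set overlap).
        variants = encode2.get(expression, [])
        if not variants:
            return None
        return max(variants, key=lambda v: len(v & frozenset(current_context)))

    add_variant("DUG", {"PERC:D", "PERC:B", "PERC:move"})     # hypothetical variants
    add_variant("DUG", {"PERC:D", "PERC:sleep"})
    print(decode2("DUG", {"PERC:D", "PERC:move"}))            # picks the first variant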

(WAGENT9) One also has to modify the learning function. The new learning function 'i-learn' has two modes of operation: (a) in the 'teacher mode' it constructs a new encode2 function from an old one by expanding the set of pairs with a new pair (<self,x>,<self,s>), with x in CONTEXT1 and s in S'+, where either x is a perception or intention which is not yet encoded by a symbolic expression and s is a 'new' possible symbolic labelling, so that the pair (<self,x>,<self,s>) creates a new 's-entry', or there already exists a y-variant (<self,y>,<self,s>) of the s-entry in the old encode2 function and the agent wants to add (<self,x>,<self,s>) as an additional 'x-variant' of the s-entry; (b) in the 'imitator mode' the learning function is enriched with an auxiliary function speaker_mode which classifies the values of the causing_objs function either as teachers (= 1) or as non-teachers (= 0).

speaker_mode: causing_objs -----> {0,1}
speaker_mode in rn(R)

If the causing object is not a teacher, then the i-learn function works like case (a) above; if the causing object is a teacher, then the x of the pair (<teacher,x>,<teacher,s>) is restricted to CONTEXT2, i.e. x is only a perception and not also an intention. The i-learn function will 'imitate' the sign usage of the teacher. There are two cases: (b1) if the symbolic expression uttered by the teacher is actually 'not known' in the encode2 of the imitator, or there exists only a 'selfgenerated' s-entry in encode2, then the imitator will introduce this new symbolic expression into his encode2 function together either with the actual CONTEXT2 or with the most recent preceding CONTEXT2. In this case a 'selfgenerated' s-entry will be overwritten. (b2) If there already exists an s-entry (<teacher,y>,<teacher,s>) triggered by the teacher, but x is not yet a CONTEXT2 of the uttered symbolic expression s, then the imitator will introduce x as a new x-variant of the s-entry.

i-learn: O x PERCEPTION x INTENTION x encode2 x TIME -----> O x encode2 x TIME
i-learn in rn(R)
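
A minimal sketch of one i-learn step, assuming that speaker_mode is a simple membership test in a set of known teachers; case (a) is reduced to inventing a fresh self-generated entry, case (b1) to adopting/over-writing, and case (b2) to appending an x-variant (all names and data representations are hypothetical):

    def speaker_mode(causing_obj, teachers):
        return 1 if causing_obj in teachers else 0

    def i_learn(agent, context, heard, encode2, teachers, t):
        # heard = (speaker, expression) or None; returns the new encode2 function and time.
        new = {s: list(v) for s, v in encode2.items()}
        ctx = frozenset(context)
        if heard and speaker_mode(heard[0], teachers) == 1:
            speaker, s = heard
            if s not in new or all(src == agent for src, _ in new[s]):
                # (b1): adopt the teacher's expression; a self-generated entry for the
                # same context is over-written.
                for own in [k for k, v in new.items()
                            if all(src == agent and c == ctx for src, c in v)]:
                    del new[own]
                new[s] = [(speaker, ctx)]
            elif all(c != ctx for _, c in new[s]):
                new[s].append((speaker, ctx))           # (b2): add a new x-variant
        else:
            s = f"{agent}-SYM{len(new)}"                # (a): self-generated s-entry
            new.setdefault(s, []).append((agent, ctx))
        return new, t + 1

    enc, t = i_learn("B", {"PERC:D"}, None, {}, {"A"}, 0)          # B invents his own label
    enc, t = i_learn("B", {"PERC:D"}, ("A", "DUG"), enc, {"A"}, t) # ... and adopts the teacher's
    print(enc)                                                     # -> {'DUG': [('A', frozenset({'PERC:D'}))]}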

With these new assumptions one gets the following behavior of Improved Wittgenstein Agents in situation sit1 (A is a teacher and B an imitator):

A:
i-learn(A,perceive(A,stim(A,{B,D},{<D,move>},0)),INTENTA,0,t_0) = <A,<A,{perceive(A,stim(A,{B,D},{<D,move>},0)),INTENTA},A,'DUG'>,t_0+1>

Teacher A operates the i-learn function in 'selfgenerating' mode, i.e. he starts with the empty encode2 function, perceives the objects B and D -D moving- which have no entry in encode2, and generates a new DUG-entry with the new symbolic expression 'DUG'. The perception of the objects B and D is represented in encode2 as part of the whole CONTEXT1. It has to be noticed that the 'meaning' of the expression 'DUG' is not very precise; it can be related to D, to B, to D as moving, to the relation of B to D, etc. Nevertheless, with this new encode2 function at t_0+1 the teacher A can make the utterance 'DUG':

articulate(A,perceive(A,stim(A,{B,D},{<D,move>},0)),INTENTA) = <A,utterance, 'DUG'>

B:
i-learn(B,perceive(B,stim(B,{A,D},{<D,move>},0)),INTENTB,0,t_0) = <B,<B,{perceive(B,stim(B,{A,D},{<D,move>},0)),INTENTB},B,'GRID'>,t_0+1>

In the beginning the i-learn function will work for B in the same way as for A: B has an empty encode2 function, perceives the objects A and D, which are not yet part of encode2, generates a symbolic expression 'GRID', and creates a new encode2 function. Then he too can make an utterance:

articulate(B,perceive(B,stim(B,{A,D},{<D,move>},0)),INTENTB) = <B,utterance, 'GRID'>

Agent A will not be influenced by the utterance of agent B because B is not a teacher for A, but, vice versa, B will be influenced by the utterance of A:

B:
i-learn(B,perceive(B,stim(B,{A,D},{<D,move>},{A,'DUG'})),INTENTB,ENCODE_1,t_1) =
<B,ENCODE_2,t_1+1>
ENCODE_2 = {<B,{perceive(B,stim(B,{A,D},{<D,move>},0)),INTENTB},B,'GRID'>,
<B,{perceive(B,stim(B,{A,D},{<D,move>},{A,'DUG'})),INTENTB},A,'DUG'>}

B will perceive that A makes the utterance 'DUG' while noticing that A is a teacher for B. This causes B to expand his encode2 function by the DUG-entry stimulated by A. With this new encode2 function B can 'react' by forming the intention INTENTIONB = <utter,'DUG'> and then:

articulate(B,perceive(B,stim(B,{A,D},{<D,move>},{A,'DUG'})),INTENTIONB) = <B,utterance, 'DUG'>

The 'selfdriven' encoding activity of imitator B has been 'over-written' by the utterance of teacher A. This shows in principle that an imitator can 'learn' the 'naming symbolic expressions' of a teacher for any kind of directly observable objects and symbolic expressions.
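
The same over-writing effect can be seen with several imitators at once: after the teacher's utterance, each imitator replaces its own provisional label for the shared perception by the teacher's expression (vocabulary and agent names in this sketch are hypothetical):

    import random, string

    def fresh(rng):
        return "".join(rng.choice(string.ascii_uppercase) for _ in range(4))

    teacher_word = "DUG"
    imitators = {name: {"PERC:D": fresh(random.Random(i))} for i, name in enumerate(["B", "C", "E"])}
    print("before:", imitators)                 # each imitator has its own provisional label

    for lexicon in imitators.values():          # the teacher utters 'DUG' for PERC:D
        lexicon["PERC:D"] = teacher_word        # case (b1): self-generated entries are over-written
    print("after: ", imitators)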

If we introduce a representation-function

represent = stim o perceive & represent in rn(R)

then we can state that the representation function, together with the type of encoding and learning, can be used to define a certain class or type of agent. The representation function introduced so far defines, with encode and n-learn, the class of Normal Wittgenstein Agents and, with encode2 and i-learn, the class of Improved Wittgenstein Agents.

Because what is encoded in these agents is not the perceived objects 'themselves' but only their perceptual 'counterparts', one can already assert at this level that the agents are communicating 'internal states'! This is possible because these agents have a processing structure which is 'sufficiently similar' to establish a 'functional correlation' between world events and perceptual events on account of a shared representation function. Thus the perceptual events 'mirror' the world events in each agent in a way which can be used as an 'inter-agent reference framework'. Thus an agent A can presuppose -reinforced by 'practical' experience- that his perception of an object is the same as the perception of any other agent 'of the same type'! Therefore we can state:

(LEMMA2) An Improved Wittgenstein Agent characterized within the structure WORLD2 is able to successfully communicate his internal states with symbolic expressions from an arbitrary set of expressions.

V. Possible Enhancements of Improved Wittgenstein Agents

As mentioned before, the assumptions made are, with regard to 'reality', very 'idealized' and partially underspecified. Thus, in more realistic cases, the representation function should cope with more complicated patterns of underspecified and competing elements; a memory function has to be included; vague and fuzzy concepts have to be considered, as well as some type of planning and some modelling of the other agent; and, most of all, the semiotic functions have to be expanded to deal with the symbolic representation of structured and dynamic perceptions. One should also take into account further body perceptions (e.g. drives, emotions, movements), which can be important in some domains for the 'construction' of 'meaning counterparts' which have to be shared with human partners.

In June 1995 the author presented a software prototype of a population of Improved Wittgenstein Agents with many enhancements at the ars electronica festival in Linz (Austria), and in November 1995 at the telepolis conference in Luxembourg (see DÖBEN-HENISCH 1995, 1996, 1997).

VI. Wittgenstein Agents and the Paradigm of Multi-Agent Systems [MAS]

The discussion of Wittgenstein Agents has a strong relationship to the Multi-Agent Systems [MAS] paradigm (see STONE/ VELOSO 1996, G.WEISS 1996).

Here we want to discuss the proposal of STONE/ VELOSO. They extract only two main descriptive -but very illuminating- categories to describe the whole field of multi-agent systems: homogeneous [hom] vs. heterogeneous [het] agent systems and communicating [com] vs. non-communicating [-com] ones. With these categories they establish three main types of agent communities: <hom,-com>, <het,-com>, and <het,com>. These types shed light on an interesting fact: Improved Wittgenstein Agents are a special case not yet registered by STONE/ VELOSO; moreover, Improved Wittgenstein Agents reveal some interesting new problems not yet made explicit.

The homogeneity/ heterogeneity of an agent is related to its internal architecture. But in the case of Wittgenstein Agents it is not so clear how to apply these descriptive categories. In a straightforward way one would say that Normal Wittgenstein Agents have a homogeneous architecture and Improved Wittgenstein Agents have a heterogeneous structure because they differ with respect to being an imitator or not. But this is not very satisfying. According to STONE/ VELOSO, heterogeneous agents can be completely different. But in the case of Improved Wittgenstein Agents they have to be completely identical except for the way they encode their internal states. To cope with this case we have to introduce a distinction in the presupposed concept of architecture: within the architecture we distinguish those functions which deal with semiotic encoding [SEMIO] and those which don't [compl(SEMIO)], thus defining:

ARCHITECT = SEMIO u compl (SEMIO)

Then we get the subcategories: homogeneous with respect to SEMIO [s-hom], homogeneous with respect to the complement of SEMIO [d-hom], heterogeneous with respect to SEMIO [s-het], heterogeneous with respect to the complement of SEMIO [d-het].

Communication is understood by STONE/ VELOSO in a very broad sense; it can include the case that the agents transmit information directly from one to another (e.g. agent A delivers his sensory data directly to agent B). Applying this very broad meaning of communication to the Wittgenstein Agents, we would get the result that Wittgenstein Agents do communicate. But as is clear from the definitions of Wittgenstein Agents, they are not able to transfer any internal state directly. Thus we have to differentiate the concept of communication as well.

One possible differentiation could be to distinguish whether agents are communicating using semiotic/ linguistic expressions [S-COMMUNICATION] or whether they are exchanging data directly [D-COMMUNICATION]. In the case of S-COMMUNICATION the 'intended meaning' has to be 'transferred' by selected linguistic expressions which are 'mapped to the intended meaning' by 'commonly learned relationships'. In the case of Wittgenstein Agents this is realized by the construction of the encode/ decode functions. The 'intended meaning itself' cannot be communicated directly, only with the aid of a secondary mapping like 'encode/ decode', which has to be constructed 'in sufficient concordance between the agents'.

Putting all these new distinctions together, we get a new set of agent types. Normal Wittgenstein Agents [NWA] would then receive either the label <hom,-com> or <<s-hom, d-hom>,<-s-com, -d-com>>. Thus NWAs fit into the old classification of STONE/ VELOSO. Improved Wittgenstein Agents [IWA], in contrast, can only be described with the new types: <<d-hom, s-het>,<-d-com,s-com>>. This allows the following conjecture:

(CONJECTURE1) Agents with the architecture-type <d-het,*> are not able to have the communication type <s-com,*> ('*' stands for every possible value).
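
The refined typing can be written down as pairs of an architecture profile and a communication profile; the check below is a small sketch encoding CONJECTURE1 over these labels (the tuple encoding is an assumption of this sketch, not part of the text):

    NWA_type = (("s-hom", "d-hom"), ("-s-com", "-d-com"))
    IWA_type = (("d-hom", "s-het"), ("-d-com", "s-com"))

    def violates_conjecture1(agent_type):
        # CONJECTURE1: an architecture of type d-het excludes communication of type s-com.
        architecture, communication = agent_type
        return "d-het" in architecture and "s-com" in communication

    print(violates_conjecture1(NWA_type), violates_conjecture1(IWA_type))   # -> False False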

This is a serious limitation compared to the unrestricted type <het,com> proposed by STONE/ VELOSO. The limitation is due to the specific nature of semiotic communication. This result shows that Improved Wittgenstein Agents constitute a specific new agent type with quite new properties.

Taking into account a recent proposal of SLOMAN (1997a,b), which analyzes the non-semiotic structures compl(SEMIO) as a set of layered structures within the main categories REACTIVE, DELIBERATIVE, and REFLECTIVE, we could define:

compl(SEMIO) = REAC u DELIB u REFL

But this wouldn't necessarily change the semiotic communication of Improved Wittgenstein Agents. There can be deliberative and reflective agents without semiotic communication, and -as in the case of the Improved Wittgenstein Agents above- there can be agents with semiotic communication without deliberation and reflection. This underlines again that semiotic communication is a highly independent dimension in the architecture of an agent, and it deserves research in its own right.

VII. Transcendentally or Type Induced Reference Principle

Structures which are implicit in the way an organism is 'processing the world' are called, following I. KANT 1781/87, 'transcendental' structures. Biologists would in this case rather speak of 'innate structures' which have developed during the course of evolution. Depending on the point of view of the theory constructor, one can therefore name the reference principle used here either the TRANSCENDENTALLY-INDUCED REFERENCE PRINCIPLE or the TYPE-INDUCED REFERENCE PRINCIPLE.



VIII. References

DÖBEN-HENISCH, Gerd [1995], The BLINDs WORLD I. Ein philosophisches Experiment auf dem Weg zum digitalen Bewußtsein (including an English translation), in: K.GERBEL/ P.EIBEL (eds.), Mythos Information. Welcome to the wired world. @rs electronica 95, Springer-Verlag, Wien u.a., pp. 227-244.

DÖBEN-HENISCH, Gerd [1996], Sprachfähige intelligente Agenten und die Notwendigkeit einer philosophischen Theorie des Bewußtseins. Wissenschaftstheoretische und erkenntnistheoretische Überlegungen, in: B.BECKER/ Chr.LISCHKA/ J.WEHNER (eds.), Kultur - Medien - Künstliche Intelligenz. Beiträge zum Workshop 'Künstliche Intelligenz - Medien - Kultur' während der 19. Jahrestagung für Künstliche Intelligenz (KI-95), 11.-13. September 1995 in Bielefeld, GMD-Studien Nr. 290, GMD-Forschungszentrum Informationstechnik GmbH, Sankt Augustin, pp. 173-190.

DÖBEN-HENISCH, Gerd [1997] Semiotic Machines - An Introduction, in: Proceedings of the International Conference on Semiotics, Amsterdam August 5-9, 1996 (forthcoming).

KANT, Immanuel [1781/1787; here: 1956], Kritik der reinen Vernunft, ed. by R. SCHMIDT, Felix Meiner, Hamburg.

SLOMAN, Aaron [1997a], The Evolution of What? (Draft), School of Computer Science, Cognitive Science Research Centre, Birmingham (UK), http://www.cs.bham.ac.uk/~axs

SLOMAN, Aaron [1997b], Designing Human-Like Minds, Submitted to ECAL '97, School of Computer Science, Cognitive Science Research Centre, Birmingham (UK), http://www.cs.bham.ac.uk/~axs.

STONE, P./ VELOSO, M. [1996], Multiagent Systems: A Survey from a Machine Learning Perspective, Submitted to IEEE Transactions on Knowledge and Data Engineering (TKDE). June 1996., Computer Science Department, CMU, Pittsburgh (PA 15213).

WEISS, Gerhard [1996], Adaptation and Learning in Multi-Agent Systems: Some Remarks and a Bibliography, in: WEISS, Gerhard/ SEN, Sandip (eds.), Adaptation and Learning in Multi-Agent Systems, Series: Lecture Notes in Artificial Intelligence No. 1042, Springer Verlag, Berlin - Heidelberg - New York et al., pp. 1-21.

WITTGENSTEIN, Ludwig [1958; 3rd ed. 1975], Philosophische Untersuchungen, Suhrkamp, Frankfurt.