Overview |
ONTOLOGY |
Ontology:
From an artificial intelligence viewpoint, an ontology is a model of some portion of the world and is described by defining a set
of representational terms. In an ontology, definitions associate the names of entities in a universe of discourse (e.g., classes,
relations, functions, or other objects) with human-readable text describing what the names mean, and formal axioms that
constrain the interpretation and well-formed use of these terms. In essence, ontologies can be used very effectively to
organize keywords as well as database concepts by capturing the semantic relationships among keywords or among tables and
fields in a relational database. By using these relationships, a network structure can be created providing users with an abstract
view of an information space for their domain of interest. Ontologies are well suited for knowledge sharing in a distributed
environment where, if necessary, various ontologies can be integrated to form a global ontology.
That such 'naive' ontologies hide many problems is revealed by Stanley Rice's reflections entitled Attribution and Context: The Bases of Information Retrievals (and 'Meaning').
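As a deliberately naive illustration of these ideas, the following Python sketch models an ontology as a small network of classes, subclass links, and typed relations, and uses the relation signatures to constrain well-formed assertions. All class and relation names are invented for the example.

    # A minimal, naive ontology: classes, relations, and simple constraints.
    # All class and relation names here are illustrative only.

    classes = {"Thing", "Person", "Organization", "Document"}

    # subclass-of edges (child -> parent) form the taxonomic backbone.
    subclass_of = {"Person": "Thing", "Organization": "Thing", "Document": "Thing"}

    # relations carry a signature (domain, range) that constrains their use.
    relations = {
        "worksFor": ("Person", "Organization"),
        "authorOf": ("Person", "Document"),
    }

    def is_a(cls, ancestor):
        """True if cls equals ancestor or is a (transitive) subclass of it."""
        while cls is not None:
            if cls == ancestor:
                return True
            cls = subclass_of.get(cls)
        return False

    def assert_fact(relation, subj_class, obj_class):
        """Accept a fact only if it respects the relation's domain and range."""
        domain, range_ = relations[relation]
        if not (is_a(subj_class, domain) and is_a(obj_class, range_)):
            raise ValueError(f"{relation} not applicable to ({subj_class}, {obj_class})")
        return (relation, subj_class, obj_class)

    print(assert_fact("worksFor", "Person", "Organization"))   # accepted
    # assert_fact("worksFor", "Document", "Person") would raise an error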
K-ONTOLOGIES |
K-ONTOLOGIES := Knowbot Ontologies
K-ONTOLOGIES are generated automatically by Knowbots. Based on its perceptions and internal states, a Knowbot applies a set of cognitive operations to these states and generates a dynamic, multi-layered network of conceptual graphs which represents an ontology. The actual 'content' of this ontology depends largely on the kinds of environments with which a Knowbot can interact and on the individual 'education' it receives. K-ONTOLOGIES are paired with an ontology-based natural language module in such a way that the K-ONTOLOGIES function as the 'meaning' of the accompanying language. In connection with a natural language module, K-ONTOLOGIES could serve either as information brokers that help the user exploit the ontologies of the Web or as an aid for automatically encoding natural language messages into ontologies. K-ONTOLOGIES is an ongoing research project of the INM. A fully functional prototype is planned for the Expo 2000 world exhibition in Hannover (June 2000).
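The project description leaves the internal representation open; purely as a hedged illustration of what a 'dynamic multi-layered network of conceptual graphs' might look like, the Python sketch below builds a tiny two-layer graph. All layer, concept, and relation names are invented.

    # Illustrative only: a conceptual graph as concept nodes linked by relation edges.
    # Layers could, for instance, separate raw percepts from abstracted concepts.

    from collections import defaultdict

    class ConceptualGraph:
        def __init__(self):
            # layer name -> list of (relation, source concept, target concept)
            self.layers = defaultdict(list)

        def relate(self, layer, source, relation, target):
            self.layers[layer].append((relation, source, target))

        def neighbours(self, layer, concept):
            """Concepts directly related to `concept` within one layer."""
            return [(r, t) for (r, s, t) in self.layers[layer] if s == concept]

    kg = ConceptualGraph()
    # perceptual layer: raw observations
    kg.relate("percepts", "red-patch", "near", "round-shape")
    # conceptual layer: abstractions built from the perceptual layer
    kg.relate("concepts", "apple", "has-colour", "red")
    kg.relate("concepts", "apple", "is-a", "fruit")

    print(kg.neighbours("concepts", "apple"))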
CLIPS |
CLIPS is a productive development and delivery expert system tool which provides a complete
environment for the construction of rule and/or object based expert systems. CLIPS is being
used by over 5,000 users throughout the public and private community including: all NASA sites
and branches of the military, numerous federal bureaus, government contractors, universities,
and many companies. Key features of CLIPS include its forward-chaining rule language, the CLIPS Object-Oriented Language (COOL) for object-based programming, its portability across platforms, and its ability to be embedded within procedural code.
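Actual CLIPS programs are written in CLIPS's own rule syntax; the Python sketch below only imitates the flavour of a single forward-chaining rule (a rough CLIPS counterpart is shown in the comment), with fact names and the threshold invented for the example.

    # Rough imitation of one forward-chaining step.  The corresponding CLIPS rule
    # would look roughly like:
    #   (defrule warn-high-temp
    #     (sensor (name ?n) (temperature ?t&:(> ?t 100)))
    #     =>
    #     (assert (alarm (sensor ?n))))
    # Names and the threshold here are illustrative only.

    facts = [
        {"type": "sensor", "name": "boiler", "temperature": 120},
        {"type": "sensor", "name": "intake", "temperature": 60},
    ]

    def rule_warn_high_temp(fact):
        """Fire when a sensor fact reports a temperature above 100."""
        if fact["type"] == "sensor" and fact["temperature"] > 100:
            return {"type": "alarm", "sensor": fact["name"]}
        return None

    # One pass of the match-fire cycle: derived facts join working memory.
    derived = [a for a in map(rule_warn_high_temp, facts) if a is not None]
    facts.extend(derived)
    print(derived)   # [{'type': 'alarm', 'sensor': 'boiler'}]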
SHADE |
The SHADE project is primarily concerned with the information sharing aspect of the
concurrent engineering problem. Rather than attempting to model the design process, it is demonstrating a flexible infrastructure for anticipated knowledge-based, machine-mediated
collaboration between disparate engineering tools. The SHADE solution is to provide a medium that
allows designers, through their tools, to accumulate and share engineering knowledge spanning
the functionality of individual tools.
Three basic components are embodied in the SHADE approach to agent-based integration, corresponding to the three requirements outlined above. First a common vocabulary must be defined that allows tools to exchange design information and express shared dependencies over that information. Second, a set of communication protocols must be established that permit knowledge-level exchanges of information as well as message-level exchanges. Finally, a set of basic facilitation services is required that off-load functionality such as name service, buffering, routing of messages, and matching producers and consumers of information.

Shared ontology: a formal specification of a shared conceptualization that provides the representational vocabulary with which agents can communicate. The need for a shared ontology is a direct result of the multi-disciplinary nature of engineering. There are many different views of a design (function, performance, manufacturing), each with a largely different language. However, the various perspectives typically overlap, necessitating the sharing of information if design is to proceed concurrently and cooperatively. For information to be shared, there must be a commonly understood representation and vocabulary.

Whereas the language must be demonstrated as being expressive enough to bridge relationships among participating agents used in multi-disciplinary design activities, this does not imply that it must be capable of expressing the union of all distinctions made by participating agents. Many portions of a design space are of interest only to one agent, while other portions must be common to many agents. The challenge is to support different degrees of knowledge sharing, from arms-length knowledge exchanges to strong common models.

SHADE acknowledges this range of knowledge sharing by presupposing an incremental evolution of language that allows the encoding of progressively richer dependencies across tools. The language evolution would proceed from an encoding of simple dependencies among opaque elements ("object X is in some way dependent on object Y") to the gradual introduction of common models ("Y.b = 2 * X.a + 3") to explanations of causality ("X caused Y to fail"). This evolution would enable increasingly sophisticated types of change notification and interaction among designers. Of course, it also imposes greater demands upon the supporting communications infrastructure.

To better support the development of shared ontologies, SHADE is working on systems and techniques for building ontologies, and applying them to construct specific vocabularies for engineering. To establish conventions, promote rigor, and facilitate enhancement and extensibility, ontologies are defined within a widely accepted, formally defined representation, and the related vocabulary is modularized into hierarchical theories. The representation, tools, techniques, and theories are discussed below.
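To make the idea of progressively richer dependencies concrete, the following Python sketch encodes the three stages named above, from an opaque dependency to a shared equation to a rudimentary causal explanation; the object names, attributes, and notification mechanism are invented for the illustration.

    # Stage 1: an opaque dependency -- we only know that Y depends on X.
    dependencies = [("Y", "X")]

    # Stage 2: a common model -- the dependency is now an explicit equation,
    # here Y.b = 2 * X.a + 3, as in the example above.
    state = {"X": {"a": 5}, "Y": {"b": None}}

    def propagate():
        """Recompute dependent attributes and report what changed
        (stage 3: a rudimentary explanation of causality)."""
        old = state["Y"]["b"]
        state["Y"]["b"] = 2 * state["X"]["a"] + 3
        if state["Y"]["b"] != old:
            print(f"X.a changed to {state['X']['a']}, causing Y.b to become {state['Y']['b']}")

    propagate()              # X.a = 5  -> Y.b = 13
    state["X"]["a"] = 10     # a design change in tool X ...
    propagate()              # ... triggers a change notification for tool Y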
LOOM |
Loom is a language and environment for constructing intelligent applications. The heart of Loom
is a knowledge representation system that is used to provide deductive support for the
declarative portion of the Loom language. Declarative knowledge in Loom consists of
definitions, rules, facts, and default rules. A deductive engine called a classifier utilizes
forward-chaining, semantic unification and object-oriented truth maintenance technologies in
order to compile the declarative knowledge into a network designed to efficiently support
on-line deductive query processing.
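Loom definitions are written in Loom's own Lisp-based language; the sketch below is not Loom syntax but a toy Python illustration of what a classifier does, namely placing a new definition under its most specific subsumers. The concept names and the simplistic subsumption test are invented.

    # Toy classifier: a concept is a set of required properties; concept A
    # subsumes concept B if A's requirements are a subset of B's.
    # Concept names and properties are illustrative only.

    concepts = {
        "Person":   {"animate"},
        "Employee": {"animate", "employed"},
    }

    def subsumes(a, b):
        return concepts[a] <= concepts[b]

    def classify(name, properties):
        """Insert a new definition and report its most specific subsumers."""
        concepts[name] = set(properties)
        parents = [c for c in concepts if c != name and subsumes(c, name)]
        # keep only the most specific subsumers
        parents = [p for p in parents
                   if not any(q != p and subsumes(p, q) for q in parents)]
        return parents

    print(classify("Manager", {"animate", "employed", "supervises"}))  # ['Employee']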
GKB |
The GKB-Editor (Generic Knowledge Base Editor) is a tool for graphically browsing and
editing knowledge bases across multiple Frame Representation Systems (FRSs) in a
uniform manner. It offers an intuitive user interface, in which objects and data items are
represented as nodes in a graph, with the relationships between them forming the edges.
Users edit a KB through direct pictorial manipulation, using a mouse or pen. A
sophisticated incremental browsing facility allows the user to selectively display only that
region of a KB that is currently of interest, even as that region changes.
The GKB-Editor consists of three main modules: a graphical interactive display based upon Grasper-CL, a library of generic knowledge-base functions, and corresponding libraries of frame-representation-specific methods, the latter two based upon the Generic Frame Protocol (GFP). When the user manipulates the display, generic functions are called, which invoke the corresponding frame-representation-specific methods, which in turn modify or retrieve information from the knowledge bases. All of the GKB-Editor's knowledge access and modification functionality has been developed using the GFP. As a direct result, the GKB-Editor will be immediately compatible with those FRSs for which the FRS-specific methods that implement the protocol have been defined, and extensible to other FRSs.
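The division of labour described above, generic calls dispatched to FRS-specific methods, can be sketched in Python as follows; the class and method names are invented and do not reproduce the actual GFP operation names.

    # Generic layer: what the display code calls.
    class GenericKB:
        def get_frames(self): raise NotImplementedError
        def get_slot(self, frame, slot): raise NotImplementedError

    # FRS-specific layer: one adapter per frame representation system.
    class ToyFRS(GenericKB):
        """Stand-in for a concrete FRS binding (names are illustrative)."""
        def __init__(self):
            self._frames = {"Person": {"arity": 1}, "worksFor": {"arity": 2}}
        def get_frames(self):
            return list(self._frames)
        def get_slot(self, frame, slot):
            return self._frames[frame].get(slot)

    def draw_graph(kb: GenericKB):
        """The display only ever talks to the generic layer."""
        for frame in kb.get_frames():
            print(frame, "arity:", kb.get_slot(frame, "arity"))

    draw_graph(ToyFRS())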
GFP2.0 |
GFP 2.0 is a protocol for accessing knowledge bases (KBs) stored in frame
knowledge representation systems (FRSs). By FRS we mean both systems that would traditionally be considered FRSs and "any system that admits a frame-like projection,"
which could include other types of knowledge representation (KR) systems, and database
systems. The protocol, called the Generic Frame Protocol (GFP), provides a set of operations
for a generic interface to underlying FRSs. The interface layer allows an application some
independence from the idiosyncrasies of specific FRS software and enables the development of
generic tools (e.g., graphical browsers and editors) that operate on many FRSs. GFP
implementations exist for several programming languages, including Java, C (client
implementation only), and Common Lisp, and provide access to KBs both locally and over a
network.
GFP is complementary to language specifications developed to support knowledge sharing. KIF, the Knowledge Interchange Format, provides a declarative language for describing knowledge. As a pure specification language, KIF does not include commands for knowledge base query or manipulation. Furthermore, KIF is far more expressive than most FRSs. GFP focuses on operations that are efficiently supported by most FRSs (e.g., operations on frames, slots, facets -- inheritance and slot constraint checking). GFP is intended to be well impedance-matched to the sorts of operations typically performed by applications that view or manipulate frame-based KBs.
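For a feel of the kind of frame, slot, and facet operations such a protocol standardizes, here is a small Python sketch; the accessor names echo frame-system vocabulary but are not the normative GFP signatures.

    # Minimal frame store with GFP-flavoured accessors (names are illustrative,
    # not the normative GFP operation set).
    kb = {
        "Person":  {"own-slots": {"documentation": ["A human being"]}, "parents": []},
        "Student": {"own-slots": {"enrolled-at": ["UMD"]}, "parents": ["Person"]},
    }

    def get_frame_parents(frame):
        return kb[frame]["parents"]

    def get_own_slot_values(frame, slot):
        return kb[frame]["own-slots"].get(slot, [])

    def get_slot_values(frame, slot):
        """Own values plus values inherited from parent frames."""
        values = list(get_own_slot_values(frame, slot))
        for parent in get_frame_parents(frame):
            values += get_slot_values(parent, slot)
        return values

    print(get_slot_values("Student", "documentation"))  # inherited: ['A human being']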
SHOE |
SHOE := Simple HTML Ontology Extensions
SHOE is a small extension to HTML which allows web page authors to annotate their web documents with machine-readable knowledge. This makes it simple for user-agents and robots to retrieve and store knowledge. The SHOE extension provides authors with a clean superset of HTML that adds a knowledge markup syntax; that is, it enables them to use HTML to directly classify their web pages and detail their web pages' relationships and semantic attributes in machine-readable form.

Using such a language, a document could claim that it is the home page of a graduate student. A link from this page to a research group might declare that the graduate student works for this group as a research assistant. And the page could assert that "Cook" is the graduate student's last name. These claims are not simple keywords; rather they are semantic tags defined in an "official" set of attributes and relationships (an ontology). In this example the ontology would include attributes like "lastName", classifications like "Person", and relationships like "employee". Systems that gather claims about these attributes and relationships could use the resulting knowledge to provide answers to sophisticated knowledge-based queries.

Moreover, user-agents or robots could use gathered semantic information to refine their web-crawling process. For example, consider an intelligent agent whose task is to gather web pages about cooking. If this agent were using a thesaurus-lookup or keyword-search mechanism, it might accidentally decide that Helena Cook's web page, and pages linked from it, are good search candidates for this topic. This could be a bad mistake, of course, not only for the obvious reasons, but also because Helena Cook's links are to the rest of the University of Maryland (where she works). The University of Maryland's web server network is very large, and the robot might waste a great deal of time in fruitless searching. However, if the agent gathered semantic tags from Helena Cook's web page which indicated that Cook was her last name, then the agent would know better than to search this web page and its links.
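The flavour of such annotations can be sketched as follows; the tag and attribute names below are only an approximation of SHOE's actual markup, the URLs are invented, and the example mirrors the graduate-student scenario above.

    import re

    # Illustrative SHOE-like annotations embedded in an HTML page (the exact SHOE
    # tag set differs; this only conveys the idea of machine-readable claims).
    page = """
    <html><body>
    <instance key="http://www.example.edu/~hcook/">
      <category name="GraduateStudent"/>
      <relation name="lastName"  to="Cook"/>
      <relation name="employee"  to="http://www.example.edu/parallel-group/"/>
    </instance>
    Welcome to my home page!
    </body></html>
    """

    def claims(html):
        """Extract (relation, value) pairs a crawler could store as knowledge."""
        return re.findall(r'<relation name="(\w+)"\s+to="([^"]+)"/>', html)

    print(claims(page))   # [('lastName', 'Cook'), ('employee', 'http://...')]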
JOE |
JOE := Java Ontology Editor
JOE is a software tool, written in Sun's Java language, that provides two different graphical user interfaces (GUIs). The first is a tool that can be used not only to view ontologies but also to create and edit them. The second allows the user to build queries over a given ontology with a point-and-click approach. Unlike many other languages, Java offers advantages when used in a distributed environment of autonomous and heterogeneous information resources.
KIF |
KIF := KNOWLEDGE INTERCHANGE FORMAT
Knowledge Interchange Format (KIF) is a computer-oriented language for the interchange of knowledge among disparate programs. It has declarative semantics (i.e. the meaning of expressions in the representation can be understood without appeal to an interpreter for manipulating those expressions); it is logically comprehensive (i.e. it provides for the expression of arbitrary sentences in the first-order predicate calculus); it provides for the representation of knowledge about the representation of knowledge; it provides for the representation of nonmonotonic reasoning rules; and it provides for the definition of objects, functions, and relations.
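For a feel of the language, here are two KIF-style sentences wrapped in a tiny Python snippet; the predicate names are invented, and only the general s-expression style follows KIF.

    # Two KIF-flavoured sentences (predicate names are illustrative).
    kif_sentences = [
        # every graduate student is a person
        "(forall (?x) (=> (graduate-student ?x) (person ?x)))",
        # a ground fact about an individual
        "(lastName HelenaCook \"Cook\")",
    ]

    def top_level_operator(sentence):
        """Return the first symbol of a KIF s-expression, e.g. 'forall'."""
        return sentence.lstrip("( ").split()[0]

    for s in kif_sentences:
        print(top_level_operator(s), "->", s)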
FENSEL-ERDMANN-STUDER |
A proposal for annotating Web documents with an ontology, and an outline of an ontology-based broker that can make use of these ontologies.
KQML |
KQML := KNOWLEDGE QUERY and MANIPULATION LANGUAGE
Modern computing systems often involve multiple interacting computations/nodes. Distinct, often autonomous nodes can be viewed as agents performing within the overall system in response to messages from other nodes. There are several levels at which agent-based systems must agree, at least in their interfaces, in order to interoperate successfully:
KQML is most useful for communication among autonomous, asynchronous, agent-based programs. A KQML message is called a performative, a term from speech act theory; the message is intended to perform some action by virtue of being sent. There is no constraint that a program be either a server or a client; programs are viewed as agents which are free to initiate communication or respond to communication. The KQML implementation creates additional processes to handle incoming messages asynchronously, so the user's program is free to execute code or respond to local events. The KQML specification does not prescribe the architecture of the environment it is used in. It is possible to use KQML in a TCP/IP network of multiprocessing systems (such as UNIX workstations), but it is also possible to transmit KQML expressions over RS-232 lines or even send them via email. The agents sending them do not have to be multitasking; they can be more primitive computing systems (e.g. computers running MS-DOS). However, each implementation has to make certain assumptions about the environment in which it works.
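A typical performative, wrapped here in a small Python helper, might look as follows; the agent names and the query content are invented, while the parameter keywords (:sender, :receiver, :language, :ontology, :reply-with, :content) are the ones commonly cited in KQML descriptions.

    def performative(name, **params):
        """Render a KQML message as an s-expression string."""
        fields = " ".join(f":{k.replace('_', '-')} {v}" for k, v in params.items())
        return f"({name} {fields})"

    # An agent asking a stock-server agent for a price (all names illustrative).
    msg = performative(
        "ask-one",
        sender="joe",
        receiver="stock-server",
        language="KIF",
        ontology="NYSE-TICKS",
        reply_with="q1",
        content='"(PRICE IBM ?price)"',
    )
    print(msg)
    # prints one line: (ask-one :sender joe :receiver stock-server ... :content "(PRICE IBM ?price)")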
ONTOLINGUA |
Ontolingua is a set of tools, written in Common Lisp, for analyzing and translating ontologies. It
uses KIF as the interlingua and is portable over several representation systems. It includes a
KIF parser and syntax checker, a cross-reference utility, a set of translators from KIF into implemented representation systems, and an HTML report generator.
Stanford KSL Network Services |
Ontology Server [OS] using Ontolingua Version 5.0. Through a Web browser, a user can read the available ontologies and edit them. The OS offers an open and shared library of already created ontologies, which can be used by any user and which can be enhanced and expanded at will. The possible operations include naming a new ontology, including an already existing one, and using classes, subclasses, slots (:= relations/functions) with domains and ranges, facets (:= further constraints on a domain or range), instances of classes and relations/functions, as well as axioms. For axioms, the full power of first-order logic with the syntax of KIF is available. Thus internally there is a first-order theory, while externally the user interacts with objects and frames. There exist translators from ontologies into CORBA's IDL, PROLOG, CLIPS, LOOM, Epikit, and KIF. Also available is an API that enables remote applications to use an OS via a network protocol.
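The kinds of objects a user manipulates through the server (classes, slots with domains and ranges, facets, instances, and KIF axioms) can be pictured with a small Python data sketch; everything named below is invented for the illustration.

    # A toy, frame-style view of one ontology as it might be edited through
    # an ontology server; all names are illustrative.
    ontology = {
        "classes": {
            "Person":  {"subclass-of": "Thing"},
            "Student": {"subclass-of": "Person"},
        },
        "slots": {
            # slot := a binary relation with a domain and a range
            "enrolled-at": {"domain": "Student", "range": "University",
                            # facet := a further restriction on the slot
                            "facets": {"cardinality-max": 1}},
        },
        "instances": {
            "helena": {"instance-of": "Student", "enrolled-at": "UMD"},
        },
        # axioms are kept as KIF text; internally this is a first-order theory
        "axioms": [
            "(=> (Student ?x) (Person ?x))",
        ],
    }

    def check_cardinality(instance, slot):
        """Honour a cardinality-max facet of 1 (illustrative check only)."""
        value = ontology["instances"][instance].get(slot)
        limit = ontology["slots"][slot]["facets"]["cardinality-max"]
        count = 0 if value is None else 1
        return count <= limit

    print(check_cardinality("helena", "enrolled-at"))   # True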
INTELLIGENT MULTIMEDIA RETRIEVAL |
Multimedia information includes text, graphics, speech, non-speech audio (including music), imagery, animation, video, and structured data.
CARNOT |
The Carnot Project is addressing the problem of logically unifying physically distributed,
enterprise wide, heterogeneous information. Carnot will provide a user with the means to
navigate information efficiently and transparently, to update that information consistently, and to
write applications easily for large, heterogeneous, distributed information systems -- systems
where resources may even reside on the conventional, closed environments that pervade
businesses worldwide. Worldwide information management is the objective. A prototype has
been implemented which provides services for enterprise modeling and model integration to
create an enterprise-wide view, semantic expansion of queries on the view to queries on
individual resources, and inter-resource consistency management.
Topics: knowledge discovery from distributed databases; complementary discrimination learning with a probabilistic inference rule generator; explanations with deductive and analogical proofs; enterprise modeling with the Cyc common sense knowledge base as a global context; and a deductive database based on the LDL++ system, a formal logic with object-oriented facilities, which has an open interface to existing tools.

Carnot enables the development and use of distributed, knowledge-based, communicating agents. The agents are high-performance expert systems that communicate and cooperate with each other, and with human agents, in solving problems. The agents interact by using Carnot's actor-based Extensible Services Switch (ESS) to manage their communications through TCP/IP and OSI. Thus, they can be located wherever appropriate within and among enterprises -- in fact, anywhere that is reachable through the Carnot communication services. Carnot utilizes cooperative versions of such agents to provide coherent management of information in environments where there are many diverse information resources. The agents use models of each other and of the resources that are local to them in order to cooperate. Resource models may be the schemas of databases, frame systems of knowledge bases, domain models of business environments, or process models of business operations. Models enable the agents and information resources to use the appropriate semantics when they communicate with each other. This is accomplished by specifying the semantics in terms of a global ontology, called Cyc, and using actors to mediate the interactions. When used for one application in telecommunication service provisioning, the agents implement virtual state machines and interact by exchanging state information. The resulting interaction produces an implementation of relaxed transaction processing.

Implementation: Most of the prototype software has been written in C, C++, and Rosette (a concurrent, prototype-based, object-oriented programming language that incorporates multiple inheritance and reflection), using Unix, Motif, and X Windows. The software is available now on Sun platforms, and also on NCR, HP, DEC, and Silicon Graphics systems. The prototype software has been installed at Ameritech, Bellcore, Boeing, the Department of Defense, NCR, Eastman Chemical Company, and Eastman Kodak. Several of the Carnot sponsors are in the process of developing products using Carnot technology; Itasca Systems has already released a product based on the Carnot GIE software. The sponsors of the Carnot Project at MCC are Andersen Consulting, Bellcore, Ameritech, Amoco, Boeing Computer Services, the Department of Defense, Eastman Kodak, Eastman Chemical Company, and NCR/AT&T.
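The role of the global ontology can be sketched as a mapping from local resource schemas to shared concepts; the schema fields, concept names, and mapping table in the following Python snippet are invented for the example and do not reproduce Carnot's actual mechanism.

    # Two local resources with different schemas, and a hand-written mapping of
    # their fields onto concepts of a shared global ontology.  All names invented.
    global_ontology = {"Employee", "employs", "salary"}

    mappings = {
        # resource name -> {local field: global concept}
        "hr_db":      {"EMP_NAME": "Employee", "EMP_SAL": "salary"},
        "payroll_db": {"worker":   "Employee", "pay":     "salary"},
    }

    def translate_query(resource, local_fields):
        """Rewrite a resource-level field list into global-ontology terms."""
        table = mappings[resource]
        return [table[f] for f in local_fields if f in table]

    # The same semantic query, phrased against two different resources:
    print(translate_query("hr_db", ["EMP_NAME", "EMP_SAL"]))   # ['Employee', 'salary']
    print(translate_query("payroll_db", ["worker", "pay"]))    # ['Employee', 'salary']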
HARVEST |
Harvest is an integrated set of tools to gather, extract, organize, search, cache, and replicate
relevant information across the Internet. With modest effort users can tailor Harvest to digest
information in many different formats from many different machines, and offer custom search
services on the web.
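A gatherer's job, fetching a document, extracting a structured summary, and handing it to an indexing broker, can be sketched as follows; the record fields imitate the attribute-value style of Harvest summaries but are invented for the example.

    # A toy 'gatherer': turn a document into an attribute-value summary record
    # that a search broker could index.  Field names are illustrative.
    def gather(url, text):
        words = text.split()
        return {
            "URL": url,
            "Title": words[0] if words else "",
            "Word-Count": len(words),
            "Keywords": sorted(set(w.lower().strip(".,") for w in words))[:5],
        }

    index = []          # a stand-in for the broker's index
    index.append(gather("http://www.example.org/ontology.html",
                        "Ontologies capture semantic relationships among keywords."))
    print(index[0])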
Implementation: Anyone with a World Wide Web client (e.g., NCSA Mosaic) can access and use Harvest servers. A Unix system is required to run the Harvest servers. Executables are available for SunOS 4.1.3, Solaris 2.4, and OSF/1 3.0. The code is also known to compile and operate, with perhaps a few adjustments, on IRIX 5.x, HP-UX 9.05, and FreeBSD 2.0. With a little more work it will even compile on AIX and Linux. Unsupported binary ports are available in the FTP contrib directory. Compiling Harvest requires GNU gcc 2.5.8, bison 1.22, and flex 2.4.7 (or later versions). Running Harvest requires Perl 4.0 or 5.0 and the GNU gzip compression program. All of these are available on the GNU FTP server.
Suggestions, patches, and binary distributions for other platforms are welcomed via email. For more information on the unsupported platforms, please see the notes on porting.
InfoSleuth |
InfoSleuth I: The MCC InfoSleuth Project will develop and deploy technologies for finding information in corporate and external networks, such as networks based on the emerging National Information Infrastructure (NII). The InfoSleuth research is at the forefront of MCC's strategic efforts on interoperability and interface technologies.
Overview: InfoSleuth is based on the MCC-developed Carnot technology that was successfully used to integrate heterogeneous information resources. To achieve this flexibility and openness, InfoSleuth integrates the following new technological developments in supporting mediated interoperation of data and services over information networks:
Communication Protocols: InfoSleuth comprises a network of cooperating agents communicating by means of the high-level query language KQML [FFM94]. Users specify requests and queries over specified ontologies via applet-based user interfaces. Dialects of the knowledge representation language KIF [GF92] and the database query language SQL are used internally to represent queries over specified ontologies. The queries are routed by mediation and brokerage agents to specialized agents for data retrieval from distributed resources, and for integration and analysis of results. Users interact with this network of agents via applets running under a Java-capable Web browser that communicates with a personalized intelligent User Agent.

Agents advertise their services and process requests either by making inferences based on local knowledge, by routing the request to a more appropriate agent, or by decomposing the request into a collection of sub-requests, routing these requests to the appropriate agents, and integrating the results. Decisions about the routing of requests are based on the "InfoSleuth" ontology, a body of metadata that describes agents' knowledge and their relationships with one another. Decisions about the decomposition of queries are based on a domain ontology, chosen by the user, that describes the knowledge about the relationships of the data stored by resources that subscribe to the ontology.

Agent Types: The following is an overview of the functionality of the agents in the system.
Ontology: An ontology may be defined as the specification of a representational vocabulary for a shared domain of discourse, which may include definitions of classes, relations, functions, and other objects [Gru93]. Ontologies in InfoSleuth are used to capture the database schema (e.g., relational, object-oriented, hierarchical), conceptual models (e.g., E-R models, object models, business process models), and aspects of the InfoSleuth agent architecture (e.g., agent configurations and workflow specifications). Rather than choosing one universal ontology format, InfoSleuth allows multiple formats and representations, representing each ontology format with an ontology meta-model, which makes it easier to integrate between different ontology types.

We now discuss an enhancement of the 3-layer model for the representation of ontologies presented in [JS96]. The three layers of the model are the Frame layer, the Meta-model layer, and the Ontology layer. The Frame layer (consisting of the Frame, Slot, and MetaModel classes) allows creation, population, and querying of new meta-models. Meta-model layer objects are instances of frame layer objects, and are created simply by instantiating the frame layer classes. Ontology layer objects are instances of meta-model objects. The objects in the InfoSleuth ontology are instantiations of the entity, attribute, and relationship objects in the Meta-model layer. In our architecture, agents need to know about each other, so the entities described in the InfoSleuth ontology are agents. Each "agent" has an attribute called "name" that is used to identify an agent during message interchange. The "type" of an agent is relevant for determining the class of messages it handles and its general functionality.

InfoSleuth II: In one direction, the InfoSleuth I software will be extended to interact with a distributed object infrastructure, in order to improve its reliability and fault tolerance. In the other direction, additional services will be added to facilitate the development of agent-based applications. The InfoSleuth II software to be developed can be divided into three categories: Basic InfoSleuth Agent Services, InfoSleuth Application Support Services, and a Distributed Object Infrastructure Interface.

Basic InfoSleuth Agent Services: These services will extend the generic capabilities of InfoSleuth agents for a broad number of applications. Potential services include: Detection of Composite Events, Meta-Data Extraction, Cooperating Broker Agents, and Agent Authentication.

InfoSleuth Application Support Services: These services will provide application-specific capabilities. The services developed will be based on the interests of the InfoSleuth II sponsors. Potential services include: Data Analysis and Data Mining, Data Warehousing and Inter-Repository Consistency, Planning and Re-Planning, Migrating and Mobile Agents, Support for Ubiquitous Collaboration, and Ontology Based Web Search.

Distributed Object Infrastructure Interface: The InfoSleuth II project will extend InfoSleuth to utilize existing distributed object infrastructures such as CORBA, DCE, and COM/OLE. The project will propose extensions to object-oriented standards as needed to support agent-based computing. This development will be coordinated with research in MCC's Object Infrastructure project.
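The three layers can be illustrated with a small Python sketch in which a meta-model object is an instance of a frame-layer class and an ontology-layer object is an instance of the meta-model object; the attribute names follow the description above, but the code structure itself is invented.

    # Frame layer: generic building blocks for defining meta-models.
    class Frame:
        def __init__(self, name, slots):
            self.name, self.slots = name, slots   # slots: list of slot names
        def instantiate(self, **values):
            missing = [s for s in self.slots if s not in values]
            if missing:
                raise ValueError(f"missing slots: {missing}")
            return {"frame": self.name, **values}

    # Meta-model layer: objects such as 'entity' are instances of frame-layer classes.
    entity_metaclass = Frame("entity", slots=["name", "attributes"])

    # Ontology layer: the InfoSleuth 'agent' entity is an instance of the meta-model.
    agent_entity = entity_metaclass.instantiate(
        name="agent",
        attributes=["name", "type"],   # as described above
    )

    print(agent_entity)
    # {'frame': 'entity', 'name': 'agent', 'attributes': ['name', 'type']}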
KARO (Knowledge Acquisition Environment with Reusable Ontologies) |
In this project, funded by IBM Deutschland Entwicklung GmbH, methods and tools were developed for reusing commonsense ontologies, i.e. very general knowledge bases that contain general knowledge about space, objects, events, time, etc., and that are grounded in a theory of these domains.

In addition to the theoretical results, a prototype, KARO (Knowledge Acquisition Environment with Reusable Ontologies), was developed that supports the knowledge engineer in modeling expertise. The idea behind KARO is to make the domain-independent models of space, time, etc. of a commonsense ontology available to the knowledge engineer as a kind of generic library for modeling new concepts. This reuse process can be carried out in KARO with formal, lexical, and graphical methods and tools. The concepts to be newly defined are searched for in the ontology using KARO's methods, adapted to the circumstances at hand, and integrated into the model of expertise. KARO supports the knowledge engineer with various modeling criteria that can be consulted as needed. The decision whether and how to apply these criteria is left to the knowledge engineer and is documented in KARO.
Imprecise Knowledge (Ungenaues Wissen) |
In this project, "imprecision" of knowledge encompasses vague, uncertain, incomplete, and contradictory knowledge. Signed clause programs were chosen as the representation formalism; the literals occurring in clauses are annotated with a truth value, and there are close connections to annotated logic. For such programs a proof procedure was developed that is modelled on SLD resolution.

Further investigations focused on the underlying sets of truth values, on the one hand with respect to proof strategies, and on the other with respect to the modelling of imprecise knowledge. In particular, so-called bilattices (after Ginsberg) are well suited to expressing vague, uncertain, contradictory, and also situation-dependent knowledge. Contradictory knowledge arises especially when knowledge bases are built in a decentralized fashion and are eventually to be merged. For this problem an approach was developed that handles contradictory statements according to specific criteria; the choice of suitable sets of truth values was also examined. The result is a flexible formalism for the representation, processing, and integration of knowledge.
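A minimal sketch, assuming Belnap's four truth values as the simplest bilattice: literals are annotated with a value, and merging decentralized knowledge bases joins the annotations in the knowledge order, so that conflicting entries surface as the contradictory value.

    # Belnap's four truth values as the smallest bilattice:
    # 'none' (unknown) < 'true'/'false' < 'both' (contradictory) in the knowledge order.
    def combine(v1, v2):
        """Join two annotations in the knowledge order (used when merging KBs)."""
        if v1 == v2:
            return v1
        if v1 == "none":
            return v2
        if v2 == "none":
            return v1
        return "both"          # true + false, or anything + both -> contradictory

    # Two decentralized knowledge bases annotate the same literals differently.
    kb_a = {"bird(tweety)": "true",  "flies(tweety)": "true"}
    kb_b = {"bird(tweety)": "true",  "flies(tweety)": "false"}

    merged = {lit: combine(kb_a.get(lit, "none"), kb_b.get(lit, "none"))
              for lit in kb_a.keys() | kb_b.keys()}
    print(merged)   # flies(tweety) is marked 'both', i.e. contradictory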
KARL (Knowledge Acquisition and Representation Language) |
KARL I: (doctoral dissertations of D. Fensel and J. Angele) KARL is a language for modeling knowledge. Following KADS-I, KARL distinguishes three layers of knowledge: the domain layer, the task layer, and the strategy layer. These layers are linked via corresponding modeling primitives, which are embedded in a frame logic and in a dynamic logic. A declarative semantics exists for the complete KARL model, as well as an operational semantics realized as a fixpoint semantics. On the basis of this semantics, an interpreter has been implemented that can evaluate KARL specifications efficiently.

KARL II: Since the definition of the language KARL, a number of modeling case studies have been carried out. The largest model built so far is a model for the configuration of elevators (VT), which was constructed within an international comparison of different approaches to knowledge engineering. These experiences also brought to light a number of weaknesses of the KARL language and of the implementation of its interpreter. These weaknesses were therefore analysed in more detail and translated into a first revision of the language KARL. The most important revisions can be summarized as follows:

The reimplementation of the KARL interpreter is currently under way. The new interpreter is being written entirely in Java and integrated into the language.
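The idea of an operational semantics given as a fixpoint can be illustrated independently of KARL's actual syntax: the Python sketch below evaluates a few propositional Horn-style rules bottom-up until no new facts appear. The rules and facts are invented, loosely echoing the elevator-configuration example.

    # Naive bottom-up (fixpoint) evaluation of a few propositional Horn rules,
    # as an illustration of fixpoint-style operational semantics.  Illustrative only.
    rules = [
        ({"elevator-ordered", "load-known"}, "configuration-started"),
        ({"configuration-started"},          "cabin-selected"),
        ({"cabin-selected", "load-known"},   "motor-selected"),
    ]
    facts = {"elevator-ordered", "load-known"}

    changed = True
    while changed:                      # iterate until the least fixpoint is reached
        changed = False
        for body, head in rules:
            if body <= facts and head not in facts:
                facts.add(head)
                changed = True

    print(sorted(facts))
    # ['cabin-selected', 'configuration-started', 'elevator-ordered',
    #  'load-known', 'motor-selected']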
SHELLEY |
SHELLEY is an integrated workbench for knowledge engineering. Shelley interactively supports the analysis and design phases of the KADS KBS
development methodology. Shelley is different from many other tools supporting knowledge
acquisition in two respects: (1) it is based on a methodology for knowledge acquisition; and (2)
it is designed to provide synergistic effects by using multiple tools simultaneously, providing the user with different views on the knowledge being acquired. Shelley is in actual use.
KADS I+II |
KADS-II:
The overall goal of the KADS-II project was to further develop the KADS methodology to the point that it can
become a European standard for KBS development. The project is intended to take into account different layers of the methodological pyramid.
A number of shortcomings and `white areas' in KADS-I are identified, which are to be remedied in the KADS-II project. The identified major issues are:
Transformation approaches to KBS development: The KADS-I approach to design is based on a manual transformation of the knowledge model to the actual system. This process is inefficient and prone to errors. In KADS-II other methods for the construction of the computational model will be developed. One option is the use of transformation techniques that transform a formal specification into runnable code. Another approach that will be explored is the automatic configuration of the computational system from a set of basic components.

Project Management Aspects: KADS-II will consider project management aspects such as flexible life cycle models, project planning, metrication, cost estimation, quality assurance and control, application definition, and the impact of the KBS development process on the organisation.

In KADS-II the development of knowledge-based systems is essentially viewed as a modeling activity. A KBS is not a container filled with knowledge extracted from an expert, but an operational model that exhibits some desired behaviour observed or specified in terms of real-world phenomena. The use of models is a means of coping with the complexity of the development process. A model reflects, through abstraction of detail, selected characteristics of the empirical system in the real world that it stands for.

In KADS, modeling at the knowledge level is an essential intermediate step in the development process. At the knowledge level one abstracts from the details of the representation to be used in a system. The knowledge model defines the types of knowledge that are to be elicited from the expert and that will have to be represented in the system. In addition, modeling at the knowledge level indicates a route towards the use of generic models for certain classes of tasks, since it is independent of the actual details of the system. These generic task models can be used for top-down knowledge acquisition and system development.

Ideally an observer (knowledge engineer) constructs a conceptual model by abstracting from the problem solving behaviour of experts. This abstraction process is aided by the use of some interpretational framework, such as generic models of classes of tasks or of task domains. The conceptual model is a knowledge-level model of the domain expertise. It is real-world oriented in the sense that it is phrased in real-world terminology and can thus be used as a communication vehicle between knowledge engineer and expert. The conceptual model does not take detailed constraints with regard to the artefact into account.

The design model is a model of the artefact. It describes how the conceptual model is realised with particular AI techniques and methods. The idea is that the design model can be transformed into a detailed system design and subsequently into actual system code without further major decisions having to be made. So, where the conceptual model is essentially a knowledge-level model of a problem solving process, the design model is at the computational level, specifying the representational structures and the computational processes in a system.

The approach that will be taken in KADS-II is to develop modeling formalisms for both the conceptual and the design model of the KBS, and to develop transformation methods that transform the conceptual model into the design model. The modeling and transformation activities will be supported by libraries of generic components of such models.
Although several research groups have proposed generic elements of problem solving models at the knowledge level, no uniform framework nor a standardized set of generic components have been identified so far. The modeling language developed in KADS-I has been used successfully as a tool in knowledge acquisition, but lacks a formal basis and has shortcomings in particular with respect to the domain knowledge. In KADS-II a synthesis of the current ideas about modeling problem solving processes at the knowledge level will be constructed. This synthesis will be based on the following principles:
There is a variety of possible routes towards formalisation.
The various approaches do not conflict with one another, but should rather be seen as complementary viewpoints on the same problem with a slightly different emphasis.

Generic Components of KBS: Rapid and effective implementation of a knowledge-level model is of the utmost importance for practical KBS development within the modelling paradigm. Currently there are no generic knowledge-level components which support a constructive methodology. This means that it is not yet possible to construct arbitrary knowledge-level models using knowledge-level components. On the other hand, large, coarse-grained generic components often turn out to be inadequate for modelling real-life tasks. The problem is that the implementation details (i.e. computational-level aspects) often determine the compliance of such a component with the task, user, and socio-organisational features. In this sense, there are currently no generic real-life tasks.
KIV |
KIV := Karlsruhe Interactive Verifier
KIV was originally developed for the verification of procedural programs, but it is also well suited to verifying knowledge-based systems. It is based on algebraic specification means for the functional specification of components and on dynamic logic for the algorithmic specification. The Karlsruhe Interactive Verifier (KIV) can thus also be used for the verification of conceptual and formal specifications of knowledge-based systems.
MIKE |
MIKE := Modellbasiertes und Inkrementelles Knowledge Engineering (model-based and incremental knowledge engineering)
The goal of the MIKE project is the development of a knowledge engineering methodology based on the following principles:

The MIKE approach supports the systematic development of knowledge-based systems. In MIKE the development process is divided into several phases, each of which considers different aspects of the knowledge to be modeled and of the system to be developed. For each phase, specific modeling primitives are available for documenting the design decisions. Starting from the model of the previous phase, this model is refined in the subsequent phase, preserving its structure as far as possible. The development process cycles through the phases repeatedly, so that the models incrementally cover additional or refined requirements.

The work in the MIKE project so far has concentrated mainly on building monolithic, purely knowledge-based systems from scratch. Preliminary work has laid the foundation for describing development as the configuration of a system from components of different kinds. Besides the development of new components, their reuse is playing an increasingly important role. In MIKE, as in most model-based approaches, a distinction is made between the given task that the system to be developed is to fulfil (or the problem it is to solve), the specific application knowledge (the domain model), and a procedure for working on the task (or solving the problem) that is independent of the particular application domain (the problem-solving method).

One focus of the MIKE project is the conceptual description and, in particular, the reuse of problem-solving methods. In the area of conceptual description a stable state has been reached, which is also reflected in the revision of the language KARL. To be able to reuse problem-solving methods, they must be made available in a library. Furthermore, a description of their functionality is needed in order to show that a problem-solving method can achieve the goals set by a task; a first approach based on pre- and postconditions was developed for this purpose and integrated into KARL. In general it is unlikely that a query to the library will return a method that exactly meets the requirements of the task. Rather, it must be assumed that a more specific method is required, one that makes stronger assumptions about the underlying knowledge. In this context the Karlsruhe Interactive Verifier (KIV) has been used to prove the equivalence of the functionality of a problem-solving method with a task; in doing so, assumptions must be made that enable the adaptation to the specific task.

Once all required components are available (either newly developed or reused with adaptation), they can be assembled into a system. Particular difficulties arise here from global dependencies and interactions between several components. A further problem results from the fact that the knowledge about the task, about the application domain, and about the problem-solving method is formulated and has to be understood against the background of its own specific world. These areas differ not only in their languages; each thinks in different concepts and relationships, and each has its own ontology.

Current work is concerned with mapping these ontologies, and above all the assumptions made in the respective areas, onto one another by means of suitable adapters. The adapters thus make it possible to assemble all components into a system. With this preliminary work, a framework has been staked out for the development of systems from partly predefined, knowledge-based components. Future work will concentrate on elaborating this framework in further detail.
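A very rough Python sketch of describing a problem-solving method by pre- and postconditions and connecting it to a task through an adapter; all predicate names and the adapter mapping are invented for the illustration.

    # A problem-solving method (PSM) described by its pre- and postconditions,
    # plus an adapter that maps task vocabulary onto PSM vocabulary.
    # All predicate names are illustrative.

    psm = {
        "name": "propose-and-revise",
        "preconditions": {"has-parameters", "has-constraints"},
        "postconditions": {"consistent-configuration"},
    }

    task = {
        "name": "configure-elevator",
        "provides": {"component-list", "technical-constraints"},
        "goal": "valid-elevator-design",
    }

    # The adapter maps between the two ontologies (task terms -> PSM terms).
    adapter = {
        "component-list": "has-parameters",
        "technical-constraints": "has-constraints",
        "valid-elevator-design": "consistent-configuration",
    }

    def applicable(psm, task, adapter):
        """The PSM fits the task if, under the adapter, the task provides the
        PSM's preconditions and the PSM's postconditions cover the task goal."""
        provided = {adapter[t] for t in task["provides"]}
        goal = adapter[task["goal"]]
        return psm["preconditions"] <= provided and goal in psm["postconditions"]

    print(applicable(psm, task, adapter))   # True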