A Dialogue System for Coherent Reasoning with Inconsistent Knowledge Bases

Traditionally, the AI community assumes that a knowledge base must be consistent. Despite that, there are many applications where, due to the existence of rules with exceptions, inconsistent knowledge must be considered. One way of restoring consistency is to withdraw conflicting rules; however, this destroys part of the knowledge. A better alternative is to give precedence to exceptions. This paper proposes a dialogue system for coherent reasoning with inconsistent knowledge, which resolves conflicts by using precedence relations of three kinds: an explicit precedence relation, synthesized from precedence rules; an implicit precedence relation, synthesized from defeasible rules; and a mixed precedence relation, synthesized by combining the explicit and implicit precedence relations.


Introduction
A knowledge base is a set of rules representing the knowledge of an expert in a specific domain. Traditionally, the Artificial Intelligence (AI) community assumes that a knowledge base must be free of inconsistency; otherwise, it turns out to be useless for an automated reasoning system. This assumption is motivated by the ex falso quodlibet principle [1], which establishes that "from a falsehood, anything follows". According to this principle, an inconsistent knowledge base would force an automated reasoning system to collapse.
Despite that, there are many practical applications of automated reasoning where, due to the existence of rules with exceptions, inconsistent knowledge must be used (e.g., law, politics, and medicine) [2]. For example, let ∆ be a knowledge base with the following pieces of knowledge: "penguins do not fly", "birds fly", and "Tweety is a bird". Then, since there is no counter evidence, it is coherent to infer "Tweety flies" from ∆. Now, suppose that the new piece of knowledge "Tweety is a penguin" is inserted into ∆, resulting in a new knowledge base ∆′. Then, both "Tweety flies" and "Tweety does not fly" can be inferred from ∆′, and that is not coherent reasoning. One way of restoring the consistency of ∆′ is to withdraw one of its conflicting pieces of knowledge [3], but this destroys part of the knowledge. A better alternative is to give precedence to the exception "penguins do not fly". In this case, only "Tweety does not fly" can be coherently inferred from ∆′. Indeed, by using precedence relations, coherent reasoning in the presence of inconsistency becomes possible.
In the last decades, reasoning with inconsistent knowledge has attracted great interest in the AI community. Nowadays, argumentation [4] is a common approach for coherent reasoning in the presence of inconsistency, and several different formal models of argumentation have been proposed in the literature (e.g., [5]-[8]).
This paper proposes a system for coherent reasoning, based on dialogical argumentation and defeasible reasoning, which resolves conflicts by using precedence relations of three kinds: an explicit precedence relation, synthesized from precedence rules; an implicit precedence relation, synthesized from defeasible rules; and a mixed precedence relation, synthesized by combining the explicit and implicit precedence relations.
The paper is organized as follows: Section 2 introduces the fundamentals of defeasible reasoning and explains how the three kinds of precedence relations are synthesized in our system; Section 3 describes the dialectical proof procedure on which our system is based; Section 4 presents some features of the dialogue system prototype implemented in Prolog; finally, Section 5 presents the conclusion of the paper.

Background
In this section, we start by defining the language used to specify knowledge bases in our dialogue system; then, we present the principles of defeasible reasoning with inconsistent knowledge; and, finally, we discuss how to synthesize three different kinds of precedence relations from the information declared in a knowledge base.

Knowledge Representation
An atom denotes an atomic proposition. A literal λ is an atom α or a negated atom ¬α. Two literals λ and λ′ are complementary literals if λ = α and λ′ = ¬α, or λ = ¬α and λ′ = α. The literal ⊤ denotes a true proposition and has no complementary literal. A conjunction is an expression λ1 ∧ … ∧ λk, where each λi (1 ≤ i ≤ k) is a literal. A labeled defeasible rule is an expression ℓ: φ → λ, where ℓ is a label, the antecedent φ is a conjunction, and the consequent λ is a literal, stating that the literals in φ are reasons to believe in λ, if there is no counter evidence to λ; a defeasible rule whose antecedent is ⊤ is called a presumption. A precedence rule is an expression ℓ ≺ ℓ′, where ℓ and ℓ′ are labels of conflicting defeasible rules, stating that the rule ℓ precedes the rule ℓ′ (i.e., that the priority of rule ℓ is higher than the priority of rule ℓ′). Since precedence rules do not involve atoms of the logical language, they are considered meta-knowledge, whose only purpose is to provide the information necessary to resolve conflicts between defeasible rules.
A knowledge base ∆ is a finite set of consistent labeled defeasible rules and precedence rules. For example, ∆1 = {ℓ1: ⊤ → b, ℓ2: b → f, ℓ3: p → ¬f, ℓ4: ⊤ → p, ℓ3 ≺ ℓ2} is a knowledge base, where p, b, and f stand, respectively, for "penguin", "bird", and "fly". In this knowledge base, the defeasible rule ℓ2: b → f states that "birds fly", the defeasible rule ℓ3: p → ¬f states that "penguins do not fly", and the precedence rule ℓ3 ≺ ℓ2 states that the defeasible rule ℓ3 has precedence over the defeasible rule ℓ2.
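For illustration, the Tweety knowledge base can be encoded in a few lines of Python. This is a hypothetical sketch, not the paper's implementation (the actual prototype is written in Prolog): the labels r1-r4, the "-" prefix standing for ¬, and both helper functions are our own illustrative choices, and the conflict resolution is deliberately simplified to rule-level precedence.

```python
# Hypothetical encoding of the Tweety example: rules with exceptions,
# conflicts resolved by giving precedence to the exception.

# Each rule: label -> (antecedents, consequent). "-f" encodes "not f".
rules = {
    "r1": ([], "b"),        # presumption: "Tweety is a bird"
    "r2": (["b"], "f"),     # "birds fly"
    "r3": (["p"], "-f"),    # "penguins do not fly"
    "r4": ([], "p"),        # presumption: "Tweety is a penguin"
}
precedes = {("r3", "r2")}   # the exception r3 overrides r2

def derivable(lit):
    """A literal is defeasibly derivable if some rule for it
    has all of its antecedents derivable."""
    return any(c == lit and all(derivable(a) for a in ants)
               for ants, c in rules.values())

def conclusion(lit):
    """Accept lit only if it is derivable and every rule for the
    complementary literal is overridden by some rule for lit."""
    comp = lit[1:] if lit.startswith("-") else "-" + lit
    pro = {l for l, (ants, c) in rules.items() if c == lit}
    con = {l for l, (ants, c) in rules.items() if c == comp}
    if not derivable(lit):
        return False
    return all(any((p, c) in precedes for p in pro) for c in con)

print(conclusion("f"))    # False: "Tweety flies" is defeated
print(conclusion("-f"))   # True:  "Tweety does not fly" wins
```

Without the precedence pair ("r3", "r2"), both queries would return False, mirroring the blocking situation discussed later in the paper.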

Defeasible Reasoning
As already said, a defeasible rule φ → λ states that the literals in Λ(φ) are reasons to believe in the literal λ, if there is no counter evidence to λ. In this context, the symbols ¬, ∧, and → are not interpreted as in classical logic, since modus ponens (i.e., {φ → λ, φ} ⊢ λ) does not hold for defeasible rules. In fact, even when the antecedent of a defeasible rule is true, its consequent may be false. Defeasible reasoning is based on an inference rule called modus non excipiens [9]. This inference rule differs from modus ponens because it has an implicit premise stating that the consequent of a defeasible rule follows from its antecedent, provided that there is no exception to the rule. Therefore, defeasible reasoning is a kind of reasoning that produces only a contingent demonstration of a literal λ. Anyway, a necessary (although not sufficient) condition to believe in a literal λ is that it can be, at least, defeasibly derived from the knowledge base.
A defeasible derivation tree of a literal λ from a knowledge base ∆, denoted by Υ∆(λ), is a tree such that: the root of Υ∆(λ) is labeled with λ; and, for each node labeled with a literal λ′, there exists a defeasible rule φ → λ′ ∈ ∆ such that, if φ = ⊤, then the node labeled with λ′ is a leaf in Υ∆(λ) and, if φ = λ1 ∧ … ∧ λk, then that node has exactly k children, labeled with λ1, …, λk, respectively. A defeasible derivation tree is generated by a backward search procedure, similar to SLD-refutation [10]. For example, a defeasible derivation tree of the literal u from ∆2 is depicted in Figure 1.
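The backward search that generates a defeasible derivation tree can be sketched as follows. This is a minimal Python illustration under our own assumptions (the rule set below and the tuple-based tree representation are not the paper's implementation, and the sketch assumes the rule set is acyclic):

```python
# A minimal backward search for a defeasible derivation tree.
# Rules are (antecedents, consequent) pairs; an empty antecedent
# list plays the role of the "true" literal, so such nodes are leaves.

rules = [
    ([], "p"),              # presumption
    (["p"], "q"),
    (["p", "q"], "u"),
]

def derivation_tree(goal):
    """Return a tree (goal, children) built from the first rule for
    goal whose antecedents are all derivable, or None if there is none."""
    for ants, cons in rules:
        if cons == goal:
            children = [derivation_tree(a) for a in ants]
            if all(c is not None for c in children):
                return (goal, children)
    return None

print(derivation_tree("u"))
# ('u', [('p', []), ('q', [('p', [])])])
```

Each internal node has one child per conjunct of the chosen rule's antecedent, exactly as in the definition above; a query for an underivable literal (e.g., `derivation_tree("x")`) returns `None`.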

A literal λ is defeasibly derivable from ∆ if, and only if, there exists a defeasible derivation tree Υ∆(λ). For example, as shown in Figure 2, both literals f ("Tweety flies") and ¬f ("Tweety does not fly") are defeasibly derivable from the knowledge base ∆1. Notice that defeasible derivation is a monotonic process, since the extension of ∆ with new knowledge cannot prevent the derivation of previously derived literals. Nevertheless, defeasible reasoning is a non-monotonic process, since the extension of ∆ with new knowledge can make a previously coherent conclusion become incoherent, and vice-versa. For example, consider a knowledge base ∆3, where c, b, f, and s stand for "chicken", "bird", "fly", and "scared", respectively. Clearly, both f and ¬f are defeasibly derivable from ∆3, supported by arguments A1 and A2, respectively. However, because ℓ3 ≺ ℓ2, A2 is considered stronger than A1 and, hence, only ¬f is a coherent conclusion from ∆3. In other words, arguments A1 and A2 attack each other, but A2 defeats A1. Now, suppose that ∆3 is extended with new knowledge, so that a new argument A3 can be constructed and, since ℓ4 ≺ ℓ3, the new argument A3 defeats A2 and reinstates A1. As a result, the previously coherent conclusion ¬f becomes an incoherent conclusion, and the previously incoherent conclusion f becomes a coherent conclusion. This idea is illustrated in Figure 3.
It is worth noticing that, without the precedence rules ℓ3 ≺ ℓ2 and ℓ4 ≺ ℓ3, the conflicts between the arguments could not be resolved and, consequently, neither f nor ¬f could be accepted as a coherent conclusion from ∆3. When two conflicting defeasible rules have the same strength, we say that they block each other.
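The three possible outcomes of a conflict between two defeasible rules can be sketched with a small helper. This is a hypothetical Python fragment (the rule names and the string results are illustrative, not part of the paper's system):

```python
# Given a precedence relation over rule labels, two conflicting
# rules either resolve (one defeats the other) or block each other.

def outcome(r1, r2, precedes):
    """Classify the conflict between conflicting rules r1 and r2."""
    if (r1, r2) in precedes:
        return f"{r1} defeats {r2}"
    if (r2, r1) in precedes:
        return f"{r2} defeats {r1}"
    return f"{r1} and {r2} block each other"   # same strength

precedes = {("r3", "r2")}
print(outcome("r2", "r3", precedes))   # r3 defeats r2
print(outcome("r2", "r5", precedes))   # r2 and r5 block each other
```

When rules block each other, neither conclusion is accepted, which is why the precedence rules of ∆3 are essential to obtain a coherent conclusion.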

Precedence Relations
Let L∆ be the set of labels used in a knowledge base ∆. An explicit precedence relation ≺∆ over the defeasible rules declared in ∆ is synthesized from the precedence rules declared in ∆. For example, consider a knowledge base ∆4, where a, f, w, c, s, and b stand for "animal", "fly", "winged", "chicken", "scared", and "bird", respectively.
An implicit precedence relation over the defeasible rules declared in ∆, based on the criterion of specificity [11], can also be defined. In this work, we adopt a criterion of specificity that favors two aspects of a defeasible rule: precision (the amount of information in the rule's antecedent) and conciseness (the number of steps needed to reach the rule's antecedent). When a rule ℓ is more specific than a conflicting rule ℓ′, we write ℓ ⊑∆ ℓ′. The relation ⊑∆ is irreflexive (since the specificity criterion is defined only for conflicting rules), asymmetric (since, if ℓ ⊑∆ ℓ′, the antecedent of ℓ′ is defeasibly derived from the antecedent of ℓ, but not vice-versa), and transitive with respect to conflicting rules (since, if ℓ ⊑∆ ℓ′ and ℓ′ ⊑∆ ℓ″, then ℓ and ℓ″ are conflicting rules and the antecedent of ℓ″ is defeasibly derivable from the antecedent of ℓ, but not vice-versa). Therefore, ⊑∆ is an implicit precedence relation over the defeasible rules declared in ∆.
The synthesis of an implicit precedence relation is based only on the syntax of the defeasible rules declared in a knowledge base and, therefore, has the advantage of being independent of the application domain. However, not all precedence rules can be defined in terms of specificity and, frequently, a knowledge base also contains explicit precedence rules defined by a domain expert. In this case, a mixed precedence relation (synthesized by combining the explicit and implicit precedence relations) may be used. Notice, however, that ≺∆ ∪ ⊑∆ is not necessarily a strict partial order over L∆, since the explicit and implicit precedence relations can disagree about the relative precedence of two defeasible rules. For example, for a knowledge base ∆5 in which they disagree, ≺∆ ∪ ⊑∆ is cyclic and, hence, not a strict partial order over L∆5, as can be easily verified. To solve this problem, we propose an algorithm that combines the explicit and implicit precedence relations, giving preference to the explicit precedence rules. This algorithm starts with Π∆ := ≺∆ ∪ ⊑∆; then, while Π∆ is a cyclic relation, it finds the set W of the weakest edges in a shortest cycle in Π∆ and defines Π∆ := Π∆ \ W.
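The cycle-breaking loop can be illustrated as follows. This is a Python sketch under our own assumptions (relations are sets of label pairs, and the "weakest" edges of a cycle are taken to be those not backed by an explicit precedence rule); the paper's actual prototype is implemented in Prolog:

```python
# Combine explicit and implicit precedence edges; while the result
# is cyclic, drop the weakest edges of a shortest cycle.

def find_shortest_cycle(edges):
    """Return one shortest cycle as a list of edges, or None (BFS)."""
    nodes = {n for e in edges for n in e}
    best = None
    for start in nodes:
        frontier = [(start, [])]   # (current node, path of edges)
        seen = {start}
        while frontier:
            nxt = []
            for node, path in frontier:
                for (a, b) in edges:
                    if a != node:
                        continue
                    if b == start:          # closed a cycle
                        cyc = path + [(a, b)]
                        if best is None or len(cyc) < len(best):
                            best = cyc
                    elif b not in seen:
                        seen.add(b)
                        nxt.append((b, path + [(a, b)]))
            frontier = nxt
    return best

def mixed_relation(explicit, implicit):
    edges = set(explicit) | set(implicit)
    while True:
        cycle = find_shortest_cycle(edges)
        if cycle is None:
            return edges
        # Weakest edges of the cycle: the implicit ones, since
        # explicit precedence rules take priority.
        weakest = {e for e in cycle if e not in explicit}
        if not weakest:
            raise ValueError("explicit precedence rules are cyclic")
        edges -= weakest

# Explicit r1 < r2 disagrees with implicit r2 < r1; the explicit edge wins:
print(sorted(mixed_relation({("r1", "r2")}, {("r2", "r1"), ("r2", "r3")})))
```

The loop terminates because each iteration removes at least one edge, and the surviving relation contains every explicit precedence rule plus all implicit edges that do not contradict them.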

The Dialectical Proof Procedure
As discussed in Section 2.2, arguments for and against a conclusion can be extracted from defeasible derivation trees. Arguments are similar to proofs but, since they can be defeated by stronger counterarguments, their conclusions cannot be warranted under all circumstances. In this section, we present the fundamentals of the dialectical proof procedure on which our system is based. Given a knowledge base ∆, this proof procedure decides whether a literal λ is a coherent conclusion from ∆ through a dispute between two agents, pro and con. For example, consider a knowledge base ∆6, where b, f, c, and s stand for "bird", "fly", "chicken", and "scared", respectively. Figure 5 shows a dialectical tree warranting that ¬f is a coherent conclusion from ∆6. Winners and losers are marked with W and L, respectively.
As said before, the agents play different roles in a dialogue: while pro defends the claim that λ is a coherent conclusion from ∆, con tries to raise doubts about that claim. Notice, however, that con does not defend the opposite claim (i.e., that the complement of λ is a coherent conclusion from ∆). Therefore, to win a dispute, pro must defeat the rules used by con; whereas, to win a dispute, con can defeat or block the rules used by pro. Moreover, when pro wins a dispute, λ is accepted (and, consequently, the complement of λ is rejected); on the other hand, when con wins a dispute, λ is rejected (and there is no warranty that the complement of λ is accepted). Indeed, this proof procedure adheres to the open-world assumption [14], according to which the value of a literal can be unknown. For example, both p and ¬p are rejected as coherent conclusions when the rules ℓ1 and ℓ2 block each other (notice that pro can agree with both p and ¬p because it is a credulous agent) (Figure 6).
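The W/L marking of a dialectical tree can be sketched as a simple recursive procedure, assuming the usual convention that an argument node wins exactly when every counterargument below it loses. The tree literal below is illustrative, not the paper's Figure 5:

```python
# And/or marking of a dialectical tree: a node is marked W iff all
# of its children (the counterarguments moved against it) are L.

def mark(node):
    """node = (name, children). Returns 'W' or 'L'."""
    _, children = node
    return "W" if all(mark(c) == "L" for c in children) else "L"

# pro's root argument A1 is attacked by A2, which is itself
# defeated by the deeper pro argument A3:
tree = ("A1", [("A2", [("A3", [])])])
print(mark(tree))   # W: A3 wins, so A2 loses, so A1 wins
```

A leaf has no counterarguments and therefore always wins; an undefeated counterargument directly under the root makes the root lose, i.e., the claim is rejected.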

The Dialogue System Prototype
A prototype of the proposed dialogue system was implemented in Prolog [15]. It runs in interpreted mode, and its commands are executed as standard Prolog queries. The main commands offered by this prototype are described in Table 2. By default, the system uses a mixed precedence relation and runs in verbose mode.
In the knowledge representation language used in the prototype, the symbols ¬, ∧, →, and ≺ are replaced by the operators not, and, then, and precedes, respectively; the literal ⊤ is replaced by the keyword true; and defeasible rules can contain free variables. For instance, Figure 7 (left, top) shows a knowledge base coded in this representation language and saved in a file named kb.pl. The command implemented by the predicate precedence_relations/1 shows the three kinds of precedence relations synthesized from a specific knowledge base. For instance, the precedence relations for the knowledge base kb.pl are shown in Figure 7 (left, bottom).
The command implemented by the predicate #/2 allows the user to ask whether a literal is a coherent conclusion from a knowledge base. Only ground literals are allowed in queries and, at each query, each defeasible rule with variables is automatically replaced by one of its ground instances, according to the literal used in the query. For instance, the result of the query kb # fly(tina) is shown in Figure 7 (right).
The implemented prototype was tested with a series of benchmarking examples found in the literature and intuitively coherent results were obtained for all of them.
As future steps, we plan to study the formal properties of the dialogue system prototype with respect to well-known semantics for argumentation systems [5], as well as to develop a graphical interface to show the dialectical tree structure and the relations between its arguments and counterarguments.

Conclusions
The ability to deal with inconsistent knowledge bases is relevant for many practical applications. As is well known, in such applications, inconsistency arises mainly due to the existence of rules with exceptions. Thus, one way of coping with inconsistency is to give precedence to exceptions. Based on this idea, this paper proposes a dialogue system for coherent reasoning with inconsistent knowledge bases, which resolves conflicts among defeasible rules by using precedence relations of three different kinds.
More specifically, this paper 1) shows how explicit and implicit precedence relations can be automatically synthesized from an inconsistent knowledge base and also how they can be combined to synthesize a mixed precedence relation (where explicit precedence rules can override conflicting implicit precedence rules); 2) presents a dialectical proof procedure that can be used to decide whether a specific conclusion can, or cannot, be coherently inferred from an inconsistent knowledge base; 3) implements a prototype system for coherent reasoning with inconsistent knowledge bases.
Future extensions of this work are the study of the formal properties of the proposed system and the development of a graphical interface for it.

 5 ∆
The resulting relation Π∆ is a strict partial order over L∆ and, therefore, a mixed precedence relation over the defeasible rules declared in ∆. The general idea of this process is depicted in Figure 4, considering an arbitrary situation involving eight labels. In this figure, explicit and implicit precedence rules are represented by plain and dotted lines, respectively, and the precedence rules resulting from the transitive closure of the acyclic relation are represented by dashed lines.

Figure 4 .
Figure 4. Mixed precedence relation synthesis. (a) Weakest edges in cycles in

Figure 5 .
Figure 5. Dialectical tree warranting that ¬f is a coherent conclusion from ∆6.


Table 2 .
Main commands offered by the dialogue system prototype. If the verbose mode is active, the user can see each step of the reasoning process; otherwise, only the final result of that process is shown.

Command                     Description
kb # literal                Asks the system whether literal is a coherent conclusion from kb.
precedence_relations(kb)    Shows all three precedence relations synthesized from kb.
explicit                    Chooses the explicit precedence relation to resolve conflicts.
implicit                    Chooses the implicit precedence relation to resolve conflicts.
mixed                       Chooses the mixed precedence relation to resolve conflicts.
verbose                     Alternates between verbose and non-verbose mode.