Business Intelligence Expert System on SOX Compliance over the Purchase Orders Creation Process

Abstract

The objective of this work is to define a decision support system for assessing SOX (Sarbanes-Oxley Act) compliance and quality of the Purchase Orders Creation Process, based on Artificial Intelligence and Argumentation Theory knowledge and techniques. The proposed model contributes both to scientific research in artificial intelligence and to business practice. From a business perspective, it promotes the use of artificial intelligence models and techniques to drive decision-making processes over financial statements. From a scientific and research perspective, its impact lies in the combination of 1) an Information Seeking Dialog Protocol in which a requestor agent inquires into the business case, 2) a Facts Valuation Protocol in which the previously gathered facts are analyzed, 3) the initial knowledge of a human expert incorporated as initial beliefs, 4) an Intra-Agent Decision Making Protocol based on deductive argumentation, and 5) a semi-automated Dynamic Knowledge Learning Protocol. Finally, a way of integrating this model into a higher-level multiagent intelligent system is suggested, in which a Joint Deliberative Dialog Protocol and an Inter-Agent Deductive Argumentation Decision Making Protocol are described.


J. Fernandez, Q. Martin and J. Rodriguez, "Business Intelligence Expert System on SOX Compliance over the Purchase Orders Creation Process," Intelligent Information Management, Vol. 5 No. 3, 2013, pp. 49-72. doi: 10.4236/iim.2013.53007.

1. Introduction

On 16 October 2001, Enron, a US multinational company in the gas and electricity market, published its quarterly financial results showing 600 million US dollars of losses, and its stock fell from 90 dollars to 30 cents. This was the beginning of its bankruptcy: thousands of employees were dismissed, shareholders suffered significant losses, financial markets collapsed by contagion and social alarm shot up.

Only two months before, in August, Enron had reached its historical maximum on the stock exchange at 90 dollars per share, apparently showing a healthy financial situation.

Social alarm spread and irregular financial practices began to come to light; after Enron’s collapse, companies like Global Crossing, WorldCom, Tyco or Adelphia, among others, followed. The principal stock markets of the world suffered sharp falls, and a lack of credibility and confidence spread across all financial markets.

In July 2002, the United States government approved the SOX Law (Sarbanes-Oxley Act) in response to these financial scandals, with the ultimate aim of increasing governmental control over the economic and financial operations of companies, overseeing the audits of their accounts, protecting investors, avoiding massive dismissals and trying to return calm to the financial markets.

This Law is mandatory within the United States, but it has also become a de facto standard in the rest of the world due to the high degree of globalization, and mainly because companies headquartered in the United States, or that operate on its stock markets, consolidate their worldwide results on the basis of the results of their subsidiaries in the rest of the world.

As a consequence, the subsidiaries of these multinationals in other countries, despite being outside the United States, also have to comply with the Law, so as not to compromise the parent company’s compliance in the United States.

1.1. Problem Description

The problem described here is a decision problem with the following characteristics:

1) It is a decision problem: a decision must be taken on whether a specific business case, focused on the Purchase Orders Creation Process, is compliant or not.

2) The decision must be based on evidence: this evidence will support the decision and will serve as proof before auditors and government control bodies.

3) Initial expert knowledge is needed: this Law states what must be done but not how to do it. It is fundamental that this initial expert knowledge comes from a human expert with enough experience in this kind of case.

4) The model should be able to learn dynamically from court decisions, government control bodies or other human experts, letting the initial knowledge evolve and grow far beyond its initial state.

1.2. Special Contribution of This Model

Existing models based on Multiagent Systems and Theory of Argumentation show the following limitations:

1) They have been designed to solve other types of problems, such as medical, legal, negotiation, e-commerce or learning ones.

2) They do not include initial expert knowledge about SOX compliance of the Purchase Orders Creation Process.

3) They do not have any method to dynamically incorporate decisions coming from courts or government control bodies into the initial knowledge.

This paper constitutes a novel approach to this kind of problem: it has a structure optimized to solve this specific problem, adds an initial knowledge base coming from a real human expert in this matter, and provides a learning method to dynamically incorporate court decisions and government control body decisions, letting the system evolve far beyond its initial state and improve its efficiency based on its accumulated experience.

1.3. Artificial Intelligence, Theory of Argumentation and SOX Regulation

In the present work, a method to support decisions on compliance with the SOX Law is designed, using both Artificial Intelligence technologies and Argumentation Theory.

More specifically, the objective of the presented work is, on one side, to design an intelligent expert decision support system based on argumentative negotiation technologies to check whether certain economic and financial operations of companies are compatible with the above mentioned Law, helping companies take corrective actions before it is too late and supporting financial auditors in deciding whether the economic and financial operations of a certain company comply with the SOX legislation, by providing a structured method based on recognized Artificial Intelligence technologies, Negotiation Techniques and Argumentation Theory. On the other side, as a secondary objective, this system provides a measure of the quality of the analyzed business case according to previously defined criteria.

This work is based on two fundamental areas:

1) Theory of Argumentation in Multiagent Systems, within the Artificial Intelligence area.

2) Legal regulations of financial SOX auditing and their relationship with Computation and Intelligent Systems.

With regard to the first point, the basics of Argumentation Theory are analyzed within the Artificial Intelligence area, and the basic principles of Multiagent Systems based on Argumentation Theory are also reviewed.

With regard to the second point, related to the financial SOX regulation, the key points of this regulation are described, as well as its relationship with Information Technologies and Artificial Intelligence. After this analysis, several recent scientific articles on this matter are also reviewed.

Nowadays, the Artificial Intelligence area is really extensive due to the topics it covers, the quantity and quality of scientific studies, its connections with other areas of knowledge, and the areas in which it can be applied, like Medicine, Engineering, Industrial Processes or Finance. In relation to the work presented here, we focus on one subarea of Artificial Intelligence, Multiagent Systems, and its relationship with Argumentation Theory. On one hand, Artificial Intelligence tries to come closer to human reasoning models, either for simulation purposes or to apply these reasoning models to different areas of science, with the objective that certain systems or scientific and technological processes exhibit artificial reasoning behavior. On the other hand, Argumentation Theory, with a long history, attempts to model and characterize, from a theoretical point of view, the different patterns of human reasoning, based on its two fundamental pillars: Classical Logic and Mathematics.

One of the most important areas inside Artificial Intelligence, which in recent years has experienced important scientific advances, is the area of Multiagent Systems. This area provides the fundamental basis to model complex systems where all elements interact with each other to reach individual or common objectives and where this interaction is critical to reaching any objective. Inside the world of Computation, Information Technologies and Artificial Intelligence, the area of Multiagent Systems gains special relevance when connected with Argumentation Theory. It is at this point that we can provide complex Multiagent Systems with an internal logic that lets them behave using simulated reasoning processes with solid bases in Formal Logic and mathematical models.

Here are three typical examples of the use of Argumentation Theory in different fields of Artificial Intelligence:

1) Non-monotonic reasoning. Here Argumentation Theory is used to identify, negotiate and resolve inconsistencies within reasoning, and to generalize reasoning.

2) Reasoning and decision making under uncertainty. Here Argumentation Theory is useful for making inferences while also combining the concept of evidence.

3) Multiagent Systems. In this area, Argumentation Theory is especially useful for simulating reasoned interaction among the different agents of a system, as already mentioned.

From a theoretical point of view, the argumentation can be defined as the interaction process among different arguments to reach a conclusion. This conclusion can be a statement, an action proposal, a preference, etc.

With regard to the SOX Law, it is formed by eleven titles, each covering different aspects of the Law.

Articles 302, 404 and 906, of the 67 articles contained in the SOX Law, are the most important ones because they make the management, and especially the General Director and the Financial Director, responsible for all the financial reports presented by the company.

With regard to Article 302, Corporate Responsibility for Financial Reports, the legislation in effect in the US requires companies to publish their financial results quarterly and annually.

Article 302 of the SOX Law requires the General Director and the Financial Director to personally certify the following points within the periodically published results report:

1) Certification of Revision of the Report: Personal certification of the General Director and the Financial Director that they have reviewed the report.

2) Certification of Truthfulness: Personal certification of the General Director and the Financial Director that the report does not contain any materially untrue statements or material omissions that would make it misleading.

3) Certification of Financial Exact and Truthful Data: Personal certification of the General Director and the Financial Director that financial statements and related information fairly present the financial condition and the result in all material respects.

4) Certification of Internal Controls: Personal certification of the General Director and the Financial Director that they are responsible for internal controls and have evaluated these internal controls within the previous ninety days and have reported on their findings.

5) Certification over Publication of Deviations and Frauds: Personal certification of the General Director and the Financial Director that they have informed the auditor company of any deficiency detected in the design of the internal controls and any detected fraud.

6) Certification of Significant Changes in the Internal Controls: Personal certification of the General Director and the Financial Director about any change in the design of the internal controls and about any corrective action taken to repair any detected deficiency in the internal controls.

With regard to Article 404, Revision of the Internal Controls by the Company Management, this article requires including, in the annual report where the results of the company are published, a report about the internal controls in effect within the company that contains the following points:

1) Management Responsibility over the Internal Controls: The report on the internal controls included in the annual results report has to include a statement that the management of the company is responsible for defining and maintaining the internal controls needed for a correct financial reporting process.

2) Verification and Report by the Management of the Company about the Effectiveness of the Internal Controls: The report on the internal controls included in the annual results report has to present the results of the review carried out by the management of the company about the effectiveness of the internal controls in effect within the company.

3) Revision of the Previous Report by an Authorized Auditor Company: The authorized auditor company in charge of auditing the financial results presented by the company should also audit the report from the previous point about the effectiveness of the internal controls.

With regard to Article 906, Corporate Responsibility for Financial Reports, this article overlaps with Article 302, previously explained, and reinforces the General Director’s and Financial Director’s direct responsibility for the periodical financial results of the company.

This article clearly states the sanctions for the General Director and the Financial Director in the case of inadequate or erroneous reports which do not faithfully reflect the financial situation of the company.

The problem described before is a decision-making problem with the following main characteristics:

1) Decision-making problem: in the end, a decision must be taken about whether the specific business case is compatible with this Law.

2) Decision based on evidence: this evidence will support the decision and will serve as proof before auditors and control bodies.

3) Initial non-standardized expert knowledge is needed: this Law states what should be done but not how it should be done. This means that the source of the initial knowledge should be a human expert with enough experience in keeping business cases in a SOX-compliant state.

4) Ability to learn from current court resolutions so this extra knowledge can be used in the future: some kind of learning method is needed to let the initial knowledge evolve and grow far beyond its initial state.

This Law affects every major economic or financial process in a company, such as the purchasing cycle, the financial cycle or the sales cycle. Those major cycles are divided into different processes; for example, the purchasing cycle can be divided into the suppliers’ selection process, the suppliers’ contracting process, the approval of purchase orders, and so on. This kind of structure can be modeled very well with a Multiagent System (MAS) structure. Taking into account as well that the final decision should be based on evidence, Argumentation in combination with MAS is an optimal approach to model this kind of problem.

Currently existing models using these kinds of techniques, such as MAS and Argumentation, show limitations such as:

1) They have been designed mainly to solve other types of problems such as medical, legal, negotiation, trading, education or business ones (COSSAC, CARNEADES, AAC, TAC, INTERLOC, ARGUGRID).

2) They do not have an initial expert-based SOX compliance knowledge base.

3) They do not have a learning method able to incorporate court resolutions into the initial knowledge base.

The model presented here is a novel approach to solving this kind of problem: it has a structure optimized to solve this specific problem, incorporates an initial expert knowledge base coming from the experience of a human expert, and incorporates a specific learning protocol to add current court resolutions to the initial knowledge base, letting the system evolve far beyond its initial knowledge state and increase its efficiency over time based on its accumulated experience.

This article is structured as follows: Section 2 describes the State of the Art of both relevant areas on which this article is based and states the starting point of this work. Section 3 describes the proposed model, specifying the key elements as well as the main protocols of the system. Section 4 presents a possible integration of the proposed system with a higher-level multiagent system. Sections 5 and 6 provide a clear example of the use of our proposed model on a real business case. Finally, Section 7 presents the conclusions obtained.

2. State of the Art

2.1. Theory of Argumentation in Artificial Intelligence

The Theory of Argumentation has been broadly studied and investigated over the years in the areas of Philosophy and Mathematical Logic.

Nowadays Artificial Intelligence is an important field of application of Argumentation Theory, and we can find traditional studies of this practical relationship in subjects like Decision Making, Logic Programming or Tentative Knowledge ([1] Fox, Krause & Ambler, 1992; [2] Krause et al., 1995; [3] Dimopoulos, Nebel & Toni, 1999; [4] Dung, 1995).

There are also more recent examples which show this relationship between Argumentation Theory and Artificial Intelligence, such as: [5] Besnard & Hunter, 2008; [6] Bench-Capon & Dunne, 2007; [7] Kraus, Sycara & Evenchik, 1998; [8] IEEE Intelligent Systems on Argumentation, 2007; [9] Rahwan & Simari, 2009.

There are also some other important topics under investigation nowadays which show the wide range of possibilities of this relationship, for example: 1) Computational models of argumentation, 2) Argument-based decision making, 3) Deliberation based on argumentation, 4) Persuasion based on argumentation, 5) Information seeking and inquiry based on argumentation, 6) Negotiation and conflict resolution based on argumentation, 7) Risk analysis based on argumentation, 8) Legal reasoning based on argumentation, 9) Electronic democracy based on argumentation, 10) Cooperation, coordination and team building based on argumentation, 11) Argumentation and game theory in Multiagent Systems, 12) Human-Agent argumentation, 13) Modeling of preferences in argumentation, 14) Strategic behavior in argument-based dialogues, 15) Deception, truthfulness and reputation in argumentation-based interaction, 16) Computational complexity of argumentation-based dialogues, 17) Properties of argumentation-based dialogues (success, termination, etc.), 18) Hybrid models of argumentation and 19) Implementation of Multiagent Systems based on argumentation.

There are two different approaches to automatic argumentation: 1) Abstract Argumentation and 2) Deductive Argumentation. Abstract Argumentation is focused on the coexistence of arguments without getting into the detail of their meaning. It only takes care of the attack relationships among arguments and their acceptability or not, and to which degree. One of the most important studies so far, whose concepts are still valid nowadays, is the Abstract Argumentation Systems of [4] Dung (1995). [10] Boella, Hulstijn & Torre (2005) proposed an extension of Dung’s model in which the arguments are dynamic elements not predefined in advance.

Models of Deductive Argumentation are the other approach to automatic argumentation. They are deductive models based on formulas and on Classical Logic. The arguments, in contrast to Abstract Argumentation, are complex elements that can be subdivided into elements or arguments of simpler structure. Deductive Argumentation is able to manage the complexity of the internal structure of the arguments. The key concept in this type of argumentation is logical deduction. The fundamental objective of any deductive argumentation model is to reach a conclusion based on a support formed by arguments and deductive logic reasoning. In the literature we find a recent study carried out by [5] Besnard and Hunter (2008) which is focused on Deductive Argumentation in the area of Artificial Intelligence.

Basically, Deductive Argumentation consists of managing non-evident information (information whose acceptability or truthfulness is not known) and generating arguments for or against this information, so that after a process of deductive reasoning a conclusion about its truthfulness or admissibility is reached.

There are two fundamental reasons why Argumentation Theory gains special relevance in Multiagent Systems: 1) On one hand, Argumentation Theory finds in Multiagent Systems a wide field of practical application, allowing Multiagent Systems to benefit from an entire solid formal theory with a long history, where the existing formal models in Argumentation Theory offer a wide range of possibilities in the design of this kind of system; and 2) On the other hand, Multiagent Systems find in Argumentation Theory a solid and formal base which provides these systems with a syntactic and semantic structure that helps in their design and in reaching their own objectives.

Multiagent Systems can use Argumentation Theory and its formal models for the internal reasoning of their individual agents or for shared reasoning among all the agents of the system. Shared reasoning among the agents of the system means that the agents dialogue with each other with the final aim of reaching the common, previously defined objective. The communication among the agents is driven by specific dialogue protocols and is a key point in reaching the final objective.

It is very important to remark at this point that the communication among the agents which form the Multiagent System is a key element in reaching the objectives of this system. This communication will be based on different types of dialogues. It is in this communication and in these dialogues that Multiagent Systems are closely related to Argumentation Theory, because the latter gives these dialogues a formal structure based on preexisting argumentation models.

Basically, the success of a Multiagent System consists of achieving the objective for which it was designed. The degree of success in achieving this objective depends to a great extent on fruitful communication among its agents. Thanks to Argumentation Theory, we can provide a solid formal base for this communication and the corresponding dialogues.

The design of Multiagent Systems, as well as the investigation of new formal models of argumentation, are two areas in continuous growth whose advances have a very positive impact on obtaining Multiagent Systems that are more efficient in reaching their final objective.

One of the most influential works in the communication area of Multiagent Systems within Artificial Intelligence using argumentation techniques has been the work carried out by Walton and Krabbe, which describes the basic concepts of communication dialogues and reasoning processes ([11] Walton & Krabbe, 1995). As stated by Walton and Krabbe, these are the main dialogue types: 1) Dialogues based on information seeking, 2) Dialogues based on questions, 3) Dialogues based on persuasion, 4) Dialogues based on negotiation, 5) Dialogues based on deliberation, 6) Dialogues based on dialectical battles, 7) Dialogues based on commands, 8) Dialogues based on discovery of alternatives, 9) Non-cooperative dialogues and 10) Educational dialogues.

[12] Cogan, Parsons and McBurney (2005) proposed a new type of dialogue between agents: verification dialogues. [13] Amgoud and Hameurlain (2006) proposed a model to select the right move in a dialogue between agents in terms of the type of message and the content to be transmitted. [14] Tang and Parsons (2005) designed a specific deliberation dialogue model in which the global action plan of the full Multiagent System is formed by the union of the subplans of each agent after a deliberation process with the rest of the agents.

Some other authors ([15] Amgoud, Maudet & Parsons, 2000; [16] Reed, 1998) propose modifications to the previously enumerated dialogues. In all these dialogue types, messages are exchanged among the involved agents according to several aspects such as the dialogue type, the previous knowledge of the agents, the reasoning protocol or the argumentation technique. Other authors ([17] Parsons & Wooldridge, 2003; [18] Sklar & Parsons, 2004) have identified and formally defined the different types of messages that can be used in different dialogues, for example: 1) Assertion Messages, 2) Acceptance Messages, 3) Question Messages, 4) Challenge Messages, 5) Testing Messages and 6) Answer Messages. Those messages are defined in terms of a specific semantics implemented by preconditions and postconditions.

The relationship between Argumentation Theory and Multiagent Systems is widely supported by the present scientific research community, as we can see in the following examples: 1) [19] Belesiotis, Rovatsos & Rahwan (2009) designed a dialogue model based on reasoning, deliberation and tentative knowledge to apply Argumentation Theory to situation calculus plans. 2) [20] Devereux and Reed (2009) proposed a specific model for strategic argumentation in rigorous persuasion dialogues, which pushes the concept of attacking not only the initial knowledge of the agents but also the missing knowledge that does not belong to the agent. 3) [21] Matt, Toni & Vaccari (2009) designed a model based on dominant decisions for argumentative agents. The idea behind this work is that all possible decisions provided by each agent are valued based on previously indicated preferences, seeking to maximize the final benefit. This mechanism is also a procedure to self-explain the winning decision. 4) [22] Wardeh, Bench-Capon & Coenen (2009) proposed a multi-party argument model based on the past experience of the agents to classify a specific case. This work promotes the idea that each agent uses data mining techniques and association rules to solve the case based on its own experience. 5) [23] Morge and Mancarella (2009) proposed an assumption-based argumentation model to drive the argumentation process between agents with the objective of reaching the optimal agreement among all the agents. 6) [24] Thimm (2009) proposed an argumentation model for multiagent systems based on Defeasible Logic Programming in which each agent generates supporting and opposing arguments to answer the objective question. In the end, the most feasible argument is selected to answer the initial question.

2.2. Intelligent Models Applied to SOX


Here it is shown how Information Technologies, through Artificial Intelligence, help and support decision making related to the mandates the SOX Law establishes. Some of these studies predate the SOX Law; they showed the existing concern about whether companies published truthful financial reports and suggested several intelligent systems to support financial auditors in their decision-making processes to state whether those reports were truthful or not.

[25] Changchit, Holsapple & Madden (1999), before the SOX Law, remarked on the concern about truthful financial reports of companies and on the positive impact of using intelligent systems to identify problems in the internal controls of those companies. It constitutes a good example of interaction between Artificial Intelligence and the financial area. [26] Meservy (1986) designed an expert system to audit the set of internal controls of companies. This work also predates the publication of the SOX Law.

[27] O’Callaghan (1994) suggested an Artificial Intelligence application based on neural networks with backpropagation to simulate the review of the fixed assets of a company using a system of internal controls based on the COSO (Committee of Sponsoring Organizations of the Treadway Commission) model. A recent work by [28] Liu, Tang & Song (2009) presents an evaluation model of internal controls based on fuzzy logic, pattern classification and data mining with the objective of checking the effectiveness of the internal controls of companies.

[29] Kumar & Liu (2008) designed a model that uses pattern recognition techniques to audit the internal controls and processes of the company. [30] Changchit & Holsapple (2004) designed an expert system for the evaluation of internal controls by the management of the company. The final objective is to evaluate the effectiveness of the structure of the internal controls of the company.

[31] Korvin, Shipley & Omer (2004) published a study about the possible internal controls that can be defined within a computer system focused on the financial management of a company, and about valuing, using fuzzy set logic, the risks of certain specific threats. [32] Deshmukh & Talluru (1998) designed another model to value the risks of specific threats to the internal controls of the company. This work is based on fuzzy set theory and lets the management of the company decide whether their internal controls are effective and take appropriate actions.

In Reference [33], Fanning & Cogger (1998) proposed a fraud detection model based on neural networks, using as input the data published by the company in its periodical results. It is another example in which Artificial Intelligence provides its tools to the financial area. Fanning and Cogger based their study on two previous studies which applied neural network techniques to Economics and Finance ([34] Coakley, Gammill & Brown, 1995; [35] Fanning & Cogger, 1994) and combined them with traditional statistical techniques to create their model for predicting fraudulent financial reports.

In Reference [36], Welch, Reeves & Welch (1998) proposed a specific model to search for financial fraud and support audit decisions based on the use of genetic algorithms. This work is focused on fraud research on government suppliers, and the model looked for specific fraud patterns to identify evidence of these frauds. In Reference [37], Srivastava, Dutta & Johns (1996) proposed a specific model to evaluate and plan audits using belief functions based on intelligent expert systems.

In Reference [38], Sarkar, Sriram & Joykutty (1998) developed an expert system based on belief networks, using probabilistic models in the inference process.

It is necessary to remark that the concern about truthful and clear financial reports existed before the SOX Law, but this Law establishes a clear legal framework with very well defined responsibilities.

With the SOX Law in effect, companies are forced to establish certain internal controls within key processes of the company to give visibility and transparency to all the operations carried out. Due to today’s high technological level and the large volumes of information managed, the implementation of internal controls in the computer systems used by these companies becomes mandatory and necessary.

For this reason, it is necessary to implement internal controls within the information systems used by the areas of Purchasing, Sales, and Finance and Control. These internal controls have been transformed into new requirements or functionalities that any information system must have to be compatible with the SOX Law in effect.

The main objective of these internal controls is to monitor purchase, sales or financial transactions so that every operation is visible to the management of the company and is carried out according to the established rules and processes. The General Director and the Financial Director are the persons responsible for certifying before the control bodies the truthfulness and transparency of all the operations, and that no fraudulent hidden operations have been carried out with the corresponding negative impact for the shareholders of the company.

In relation to the model designed here, after reviewing different international bibliographical sources and to the best of our knowledge, no publication has been found that uses Multiagent Systems and Argumentation Theory in the implementation of SOX internal controls with the objective of identifying whether the Purchase Orders Creation Process of a specific business case is compatible with the SOX Law, supporting auditors and companies in taking appropriate decisions about SOX compliance.

3. Proposed Model

The objective of the present work is to design an argumentative SOX compliance decision support system over the Purchase Orders Creation Process of the products and services Purchasing Cycle, using both Artificial Intelligence and Argumentative Negotiation technologies, to support companies in identifying non-SOX-compliant situations before it is too late and to support financial auditors in deciding whether the periodical economic and financial results published by those companies comply with the SOX Law.

It is also explained how this system can be incorporated into a higher-level multiagent intelligent expert system to cover the full financial purchasing cycle.

In general, in any company, there are seven different key financial cycles: 1) Purchasing Cycle, 2) Inventory Cycle, 3) Sales Cycle, 4) Employee Payment Cycle, 5) Accounting Cycle, 6) Information Technologies Cycle (as support to the other financial cycles), and 7) Services Outsourcing Cycle.

The economic and financial results published by a company will be compatible with the SOX Law if all the economic and financial operations that make up these results are also SOX compliant. In turn, all those economic and financial operations are SOX compliant if all the projects or business cases that compose them are SOX compliant too. A specific business case will be SOX compliant if all the financial cycles that constitute it are compatible with the SOX Law.

The key processes that compose a typical Purchasing Cycle are usually: 1) Suppliers’ Selection, 2) Suppliers’ Contracting, 3) Approval of Purchase Orders, 4) Creation of Purchase Orders, 5) Documentary Receipt of Orders, 6) Imports, 7) Checking of Invoices, 8) Approval of Invoices without Purchase Order and 9) Suppliers’ Maintenance. The Purchasing Cycle of a certain business case will be compatible with the SOX regulation if all its processes are SOX compliant. The proposed model is focused on the Purchase Orders Creation Process of the Purchasing Cycle and its compatibility with the SOX regulation.
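The compliance chain described above (published results are compliant only if every business case, every cycle of each case and every process of each cycle is compliant) can be sketched as nested conjunctions. This is a minimal illustration only; the function and attribute names below are assumptions, not the paper's formal definitions.

# Illustrative encoding of the compliance chain; all names are assumptions.
def process_compliant(process) -> bool:
    ...  # decided per process, e.g. by the Purchase Orders Creation agent

def cycle_compliant(cycle) -> bool:
    return all(process_compliant(p) for p in cycle.processes)

def business_case_compliant(case) -> bool:
    return all(cycle_compliant(c) for c in case.cycles)

def results_compliant(results) -> bool:
    return all(business_case_compliant(b) for b in results.business_cases)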

The decision support system designed here is implemented by an argumentative intelligent expert agent whose objective is to help companies and auditors decide whether the Purchase Orders Creation Process followed in the analyzed business case is compatible with the SOX Law and, as a second objective, to provide a measure of the quality of that process as carried out in the analyzed business case.

The agent has been designed with a specific structure optimized to reach the final objective of the system. These are the elements that compose this structure (a minimal structural sketch in code follows the list):

1) Agent’s Objective.

2) Initial Beliefs or Base Knowledge of the Agent.

3) Information Seeking Dialog Protocol.

4) Facts Valuation Protocol based on Agent’s Beliefs.

5) Agent’s Valuation Matrix over the Business Case Facts based on its Beliefs or Knowledge Base.

6) Intra-Agent Decision Making Protocol (Intra-Agent Reasoning Process on SOX Compatibility based on Deductive Argumentation. Conclusive Individual Phase of the Agent).

7) Dynamic Knowledge Learning Protocol.
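As an illustration only, and not the paper's formal specification, this structure can be sketched as a data type with one method per protocol; all field and method names below are assumptions.

# Minimal sketch of the agent's structure; names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Belief:
    name: str               # e.g. "Creation of Purchase Orders"
    quality_weight: float   # relevance for the overall quality score
    sox_relevant: bool      # whether the belief counts for SOX compatibility

@dataclass
class PurchaseOrdersCreationAgent:
    objective: str = "Verify SOX compliance of the Purchase Orders Creation Process"
    beliefs: list = field(default_factory=list)    # initial knowledge base (Belief items)

    def information_seeking_dialog(self, business_case): ...   # gather facts per belief
    def facts_valuation(self, facts): ...                       # value facts against beliefs
    def intra_agent_decision(self, valuation_matrix): ...       # deductive-argumentation decision
    def dynamic_knowledge_learning(self, resolution): ...       # incorporate court resolutions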

3.1. Agent’s Objective

The agent’s main objective is to verify whether the Purchase Orders Creation Process of the business case being analyzed is compatible with the SOX legislation.

As a secondary objective, it will provide a measure of the quality of that process as carried out in the analyzed business case. For both objectives, it will be checked whether every belief in the initial beliefs base matches a fact in the facts base of the business case and, in case of matching, to what degree (the quantitative value of this matching).

3.2. Beliefs or Base Knowledge

This section gathers the initial knowledge of the agent as a set of beliefs. It represents the knowledge the agent has of the specific analyzed process, without taking into account any other possible knowledge derived from experience and learning. The above mentioned beliefs are enumerated and their characteristics indicated.

1) Creation of Purchase Orders:

This is a key belief of the knowledge base of this agent. The existence or not of a fact in the analyzed business case that matches this belief will be a key point for SOX compatibility as well as for the final valuation of the quality of the Purchase Orders Creation Process.

This is a critical factor from the SOX legislation point of view. SOX legislation always looks for transparency in all business cases of the company, and it also expects from the company that all its decisions pursue the main interest of the investors according to the Law.

This belief mainly refers to verifying whether, in the analyzed business case, the purchase order creation has been carried out according to the following guidelines: 1) the prior existence of the approval of that purchase order, 2) the purchase order has been created before any work is carried out or any goods are received, 3) the pricing, terms and conditions indicated in the purchase order document are the ones reflected in the contract, and 4) once the services or goods have been received, the person who receives them on behalf of the company records this reception in a written and signed document for further review.

2) Monitoring of Purchase Orders:

This is a key belief of the knowledge base of this agent. The existence or not of a fact in the analyzed business case that matches this belief will be a key point for SOX compatibility as well as for the final valuation of the quality of the Purchase Orders Creation Process.

This is a critical factor from the SOX legislation point of view. SOX legislation always looks for transparency in all business cases of the company. This belief mainly analyzes whether there is a periodical review of the purchase orders to ensure that the purchase order creation process is the right one and that there is no purchasing without the specific prior purchase order.

3.3. Information Seeking Dialog Protocol

This protocol is designed to let the agent interrogate the analyzed business case, looking for relevant information to be analyzed later on to determine, on the basis of the initial knowledge of the agent, the degree of quality of the process followed in that business case, as well as to assess whether the above mentioned process has complied with the SOX regulation. The agent questions the business case according to the beliefs in its initial knowledge, and for every question the agent gathers from the business case an answer with the detailed information needed for each belief.

This protocol is designed with two ideas in mind: 1) one of the most important elements of an agent is its initial knowledge, formed by its beliefs, and 2) a business case can be considered as a set of facts which constitute all the information about how things were done throughout the life of that business case. The aim of this protocol is to capture, for every belief of the agent, the corresponding fact in the facts base of the business case. Once captured, it is necessary to see how much the fact is in line with the specific belief of the agent, both from a quality point of view and from a SOX compliance point of view.

Basically, this protocol is based on the idea that the agent asks the business case, “how did you do this?”, and the business case answers the agent with the “arguments” or “evidences” of how it did it, evidence that will later be analyzed by the agent. It is necessary to keep in mind that the agent has a clear idea, based on its initial knowledge, of how things should be done at every stage of the business case, and that what the agent is looking for is to analyze whether, within the business case, things were done as they should have been.

This Information Seeking Dialog Protocol constitutes a phase in which the agent individually explores the whole documentation of the analyzed business case with the objective of compiling as much evidence as possible on how things were done. Those beliefs, as already commented, constitute the initial or base knowledge of the agent and represent the fundamental characteristics of the process that the agent is analyzing.

The Purchase Orders Creation Agent analyzes the Purchase Orders Creation Process, and in the above mentioned process there is a series of key characteristics. These kinds of details are “beliefs” of the agent and, more importantly, within these beliefs, within the agent’s initial knowledge, the agent has a clear idea of how things should be done.

When the agent analyzes the business case with this protocol, it compiles all the facts of the business case which match its beliefs. It can happen that, for a certain belief, no fact exists in the facts base of the business case, denoting steps inside the business case that should have been carried out and were not. With this protocol, the agent will take this into consideration in later stages when valuing the quality of the process and taking the appropriate decision about SOX compatibility.

The inspection of the business case by the agent is carried out through a mediating agent which facilitates the communication between both. This mediating agent represents the person responsible for the business case in the company and, for each question of the agent analyzing the case, can search the business case documentation and provide a response to the formulated question.

Figure 1 presents the protocol by which the agent questions the analyzed business case with the objective of gathering the information needed about its beliefs. This collected information will allow the initial beliefs to be valued from the SOX compatibility point of view and from the quality point of view.
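A minimal sketch of this protocol is shown below, assuming the mediating agent exposes a hypothetical answer(question) lookup over the business case documentation; all names are illustrative, not the paper's specification.

# Minimal sketch of the Information Seeking Dialog Protocol; `mediator` stands
# for the person responsible for the business case and its documentation.
def information_seeking_dialog(agent_beliefs, mediator):
    """Return a facts base: one entry per belief, None when no fact matches."""
    facts_base = {}
    for belief in agent_beliefs:
        question = f"How was '{belief.name}' carried out in this business case?"
        evidence = mediator.answer(question)   # documentation excerpt or None
        facts_base[belief.name] = evidence     # missing evidence is kept explicit
    return facts_base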

The next section shows how to value these collected facts.

3.4. Facts Valuation Protocol Based on Agent’s Beliefs

This protocol allows the agent to value the facts previously gathered as evidence with the Information Seeking Dialog Protocol. The valuation of this evidence is carried out from two perspectives: 1) quality of the process and 2) compatibility with the SOX legislation. Two weight factors have been assigned to each belief, for quality and for SOX compatibility respectively. The quality weight denotes the relevance of that belief in the global valuation of the quality of the whole analyzed process. The SOX compatibility weight only denotes whether this specific belief is relevant from a SOX compliance point of view. The quality weight is used numerically to calculate the final quality of the specific analyzed process. The SOX compatibility weight is not used numerically; it only indicates whether that belief is relevant for compatibility with the SOX legislation.

Figure 1. Information seeking dialog protocol.

Regarding the valuation of quality, there will be numeric values in the range [−10, 10], where −10 denotes a penalization in the valuation of quality and 10 denotes the maximum quality value. Regarding the valuation of SOX compatibility, the possible values are boolean logical values: true (t) or false (f). True denotes that this belief matches a fact in the facts base of the analyzed business case and therefore the process analyzed by this agent, regarding that belief, is compatible with the SOX legislation. A false value means the opposite.
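The following sketch illustrates this valuation step under stated assumptions: the belief fields and the placeholder scoring helpers stand in for the human-expert criteria of Tables 2 and 3 and are not the paper's actual rules.

# Minimal sketch of the Facts Valuation Protocol; the helpers below are
# placeholders for the expert criteria of Tables 2 and 3.
def value_quality(belief, fact):
    """Placeholder: quality value in [-10, 10]; -10 penalizes a missing fact."""
    return -10 if fact is None else 10

def value_sox(belief, fact):
    """Placeholder SOX check: a matching fact must exist."""
    return fact is not None

def facts_valuation(beliefs, facts_base):
    """Return one valuation row per belief (quality and SOX values)."""
    matrix = []
    for belief in beliefs:
        fact = facts_base.get(belief.name)                 # None if no matching fact
        matrix.append({
            "belief": belief.name,
            "quality_weight": belief.quality_weight,
            "quality_value": value_quality(belief, fact),  # range [-10, 10]
            "sox_weight": 1 if belief.sox_relevant else 0,
            "sox_value": ("t" if value_sox(belief, fact) else "f")
                         if belief.sox_relevant else "NA",
        })
    return matrix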

This is an example (Table 1):

This agent has two key beliefs composing its initial base knowledge: 1) Creation of Purchase Orders and 2) Monitoring of Purchase Orders. The valuation protocol for each of these beliefs is shown in Tables 2 and 3 below:

1) Creation of Purchase Orders:

2) Monitoring of Purchase Orders:

3.5. Agent’s Valuation Matrix over the Business Case Facts Based on Its Beliefs or Knowledge Base

This section shows in table format (Table 4) all the valuations gathered by the previous Facts Valuation Protocol based on the agent’s beliefs over each of the facts of the analyzed business case.

It is necessary to highlight, as indicated before, that the SOX compatibility weights indicate whether a belief is relevant from the SOX compatibility point of view. If it is a relevant belief for SOX compatibility, this is indicated with a unitary weight (1), and its value according to the previous protocol will be true (t), meaning SOX_COMPLIANT, or false (f), meaning NON_SOX_COMPLIANT. If it is an irrelevant belief for SOX compatibility, its weight will be null (0), and its value is not relevant (not applicable, NA).

Table 1. Facts valuation protocol based on agent’s beliefs.

Table 2. Purchase orders creation valuation protocol.

Table 3. Purchase orders monitoring valuation protocol.

The final valuation of SOX compatibility by the whole agent over the specific process being analyzed will be calculated by an inference rule described in more detail in the next protocol (the Intra-Agent Decision Making Protocol). The final valuation of the quality of the process analyzed by this agent will be given by the weighted sum of all the quality values obtained for each of the analyzed facts of the business case.

Table 5 describes in more detail the Valuation Matrix over the Facts for the Purchase Orders Creation Process.
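As a closing sketch, the final quality score can be computed as the weighted sum described above; the SOX aggregation shown here (all SOX-relevant beliefs valued true) is only an assumed stand-in for the inference rule of the Intra-Agent Decision Making Protocol, not that protocol itself.

# Aggregation sketch over the valuation matrix produced above; names are
# illustrative and consistent with the previous sketches.
def aggregate(valuation_matrix):
    quality_score = sum(row["quality_weight"] * row["quality_value"]
                        for row in valuation_matrix)
    sox_compliant = all(row["sox_value"] == "t"
                        for row in valuation_matrix if row["sox_weight"] == 1)
    return quality_score, sox_compliant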

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] J. Fox, P. Krause and S. Ambler, “Arguments, Contradictions and Practical Reasoning,” Proceedings of the 10th European Conference on Artificial Intelligence (ECAI-92), Vienna, 3-7 August 1992, pp. 623-627.
[2] P. Krause, S. Ambler, M. Elvang-Goransson and J. Fox, “A Logic of Argumentation for Reasoning under Uncertainty,” Computational Intelligence, Vol. 11, No. 1, 1995, pp. 113-131. doi:10.1111/j.1467-8640.1995.tb00025.x
[3] Y. Dimopoulos, B. Nebel and F. Toni, “Preferred Arguments Are Harder to Compute than Stable Extensions,” Proceedings of the 16th International Joint Conference on Artificial Intelligence (IJCAI-99), Stockholm, 31 July-6 August 1999, pp. 36-41.
[4] P. M. Dung, “On the Acceptability of Arguments and Its Fundamental Role in Nonmonotonic Reasoning, Logic Programming and N-Person Games,” Artificial Intelligence, Vol. 77, No. 2, 1995, pp. 321-357. doi:10.1016/0004-3702(94)00041-X
[5] P. Besnard and A. Hunter, “Elements of Argumentation,” The MIT Press, Cambridge, 2008.
[6] T. J. M. Bench-Capon and P. E. Dunne, “Argumentation in Artificial Intelligence,” Artificial Intelligence, Vol. 171, No. 10-15, 2007, pp. 619-641. doi:10.1016/j.artint.2007.05.001
[7] S. Kraus, K. Sycara and A. Evenchik, “Reaching Agreements through Argumentation: A Logical Model and Implementation,” Artificial Intelligence, Vol. 104, No. 1-2, 1998, pp. 1-69. doi:10.1016/S0004-3702(98)00078-2
[8] I. Rahwan and P. McBurney, “Argumentation Technology,” IEEE Intelligent Systems, Vol. 22, No. 6, 2007, pp 21-23. doi:10.1109/MIS.2007.109
[9] I. Rahwan and G. Simari, “Argumentation in Artificial Intelligence,” Springer, New York, 2009.
[10] G. Boella, J. Hulstijn and L. Torre, “A Logic of Abstract Argumentation,” In: S. Parsons, N. Maudet, P. Moraitis and I. Rahwan, Eds., Argumentation in Multi-Agent Systems (ArgMAS 2005), Vol. 4049, Springer, Berlin, 2006, pp. 29-41.
[11] D. N. Walton and C. W. Krabbe, “Commitment in Dialogue: Basic Concepts of Interpersonal Reasoning,” Suny Press, Albany, 1995.
[12] E. Cogan, S. Parsons and P. McBurney, “New Types of Inter-Agent Dialogues,” In: S. Parsons, N. Maudet, P. Moraitis and I. Rahwan, Eds., Argumentation in MultiAgent Systems (ArgMAS 2005), Vol. 4049, Springer, Berlin, 2006, pp. 154-168.
[13] L. Amgoud and N. Hameurlain, “An ArgumentationBased Approach for Dialog Move Selection,” In: N. Maudet, S. Parsons and I. Rahwan, Eds., Argumentation in Multi-Agent Systems (ArgMAS 2006), Vol. 4766, Springer, Berlin, 2007, pp. 128-141.
[14] Y. Tang and S. Parsons, “Argumentation-Based MultiAgent Dialogues for Deliberation,” In: S. Parsons, N. Maudet, P. Moraitis and I. Rahwan, Eds., Argumentation in Multi-Agent Systems (ArgMAS 2005), Vol. 4049, Springer, Berlin, 2006, pp. 229-244.
[15] L. Amgoud, N. Maudet and S. Parsons, “Modelling Dialogues using Argumentation,” Proceedings of the 4th International Conference on Multi-Agent Systems (ICMAS-2000), Boston, 10-12 July 2000, pp. 31-38.
[16] C. Reed, “Dialogue Frames in Agent Communication,” Proceedings of the 3rd International Conference on Multi Agent Systems (ICMAS-98) Paris, 3-7 July 1998, pp. 246-253.
[17] S. Parsons, M. Wooldridge and L. Amgoud, “On the Outcomes of Formal Inter-Agent Dialogues,” ACM Press, New York, 2003.
[18] E. Sklar and S. Parsons, “Towards the Application of Argumentation-Based Dialogues for Education,” Proceedings of the 3rd International Conference on Autonomous Agents and Multi-Agent Systems, New York, 23 July 2004, pp. 1420-1421.
[19] A. Belesiotis, M. Rovatsos and I. Rahwan, “A Generative Dialogue System for Arguing about Plans in Situation Calculus,” In: P. McBurney, I. Rahwan, S. Parsons and N. Maudet, Eds., Argumentation in Multi-Agent Systems (ArgMAS 2009), Vol. 6057, Springer, Berlin, 2010, pp. 23-41.
[20] J. Devereux and C. Reed, “Strategic Argumentation in Rigorous Persuasion Dialogue,” In: P. McBurney, I. Rahwan, S. Parsons and N. Maudet, Eds., Argumentation in Multi-Agent Systems (ArgMAS 2009), Vol. 6057, Springer, Berlin, 2010, pp. 94-113.
[21] P.-A. Matt, F. Toni and J. Vaccari, “Dominant Decisions by Argumentation Agents,” In: P. McBurney, I. Rahwan, S. Parsons and N. Maudet, Eds., Argumentation in MultiAgent Systems (ArgMAS 2009), Vol. 6057, Springer, Berlin, 2010, pp. 42-59.
[22] M. Wardeh, T. Bench-Capon and F. Coenen, “Multi-Party Argument from Experience,” In: P. McBurney, I. Rahwan, S. Parsons and N. Maudet, Eds., Argumentation in Multi-Agent Systems (ArgMAS 2009), Vol. 6057, Springer, Berlin, 2010, pp. 216-235.
[23] M. Morge and P. Mancarella, “Assumption-Based Argumentation for the Minimal Concession Strategy,” In: P. McBurney, I. Rahwan, S. Parsons and N. Maudet, Eds., Argumentation in Multi-Agent Systems (ArgMAS 2009), Vol. 6057, Springer, Berlin, 2010, pp. 114-133.
[24] M. Thimm, “Realizing Argumentation in Multi-Agent Systems Using Defeasible Logic Programming,” In: P. McBurney, I. Rahwan, S. Parsons and N. Maudet, Eds., Argumentation in Multi-Agent Systems (ArgMAS 2009), Vol. 6057, Springer, Berlin, 2010, pp. 175-194.
[25] C. Changchit, C. Holsapple and D. Madden, “Positive Impacts of an Intelligent System on Internal Control Problem Recognition,” Proceedings of the 32nd Hawaii International Conference on System Sciences, Maui, 5-8 January 1999, p. 10.
[26] R. Meservy, “Auditing Internal Controls: A Computational Model of the Review Process (Expert Systems, Cognitive, Knowledge Acquisition, Validation, Simulation),” PhD Thesis, University of Minnesota, Minneapolis, 1985.
[27] S. O’Callaghan, “An Artificial Intelligence Application of Backpropagation Neural Networks to Simulate Accountants’ Assessments of Internal Control Systems Using COSO Guidelines,” PhD Thesis, University of Cincinnati, Cincinnati, 1994.
[28] F. Liu, R. Tang and Y. Song, “Information Fusion Oriented Fuzzy Comprehensive Evaluation Model on Enterprises’ Internal Control Enviroment,” Proceedings of the 2009 Asia-Pacific Conference on Information Processing, Shenzhen, 18-19 July 2009, pp. 32-34. doi:10.1109/APCIP.2009.16
[29] A. Kumar and R. Liu, “A Rule-Based Framework Using Role Patterns for Business Process Compliance,” In: N. Bassiliades, G. Governatori and A. Paschke, Eds., Proceedings of the International Symposium on Rule Representation, Interchange and Reasoning on the Web, Vol. 5321, Orlando, 30-31 October 2008, pp. 58-72. doi:10.1007/978-3-540-88808-6_9
[30] C. Changchit and C. W. Holsapple, “The Development of an Expert System for Managerial Evaluation of Internal Controls,” Intelligent Systems in Accounting, Finance and Management, Vol. 12, No. 2, 2004, pp. 103-120. doi:10.1002/isaf.246
[31] A. Korvin, M. Shipley and K. Omer, “Assessing Risks Due to Threats to Internal Control in a Computer-Based Accounting Information System: A Pragmatic Approach Based on Fuzzy Set Theory,” Intelligent Systems in Accounting, Finance and Management, Vol. 12, No. 2, 2004, pp. 139-152. doi:10.1002/isaf.249
[32] A. Deshmukh and L. Talluru, “A Rule-Based Fuzzy Reasoning System for Assesing the Risk of Management Fraud,” Intelligent Systems in Accounting, Finance & Management, Vol. 7, No. 4, 1998, pp. 223-241. doi:10.1002/(SICI)1099-1174(199812)7:4%3C223::AID-ISAF158%3E3.0.CO;2-I
[33] K. M. Fanning and K. O. Cogger, “Neural Network Detection of Management Fraud Using Published Financial Data,” International Journal of Intelligent Systems in Accounting, Finance & Management, Vol. 7, No. 1, 1998, pp. 21-41. doi:10.1002/(SICI)1099-1174(199803)7:1%3C21::AID-ISAF138%3E3.0.CO;2-K
[34] J. Coakley, L. Gammill and C. Brown, “Artificial Neural Networks in Accounting and Finance,” Oregon State University, Corvallis, 1995.
[35] K. M. Fanning and K. O. Cogger, “A Comparative Analysis of Artificial Neural Networks Using Financial Distress Prediction,” International Journal of Intelligent Systems in Accounting, Finance and Management, Vol. 3, 1994, pp. 241-252.
[36] O. J. Welch, T. E. Reeves and S. T. Welch, “Using a Genetic Algorithm-Based Classifier System for Modeling Auditor Decision Behaviour in a Fraud Setting,” International Journal of Intelligent Systems in Accounting, Finance and Management, Vol. 7, No. 3, 1998, pp. 173-186. doi:10.1002/(SICI)1099-1174(199809)7:3<173::AID-ISAF147>3.0.CO;2-5
[37] R. P. Srivastava, S. K. Dutta and R. W. Johns, “An Expert System Approach to Audit Planning and Evaluation in the Belief-Function Framework,” International Journal of Intelligent Systems in Accounting, Finance and Management, Vol. 5, No. 3, 1996, pp. 165-183.
[38] S. Sarkar, R. S. Sriram and S. Joykutty, “Belief Networks for Expert System Development in Auditing,” International Journal of Intelligent Systems in Accounting, Finance and Management, Vol. 5, No. 3, 1998, pp. 147-163. doi:10.1002/(SICI)1099-1174(199609)5:3<147::AID-ISAF108>3.0.CO;2-F
[39] M. Capobianco, C. Chesñevar and G. R. Simari, “An Argument-Based Framework to Model an Agent’s Beliefs in a Dynamic Environment,” In: I. Rahwan, P. Moraitis and C. Reed, Eds., Argumentation in Multi-Agent Systems (ArgMAS 2004), Vol. 3366, Springer, Berlin, 2005, pp. 95-110.
[40] T. Fukumoto and H. Sawamura, “Argumentation-Based Learning,” In: N. Maudet, S. Parsons and I. Rahwan, Eds., Argumentation in Multi-Agent Systems (ArgMAS 2006), Vol. 4766, Springer, Berlin, 2007, pp. 17-35.
[41] D. Capera, J. P. Georgé, M. P. Gleizes and P. Glize, “Emergence of Organisations, Emergence of Functions,” AISB03 Convention, 2003.
[42] R. Razavi, J. Perrot and N. Guelfi, “Adaptive Modeling: An Approach and a Method for Implementing Adaptive Agents,” Massively Multi-Agent Systems, Vol. 3446, 2005, pp. 136-148.
[43] D. Weyns, K. Schelfthout, T. Holvoet and O. Glorieux, “Role Based Model for Adaptive Agents,” BASYS04 Convention, 2004.
[44] F. Zambonelli, N. R. Jennings and M. Wooldridge, “Developing Multiagent Systems: The Gaia Methodology,” ACM Transactions on Software Engineering and Methodology, Vol. 12, No. 3, 2003, pp. 317-370.
[45] S. Ontañon and E. Plaza, “Arguments and Counterexamples in Case-Based Joint Deliberation,” In: N. Maudet, S. Parsons and I. Rahwan, Eds., Argumentation in Multi-Agent Systems (ArgMAS 2006), Vol. 4766, Springer, Berlin, 2007, pp. 36-53.
[46] S. Parsons and E. Sklar, “How Agents Alter Their Beliefs after an Argumentation-Based Dialogue,” In: S. Parsons, N. Maudet, P. Moraitis and I. Rahwan, Eds., Argumentation in Multi-Agent Systems (ArgMAS 2005), Vol. 4049, Springer, Berlin, 2006, pp. 297-312.
[47] A. Kakas, N. Maudet and P. Moraitis, “Layered Strategies and Protocols for Argumentation-Based Agent Interaction,” In: I. Rahwan, P. Moraitis and C. Reed, Eds., Argumentation in Multi-Agent Systems (ArgMAS 2004), Vol. 3366, Springer, Berlin, 2005, pp. 64-77.
[48] S. Rodriguez, Y. de Paz, J. Bajo and J. M. Corchado, “Social-Based Planning Model for Multiagent Systems,” Expert Systems with Applications, Vol. 38, No. 10, 2011, pp. 13005-13023. doi:10.1016/j.eswa.2011.04.101
[49] J. M. Corchado and R. Laza, “Constructing Deliberative Agents with Case-Based Reasoning Technology,” International Journal of Intelligent Systems, Vol. 18, No. 12, 2003, pp. 1227-1241. doi:10.1002/int.10138
[50] J. M. Corchado, R. Laza, L. Borrajo, J. C. Yanes and M. Valiño, “Increasing the Autonomy of Deliberative Agents with a Case-Based Reasoning System,” International Journal of Computational Intelligence and Applications, Vol. 3, No. 1, 2003, p. 101. doi:10.1142/S1469026803000823
[51] M. Esteva, J.-A. Rodríguez-Aguilar, C. Sierra, P. Garcia and J. L. Arcos, “On the Formal Specifications of Electronic Institutions,” In: F. Dignum and C. Sierra, Eds., Argumentation in Multi-Agent Systems (ArgMAS 2001), Vol. 1991, Springer, Berlin, 2001, pp. 126-147. doi:10.1007/3-540-44682-6_8
[52] J. F. Hübner, J. S. Sichman and O. Boissier, “Using the MOISE+ for a Cooperative Framework of MAS Reorganisation,” In: A. L. C. Bazzan and S. Labidi, Eds., Advances in Artificial Intelligence-SBIA 2004, Vol. 3171, Springer, Berlin, 2004, pp. 506-515.
[53] H. Van D. Parunak and J. J. Odell, “Representing Social Structures in UML,” In: M. J. Wooldridge, G. Weiß and P. Ciancarini, Eds., Agent-Oriented Software Engineering II, Vol. 2222, Springer, Berlin, 2002, pp. 1-16.
[54] M. Morge and P. Mancarella, “The Hedgehog and the Fox. An Argumentation-Based Decison Support System,” Proceedings of the 4th International Workshop on Argumentation in Multi-Agent Systems (ArgMAS 2007), Springer, Berlin, 2008, pp. 55-68.
