[CostAT] Coherence-based argumentation models for normative agents

Abstract:
In this talk coherence-based models are proposed as an alternative to argumentation models for the reasoning of normative agents and normative deliberation. The model is based on Thagard's theory of cognitive coherence and exploits the coherence relations that exist between the claims and conclusions of arguments. A coherence-based model is intended to introduce more flexibility in the process of deliberation and agreement generation among normative agents. The basic coherence philosophy, and what makes it interesting in the context of normative agents that deliberate to regulate a domain of interest, are discussed.

This paper shows the application of coherence models to an argumentation model in a normative, regulated environment. I'm interested not in this particular application, but in the coherence theory (Thagard).

Coherence studies associations between pieces of information. It tries to separate information into sets that mutually support each other. In a way, it can be considered a constraint satisfaction problem.

Different types of coherence can be identified depending on the type of information: deductive, explanatory, deliberative, analogical or conceptual. Thagard's model is a model of deductive coherence. It can be considered a constraint satisfaction problem, but the main difference is that it does not try to find the optimal partition (finding the exact solution is not required).

Coherence applied to argumentation treats positively related information as supporting arguments and negative weights as attacks on a claim.
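To make this concrete for myself, here is a toy sketch (my own, with made-up elements and weights, not Thagard's actual connectionist algorithm) of coherence as constraint satisfaction: partition the elements into accepted and rejected so that the total weight of satisfied constraints is maximized.

```python
from itertools import product

# Toy coherence-as-constraint-satisfaction sketch (hypothetical data).
# A positive constraint (w > 0) between two elements is satisfied when
# both land on the same side of the partition (supporting information);
# a negative one (w < 0) when they land on opposite sides (attack).
elements = ["h1", "h2", "h3", "e1"]
constraints = {
    ("h1", "e1"): 1.0,   # h1 explains e1 -> positive
    ("h2", "e1"): 0.8,   # h2 also explains e1
    ("h1", "h2"): -1.0,  # h1 and h2 contradict -> negative
    ("h2", "h3"): 0.5,
}

def coherence(accepted):
    total = 0.0
    for (x, y), w in constraints.items():
        same_side = (x in accepted) == (y in accepted)
        if (w > 0 and same_side) or (w < 0 and not same_side):
            total += abs(w)
    return total

# Exhaustive search over all partitions; Thagard's own algorithms are
# approximate, which fits the remark above that the optimal partition
# is not required.
best, best_score = None, -1.0
for bits in product([0, 1], repeat=len(elements)):
    accepted = {e for e, b in zip(elements, bits) if b}
    score = coherence(accepted)
    if score > best_score:
        best, best_score = accepted, score

print(best, best_score)
```

Note how the rival explanations h1 and h2 get separated: accepting one pushes the other out, which is exactly the support/attack reading above.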

Problem (general): how are the coherence weights calculated? Well, it was addressed in the questions: roughly, a weight depends on the number of arguments supporting a hypothesis.

Something interesting in the conclusions: the model can capture different types of agents (utility maximizers, norm abiders, altruistic...). What about different personalities? And a possibility for us: the introduction of context as part of the future work.

[AT workshop] Session 2

On the use of argumentation in agreement technologies
(Henry Prakken)

Agents need argumentation (i) for their internal reasoning and (ii) for their interaction with other agents. He explains the basics of the argumentation process: argument attacks. The state of the dialogue can be modeled as a graph colored by labeling arguments in and out (Dung, 1995). And there is a sound and complete game that determines whether an argument is justified without having to compute the entire network: an argument A is justified when there is a winning strategy for A following the game rules.
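As a reminder to myself, the in/out coloring can also be computed directly by iterating Dung's grounded labeling to a fixpoint; a minimal sketch with a hypothetical three-argument chain (C attacks B, B attacks A), not an example from the talk:

```python
# Minimal grounded-labeling sketch (toy graph): an argument is IN when
# all its attackers are OUT (vacuously true for unattacked arguments),
# and OUT when some attacker is IN. Iterate until nothing changes; any
# unlabeled remainder would be "undecided".
args = {"A", "B", "C"}
attacks = {"B": {"A"}, "C": {"B"}}  # attacker -> set of attacked arguments

def attackers(x):
    return {a for a, targets in attacks.items() if x in targets}

label = {}
changed = True
while changed:
    changed = False
    for x in args - set(label):
        if all(label.get(a) == "out" for a in attackers(x)):
            label[x] = "in"
            changed = True
        elif any(label.get(a) == "in" for a in attackers(x)):
            label[x] = "out"
            changed = True

print(label)  # C is unattacked, so C is in; hence B is out and A is in
```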

Problem: it is assumed that all information is centralized and static (a single theory, a single KB), so dialogue game systems are developed. He's using the Walton & Krabbe dialogue types (without eristic :-) I've seen this a lot of times already.

An interesting thing: blocking behavior (always asking why). It can be solved by using sanctions:
social sanctions (I won't talk to you any more)
shift of the burden of proof by a third party (referee): q since r // why r? // referee: you must defend not-r

I already knew most of these things (thanks to Stella)

"Prof. Kripke, let me introduce Prof. Nash", or
Logic for Automated Mechanism Design
(Mike Wooldridge)

In MAS, interaction is done through mechanisms (= protocol + self-interest), and agents are the participants in these mechanisms. So mechanisms can't be treated as simple protocols (e.g. sniping on eBay: bidding in the last 5 minutes, trying to be the last bidder). Can a MAS predict the sniping behavior of eBay users? The environment of an agent is a mechanism too, one that contains other agents acting strategically to achieve their own goals.

The formalization used is ATL (alternating-time temporal logic), introduced in 1997 to analyze games. It defines a branching-time model as a graph; CTL is the logic used to talk about branching-time structures, extending propositional logic with path quantifiers (A, E) and tense modalities (F, G, X, U).
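To fix the notation for myself (standard CTL readings, with my own example atoms, not from the slides):

```latex
\begin{align*}
\mathrm{AG}\,\mathit{safe} &: \text{on all paths, } \mathit{safe} \text{ always holds (inevitability)} \\
\mathrm{EF}\,\mathit{goal} &: \text{on some path, } \mathit{goal} \text{ eventually holds (possibility)} \\
\mathrm{AX}\,p &: \text{in all next states, } p \text{ holds} \\
\mathrm{E}[p\,\mathrm{U}\,q] &: \text{on some path, } p \text{ holds until } q \text{ does}
\end{align*}
```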

CTL says when something is inevitable or possible, but it has no notion of strategy, action, or agency (a problem for modeling mechanisms... and for service-based applications too). ATL is intended to overcome these limitations. The basic expression is

$\langle \langle C \rangle \rangle \phi$

meaning "coalition C can cooperate to ensure that $\phi$". The idea is that, using coalitions, we can model who is going to achieve a property (a coalition can be an individual agent or even the empty set, modeling 'nature'). An example about a social choice (voting) mechanism. Now mechanisms can be validated. The logic can capture dependencies among agents, such as successfulness (all goals can be met), veto (j needs i to achieve its goal), and mutual dependence (all agents are in a mutual veto relationship).
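Some standard readings of the coalition modality, with my own hypothetical atoms (not examples from the talk):

```latex
\begin{align*}
\langle\langle \{i\} \rangle\rangle \mathrm{F}\, \mathit{win}_i &: \text{agent } i \text{ alone has a strategy to eventually win} \\
\langle\langle C \rangle\rangle \mathrm{G}\, \neg\mathit{crash} &: \text{coalition } C \text{ can cooperate to keep the system safe forever} \\
\langle\langle \emptyset \rangle\rangle \mathrm{G}\, \phi &: \text{the empty coalition (`nature') ensures } \phi \text{, i.e. CTL's } \mathrm{AG}\,\phi
\end{align*}
```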

(note: but we can't model actions yet, so I guess it isn't useful for us)

A concrete application to social laws (normative systems). Objectives are ATL formulae $\phi$ and mechanisms are behavioral constraints $\beta$. To avoid undesirable behaviors, we have to cut out some transitions. A social law $\beta$ is effective for objective $\phi$ when the constrained system satisfies $\phi$. But computing this is an NP-hard problem. An example with the typical organization of trains in a tunnel. But you cannot model just the properties you want to avoid: the properties you want to preserve have to be modeled too, so that the system keeps doing useful things.
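A toy sketch of the idea (my own simplification of the tunnel example, using plain reachability instead of ATL model checking): the social law is a behavioral constraint that cuts the transitions leading into the bad state, and we check both that the bad state becomes unreachable and that the system can still do something useful.

```python
# Toy social-law sketch (hypothetical states, not Wooldridge's actual
# formalization): two trains, each either "out" of or in the "tunnel".
def successors(state):
    # each transition moves exactly one train into or out of the tunnel
    moves = set()
    for i, pos in enumerate(state):
        new = list(state)
        new[i] = "tunnel" if pos == "out" else "out"
        moves.add(tuple(new))
    return moves

BAD = ("tunnel", "tunnel")  # both trains in the tunnel at once

def constrained(state):
    # the social law (behavioral constraint): cut transitions into BAD
    return {s for s in successors(state) if s != BAD}

def reachable(step, start=("out", "out")):
    seen, frontier = {start}, [start]
    while frontier:
        s = frontier.pop()
        for n in step(s):
            if n not in seen:
                seen.add(n)
                frontier.append(n)
    return seen

assert BAD in reachable(successors)        # without the law, a crash is possible
assert BAD not in reachable(constrained)   # the objective "never BAD" now holds
assert ("tunnel", "out") in reachable(constrained)  # still useful: trains can pass
```

The last assertion is the point made above: an effective law must preserve the useful behaviors, not just forbid the bad ones.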

But what to do with non-compliance? The idea is incentive compatibility and, to do this, we need preferences (a prioritized list of goal formulae). I like this idea: the utility of the agent comes from this list, from the worst (and weakest) rule to the best (and strongest) rule. For instance, regarding resources: having a resource assigned often and for a long time.
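My reading of that, as a sketch (hypothetical goals and encoding, not the talk's actual definition): utility is the rank of the strongest satisfied goal in the prioritized list.

```python
# Hypothetical prioritized goal list about a shared resource, ordered
# from weakest to strongest; utility = rank of the strongest goal met.
goals = [
    ("has the resource at some point", lambda o: o["time_with_resource"] > 0),
    ("has the resource often",         lambda o: o["time_with_resource"] >= 5),
    ("keeps it for long stretches",    lambda o: o["longest_stretch"] >= 3),
]

def utility(outcome):
    u = 0
    for rank, (_name, goal) in enumerate(goals, start=1):
        if goal(outcome):
            u = rank
    return u

print(utility({"time_with_resource": 6, "longest_stretch": 2}))  # -> 2
```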

Blogged with the Flock Browser

Science and Technology of Agreement

Next June, the international workshop Science and Technology of Agreement takes place in Barcelona. It's on the 19th and 20th, so it clashes with the Ibiza conference on Metaverses. A pity that quantum superposition doesn't work for people or for cats. This time I have to choose, although I'll try to be in Barcelona at least for the first day. More information: