[AT workshop] Session 2

On the use of argumentation in agreement technologies
(Henry Prakken)

Agents need argumentation (i) for their internal reasoning and (ii) for their interaction with other agents. He explains the basics of the argumentation process: argument attacks. The situation of the dialog can be modeled as a graph colored by labeling arguments as in or out (Dung, 1995). And there is a sound and complete game that allows one to determine whether an argument is justified without having to compute the entire network: an argument A is justified when there is a winning strategy for A following the game rules.
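A minimal sketch of my own (not shown in the talk) of the in/out coloring he mentioned: an argument is *in* when all its attackers are *out*, and *out* when some attacker is *in*; iterating to a fixpoint gives the grounded labeling, with everything left over *undecided*.

```python
# My own toy sketch of Dung-style grounded labeling (not code from the talk).
# An argument is "in" if every attacker is "out"; "out" if some attacker is "in".

def grounded_labelling(arguments, attacks):
    """attacks: set of (attacker, target) pairs."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    label = {}
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in label:
                continue
            if all(label.get(b) == "out" for b in attackers[a]):
                label[a] = "in"       # all attackers defeated
                changed = True
            elif any(label.get(b) == "in" for b in attackers[a]):
                label[a] = "out"      # defeated by an accepted argument
                changed = True
    # anything never decided by the fixpoint stays undecided
    return {a: label.get(a, "undecided") for a in arguments}

# A attacks B, B attacks C: A is in, B out, C reinstated (in).
print(grounded_labelling({"A", "B", "C"}, {("A", "B"), ("B", "C")}))
```

The reinstatement of C is the point: C is attacked, but its only attacker B is itself defeated by A.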

Problem: it is assumed that all information is centralized and static (a single theory, a KB), so dialogue game systems are developed. He’s using the Walton & Krabbe dialogue types (without eristic :-) I’ve seen this a lot of times already.

An interesting thing: blocking behavior (always asking why). It can be solved by using sanctions:
- social sanctions (I won’t talk to you any more)
- shift of the burden of proof by a third party (referee): q since r // why r? // referee: you must defend not-r

I already knew most of these things (thanks to Stella)

«Prof. Kripke, let me introduce Prof. Nash», or
Logic for Automated Mechanism Design
(Mike Wooldridge)

In MAS the interaction is done by mechanisms = protocol + self-interest, and agents are the participants in these mechanisms. So mechanisms can’t be treated as simple protocols (e.g. sniping in eBay: bidding in the last 5 minutes, trying to be the last bidder). Can a MAS predict the sniping behavior of eBay users? The environment of an agent is a mechanism too, one that contains other agents acting strategically to achieve their own goals.

The formalization used is ATL (alternating-time temporal logic), introduced in 1997 to analyze games. It defines a branching-time model as a graph; CTL is the logic used to talk about branching-time structures, extending propositional logic with path quantifiers (A, E) and tense modalities (F, G, X, U).
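To make the CTL operators concrete, here is a toy sketch of my own (assumed, not from the talk) of one of them: EF p ("possibly eventually p") holds in exactly the states from which some path reaches a p-state, computable as a backwards-reachability fixpoint over the transition graph.

```python
# My own toy illustration of the CTL operator EF ("exists a path where
# eventually p"): backwards reachability to the p-states.

def ef(states, transitions, p_states):
    """Return the set of states satisfying EF p."""
    sat = set(p_states)
    changed = True
    while changed:
        changed = False
        for (s, t) in transitions:
            if t in sat and s not in sat:
                sat.add(s)            # s has a successor already satisfying EF p
                changed = True
    return sat

states = {0, 1, 2, 3}
transitions = {(0, 1), (1, 2), (3, 3)}   # state 3 only loops on itself
print(sorted(ef(states, transitions, {2})))   # states 0, 1, 2 can reach 2
```

The universal operators (AF, AG, …) need the dual fixpoints; the point of the example is just that CTL talks about what *can* or *must* happen, with no notion of who makes it happen.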

CTL says when something is inevitable or possible, but it has no notion of strategic action or agency (a problem for modeling mechanisms… and for service-based applications too). ATL is intended to overcome these limitations. The basic expression is

\(\langle \langle C \rangle \rangle \phi\)

meaning «coalition C can cooperate to ensure that \(\phi\)». The idea is that, using coalitions, we can model who is going to achieve a property (a coalition can be an individual agent or even the empty set, modeling ‘nature’). An example about a social choice (voting) mechanism. Now mechanisms can be validated. The logic can capture dependencies among agents, such as success (all goals met), veto (j needs i to achieve its goal), and mutual dependence (all agents are in mutual veto relationships).
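A toy sketch of my own (assumed, not from the talk) of the coalition modality on a one-shot game form: \(\langle\langle C \rangle\rangle\) holds when C has a joint move that guarantees the goal no matter what the remaining agents do.

```python
# My own toy sketch of the ATL idea <<C>> phi on a single-step game:
# coalition C can ensure phi iff it has a joint move such that, for
# EVERY move of the other agents, the outcome satisfies phi.

from itertools import product

def coalition_can_ensure(moves, outcome, phi_states, coalition, agents):
    others = [a for a in agents if a not in coalition]
    for c_move in product(*(moves[a] for a in coalition)):
        joint_c = dict(zip(coalition, c_move))
        # quantify universally over the other agents' responses
        if all(outcome({**joint_c, **dict(zip(others, o_move))}) in phi_states
               for o_move in product(*(moves[a] for a in others))):
            return True
    return False

# A tiny voting flavor: the outcome is "win" only if both voters say yes.
agents = ["i", "j"]
moves = {"i": ["yes", "no"], "j": ["yes", "no"]}
outcome = lambda m: "win" if m["i"] == "yes" and m["j"] == "yes" else "lose"

print(coalition_can_ensure(moves, outcome, {"win"}, ["i", "j"], agents))  # True
print(coalition_can_ensure(moves, outcome, {"win"}, ["i"], agents))       # False
```

This also shows the veto dependence mentioned above: i alone cannot ensure the win, so j holds a veto over i’s goal (and vice versa).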

(note: but we can’t model actions yet, so I guess it isn’t useful for us)

A concrete application: social laws (normative systems). Objectives are ATL formulae \(\phi\) and mechanisms are behavioral constraints \(\beta\). To avoid undesirable behaviors, we have to cut out some transitions; a social law is effective when the constrained system still satisfies the objective, \((\phi, \beta) \models \phi\). But computing this is an NP-hard problem. An example with the typical organization of trains in a tunnel. But you cannot model just the properties you want to avoid: the properties you want to preserve have to be modeled too, so that the system keeps doing useful things.
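A toy sketch of my own (the state names and constraint are assumed, loosely inspired by the train example) of a social law as transition removal: \(\beta\) deletes transitions, and the law is effective if the bad state is no longer reachable in the constrained system.

```python
# My own toy sketch of a social law as a behavioral constraint: beta
# removes transitions, and the law is effective when the bad state
# ("both trains in the tunnel") is unreachable in the constrained system.

def reachable(init, transitions):
    """Forward reachability over an explicit transition relation."""
    seen, frontier = {init}, [init]
    while frontier:
        s = frontier.pop()
        for (a, b) in transitions:
            if a == s and b not in seen:
                seen.add(b)
                frontier.append(b)
    return seen

# States encode (train1, train2): "A" = away, "T" = in the tunnel.
transitions = {("AA", "TA"), ("AA", "AT"), ("TA", "TT"), ("AT", "TT")}
beta = {("TA", "TT"), ("AT", "TT")}          # forbid entering an occupied tunnel
constrained = transitions - beta

print("TT" in reachable("AA", constrained))  # False: the law is effective
```

The second point of the paragraph also shows up here: a law that deleted *all* transitions would trivially avoid "TT", so the objective must also say that each train can still get through the tunnel.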

But what to do with non-compliance? The idea is incentive compatibility, and to achieve it we need preferences (a prioritized list of goal formulae). I like this idea: the utility of the agent comes from this list, from the worst (and weakest) goal to the best (and strongest) one. For instance, related to resources: having the resource assigned often and for a long time.
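My own reading of that utility construction, as a sketch (the goal names are made up): utility is simply the rank of the strongest goal in the prioritized list that the agent's run satisfies.

```python
# My own sketch of utility from a prioritized goal list (assumed reading):
# goals are ordered weakest to strongest, and the agent's utility is the
# rank of the strongest goal it satisfies (0 if none).

def utility(goals, satisfied):
    """goals: list from weakest to strongest; satisfied: set of goal names."""
    u = 0
    for rank, goal in enumerate(goals, start=1):
        if goal in satisfied:
            u = rank        # keep the highest-ranked satisfied goal
    return u

# Hypothetical resource-related goals, weakest to strongest:
goals = ["gets the resource sometimes",
         "gets it often",
         "gets it often and for a long time"]

print(utility(goals, {"gets the resource sometimes", "gets it often"}))  # 2
print(utility(goals, set()))                                             # 0
```

With utilities defined this way, incentive compatibility becomes a game-theoretic question: does complying with the social law maximize each agent's rank?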

Blogged with the Flock Browser