[AT workshop] Session 4

Reputation and confidence for artificial intelligent entities. A cognitive approach
(Jordi Sabater)

Trust deals with uncertainty and risky situations. A slight difference: reputation (very similar) is one of the mechanisms to build trust, and it is a social element. How is it used in computer-based systems? Three layers (approaches): security, institutional and social. Trust and reputation are meaningful in the social approach. If we have a strongly ruled system (institutional approach) we don't need trust, just to follow the rules. Then, a cognitive model of reputation is needed.

A social evaluation is the evaluation, by a social entity, of some property (mental, physical or social) related to being social. Reputation is then a voice (something that is said) about a social property. But agents do not have to believe these reputation measures: agents (like people) have no responsibility for spreading social evaluations. When people believe what other people say, then reputation matches image (what an agent believes in, considers as true facts). Reputation means communication, and gossiping is the channel used to transmit reputation measures. Images and reputations are based on facts, which have two measures: value and strength -> the Repage mechanism.
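A minimal sketch of that distinction as data structures (my own reading, not the actual Repage code; the names and the [0, 1] scales are assumptions):

```python
from dataclasses import dataclass

# Illustrative structures only: a social evaluation carries a value (how
# good/bad it is) and a strength (how well supported by evidence).
@dataclass
class SocialEvaluation:
    target: str      # the evaluated agent
    role: str        # e.g. "seller", "informant"
    value: float     # goodness of the evaluation, in [0, 1]
    strength: float  # confidence / amount of evidence, in [0, 1]

@dataclass
class Image(SocialEvaluation):
    """What the agent itself believes to be true about the target."""

@dataclass
class Reputation(SocialEvaluation):
    """What the agent has heard circulating (a 'voice'); it need not be believed."""

def accept_gossip(rep: Reputation, credulity: float = 0.7):
    # When the agent believes what is said, reputation collapses into image.
    if rep.strength >= credulity:
        return Image(rep.target, rep.role, rep.value, rep.strength)
    return None
```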

This Repage cognitive computational model has to be inserted into an agent. It is important that (i) the reputation model can be isolated from other reasoning mechanisms (planners, decision-making tools); and (ii) it is proactive: it does not wait to be asked about reputation, but provides information to the rest of the elements. They use a BDI (beliefs, desires and intentions) model with multi-context logics and bridge rules to integrate the context of the Repage mechanism into the contexts of beliefs, desires and intentions. In the logic, the difference between image and reputation is a reified (?) difference. An argumentation model is used as well.
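A toy sketch of the bridge-rule idea, building on the structures above (again my own names, not the multi-context formalization): when the Repage context holds a good and strong enough image of another agent, a belief is pushed into the BDI belief context without waiting to be asked.

```python
# Illustrative bridge rule from the Repage context into the belief context.
def repage_to_beliefs(repage_evaluations, belief_base, min_value=0.6, min_strength=0.8):
    for ev in repage_evaluations:
        if isinstance(ev, Image) and ev.value >= min_value and ev.strength >= min_strength:
            # Proactive: the reputation module feeds the reasoner by itself.
            belief_base.add(("trustworthy", ev.target, ev.role))
```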

Psychopharmacology of agreement
(Adolf Tobeña)

There’s lots of coordination, obedience, … but few agreements among humans. And the second point of the speech is that humans need drugs. And these facts lead to psychiatric aspects of agreement: why are patients more prone to cooperate/agree after being treated?

Usually, xanthines (caffeine, tobacco) are present during negotiations and bargaining processes. Five years ago it was demonstrated that oxytocin increases trust in humans. Furthermore, they observed that participants tend not to change their trusting behaviour even after knowing they had been betrayed (50% of trials), and the brain was actually not responding as if betrayed (e.g. activity in brain areas related to disgust).

Booster drugs for agreements (prosocial, pro-trust):

  • alcohol, cannabinoids
  • xanthines, nicotine
  • oxytocin, prolactin, NPY
  • estrogens

and anti-agreement drugs (inducing paranoidogenic, autistic and antisocial behaviors) are:

  • cocaine, amphetamines
  • LSD, mescaline, psilocybin
  • androgens

But they have observed that testosterone had a positive effect on human bargaining behavior… and they did it on women! They showed that a single sublingual dose of testosterone in women causes a substantial increase in fair bargaining, reducing conflicts and increasing efficiency in social interactions. And using a placebo they demonstrated it was a real effect (the group that believed it got testosterone behaved like the group without testosterone). And in men? Another group showed that men with high (naturally measured) levels of testosterone reject low (unfair) ultimatum game offers: $5/$40. Testosterone influences how the rest of the people consider others as leaders. Testosterone reduces conscious detection of signals (facial expressions) serving social correction -> a high probability of entering into a fight is related to risky/venturesome behavior (you accept more faces as neutral).


[AT workshop] Session 3

The neural basis of empathy and coordination
(Christian Keysers)

1.- feeling the intentions of others
The neurons involved in a concrete movement (grasping something), surprisingly, also respond when the action is seen (about 10% of the neurons – mirror neurons). Interesting: you can «run simulations» in your mind and the brain behaves as if the real action were being performed. But what happens if you see a non-human (e.g. a robot) doing the same action? The active areas in the observer's brain are the same. That is, your brain is «learning» how to do this action.

How about sounds? The set of neurons dedicated to doing, seeing or hearing something is different. In humans, the experiments were about hearing the result of actions performed by the hands or by the mouth (clearly separated in the brain). The corresponding motor areas are not activated, but the area that responds to the stimuli is.

So, how do we coordinate with each other? Because the coordinate system of someone else doing an action is not our own coordinate system, and the active area in the brain is different. The mirror system transforms back and forth between sensory and motor representations, providing the basis for optimal coordination of observed and executed actions.

2.- why do we cooperate?
It is related to emotional behavior. Experiments were done with pleasant and disgusting smells. Again, the response of the brain is very similar when we feel disgust and when we see someone feeling disgusted (through their facial expression). And if that simulation is impaired (e.g. by brain damage), we cannot properly distinguish the correct emotion/sensation. Are emotional simulation and empathy linked too? It seems so, and it is not exclusive to disgust. Pain in self and in others overlaps, but disgust and joy overlap too, so it is difficult to identify the correct emotion. Anyway, these facts motivate us to cooperate: we share the same things as others (empathy).

Cooperation and generosity
(Paul van Lange)

Generosity: behaving more cooperatively than the other. Noise refers to unintended errors that affect interaction outcomes. Noise is a matter of fact in social systems and undermines cooperation. But generosity may (or may not) cope with noise.

To understand social situations one needs to understand dependence, interests and information availability (at least). Imperfect information appears in partner preferences or in discrepancies about outcomes and intentions (why isn't he responding to my emails?).

But the amount of generosity to apply has to be balanced. The optimal balance between reciprocity, generosity and stinginess has to be found (e.g. tit-for-tat: nice, forgiving, retaliatory and clear… but it does not repair).
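A quick simulation sketch of that repair problem (payoffs, noise level and the generosity parameter are my own choices, not from the talk): under noise, plain tit-for-tat locks into retaliation spirals, while a generous variant forgives occasionally and restores cooperation.

```python
import random

# Standard prisoner's dilemma payoffs (assumed for illustration).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(last_opponent_move):
    return last_opponent_move or "C"

def generous_tit_for_tat(last_opponent_move, generosity=0.2):
    # Like tit-for-tat, but forgives a defection with probability `generosity`.
    if last_opponent_move == "D" and random.random() > generosity:
        return "D"
    return "C"

def play(strategy_a, strategy_b, rounds=1000, noise=0.05):
    """Iterated PD where `noise` flips an intended move (an unintended error)."""
    last_a = last_b = None
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strategy_a(last_b), strategy_b(last_a)
        if random.random() < noise:
            a = "D" if a == "C" else "C"
        if random.random() < noise:
            b = "D" if b == "C" else "C"
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        last_a, last_b = a, b
    return score_a, score_b

print("TFT vs TFT:", play(tit_for_tat, tit_for_tat))
print("generous vs generous:", play(generous_tit_for_tat, generous_tit_for_tat))
```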

After a lot of results, it seems that, under negative noise, generosity (i) builds trust, (ii) pairs well with reciprocity, and (iii) -I missed this one-. Besides: communication helps (when noise happens, inform the other -say sorry-); individuals cope with noise better than representatives do, and empathy is effective.

NOTE: what would happen if generosity were introduced as one more factor in the mWater demonstrator when managing self-organized user groups? It looks like a good variable for keeping the management of the commons problem near the optimum.


[AT workshop] Session 2

On the use of argumentation in agreement technologies
(Henry Prakken)

Agents need argumentation (i) for their internal reasoning and (ii) for their interaction with other agents. He explains the basics of the argumentation process: argument attacks. The situation of the dialogue can be modeled as a graph, coloured by defining in and out arguments (Dung, 1995). And there is a sound and complete game that allows one to determine whether an argument is feasible or not without having to compute the entire network: an argument A is feasible when there is a winning strategy for A following the game rules.
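A minimal sketch of that colouring (mine, not the speaker's): the grounded labelling of a Dung-style attack graph marks an argument IN when all its attackers are OUT, and OUT when some attacker is IN.

```python
def grounded_labelling(arguments, attacks):
    """arguments: set of names; attacks: set of (attacker, attacked) pairs."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    label = {a: "UNDEC" for a in arguments}
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if label[a] != "UNDEC":
                continue
            if all(label[b] == "OUT" for b in attackers[a]):
                label[a], changed = "IN", True
            elif any(label[b] == "IN" for b in attackers[a]):
                label[a], changed = "OUT", True
    return label

# Toy framework: A attacks B, B attacks C  =>  A and C end up IN, B is OUT.
print(grounded_labelling({"A", "B", "C"}, {("A", "B"), ("B", "C")}))
```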

Problem: it is assumed that all the information is centralized and static (a single theory -KB-). So dialogue game systems are developed. He's using the Walton & Krabbe dialogue types (without eristic :-) I've seen this a lot of times already.

An interesting thing: blocking behavior (always asking why). It can be solved by using sanctions:

  • social sanctions (I won't talk to you any more)
  • shift of the burden of proof by a third party (referee): q since r // why r? // referee: you must defend not-r

I already knew most of these things (thanks to Stella)

«Prof. Kripke, let me introduce Prof. Nash», or
Logic for Automated Mechanism Design
(Mike Wooldridge)

In MAS the interaction is done by mechanisms = protocol + self-interest, and agents are the participants in these mechanisms. So mechanisms can't be treated as simple protocols (e.g. sniping in eBay – bidding in the last 5 minutes, trying to be the last bidder). Can a MAS predict the sniping behavior of eBay users? The environment of an agent is a mechanism too, one that contains other agents acting strategically to achieve their own goals.

The formalization used is ATL (alternating-time temporal logic), introduced in 1997 to analyze games. It defines a branching-time model as a graph, and CTL is the logic used to talk about branching-time structures, extending propositional logic with path quantifiers (A, E) and tense modalities (F, G, X, U).
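For instance (my own illustrative formulas, not taken from the talk), \(AG\, \neg \mathit{crash}\) reads «on every path, it is always the case that there is no crash», while \(EF\, \mathit{goal}\) reads «on some path, the goal is eventually reached».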

CTL says when something is inevitable or possible, but it has no notion of strategic action or agency (which is a problem for modeling mechanisms… and service-based applications too). ATL is intended to overcome these limitations. The basic expression is

\(\langle \langle C \rangle \rangle \phi\)

meaning «coalition C can cooperate to ensure that \(\phi\)». The idea is that, using coalitions, we can model who is going to achieve a property (a coalition can be an individual agent or even the empty set – modeling 'nature'). An example is given about a social choice (voting) mechanism. Now, mechanisms can be validated. The logic can capture dependencies among agents, such as successfulness (all goals met), veto (j needs i to achieve its goal), or mutual dependence (the agents are in a mutual veto relationship).
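My own attempt at writing those dependence notions as ATL formulae (the speaker's exact definitions may differ); take \(Ag\) as the set of all agents and \(\gamma_i\) as agent i's goal:

\(\langle\langle \{i\} \rangle\rangle F\, \gamma_i\) — i is successful: it can bring its goal about on its own;

\(\neg \langle\langle Ag \setminus \{j\} \rangle\rangle F\, \gamma_i\) — j holds a veto over i: without j, i's goal cannot be guaranteed;

and mutual dependence between i and j would then be the conjunction of the two veto formulae with i and j swapped.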

(note: but we can’t model actions yet, so I guess it isn’t useful for us)

A concrete application is about social laws (normative systems). Objectives are ATL formulae \(\phi\) and mechanisms are behavioral constraints \(\beta\). To avoid undesirable behaviors, we have to cut out some transitions. A social law \(\beta\) is effective for an objective \(\phi\) if the system constrained by \(\beta\) satisfies \(\phi\). But computing this is an NP-hard problem. An example is given with the typical trains-in-a-tunnel setting. But you cannot model just the properties you want to avoid: the properties you want to preserve have to be modeled too, so that the system still does useful things.
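A worked guess at that trains example (the propositions and the particular constraint are my own): the objective is \(\phi = G\, \neg(\mathit{inTunnel}_E \wedge \mathit{inTunnel}_W)\), i.e. the eastbound and westbound trains are never in the tunnel at the same time, and the social law \(\beta\) removes each train's "enter" transition whenever the other train is inside. By itself this could be met by never letting anyone in, so one also wants to preserve something like \(\langle\langle \{E\} \rangle\rangle F\, \mathit{inTunnel}_E\) (and the analogue for W): each train can still eventually get through.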

But, what to do with non-compliance? The idea is incentive compatibility and, to do this, we need preferences (a prioritized list of goal formulae). I like this idea: the utility of the agent comes from this list, from the worst (and weakest) rule to the best (and strongest) one. For instance, with resources: having them assigned often and for a long time.
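A small sketch of how such a utility could be read off the list (my guess at the construction, not the paper's exact definition): goals are ordered from weakest to strongest, and the agent's utility is the rank of the best goal its run satisfies.

```python
def utility(prioritized_goals, run):
    """prioritized_goals: predicates over a run, ordered weakest first."""
    best = 0
    for rank, goal in enumerate(prioritized_goals, start=1):
        if goal(run):
            best = rank
    return best

# Toy resource example: run[t] == 1 iff the resource is assigned at time t.
goals = [
    lambda run: any(run),                   # weak: assigned at least once
    lambda run: sum(run) / len(run) > 0.5,  # strong: assigned most of the time
]
print(utility(goals, [0, 1, 0, 0]))  # -> 1 (only the weak goal holds)
print(utility(goals, [1, 1, 0, 1]))  # -> 2 (the stronger goal holds too)
```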
