[AT workshop] Session 4


Reputation and confidence for artificial intelligent entities. A cognitive approach
(Jordi Sabater)

Trust deals with uncertainty and risky situations. A subtle difference: reputation (very similar) is one of the mechanisms used to build trust, and it is a social element. How is it used in computer-based systems? Three layers (approaches): security, institutional and social. Trust and reputation are meaningful in the social approach. If we have a strongly ruled system (the institutional approach) we don't need trust, just to follow the rules. Hence a cognitive model of reputation is needed.

A social evaluation is the evaluation, by a social entity, of some property (mental, physical or social) related to being social. Reputation is then a voice (something that is said) about a social property. But agents do not have to believe these reputation measures: agents (like people) have no responsibility for the social evaluations they spread. When people believe what other people say, then reputation matches image (what an agent believes, what it considers true facts). Reputation implies communication, and gossip is the channel used to transmit reputation measures. Images and reputations are based on facts, which have two measures: value and strength -> the Repage mechanism.

This Repage cognitive computational model has to be inserted into an agent. It is important that (i) the reputation model can be isolated from other reasoning mechanisms (planners, decision-making tools); and (ii) it is proactive: it does not wait to be asked about reputation, but provides information to the rest of the elements. This is done using a BDI (beliefs, desires and intentions) model with multi-context logics and bridge rules to integrate the context of the Repage mechanism into the contexts of beliefs, desires and intentions. In the logic, the difference between images and reputations is a reified difference (I think). An argumentation model is used as well.
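
Just to fix the idea, a minimal sketch (names are mine, not the actual Repage API) of how social evaluations carrying a value and a strength could be kept apart as images (what the agent itself believes) and reputations (what is merely circulating):

    from dataclasses import dataclass

    @dataclass
    class SocialEvaluation:
        """An evaluation of some property of a target, e.g. 'quality_as_seller'."""
        target: str
        prop: str
        value: float     # how good or bad the evaluation is, in [0, 1]
        strength: float  # how reliable or well supported it is, in [0, 1]

    class RepageLikeMemory:
        """Keeps images (believed) apart from reputations (merely 'said')."""
        def __init__(self):
            self.images = []       # evaluations the agent accepts as true facts
            self.reputations = []  # evaluations the agent has only heard about

        def add_direct_experience(self, ev):
            self.images.append(ev)

        def add_gossip(self, ev):
            # gossip feeds reputation; it is not automatically believed
            self.reputations.append(ev)

        def image_of(self, target, prop):
            """Strength-weighted value of the agent's own image of a target."""
            evs = [e for e in self.images if e.target == target and e.prop == prop]
            total = sum(e.strength for e in evs)
            if total == 0:
                return None
            return sum(e.value * e.strength for e in evs) / total

The interesting part is only the separation of the two containers: in the BDI integration, a bridge rule would decide when a circulating reputation is promoted to a belief (an image).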

Psychopharmacology of agreement
(Adolf Tobeña)

There is a lot of coordination, obedience, ... but few agreements among humans. And the second point of the talk is that humans need drugs. These facts lead to the psychiatric aspects of agreement: why are patients more inclined to cooperate/agree after being treated?

Usually, xanthines (caffeine, tobacco) are present during negotiations and bargaining processes. Five years ago it was demonstrated that oxytocin increases trust in humans. Furthermore, they observed that participants tend not to change their trusting behaviour even after knowing they had been betrayed (50% of the trials), and the brain was actually not responding as if betrayed (e.g. in the activity of brain areas related to disgust).

Booster drugs for agreement (prosocial, pro-trust):

  • alcohol, cannabinoids
  • xanthines, nicotine
  • oxytocin, prolactin, NPY
  • estrogens

and anti-agreement drugs (they induce paranoidogenic, autistic and antisocial behaviors):

  • cocaine, amphetamines
  • LSD, mescaline, psilocybin
  • androgens

But they have observed that testosterone had a positive effect on human bargaining behavior... and they tested it on women! They showed that a single sublingual dose of testosterone in women causes a substantial increase in fair bargaining, reducing conflicts and increasing efficiency in social interactions. And using a placebo they demonstrated that it was a real effect (the group that merely believed it had received testosterone behaved like the group without testosterone). And in men? Another group showed that men with high (naturally measured) levels of testosterone reject low (unfair) ultimatum game offers: $5/$40. Testosterone also influences how much people consider others as leaders. Testosterone reduces conscious detection of signals (facial expressions) serving social correlations -> a high probability of entering a fight is related to risky/venturesome behavior (you accept more faces as neutral).


[AT workshop] Session 3


The neural basis of empathy and coordination
(Christian Keysers)

1.- feeling the intentions of others
The neurons involved in a concrete movement (grasping something), surprisingly, also respond when the action is merely seen (about 10% of the neurons - the mirror neurons). Interesting: you can "run simulations" in your mind and the brain behaves as if the real action were being performed. But what happens if you see a non-human (e.g. a robot) doing the same action? The active areas in the brain of the observer are the same. That is, your brain is "learning" how to do this action.

What about sounds? The sets of neurons dedicated to doing, seeing or hearing something are different. In humans, the experiments consisted of hearing the result of actions performed by the hands or by the mouth (clearly separated in the brain). The corresponding motor areas are not activated, but the area that responds to the stimulus is.

So, how do we coordinate with each other? Because the coordinate system of the other person performing an action is not our own coordinate system, and the active area in the brain is different. The mirror system transforms back and forth between sensory and motor representations, providing the basis for optimal coordination of observed and executed actions.

2.- why do we cooperate?
It is related to emotional behavior. Experiments were done with pleasant and disgusting smells. Again, the response of the brain is very similar when we feel disgust and when we see someone feeling disgusted (from their facial expression). And brain damage that impairs simulation also impairs the response to real stimuli (so we cannot properly distinguish the correct emotion/sensation). Are emotional simulation and empathy linked too? It seems so, and it is not exclusive to disgust. Pain in self and in others overlaps, but disgust and joy overlap too, so it is difficult to identify the correct emotion. Anyway, these facts motivate us to cooperate: we share the same things as others (empathy).

Cooperation and generosity
(Paul van Lange)

Generosity: behaving more cooperatively than the other. Noise refers to unintended errors that affect interaction outcomes. Noise is a fact of life in social systems and undermines cooperation. But generosity may (or may not) cope with noise.

To understand social situations one needs to understand dependence, interests and information availability (at least). Imperfect information appears in partner preferences or in discrepancies about outcomes and intentions (why isn't he answering my emails?).

But the amount of generosity to apply has to be bounded. The optimal balance between reciprocity, generosity and stinginess has to be found (e.g. tit-for-tat is nice, forgiving, retaliatory and clear... but it does not repair).

After a lot of results, it seems that, under negative noise, generosity (i) builds trust, (ii) pairs well with reciprocity, and (iii) -I missed this one-. Besides: communication helps (when noise happens, inform the other -say sorry-); individuals cope with noise better than representatives do, and empathy is effective.
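
A quick sketch of the point about generosity and noise (my own toy setup, not van Lange's experiments): a noisy repeated prisoner's dilemma where intended cooperations sometimes come out as defections, comparing strict tit-for-tat with a 'generous' variant that forgives part of the time:

    import random

    # Payoffs for (my_move, other_move); C = cooperate, D = defect
    PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

    def tit_for_tat(last_other):
        return last_other or 'C'

    def generous_tft(last_other, forgiveness=0.3):
        if last_other == 'D' and random.random() < forgiveness:
            return 'C'   # sometimes forgive, in case the defection was just noise
        return last_other or 'C'

    def play(strategy_a, strategy_b, rounds=1000, noise=0.1):
        last_a = last_b = None
        score_a = score_b = 0
        for _ in range(rounds):
            a, b = strategy_a(last_b), strategy_b(last_a)
            # negative noise: an intended cooperation comes out as a defection
            if a == 'C' and random.random() < noise:
                a = 'D'
            if b == 'C' and random.random() < noise:
                b = 'D'
            score_a += PAYOFF[(a, b)]
            score_b += PAYOFF[(b, a)]
            last_a, last_b = a, b
        return score_a, score_b

    random.seed(1)
    print("TFT vs TFT:  ", play(tit_for_tat, tit_for_tat))
    print("GTFT vs GTFT:", play(generous_tft, generous_tft))

With noise, two strict tit-for-tat players get stuck echoing the first accidental defection, while the generous pair recovers, which is roughly the "generosity builds trust and pairs well with reciprocity" result.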

NOTE: what happens if generosity is introduced as one more factor in the mWater demonstrator when managing the self-organized user groups? It looks like it could be a good variable for maintaining optimal management in the problem of the commons.


[AT workshop] Session 2


On the use of argumentation in agreement technologies
(Henry Prakken)

Agents need argumentation (i) for their internal reasoning and (ii) for their interaction with other agents. He explains the basics of the argumentation process: argument attacks. The situation of the dialogue can be modeled as a graph coloured by defining 'in' and 'out' arguments (Dung, 1995). And there is a sound and complete game that allows determining whether an argument is acceptable or not without having to compute the entire network: an argument A is acceptable when there is a winning strategy for A following the game rules.
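
As a reminder to myself, a tiny sketch of the 'in'/'out' colouring (the textbook idea, not Prakken's own code): the grounded labelling of an attack graph, computed by repeatedly labelling 'in' every argument whose attackers are all 'out', and 'out' every argument with an 'in' attacker:

    def grounded_labelling(arguments, attacks):
        """attacks: set of (attacker, attacked) pairs."""
        attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
        label = {a: 'undec' for a in arguments}
        changed = True
        while changed:
            changed = False
            for a in arguments:
                if label[a] != 'undec':
                    continue
                if all(label[b] == 'out' for b in attackers[a]):
                    label[a] = 'in'      # all its attackers are already defeated
                    changed = True
                elif any(label[b] == 'in' for b in attackers[a]):
                    label[a] = 'out'     # it has an accepted attacker
                    changed = True
        return label

    # A attacks B, B attacks C: A and C end up 'in', B 'out'
    print(grounded_labelling({'A', 'B', 'C'}, {('A', 'B'), ('B', 'C')}))

The game he mentions settles the status of a single argument dialectically, which is the point of not having to compute the whole graph.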

Problem: it is assumed that all information is centralized and static (a single theory -a KB-). So dialogue game systems are developed. He uses the Walton & Krabbe dialogue types (without the eristic one :-). I have seen this many times already.

An interesting thing: blocking behaviour (always asking why). It can be solved by using sanctions:

  • social sanctions (I won't talk to you any more)
  • shift of the burden of proof by a third party (referee): q since r // why r? // referee: you must defend not-r

I already knew most of these things (thanks to Stella)

"Prof. Kripke, let me introduce Prof. Nash", or
Logic for Automated Mechanism Design
(Mike Wooldridge)

In MAS, interaction happens through mechanisms = protocol + self-interest, and agents are the participants in these mechanisms. So mechanisms can't be treated as simple protocols (e.g. sniping in eBay -bidding in the last 5 minutes trying to be the last bidder-). Can a MAS predict the sniping behaviour of eBay users? The environment of an agent is a mechanism too, one that contains other agents acting strategically to achieve their own goals.

The formalization used is ATL (alternating-time temporal logic), introduced in 1997 to analyze games. It defines a branching-time model as a graph, and CTL is the logic used to talk about branching-time structures, extending propositional logic with path quantifiers (A, E) and tense modalities (F, G, X, U).

CTL says when something is inevitable or possible, but it has no notion of strategic action or agency (a problem when modeling mechanisms... and service-based applications too). ATL is intended to overcome these limitations. The basic expression is

$latex \langle \langle C \rangle \rangle \phi$

meaning "coalition C can cooperate to ensure that $latex \phi$. The idea is that, using coalitions, we can model who is going to achieve a property (a coalition can be an individual entity or even an empty set -modeling 'nature'-). An example about social choice (voting) mechanism. Now, mechanisms can be validated.The logic can capture dependencies among agents, as stressfulness (all goals met), veto (j needs i to achieve its goal), mutual dependence (all agents are mutual dependence... veto relationship)

(note: but we can't model actions yet, so I guess it isn't useful for us)

A concrete application: social laws (normative systems). Objectives are ATL formulae $latex \phi$ and mechanisms are behavioural constraints $latex \beta$. To avoid undesirable behaviours, we have to cut out some transitions. A social law $latex \beta$ is effective when the constrained system satisfies $latex \phi$. But computing this is an NP-hard problem. An example with the typical organization of trains in a tunnel. But you cannot model only the properties you want to avoid: the properties you want to preserve have to be modeled too, so that the system keeps doing useful things.
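
Again a personal sketch with a made-up model, not the talk's formalization: the social law as a set of deleted transitions in a tiny two-trains/one-tunnel system, checked both for the safety objective (never both trains in the tunnel) and for a 'usefulness' objective (each train can still reach the tunnel):

    from itertools import product

    # Each train is 'away', 'waiting' or 'tunnel'; a joint state is a pair of positions.
    MOVES = {'away': ['away', 'waiting'], 'waiting': ['waiting', 'tunnel'], 'tunnel': ['away']}

    def transitions(allowed=lambda s, t: True):
        ts = set()
        for s in product(MOVES, repeat=2):
            for t in product(MOVES[s[0]], MOVES[s[1]]):
                if allowed(s, t):
                    ts.add((s, t))
        return ts

    def reachable(start, ts):
        seen, frontier = {start}, [start]
        while frontier:
            s = frontier.pop()
            for (a, b) in ts:
                if a == s and b not in seen:
                    seen.add(b)
                    frontier.append(b)
        return seen

    # The social law beta: cut every transition that would put both trains in the tunnel.
    beta = lambda s, t: not (t[0] == 'tunnel' and t[1] == 'tunnel')

    reach = reachable(('away', 'away'), transitions(beta))
    safe = ('tunnel', 'tunnel') not in reach                 # the safety objective phi
    useful = any(s[0] == 'tunnel' for s in reach) and any(s[1] == 'tunnel' for s in reach)
    print("effective (safe):", safe, "| still useful:", useful)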

But what about non-compliance? The idea is incentive compatibility and, to achieve it, we need preferences (a prioritized list of goal formulae). I like this idea: the utility of the agent comes from this list, from the worst (and weakest) rule to the best (and strongest) one. For instance, related to resources: having them assigned often and for a long time.
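
My reading of the prioritized-goal utility in a few lines (the names and the resource example are hypothetical):

    def utility(goals, run):
        """goals: list of predicates ordered from weakest to strongest.
        Returns how far up the list the run gets (0 = not even the weakest goal)."""
        best = 0
        for rank, goal in enumerate(goals, start=1):
            if goal(run):
                best = rank
        return best

    # Hypothetical resource example: the more demanding the satisfied goal, the higher the utility
    goals = [lambda r: r['assigned'] > 0,                       # got the resource at least once
             lambda r: r['assigned'] > 5,                       # got it often
             lambda r: r['assigned'] > 5 and r['longest'] > 3]  # often and for a long time
    print(utility(goals, {'assigned': 7, 'longest': 4}))  # 3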


[AT Workshop] Session 1


Towards the biological basis of cooperation
(Arcadi Navarro)

Talking about the genome and human evolution. The interesting thing: the effects on social behavior.

After a very interesting introduction to genomics, he tries to relate genetics with social behavior: the tendency to cooperate may have some explanation in our genes (and this can be the explanation of why humans have been such a successful species): genetic variability in behavioral traits is considerable. The problem is that this is very difficult to interpret. Fortunately, there are some genetic traits related to economic behavior that can be studied and replicated in labs.

Example: the ultimatum game. People tend to make 50:50 offers and to reject offers below 30% (not a reasonable decision from an economic point of view). But chimpanzees behave as rational maximizers in the ultimatum game. The two species have evolved completely different behaviors. Why? We have to study this from a genetic perspective -> agents playing games behave like chimpanzees. And researchers have discovered that serotonin makes individuals more generous (just a joke: men have more serotonin than women). Even between MZ twins, differences in the acceptance threshold of the ultimatum game have been observed. Examples with more genes followed.
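
A trivial illustration of the two behaviours being contrasted (my own, not the speaker's model): a 'human-like' responder that rejects offers below roughly 30% of the pie versus a rational-maximizer responder (the chimpanzee-like one) that accepts anything:

    def ultimatum(pie, offer, accept_threshold):
        """Returns (proposer_payoff, responder_payoff)."""
        if offer >= accept_threshold * pie:
            return pie - offer, offer
        return 0, 0   # rejection destroys the whole pie

    pie = 10
    # Human-like responder: rejects offers below ~30% of the pie
    print(ultimatum(pie, 2, accept_threshold=0.3))   # (0, 0): rejected, both lose
    # Rational maximizer (chimp-like): any offer is better than nothing
    print(ultimatum(pie, 2, accept_threshold=0.0))   # (8, 2): accepted

Rejecting the $2 is irrational in money terms, which is exactly why the human pattern asks for a genetic/evolutionary explanation.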

Measuring Strategic Uncertainty and Risk in Coordination Games, Entry Games and Lotteries with fMRI
(Rosemarie Nagel)

Uncertainty can be classified as

  • exogenous (risk): the probabilities of all possible states of the world are known (objective probabilities)
  • endogenous: there are no exogenously given probabilities -> strategic uncertainty (SU), e.g. outcomes depend on social interaction -games- (subjective probabilities)

How does the brain solve individual or strategic uncertainty? Can we predict choices and brain activity in games?
Results: people behave similarly in lottery and coordination games, but not in entry games. And brain activity increases from lottery -> coordination -> entry. Some graphics show the different parts of the brain active while playing each type of game. There is similar activity in entry games for risk lovers and risk-averse people.

Summarizing, the entry games create more strategic uncertainty, as predicted by the nature of the mixed equilibrium, which also involves levels of reasoning.
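
To see where the extra strategic uncertainty comes from, a small sketch with made-up payoffs (not the actual experimental parameters): the symmetric mixed equilibrium of a market entry game is the entry probability at which entering and staying out pay the same, a knife-edge that depends entirely on beliefs about the other players:

    from math import comb

    def entry_payoff(p, n_others, capacity, win=10, lose=0):
        """Expected payoff of entering when each of n_others enters with probability p."""
        expected = 0.0
        for k in range(n_others + 1):
            prob = comb(n_others, k) * p**k * (1 - p)**(n_others - k)
            expected += prob * (win if k < capacity else lose)  # room left for me iff k < capacity
        return expected

    def mixed_equilibrium(n_players, capacity, stay_out=4):
        """Bisect on p until entering pays the same as the sure stay-out payoff."""
        lo, hi = 0.0, 1.0
        for _ in range(60):
            p = (lo + hi) / 2
            if entry_payoff(p, n_players - 1, capacity) > stay_out:
                lo = p  # entry still too attractive -> equilibrium has more entry
            else:
                hi = p
        return (lo + hi) / 2

    print(round(mixed_equilibrium(n_players=6, capacity=3), 3))

At that probability both actions pay the same, so small differences in how deeply players reason about the others tip the group into over- or under-entry, which fits the levels-of-reasoning remark.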


AIWS Workshop


Written like that it sounds like something important, but it is just the workshop I am going to set up in September to present the coursework of the subject, with call for papers, proceedings and everything included. If you push me, there will even be a coffee break :-)

If I have time I will even build a website for the mini-conference. Oh, how I like to complicate my life, when it would be so easy to just ask for an assignment and be done with it! Well, I will keep telling you how things go.

Update, Sept. 2008: we finally held the workshop on September 22nd. All the information is on the AIWS '08 website.

Science and Technology of Agreement


Next June the international workshop Science and Technology of Agreement takes place in Barcelona. It is on the 19th and 20th, so it overlaps with the conference on Metaverses in Ibiza. Too bad quantum superposition does not work for people or for cats. This time I have to choose, although I will try to be in Barcelona at least on the first day. More information:
