A Trust Aggregation Engine that Uses Contextual Information
Enhancing traditional aggregation approaches by including dynamics and situation-awareness (context).
Dynamics are based on the Hysteresis of Trust and Betrayal (Straker, 2008), introducing three properties: asymmetry, maturity and distinguishable past. Their model is called SinAlpha. Asymmetry penalizes intermittent behaviour, maturity avoids selecting partners with little evidence, and distinguishable past prevents fast forgiveness. Experimentally, though, the sine-based approach is not better than the linear model.
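As a rough sketch of how such a sine-shaped aggregation curve behaves (the parameter values below are illustrative, not the paper's):

```python
import math

def sinalpha_update(alpha, outcome, omega=math.pi / 12,
                    lam_pos=1.0, lam_neg=1.5):
    """One SinAlpha-style step: move alpha along the sine curve.
    Negative outcomes pull alpha back faster than positive ones
    advance it (asymmetry). Parameters are illustrative."""
    step = lam_pos * omega if outcome else -lam_neg * omega
    # alpha stays on the ascending half-wave [3*pi/2, 5*pi/2]
    return min(max(alpha + step, 1.5 * math.pi), 2.5 * math.pi)

def trust(alpha):
    # map the curve position to a trust score in [0, 1]
    return 0.5 * math.sin(alpha) + 0.5

alpha = 1.5 * math.pi  # a newcomer starts at minimum trust (maturity)
for outcome in [True, True, True, False, True]:
    alpha = sinalpha_update(alpha, outcome)
print(round(trust(alpha), 3))
```

Because trust climbs the slow part of the sine wave first, a few positive outcomes raise it only a little, and one failure undoes more than one success.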
Situation-awareness is covered by contextual fitness, which is very similar to CBR: clustering, stereotype extraction and similarity analysis. Interesting for me: multidimensional context representation for situational trust (Rehak, Gregor & Pechoucek, 2006). To do: deal with newcomers (first encounter).
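A minimal sketch of the stereotype-extraction idea: group past contracts by context and flag contexts with a high failure rate. The contexts and threshold here are hypothetical, not taken from the paper.

```python
from collections import defaultdict

def extract_stereotypes(history, threshold=0.5):
    """history: (context, failed?) pairs. Return the set of contexts
    whose observed failure rate exceeds the threshold: these act as
    'failure stereotypes' when judging a new contract."""
    stats = defaultdict(lambda: [0, 0])  # context -> [failures, total]
    for context, failed in history:
        stats[context][0] += failed
        stats[context][1] += 1
    return {c for c, (f, t) in stats.items() if f / t > threshold}

# Hypothetical contexts as attribute tuples.
history = [(("fabric", "urgent"), 1), (("fabric", "urgent"), 1),
           (("fabric", "relaxed"), 0), (("cotton", "urgent"), 0)]
print(extract_stereotypes(history))
```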
Preliminary Results on Reputation Systems: Balancing Quantity and Quality
Agents in a network with a ranking that models reputation: a global measure (intractable for very large open systems such as the web). Two axioms, transitivity and strict transitivity (good as a first approach, but they cannot be generalized). The values are refined across different interactions. What happens when there are loops? A lot of things remain to be explained: the ‘random’ initial ranking that he promises does not affect the final result (I can’t believe this), no weights/importance, and the use of group size (strict transitivity) is questionable…
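The paper's refinement algorithm is not reproduced here, but a PageRank-style power iteration shows why a random initial ranking can indeed be irrelevant: with damping, each refinement step is a contraction, so any starting vector converges to the same fixed point, loops included. This is a sketch under that assumption, not the author's algorithm.

```python
import random

def refine(scores, links, damping=0.85, iters=100):
    """Repeatedly redistribute reputation along outgoing links.
    links: node -> list of nodes it endorses."""
    n = len(scores)
    for _ in range(iters):
        new = [(1 - damping) / n] * n
        for src, outs in links.items():
            share = scores[src] / len(outs)
            for dst in outs:
                new[dst] += damping * share
        scores = new
    return scores

# A small network containing a loop (0 -> 1 -> 2 -> 0) plus an extra edge.
links = {0: [1], 1: [2], 2: [0, 1]}

a = refine([random.random() for _ in range(3)], links)
b = refine([random.random() for _ in range(3)], links)
# Different random starts reach (numerically) the same fixed point.
print(all(abs(x - y) < 1e-6 for x, y in zip(a, b)))
```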
An Interaction-oriented Model of Trust Alignment
Well, I’ve seen this a lot of times: how to align trust concepts. Personally, I would prefer to have a standard for this part, because then we could build on it. For example, ACL: why can’t we align ACL ontologies so that agents can speak in any language? In this case, agents share the same syntax (about trust) and the semantics has to be aligned (what does it mean to have a confidence of 0.8?). Implemented using inductive logic programming (scalability?).
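The paper uses inductive logic programming; as a much simpler stand-in, a least-squares fit over shared interactions conveys what "aligning the semantics" of trust values means: learn a map from the other agent's scale to mine. The data points here are invented.

```python
def align(pairs):
    """pairs: (their_value, my_value) for interactions we both rated.
    Returns a linear map from the other agent's trust scale to mine."""
    n = len(pairs)
    sx = sum(x for x, _ in pairs)
    sy = sum(y for _, y in pairs)
    sxx = sum(x * x for x, _ in pairs)
    sxy = sum(x * y for x, y in pairs)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return lambda x: slope * x + intercept

# Hypothetical shared experiences: the other agent is more generous,
# so their 0.8 corresponds to roughly 0.6 on my scale.
shared = [(0.2, 0.1), (0.5, 0.35), (0.8, 0.6), (1.0, 0.75)]
f = align(shared)
print(round(f(0.8), 2))
```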
Supplier performance in a Digital Ecosystem
Deals with partnership selection in cases where negotiation/argumentation is involved.
(Aside: OMG, little balls wandering around the screen; DocThreeC will love this template. Should I tell him? He’ll torture us with it afterwards.)
To define the trust model she begins with the ontology, similarity and expectations (see Carles Sierra’s invited talk this morning). Using past experiences, the probability distribution of the expected observation is modified. So, in the end, you take into account not only the information about the exact object, but about similar ones too.
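A sketch of that similarity-weighted expectation: every past experience contributes to the expected-outcome distribution in proportion to how similar its object is to the target. The similarity function is a made-up stand-in.

```python
from collections import Counter

def expected_distribution(target, experiences, similarity):
    """Estimate P(outcome) for `target` by weighting each past
    (object, outcome) pair by similarity(object, target)."""
    weights = Counter()
    for obj, outcome in experiences:
        weights[outcome] += similarity(obj, target)
    total = sum(weights.values())
    return {o: w / total for o, w in weights.items()}

# Toy similarity over a single numeric attribute (hypothetical).
sim = lambda a, b: 1.0 / (1.0 + abs(a - b))

history = [(1, "good"), (2, "good"), (9, "bad")]
dist = expected_distribution(5, history, sim)
print(dist["good"] > dist["bad"])
```

Even though the target object (5) was never seen, the two nearby "good" experiences outweigh the distant "bad" one.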
(Another aside: what skinny legs the girl with the slides has! She looks like Ana Obregón :-D)
She goes on to explain how the trust value is calculated using all these elements, and a bit (quickly) about similarity.
On Norm Internalization. A Position Paper
Daniel Villatoro (as guest star) on behalf of Rosaria Conte
How agents internalize existing norms and incorporate them into their behavior. He begins by talking about goal internalization. At the beginning, your behaviour is directed by norms, but once you have assimilated them you behave that way not to avoid a punishment, but because you want to behave that way (for example, stopping when the traffic light is red). Too fast to listen and write at the same time (you know, I’m a man), but it is a very interesting thing and I think it is related to adaptive organizations. I have to read the paper with this idea in mind. An interesting point: urgency is a factor that affects the speed at which intentions are internalized.
It is integrated with EMIL-A (BDI): N-Bel -> N-Goal -> N-Intentions, which are internalized as conforming behaviour. A comment: this work is about people, not artificial agents.