[STA] Information-based agency

John Debenham is going to explain this approach. This is the main reason I’m in Barcelona today. It’s going to be hard, so pay attention please.

Agents negotiate in a very uncertain environment: uncertainty about world states, about the integrity of messages, about the enactment of commitments, about the completeness of contracts, and about the validity of information. So we know we need a more predictable world for agents.

If uncertainty is the problem, then information is the solution. Information-based agency is a framework of techniques for managing and exchanging information. The basics:

  • an agent \(\alpha\) interacts with negotiation agents and with information agents
  • a common ontology \(O\) is assumed, organised as an is-a hierarchy
  • \(Sim(\cdot,\cdot)\) measures similarity between concepts
  • the world model contains random variables \(\{X_1, \ldots, X_n\}\). For each one, ‘zero information’ is represented by a decay limit distribution \(D_i\): when no new information arrives, \(\lim_{t \to \infty} X_i = D_i\). That is, information becomes useless after a period of time (see the sketch below).
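
A minimal sketch of how such a decay-limit distribution could behave, assuming a simple geometric drift toward \(D_i\) at each step (the rate `decay` and the function names are my own illustration, not Debenham’s formulation):

```python
import numpy as np

def decay_step(p, decay_limit, decay=0.1):
    """One time step with no new information: drift the current
    distribution p of a random variable X_i toward its decay-limit
    distribution D_i, so that lim_{t->inf} X_i = D_i."""
    return (1 - decay) * p + decay * decay_limit

# Example: a belief over a 3-valued world state, decaying to uniform
# ('zero information') when no new observations arrive.
p = np.array([0.7, 0.2, 0.1])   # current, informed distribution
D = np.full(3, 1 / 3)           # decay limit D_i
for t in range(50):
    p = decay_step(p, D)
print(p)                        # ~[0.33, 0.33, 0.33]: information has faded
```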

Signed contracts, arguments and information are all commitments \(\varphi\). All similar statements over related concepts (for example, statements about a topic from the same user, say weather forecasting) are expected to have the same decay function.

Information can help us to establish trust mechanisms and to model intimacy (the keeping of private information), which organises information by type (motivations, goals…) and over an abstraction of the ontology.

The big picture of information-based agents is something like this (look for it)


[STA] Regulations and conventions

Pablo Noriega’s talk.

The Internet is a phenomenon that has changed how we think about intelligence, dividing it into three different skills: technological, pragmatic and social.

[Copy the notes from the paper here]

Another example, about the behaviour of the public (spectators) at a «pelota vasca» (Basque pelota) game: there are conventions about the meaning of raising a hand or catching the ball (bidding and accepting a bid). You need to know them if you don’t want to get into trouble!

Norm management is

  • conditional: norms are translated into logic rules with an LHS and an RHS
  • time-dependent: deadlines are added to the rules (with a special predicate) to establish the validity of the rules; see the sketch after this list
  • action-dependent:
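
A rough sketch of how such a conditional, time-dependent rule could be represented, assuming a plain LHS/RHS structure with a deadline (the field names, the `applicable` check and the auction example are mine, not Noriega’s formalism):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Norm:
    """A conditional norm: when the LHS holds over the state, the RHS
    applies, but only while the deadline says the rule is still valid."""
    lhs: Callable[[dict], bool]   # condition (left-hand side)
    rhs: str                      # obligation / consequence (right-hand side)
    deadline: float               # validity limit of the rule

def applicable(norm: Norm, state: dict, now: float) -> bool:
    return now <= norm.deadline and norm.lhs(state)

# Example: raising a hand counts as a bid, and a bid obliges you to pay
# before time 100 (cf. the pelota vasca conventions above).
bid = Norm(lhs=lambda s: s.get("raised_hand", False),
           rhs="obliged(pay)", deadline=100.0)
print(applicable(bid, {"raised_hand": True}, now=42.0))   # True
print(applicable(bid, {"raised_hand": True}, now=200.0))  # False: expired
```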

What’s the difference between a convention and a commitment? A convention is the more general term: it includes, among other things, commitments, but also norms and rules.


[STA] Logics, emotions and agreements

John-Jules Meyer, from Utrecht University.

They’ve been working with BDI agents for a long time; cognitive agents are well known. Now he’s revisiting BDI agents and some platforms before presenting the languages they’re using to program agents: 3APL and 2APL. Both are a mixture of imperative and logic programming.

3APL uses rules of the form \(\kappa \leftarrow \beta \mid \pi\), i.e. rules with a guard (so they’re actually like ECA rules), and they’re used to build plans. Control flows in the typical sense-reason-act cycle. How to make it more practical? With a new language: 2APL (A Practical Agent Programming Language). See more at http://www.cs.uu.nl/2apl. One step further: BDI+ agents, which go beyond BDI agents by adding emotions (as an influence on deliberation) and normative systems with enforcement of rules.
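
A toy sketch of guarded rules \(\kappa \leftarrow \beta \mid \pi\) driving a sense-reason-act cycle, in plain Python rather than actual 3APL/2APL syntax (the vacuum example and all names are my simplification):

```python
# Each rule is (goal kappa, guard beta over the beliefs, plan pi).
rules = [
    ("clean(room)", lambda b: b["dirty"], ["fetch_vacuum", "vacuum"]),
    ("clean(room)", lambda b: not b["dirty"], []),   # nothing left to do
]

beliefs = {"dirty": True}

def sense(beliefs):
    pass  # here the agent would update its beliefs from percepts

def reason(goal, beliefs):
    # Fire the first rule for the goal whose guard holds (ECA-style).
    for kappa, beta, pi in rules:
        if kappa == goal and beta(beliefs):
            return pi
    return []

def act(plan, beliefs):
    for step in plan:
        print("executing", step)
        if step == "vacuum":
            beliefs["dirty"] = False   # effect of the action in this toy world

# The typical sense-reason-act control cycle.
for _ in range(2):
    sense(beliefs)
    act(reason("clean(room)", beliefs), beliefs)
```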

Emotional agents

They combine emotions with rationality, which provides heuristics in decision-making to reduce non-determinism. So they achieve a more natural, human-like behaviour. Four basic types of emotions (sketched in code after the list):

  • happiness: pursuit of goals. Things go well -> nothing needs to be done
  • sadness: when, in the pursuit of a plan, things go wrong -> replanning
  • anger: being frustrated (more severe than sadness) -> try harder to achieve the plan
  • fear: a maintenance goal is threatened -> try to restore it
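
The mapping above can be written as a simple dispatch; the emotion labels come from the talk, while the handler names and the plan representation are my own (hypothetical) illustration:

```python
def cope(emotion, plan):
    """Deliberation heuristic: each basic emotion suggests what to do
    with the current plan (a simplified reading of the talk's mapping)."""
    if emotion == "happiness":    # things go well
        return plan               # nothing needs to be done
    if emotion == "sadness":      # things went wrong in pursuing the plan
        return replan(plan)
    if emotion == "anger":        # frustrated: try harder
        return plan + ["retry_with_more_effort"]
    if emotion == "fear":         # a maintenance goal is threatened
        return ["restore_maintenance_goal"] + plan
    return plan

def replan(plan):
    return ["find_alternative_plan"]   # placeholder for a real planner

print(cope("sadness", ["step1", "step2"]))   # ['find_alternative_plan']
```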

So emotions are basically related to planning and goal-achievement tasks. The full OCC model comprises elicitation conditions for 22 emotions, as well as quantitative and qualitative aspects. For instance, hope: hope is being pleased about the prospect of a desirable goal. Fear and hope, curiously, are very closely related:
\(\mathrm{hope}(\pi,\kappa) \rightarrow [\mathrm{do}(\pi)](\mathrm{satisfaction}(\pi,\kappa) \vee \mathrm{disappointment}(\pi,\kappa))\)
fear…

But emotions are neither constant nor all alike: they have different intensities, and we have to deal with that (and I ask myself: is this related to the decay functions of information-based agents?)
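
To make the intensity point concrete, a tiny sketch of an emotion intensity that fades over time, analogous to the decay-limit distributions above (the exponential form and the half-life parameter are purely my assumption):

```python
import math

def intensity(i0, t, half_life=10.0):
    """Emotion intensity at time t, decaying exponentially from the
    elicited intensity i0 toward 0 when nothing re-triggers it."""
    return i0 * math.exp(-math.log(2) * t / half_life)

print(intensity(1.0, 0))    # 1.0  : just elicited
print(intensity(1.0, 10))   # 0.5  : one half-life later
print(intensity(1.0, 50))   # ~0.03: nearly faded
```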

An example of an application of this proposal: the Boon Companion project, a (physical) robot for elderly people using the Philips iCat (expressive faces), and the GATE subproject, for virtual characters in games. In this last case they’re using Theory of Mind to try to guess the mental states of other agents just by looking at their behaviour.

My mind is flying again. If an institution has norms that are too severe, maybe the agents inside it are sad and therefore not efficient. Sanctions could be written so as to make agents sad or angry, so that the response of sanctioned agents is different. Is there any measure of the happiness or sadness of a society? Are these values related to the entropy of information-based agents? And one more thing: if agents can control emotions, they may prefer to influence the emotions of others to achieve a better deal (for instance, calming an angry agent could reduce the price of a transaction)… a lot of questions without answers.

Other (very interesting) research lines: agents that decide their own degree of autonomy (for instance, in crisis management), or self-explaining agents (agents that explain why they’ve chosen something).
