[STA] Logics, emotions and agreements

John Meyer, from Univ. Utrecht.

They’ve been working with BDI agents for a long time; cognitive agents are well known. Now he revisits BDI agents and some platforms before presenting the languages they’re using to program the agents: 3APL and 2APL, both a mixture of imperative and logic programming languages.

3APL uses rules of the form \( \kappa \leftarrow \beta \mid \pi \), i.e. rules with a guard (so they’re essentially ECA-like rules), and they’re used to build plans. Control flows in the typical sense-reason-act cycle. How to make it more practical? With a new language: 2APL (A Practical Agent Programming Language). See more at http://www.cs.uu.nl/2apl. One step further: BDI+ agents, which go beyond BDI agents by adding emotions (as an influence on deliberation) and normative systems with enforcement of norms.
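
Roughly, a rule \( \kappa \leftarrow \beta \mid \pi \) reads: if goal \( \kappa \) is pending and the belief guard \( \beta \) holds, adopt plan \( \pi \). A minimal Python sketch of that idea and of the sense-reason-act loop (my own illustration, not 2APL/3APL syntax; the names and the tea example are invented):

```python
# Sketch of guarded plan rules and a sense-reason-act step.
# Not 2APL/3APL code; all identifiers here are my own invention.

from dataclasses import dataclass
from typing import Callable, List

Beliefs = set  # the belief base as a set of ground facts

@dataclass
class PlanRule:
    goal: str                              # kappa: the goal this rule serves
    guard: Callable[[Beliefs], bool]       # beta: belief condition that must hold
    plan: List[Callable[[Beliefs], None]]  # pi: sequence of actions (here: belief updates)

class Agent:
    def __init__(self, beliefs: Beliefs, goals: List[str], rules: List[PlanRule]):
        self.beliefs, self.goals, self.rules = beliefs, goals, rules

    def deliberate(self) -> None:
        """One sense-reason-act iteration: pick a rule whose goal is pending
        and whose guard holds, execute its plan, drop the goal if achieved."""
        for rule in self.rules:
            if rule.goal in self.goals and rule.guard(self.beliefs):
                for action in rule.plan:
                    action(self.beliefs)
                if rule.goal in self.beliefs:
                    self.goals.remove(rule.goal)
                return

# Tiny usage example: an agent that makes tea when it believes it has water.
rules = [PlanRule(
    goal="tea_ready",
    guard=lambda b: "have_water" in b,
    plan=[lambda b: b.add("water_boiled"), lambda b: b.add("tea_ready")],
)]
agent = Agent(beliefs={"have_water"}, goals=["tea_ready"], rules=rules)
agent.deliberate()
print(agent.beliefs, agent.goals)  # {'have_water', 'water_boiled', 'tea_ready'} []
```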

Emotional agents

They combine emotions with rationality: emotions provide heuristics in decision-making that reduce non-determinism, so agents achieve a more natural, human-like behaviour. Four basic types of emotions (a small sketch of how they might plug into deliberation follows the list):

  • happiness: goals are being pursued and things go well -> nothing needs to change
  • sadness: things go wrong while pursuing a plan -> replan
  • anger: being frustrated (more severe than sadness) -> try harder to achieve the plan
  • fear: a maintenance goal is threatened -> try to restore it
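
As a rough sketch of my own (not the speakers’ code), the four emotions could act as a switch on what the deliberation cycle does next; the helper functions below are hypothetical:

```python
# Illustrative sketch of emotions as deliberation heuristics (my reading of
# the talk, not the speakers' implementation).

def deliberation_step(emotion: str, plan: list, effort: int) -> tuple:
    """Return (plan, effort) adjusted by the agent's current emotion."""
    if emotion == "happiness":   # things go well -> keep going as is
        return plan, effort
    if emotion == "sadness":     # plan failed -> replan
        return replan(plan), effort
    if emotion == "anger":       # frustrated -> keep the plan, try harder
        return plan, effort + 1
    if emotion == "fear":        # maintenance goal threatened -> restore it first
        return restore_maintenance_goal() + plan, effort
    return plan, effort

# Hypothetical helpers, only to make the sketch self-contained.
def replan(old_plan: list) -> list:
    return ["find_alternative_plan"]

def restore_maintenance_goal() -> list:
    return ["restore_safety_condition"]

print(deliberation_step("anger", ["negotiate"], effort=1))  # (['negotiate'], 2)
```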

So emotions basically relate to planning and goal-achievement tasks. The full OCC model comprises elicitation conditions for 22 emotions, as well as quantitative and qualitative aspects. For instance, hope: hope is being pleased about the prospect of a desirable goal. Curiously, fear and hope are very closely related; after doing \(\pi\), hope resolves into either satisfaction or disappointment:
\( hope(\pi,\kappa) \rightarrow [do(\pi)](satisfaction(\pi,\kappa) \lor disappointment(\pi,\kappa)) \)
fear….

But emotions are neither constant nor equal: they have different intensities, and we have to deal with that (and I ask myself: is this related to the decay functions in information-based agents?)
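
For what it’s worth, the simplest way I can picture handling intensity is an exponential decay from the eliciting event (purely my guess; the rate and the formula are invented, not from the talk):

```python
import math

def emotion_intensity(initial: float, decay_rate: float, elapsed: float) -> float:
    """Hypothetical intensity model: exponential decay since the eliciting event."""
    return initial * math.exp(-decay_rate * elapsed)

print(emotion_intensity(1.0, decay_rate=0.5, elapsed=2.0))  # ~0.37
```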

An example application of this proposal: the Boon Companion Project, a (physical) robot for elderly people based on the Philips iCat (expressive faces), and the GATE subproject for virtual characters in games. In the latter, they’re using Theory of Mind to try to guess the mental state of other agents just by observing their behaviour.

My mind is flying again. If an institution has severe norms, maybe agents are sad inside it and then they’re not efficient. Or the norms are too severe and agents tend to become sad inside the organisation. Sanctions could be written so as to make agents sad or angry, so the response of sanctioned agents would be different. Is there any measure of the happiness or sadness of a society? Are these values related to the entropy of information-based agents? And one more thing: if agents can control emotions, they may prefer to influence the emotions of others to achieve a better deal (f.i. calming an angry agent could reduce the price of a transaction)….. a lot of questions without answers.

Other research (and very interesting) lines: agents that decide their own degree of autonomy (f.i. crisis management), or self-explaining agents (agents that explain why they’ve chosen something).
