Anticipation (artificial intelligence)


In artificial intelligence, anticipation is the concept of an agent making decisions based on predictions, expectations, or beliefs about the future. Anticipation is widely considered a vital component of complex natural cognitive systems. As a branch of AI, anticipatory systems is a specialization that still echoes the debates of the 1980s about whether AI requires an internal model.

Reaction, proaction and anticipation

Elementary forms of artificial intelligence can be constructed using a policy based on simple if-then rules. An example of such a system would be an agent following the rules:

If it rains outside, 
  take the umbrella. 
Otherwise, 
  leave the umbrella at home.

A system such as the one defined above might be viewed as inherently reactive, because its decision making is based on the current state of the environment with no explicit regard to the future. An agent employing anticipation would instead try to predict the future state of the environment (the weather, in this case) and use that prediction in its decision making. For example:

If the sky is cloudy and the air pressure is low, 
  it will probably rain soon, 
     so take the umbrella with you. 
Otherwise, 
  leave the umbrella at home.

These rules appear more proactive, because they explicitly take into account possible future events. Notice, though, that in terms of representation and reasoning the two rule sets are identical: both behave in response to existing conditions, as the sketch following the list below makes concrete. Note too that both systems assume the agent is proactively

  • leaving the house, and
  • trying to stay dry.
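The structural identity of the two rule sets is easy to see in code. The following is a minimal sketch in Python; the predicates and action strings are illustrative assumptions, not part of any standard API:

def reactive_policy(rain: bool) -> str:
    # Decision based solely on the current state of the environment.
    return "take umbrella" if rain else "leave umbrella at home"

def proactive_policy(cloudy: bool, low_pressure: bool) -> str:
    # The forecast is hard-wired into the condition by the designer,
    # not computed by an internal model at run time.
    will_probably_rain = cloudy and low_pressure
    return "take umbrella" if will_probably_rain else "leave umbrella at home"

Both functions are plain condition-action rules; the second merely has a designer-supplied forecast compiled into its condition.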

In practice, systems incorporating reactive planning tend to be autonomous systems proactively pursuing at least one, and often many, goals. What defines anticipation in an AI model is the explicit existence of an inner model of the environment within the anticipatory system (sometimes including a model of the system itself). For example, if the phrase it will probably rain were computed online in real time, the system would be seen as anticipatory.
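By contrast with the hard-wired rules above, the following is a minimal sketch of an agent that would count as anticipatory under this criterion, assuming a toy internal model (a hypothetical conditional-probability table) whose prediction is computed online at decision time:

RAIN_PROBABILITY = {            # internal model: P(rain | sky, pressure)
    ("cloudy", "low"):  0.8,
    ("cloudy", "high"): 0.3,
    ("clear",  "low"):  0.2,
    ("clear",  "high"): 0.05,
}

def anticipatory_policy(sky: str, pressure: str) -> str:
    p_rain = RAIN_PROBABILITY[(sky, pressure)]  # prediction computed online
    # The agent changes state now in accord with a prediction about later.
    return "take umbrella" if p_rain > 0.5 else "leave umbrella at home"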

In 1985, Robert Rosen defined an anticipatory system as follows:[1]

A system containing a predictive model of itself and/or its
environment, which allows it to change state at an instant in accord
with the model's predictions pertaining to a later instant.

To some extent, this applies to any system incorporating machine learning. At issue is how much of a system's behaviour should, or indeed can, be determined by reasoning over dedicated representations, how much by on-line planning, and how much must be provided by the system's designers.

Anticipation in evolution and cognition

The anticipation of future states is also a major evolutionary and cognitive advance (Sjolander 1995). Anticipatory agents in Rosen's sense come closer to the human capability of making decisions at a time T while taking into account the effects of their own actions at future times T+k. Machine learning methods began to integrate these capabilities in an implicit form, as in reinforcement learning systems, which learn to anticipate the future rewards and punishments caused by current actions (Sutton & Barto, 1998; Balkenius, 1995[2]). Moreover, anticipation has enhanced the performance of machine learning techniques in complex environments where agents have to guide their attention in order to collect the information they need to act (Balkenius & Hulth, 1999).
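A minimal sketch of this implicit anticipation, using a tabular TD(0) value update on a toy chain environment; the environment and all names here are illustrative assumptions, not code from the cited works:

n_states = 5                # hypothetical chain of states 0..4
V = [0.0] * n_states        # V[s]: anticipated discounted future reward
alpha, gamma = 0.1, 0.9     # learning rate and discount factor

def step(state):
    """Toy environment: always move right; reward 1.0 on reaching the end."""
    next_state = min(state + 1, n_states - 1)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward

for episode in range(100):
    s = 0
    while s != n_states - 1:
        s_next, r = step(s)
        # TD(0): move V(s) toward the reward plus the discounted value of
        # the successor state, i.e. toward an anticipation of the future.
        V[s] += alpha * (r + gamma * V[s_next] - V[s])
        s = s_next

print(V)   # early states acquire value anticipating the eventual reward

After training, states far from the goal carry nonzero value: the agent's current estimates anticipate a reward that will only arrive later.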

From anticipation to curiosity

Juergen Schmidhuber modified the error backpropagation algorithm to change neural network weights so as to decrease the mismatch between anticipated states and the states actually experienced later (Schmidhuber, Adaptive curiosity and adaptive confidence, 1991). He introduced curiosity as a measure of the mismatch between an agent's expectations and the reality it subsequently experiences. Agents able to monitor and control their own curiosity explore situations where they expect novel experiences, and are generally better able to deal with complex environments than agents that cannot.
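A minimal sketch of this idea, not Schmidhuber's original algorithm: a simple linear forward model is trained to predict the next observation, and its prediction error doubles as a curiosity signal. All names and the toy dynamics are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 4))   # weights of a linear forward model
lr = 0.01                                # learning rate

def curiosity_step(obs, next_obs):
    """Train the forward model; return the curiosity signal."""
    global W
    pred = W @ obs                       # anticipated next observation
    error = next_obs - pred              # mismatch with experienced reality
    W += lr * np.outer(error, obs)       # weight update reducing the mismatch
    return float(error @ error)          # curiosity = squared prediction error

obs = rng.normal(size=4)
for t in range(1000):
    next_obs = np.tanh(obs) + rng.normal(scale=0.1, size=4)  # toy dynamics
    r = curiosity_step(obs, next_obs)    # high r marks surprising transitions
    obs = next_obs

An agent that seeks out states where this signal is high is driven toward transitions its model predicts poorly, that is, toward novel experience.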

References

  1. Rosen, R. (1985). Anticipatory Systems. Pergamon Press.
  2. Balkenius, C. (1995). Natural Intelligence in Artificial Creatures. Lund University Cognitive Studies, 37. ISBN 91-628-1599-7.
