
Stanford Research Institute Problem Solver


The Stanford Research Institute Problem Solver, known by its acronym STRIPS, is an automated planner developed by Richard Fikes and Nils Nilsson in 1971 at SRI International.[1] The same name was later used to refer to the formal language of the inputs to this planner. This language is the basis of most of the languages for expressing automated planning problem instances in use today; such languages are commonly known as action languages. This article describes only the language, not the planner.

Definition

A STRIPS instance is composed of:

  • An initial state;
  • The specification of the goal states – situations that the planner is trying to reach;
  • A set of actions. For each action, the following are included:
    • preconditions (what must be established before the action is performed);
    • postconditions (what is established after the action is performed).

Mathematically, a STRIPS instance is a quadruple $\langle P, O, I, G \rangle$, in which each component has the following meaning:

  1. $P$ is a set of conditions (i.e., propositional variables);
  2. $O$ is a set of operators (i.e., actions); each operator is itself a quadruple $\langle \alpha, \beta, \gamma, \delta \rangle$, each element being a set of conditions. These four sets specify, in order, which conditions must be true for the action to be executable, which ones must be false, which ones are made true by the action and which ones are made false;
  3. $I$ is the initial state, given as the set of conditions that are initially true (all others are assumed false);
  4. $G$ is the specification of the goal state; this is given as a pair $\langle N, M \rangle$, which specify which conditions are true and false, respectively, in order for a state to be considered a goal state.

A plan for such a planning instance is a sequence of operators that can be executed from the initial state and that leads to a goal state.
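
To make the definition concrete, the following is a minimal Python sketch (not part of the original formalism; the condition names and the "take_key" and "move_a_b" operators are invented for illustration) of how such a quadruple can be written down:

    # A hypothetical encoding of a STRIPS quadruple <P, O, I, G> in plain Python.
    # All condition and operator names below are invented for illustration.

    # P: the set of conditions (propositional variables)
    P = {"at_a", "at_b", "have_key"}

    # O: operators as quadruples (alpha, beta, gamma, delta) of condition sets:
    #   alpha - must be true before, beta - must be false before,
    #   gamma - made true by the action, delta - made false by the action
    O = {
        "take_key": ({"at_a"}, {"have_key"}, {"have_key"}, set()),
        "move_a_b": ({"at_a"}, set(), {"at_b"}, {"at_a"}),
    }

    # I: the conditions that are initially true (all others are false)
    I = {"at_a"}

    # G = <N, M>: conditions required to be true / false in a goal state
    N, M = {"have_key", "at_b"}, set()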

Formally, a state is a set of conditions: a state is represented by the set of conditions that are true in it. Transitions between states are modeled by a transition function, which is a function mapping states into new states that result from the execution of actions. Since states are represented by sets of conditions, the transition function relative to the STRIPS instance $\langle P, O, I, G \rangle$ is a function

$$\operatorname{succ} \colon 2^{P} \times O \to 2^{P}$$

where $2^{P}$ is the set of all subsets of $P$, and is therefore the set of all possible states.

The transition function $\operatorname{succ}$ for a state $C \subseteq P$ can be defined as follows, using the simplifying assumption that actions can always be executed but have no effect if their preconditions are not met:

$$\operatorname{succ}(C, \langle \alpha, \beta, \gamma, \delta \rangle) =
\begin{cases}
(C \setminus \delta) \cup \gamma & \text{if } \alpha \subseteq C \text{ and } \beta \cap C = \emptyset \\
C & \text{otherwise}
\end{cases}$$

The function $\operatorname{succ}$ can be extended to sequences of actions by the following recursive equations:

$$\operatorname{succ}(C, [\,]) = C$$
$$\operatorname{succ}(C, [a_1, a_2, \ldots, a_n]) = \operatorname{succ}(\operatorname{succ}(C, a_1), [a_2, \ldots, a_n])$$

A plan for a STRIPS instance is a sequence of actions such that the state that results from executing the actions in order from the initial state satisfies the goal conditions. Formally, $[a_1, a_2, \ldots, a_n]$ is a plan for $G = \langle N, M \rangle$ if the final state $F = \operatorname{succ}(I, [a_1, a_2, \ldots, a_n])$ satisfies the following two conditions:

$$N \subseteq F$$
$$M \cap F = \emptyset$$
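
These definitions translate directly into code. The sketch below (an illustration using the same quadruple encoding as the earlier sketch, not a reference implementation) applies $\operatorname{succ}$ to one operator, extends it to sequences, and checks the two goal conditions:

    def succ(state, op):
        """Apply one operator (alpha, beta, gamma, delta); a no-op if its preconditions fail."""
        alpha, beta, gamma, delta = op
        if alpha <= state and not (beta & state):
            return (state - delta) | gamma
        return state

    def succ_seq(state, ops):
        """Extend succ to a sequence of operators, as in the recursive equations above."""
        for op in ops:
            state = succ(state, op)
        return state

    def is_plan(I, plan, N, M):
        """A sequence is a plan iff the final state F contains all of N and none of M."""
        F = succ_seq(frozenset(I), plan)
        return N <= F and not (M & F)

    # e.g. a one-step plan that takes a key (toy operator, invented for illustration)
    take_key = ({"at_a"}, {"have_key"}, {"have_key"}, set())
    print(is_plan({"at_a"}, [take_key], N={"have_key"}, M=set()))   # True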

Extensions

The above language is actually the propositional version of STRIPS; in practice, conditions are often about objects: for example, the position of a robot can be modeled by a predicate At, and At(room1) means that the robot is in Room1. In this case, actions can have free variables, which are implicitly existentially quantified. In other words, an action represents all possible propositional actions that can be obtained by replacing each free variable with a value.
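
For illustration only (the room names and the Move schema below are hypothetical, not from the article), grounding a single lifted schema over three objects looks like this:

    # Expanding the lifted schema Move(X, Y) over three hypothetical rooms.
    # Each ground action is a quadruple (alpha, beta, gamma, delta) of condition sets.
    from itertools import permutations

    rooms = ["room1", "room2", "room3"]

    ground_moves = {
        f"Move({x},{y})": ({f"At({x})"}, set(), {f"At({y})"}, {f"At({x})"})
        for x, y in permutations(rooms, 2)
    }
    # three objects in two free variables give 3 * 2 = 6 propositional actions
    print(sorted(ground_moves))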

The initial state is considered fully known in the language described above: conditions that are not in $I$ are all assumed false. This is often a limiting assumption, as there are natural examples of planning problems in which the initial state is not fully known. Extensions of STRIPS have been developed to deal with partially known initial states.

A sample STRIPS problem

A monkey is at location A in a lab. There is a box in location C. The monkey wants the bananas that are hanging from the ceiling in location B, but it needs to move the box and climb onto it in order to reach them.

Initial state: At(A), Level(low), BoxAt(C), BananasAt(B)
Goal state:    Have(bananas)
Actions:
               // move from X to Y
               Move(X, Y)
               Preconditions:  At(X), Level(low)
               Postconditions: not At(X), At(Y)
               
               // climb up on the box
               ClimbUp(Location)
               Preconditions:  At(Location), BoxAt(Location), Level(low)
               Postconditions: Level(high), not Level(low)
               
               // climb down from the box
               ClimbDown(Location)
               Preconditions:  At(Location), BoxAt(Location), Level(high)
               Postconditions: Level(low), not Level(high)
               
               // move monkey and box from X to Y
               MoveBox(X, Y)
               Preconditions:  At(X), BoxAt(X), Level(low)
               Postconditions: BoxAt(Y), not BoxAt(X), At(Y), not At(X)
               
               // take the bananas
               TakeBananas(Location)
               Preconditions:  At(Location), BananasAt(Location), Level(high)
               Postconditions: Have(bananas)
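
One way to see the instance in action is to ground the schemas over the three locations and search for a plan. The sketch below is a self-contained, illustrative Python encoding (helper names such as ground_actions and find_plan are invented) that uses uninformed breadth-first search rather than the original STRIPS algorithm:

    # Illustrative encoding of the monkey-and-bananas instance; finds a plan by
    # breadth-first search over ground actions (not the original STRIPS algorithm).
    from collections import deque
    from itertools import permutations

    LOCS = ["A", "B", "C"]

    def ground_actions():
        """Expand the five schemas above into ground (name, pre, add, delete) tuples."""
        acts = []
        for x, y in permutations(LOCS, 2):
            acts.append((f"Move({x},{y})",
                         {f"At({x})", "Level(low)"},
                         {f"At({y})"}, {f"At({x})"}))
            acts.append((f"MoveBox({x},{y})",
                         {f"At({x})", f"BoxAt({x})", "Level(low)"},
                         {f"At({y})", f"BoxAt({y})"}, {f"At({x})", f"BoxAt({x})"}))
        for loc in LOCS:
            acts.append((f"ClimbUp({loc})",
                         {f"At({loc})", f"BoxAt({loc})", "Level(low)"},
                         {"Level(high)"}, {"Level(low)"}))
            acts.append((f"ClimbDown({loc})",
                         {f"At({loc})", f"BoxAt({loc})", "Level(high)"},
                         {"Level(low)"}, {"Level(high)"}))
            acts.append((f"TakeBananas({loc})",
                         {f"At({loc})", f"BananasAt({loc})", "Level(high)"},
                         {"Have(bananas)"}, set()))
        return acts

    INITIAL = frozenset({"At(A)", "Level(low)", "BoxAt(C)", "BananasAt(B)"})
    GOAL = {"Have(bananas)"}
    ACTIONS = ground_actions()

    def find_plan(initial, goal):
        """Breadth-first search over states; returns a shortest action sequence."""
        frontier = deque([(initial, [])])
        seen = {initial}
        while frontier:
            state, path = frontier.popleft()
            if goal <= state:
                return path
            for name, pre, add, delete in ACTIONS:
                if pre <= state:
                    nxt = frozenset((state - delete) | add)
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append((nxt, path + [name]))
        return None

    print(find_plan(INITIAL, GOAL))
    # prints a shortest plan: ['Move(A,C)', 'MoveBox(C,B)', 'ClimbUp(B)', 'TakeBananas(B)']

The four-step plan it prints matches the intended solution: move to the box, push the box under the bananas, climb up, and take them.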

Complexity

Deciding whether any plan exists for a propositional STRIPS instance is PSPACE-complete. Various restrictions can be enforced on the instances to make the decision problem solvable in polynomial time, or at least to make it NP-complete.[2]

Macro operator

In the monkey-and-bananas problem, the monkey has to execute a sequence of actions to reach the bananas hanging from the ceiling, and each individual action changes the state only slightly. To simplify planning, it makes sense to invent an abstract action that is not available in the normal rule description.[3] Such a super-action is composed of low-level actions and can achieve a high-level goal. The advantage is that the search effort is reduced, so longer tasks can be planned by the solver.
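
As an illustration (an assumption-laden sketch, not the method of the cited papers), two STRIPS operators in the quadruple encoding used in the earlier sketches can be composed into one macro operator:

    def compose(op1, op2):
        """Return a macro operator equivalent to executing op1 and then op2.
        Each operator is a quadruple (alpha, beta, gamma, delta) of condition sets;
        the sketch assumes op2 never requires a condition that op1 just falsified."""
        a1t, a1f, g1, d1 = op1
        a2t, a2f, g2, d2 = op2
        pre_true  = a1t | (a2t - g1)   # what op2 needs and op1 does not provide
        pre_false = a1f | (a2f - d1)
        add       = (g1 - d2) | g2     # later effects override earlier ones
        delete    = (d1 - g2) | d2
        return (pre_true, pre_false, add, delete)

    # e.g. "push the box from C to B and climb on it" as one macro:
    move_box_c_b = ({"At(C)", "BoxAt(C)", "Level(low)"}, set(),
                    {"At(B)", "BoxAt(B)"}, {"At(C)", "BoxAt(C)"})
    climb_up_b   = ({"At(B)", "BoxAt(B)", "Level(low)"}, set(),
                    {"Level(high)"}, {"Level(low)"})
    print(compose(move_box_c_b, climb_up_b))
    # preconditions At(C), BoxAt(C), Level(low); adds At(B), BoxAt(B), Level(high);
    # deletes At(C), BoxAt(C), Level(low)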

Identifying new macro operators for a domain can be automated, for example with genetic programming.[4] The idea is not to plan in the domain itself, but to create, in a preprocessing step, a heuristic that allows the domain to be solved much faster. In the context of reinforcement learning, a macro operator is called an option. As in AI planning, the idea is to provide a temporal abstraction (an action that spans a longer period) and to modify the state directly at a higher level.[5]


References

  1. ^ Richard E. Fikes, Nils J. Nilsson (Winter 1971). "STRIPS: A New Approach to the Application of Theorem Proving to Problem Solving" (PDF). Artificial Intelligence. 2 (3–4): 189–208. CiteSeerX 10.1.1.78.8292. doi:10.1016/0004-3702(71)90010-5. S2CID 8623866.
  2. ^ Tom Bylander (September 1994). "The Computational Complexity of Propositional STRIPS Planning". Artificial Intelligence. 69 (1–2): 165–204. CiteSeerX 10.1.1.23.199. doi:10.1016/0004-3702(94)90081-7.
  3. ^ Haslum, Patrik (2007). Reducing Accidental Complexity in Planning Problems. Proceedings of the 20th International Joint Conference on Artificial Intelligence. pp. 1898–1903.
  4. ^ Schmid, Ute (1999). Iterative macro-operators revisited: Applying program synthesis to learning in planning (Technical report). School of Computer Science Carnegie Mellon University. doi:10.21236/ada363524.
  5. ^ Sutton, Richard S.; Precup, Doina; Singh, Satinder (1999). "Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning". Artificial Intelligence. 112 (1–2): 181–211. doi:10.1016/s0004-3702(99)00052-1.
