Planning
I think planning is a super interesting field to get into, and one that generalizes really well.
See Path Planning for more specifics.
Introduction
The planning book is basically everything I want to know.
Outline: p. 5
Planning encompasses a lot of things. In robotics, we talk about motion planning and trajectory planning. Control theory could include mechanical systems such as cars or aircraft, electrical systems such as noise filters, or even systems arising in areas as diverse as chemistry, economics, and sociology. In artificial intelligence, there's planning to solve a Rubik's Cube or build a stack of blocks.
Basic Ingredients of Planning#card
 State
 Time
 Actions
 Initial and Goal states
 A criterion
 Feasibility
 Optimality
 A plan
Discrete Planning
Discrete Feasible Planning
Note: It is often convenient to express Formulation 2.1 as a directed state transition graph:
 The set of vertices is the state space $X$.
 A directed edge from $x \in X$ to $x' \in X$ exists in the graph if and only if there exists an action $u \in U(x)$ such that $x' = f(x,u)$.
 The initial state and goal set are designated as special vertices in the graph.
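The correspondence above can be sketched directly in code: given a transition function, the edge set falls out mechanically. The three-state graph and its actions here are a made-up toy example, not from the book.

```python
# A minimal sketch of Formulation 2.1 viewed as a directed state transition
# graph. The state space X, action sets U(x), and transition function f are
# all hypothetical toy choices for illustration.

X = {"a", "b", "c"}                                   # state space (vertices)
U = {"a": ["go_b", "go_c"], "b": ["go_c"], "c": []}   # actions available per state

def f(x, u):
    """State transition function: the state reached by applying u in x."""
    return {"go_b": "b", "go_c": "c"}[u]

# An edge (x, x') exists iff some u in U(x) gives x' = f(x, u).
edges = {(x, f(x, u)) for x in X for u in U[x]}
print(sorted(edges))  # [('a', 'b'), ('a', 'c'), ('b', 'c')]
```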
Examples
Moving on a 2D Grid
 Suppose that a robot moves on a grid in which each grid point has integer coordinates of the form $(i,j)$. The robot takes discrete steps in one of four directions (up, down, left, right), each of which increments or decrements one coordinate.
 Using Formulation 2.1:

Let $X$ be the set of all integer pairs of the form $(i,j)$, in which $i,j \in \mathbb{Z}$.

Let $U=\{(0,1),(0,-1),(1,0),(-1,0)\}$. Let $U(x)=U$ for all $x \in X$.

The state transition equation is $f(x,u)=x+u$, in which $x \in X$ and $u \in U$ are treated as two-dimensional vectors for the purpose of addition. For example, if $x=(3,4)$ and $u=(0,1)$, then $f(x,u)=(3,5)$.

Suppose for convenience that the initial state is $x_{I}=(0,0)$.

Many interesting goal sets are possible. Suppose, for example, that $X_{G}=\{(100,100)\}$.
 It is easy to find a sequence of actions that transforms the state from $(0,0)$ to $(100,100)$.
 We can make the problem more interesting by adding obstacles and fences so that $X$ becomes finite. Very complicated labyrinths can be constructed.
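The grid formulation above is small enough to run as-is. Here is a sketch, with a smaller goal state standing in for the book's $(100,100)$ so the search stays tiny; the formulation itself is unchanged, and breadth-first search finds a shortest plan.

```python
from collections import deque

# Formulation 2.1 on the 2D integer grid: f(x, u) = x + u with four unit
# moves. The goal (5, 3) is a hypothetical stand-in for (100, 100).

U = [(0, 1), (0, -1), (1, 0), (-1, 0)]   # same action set at every state

def f(x, u):
    """State transition equation: vector addition of state and action."""
    return (x[0] + u[0], x[1] + u[1])

def bfs(x_init, x_goal):
    """Breadth-first search; returns a shortest sequence of actions."""
    frontier = deque([(x_init, [])])
    visited = {x_init}
    while frontier:
        x, plan = frontier.popleft()
        if x == x_goal:
            return plan
        for u in U:
            x_new = f(x, u)
            if x_new not in visited:
                visited.add(x_new)
                frontier.append((x_new, plan + [u]))

plan = bfs((0, 0), (5, 3))
print(len(plan))  # 8: the Manhattan distance from (0, 0) to (5, 3)
```

Obstacles would just be states removed from consideration before the `x_new not in visited` check.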

Oftentimes, we use a State Transition Graph to implement Formulation 2.1 directly, composed of vertices and edges.
Related:#idontunderstand
General Forward Search
These are basic graph search algorithms, but with the understanding that the state transition graph is revealed incrementally.
There are 3 Kinds of States
 Unvisited: States that have not been visited yet. Initially, this is every state except $x_{I}$.
 Dead: States that have been visited, and for which every possible next state has also been visited.
 Alive: States that have been encountered but possibly have unvisited next states. Initially, the only alive state is $x_{I}$.
 Breadth-First Search
 Depth-First Search
 Dijkstra's Algorithm
 A* Algorithm
 Best-First Search
 Iterative Deepening
 Backward Search
 Bidirectional Search
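Among the algorithms listed above, Dijkstra's is a useful one to sketch, since it adds action costs to the forward-search idea: states are settled ("dead") in order of cost-to-come. The small weighted graph here is made up for illustration.

```python
import heapq

# A sketch of Dijkstra's algorithm over a state transition graph given as an
# explicit successor function with action costs; the graph is hypothetical.

def successors(x):
    """Yields (action_cost, next_state) pairs for state x."""
    graph = {
        "s": [(1, "a"), (4, "b")],
        "a": [(2, "b"), (5, "g")],
        "b": [(1, "g")],
        "g": [],
    }
    return graph[x]

def dijkstra(x_init, x_goal):
    """Returns the minimum cost-to-come of x_goal from x_init."""
    queue = [(0, x_init)]        # priority queue ordered by cost-to-come
    dead = set()                 # states whose optimal cost is settled
    while queue:
        cost, x = heapq.heappop(queue)
        if x == x_goal:
            return cost
        if x in dead:
            continue             # stale queue entry; already settled
        dead.add(x)
        for step_cost, x_new in successors(x):
            if x_new not in dead:
                heapq.heappush(queue, (cost + step_cost, x_new))
    return None                  # goal unreachable

print(dijkstra("s", "g"))  # 4: s -> a (1) -> b (2) -> g (1)
```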
2.2.4 A Unified View of the Search Methods
All of the planning methods from this section followed the same basic template:
 Initialization: Let the search graph, $G(V,E)$, be initialized with $E$ empty and $V$ containing some starting states. For forward search, $V=\{x_{I}\}$; for backward search, $V=\{x_{G}\}$. If bidirectional search is used, then $V=\{x_{I},x_{G}\}$. It is possible to grow more than two trees and merge them during the search process. In this case, more states can be initialized in $V$. The search graph will incrementally grow to reveal more and more of the state transition graph.
 Select Vertex: Choose a vertex $n_{cur} \in V$ for expansion; this is usually accomplished by maintaining a priority queue. Let $x_{cur}$ denote the state associated with $n_{cur}$.
 Apply an Action: In either a forward or backward direction, a new state, $x_{new}$, is obtained. This may arise from $x_{new}=f(x,u)$ for some $u \in U(x)$ (forward) or $x=f(x_{new},u)$ for some $u \in U(x_{new})$ (backward).
 Insert a Directed Edge into the Graph: If certain algorithm-specific tests are passed, then generate an edge from $x$ to $x_{new}$ for the forward case, or an edge from $x_{new}$ to $x$ for the backward case. If $x_{new}$ is not yet in $V$, it will be inserted into $V$.
 Check for Solution: Determine whether $G$ encodes a path from $x_{I}$ to $x_{G}$. If there is a single search tree, then this is trivial. If there are two or more search trees, then this step could be expensive.
 Return to Step 2: Iterate unless a solution has been found or an early termination condition is satisfied, in which case the algorithm reports failure.
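The template above can be sketched as a single forward-search routine in which only the queue discipline changes: a FIFO queue gives breadth-first search, a LIFO queue gives depth-first search. The four-state graph is a made-up example.

```python
from collections import deque

# The unified search template as one routine, parameterized by queue
# discipline. GRAPH is a hypothetical state transition graph.

GRAPH = {"i": ["a", "b"], "a": ["g"], "b": ["a"], "g": []}

def forward_search(x_init, x_goal, fifo=True):
    V = {x_init}                     # Step 1: initialize the search graph
    E = []
    frontier = deque([x_init])
    while frontier:                  # Step 6: iterate until done
        x = frontier.popleft() if fifo else frontier.pop()  # Step 2: select
        if x == x_goal:              # Step 5: check for a solution
            return E
        for x_new in GRAPH[x]:       # Step 3: apply an action
            if x_new not in V:       # Step 4: insert a directed edge
                V.add(x_new)
                E.append((x, x_new))
                frontier.append(x_new)
    return None                      # report failure

print(forward_search("i", "g"))  # [('i', 'a'), ('i', 'b'), ('a', 'g')]
```

Replacing the deque with a priority queue keyed on cost-to-come (or cost plus a heuristic) turns the same skeleton into Dijkstra's algorithm or A*.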
Discrete Optimal Planning
Rather than being satisfied with any sequence of actions that leads to the goal set, suppose we would like a solution that optimizes some criterion, such as time, distance, or energy consumed.