With a depth-first search, the domain reduction algorithm backs up one node when it cannot satisfy a constraint. It reduces the domains of the variables at node n-1 to make sure an unused value will still be available at the next node n.
By considering more variables (propagating the reduction to neighboring states, in the US map coloring example), the domain at a given node can sometimes be reduced to a single value.
Solving the problem faster
By working on the most constrained nodes first, the algorithm does not have to back up as much as it would by starting with the least constrained ones.
Obviously, the constraint of using only a small number of values (colors in the example) is the most limiting one; being able to increase the number of values can drastically simplify the problem.
When the exact number of values needed is uncertain, find a range of values that solves the problem (as in the airplane assignment example).
Work on the most constrained nodes first
Propagate through domains reduced to a single value
In the absence of a definite result, find the narrow range between the largest number of values for which the problem fails and the smallest for which it succeeds
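The points above can be sketched in code: depth-first assignment, a most-constrained-first variable choice, and propagation by domain reduction. The four-region adjacency map and color names below are illustrative, not from the lecture.

```python
# Hypothetical four-region map: each region lists its neighbors.
NEIGHBORS = {
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "D"],
    "D": ["B", "C"],
}
COLORS = ["red", "green", "blue"]

def solve(assignment, domains):
    if len(assignment) == len(NEIGHBORS):
        return assignment
    # Most constrained first: pick the unassigned variable with the smallest domain.
    var = min((v for v in NEIGHBORS if v not in assignment),
              key=lambda v: len(domains[v]))
    for color in domains[var]:
        # Propagate: remove this color from every neighbor's domain.
        new_domains = {v: [c for c in d if not (v in NEIGHBORS[var] and c == color)]
                       for v, d in domains.items()}
        # If some unassigned variable's domain became empty, this color fails early.
        if all(new_domains[v] for v in NEIGHBORS if v not in assignment and v != var):
            result = solve({**assignment, var: color}, new_domains)
            if result:
                return result
    return None  # back up one node: no color works here

solution = solve({}, {v: list(COLORS) for v in NEIGHBORS})
```

Checking domains after every tentative assignment is what lets the search fail early instead of backing up many levels later.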
Using the lines in pictures of real-world objects, the edges between shapes can serve to identify the number of objects in the scene.
The possible intersections generally form two types of trihedral vertices used to identify shapes.
A second, more theoretical approach uses convex and concave lines and the boundaries between objects to identify trihedral vertices where three faces meet. Objects are always considered in general position, so unusual, coincidental viewpoints are excluded.
These constraints allow 18 different junction possibilities. With this catalog of junctions, shapes can be labeled to determine whether an object can exist in the domain defined above… but not necessarily in the real world.
Towards robot vision
Adding cracks, shadows, non-trihedral vertices, and light to the theoretical approach increases the domain's complexity to more than 1,000 junction possibilities.
However, in the same manner as the theoretical approach, an algorithm that identifies junctions one by one against this catalog can recognize a much wider array of objects, determining their actual shape or whether some ambiguity remains.
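The catalog-driven mechanism can be sketched as constraint propagation over junction interpretations. The junction types, the tiny two-junction "drawing", and the catalog fragment below are illustrative stand-ins for the real 18-entry trihedral catalog; the point is the loop: repeatedly discard interpretations whose label on a shared edge cannot agree with any surviving interpretation at the neighboring junction.

```python
# Edge labels: '+' convex, '-' concave, '>' boundary (illustrative).
CATALOG = {  # allowed (one label per incident edge) tuples, per junction type
    "L": [(">", ">"), ("+", ">"), (">", "-")],
    "T": [("+", ">"), ("+", "-")],
}

# A made-up drawing: two junctions sharing edge "e1".
JUNCTIONS = {
    "J1": {"type": "L", "edges": ["e1", "e2"]},
    "J2": {"type": "T", "edges": ["e1", "e3"]},
}

def waltz_filter(junctions):
    # Start with every catalog interpretation available at every junction.
    interps = {j: list(CATALOG[d["type"]]) for j, d in junctions.items()}
    changed = True
    while changed:
        changed = False
        for j, d in junctions.items():
            for other, od in junctions.items():
                if other == j:
                    continue
                for e in set(d["edges"]) & set(od["edges"]):
                    i, oi = d["edges"].index(e), od["edges"].index(e)
                    # Keep only interpretations whose label on the shared edge
                    # agrees with at least one interpretation of the neighbor.
                    keep = [t for t in interps[j]
                            if any(t[i] == u[oi] for u in interps[other])]
                    if len(keep) < len(interps[j]):
                        interps[j] = keep
                        changed = True
    return interps

interps = waltz_filter(JUNCTIONS)
```

Here J1 is forced to a single interpretation while J2 keeps two, illustrating both outcomes the notes mention: a determined shape, or residual ambiguity.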
How humans play chess: mixing analysis, strategy, and tactics to choose a move
Look ahead and evaluate
British museum algorithm
To evaluate the potential of the next board situation (methods 2 and 3), we can use a linear scoring polynomial to assign a number to each situation and choose the best.
In the resulting tree of possibilities:
the branching factor is the number b of branches at each level
the depth is the number d of levels
the leaves or terminal nodes are the resulting b^d possibilities
In Chess, there are about 10^120 leaf nodes, making a complete computation impossible.
Method: using a scoring system for the final leaves of the tree, the algorithm simulates a maximizing player who chooses higher values and a minimizing player who chooses lower values.
Alpha-Beta is a method to discard useless branches to reduce computations for Minimax.
From the terminal nodes, the algorithm evaluates the best choice for the maximizing and minimizing players, going up one node at a time. It then cuts off branches that would never be chosen by the player whose turn it is at that node.
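Minimax with alpha-beta cutoffs can be sketched on a hand-built tree; the structure and leaf scores below are illustrative. Terminal nodes are plain numbers, internal nodes are lists of children.

```python
def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node, (int, float)):      # terminal node: its static score
        return node
    best = float("-inf") if maximizing else float("inf")
    for child in node:
        value = alphabeta(child, not maximizing, alpha, beta)
        if maximizing:
            best = max(best, value)
            alpha = max(alpha, best)
        else:
            best = min(best, value)
            beta = min(beta, best)
        if alpha >= beta:                   # cut off: the other player would
            break                           # never let the game reach here
    return best

# Depth-2 tree: the maximizer moves, then the minimizer replies.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, maximizing=True))     # → 3
```

On this tree the second and third branches are cut off early: as soon as the minimizer can force a value at or below the 3 already guaranteed in the first branch, the rest of that branch is skipped.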
Depending on the game, the time available, and the branching factor of the tree, the tree can be developed not completely but only down to a certain number of levels, ensuring that a good answer can always be reached in time, even though it may not be the single best answer.
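One common way to guarantee an answer in time is progressive deepening: re-run a depth-limited minimax at depths 1, 2, 3, …, keeping the last completed answer so a reasonable move is always on hand, and relying on the fact that each pass is roughly b times cheaper than the next, so the shallow repeats add little overhead. The toy tree and static evaluator below are illustrative.

```python
def limited_minimax(node, depth, maximizing, evaluate):
    if isinstance(node, (int, float)):
        return node                        # true terminal score
    if depth == 0:
        return evaluate(node)              # static estimate at the cutoff
    values = [limited_minimax(c, depth - 1, not maximizing, evaluate)
              for c in node]
    return max(values) if maximizing else min(values)

def progressive_deepening(tree, max_depth, evaluate):
    answer = None
    for depth in range(1, max_depth + 1):  # each completed pass replaces the last
        answer = limited_minimax(tree, depth, True, evaluate)
    return answer

def evaluate(node):
    # Illustrative stand-in for a scoring polynomial: average of the leaves below.
    leaves, stack = [], [node]
    while stack:
        n = stack.pop()
        if isinstance(n, (int, float)):
            leaves.append(n)
        else:
            stack.extend(n)
    return sum(leaves) / len(leaves)

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
```

A real engine would also check the clock between depths and return the stored answer as soon as time runs out.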
Deep Blue essentially used all these methods, plus some extra improvements.
Deep Blue = Minimax + Alpha-Beta + Progressive Deepening
+ Parallel Computing + Opening Book + End Game Tricks
+ Uneven Tree Development
Uneven Tree Development: some situations may require extending the search further to ensure better choices (e.g. taking the opponent’s Queen)
Getting closer to the goal is generally considered good, but it may lead to dead ends or non-optimal choices.
Knowing the minimum path length to the goal (an “Oracle”), the search algorithm records the length of each path already extended and always extends the shortest path first, until all remaining paths are longer than the known shortest length.
Branch & Bound
Following the same principle, if the shortest path is not yet known (no “Oracle”), paths are extended to the goal and their lengths recorded. Other paths are extended until they are as long as or longer than the current shortest path to the goal.
The algorithm extends the first path in the queue and then sorts the paths by length.
Branch & Bound + Extended List
In addition to the branch and bound algorithm, new branches that lead back to a node previously extended by a shorter path are discarded.
Branch & Bound + Admissible Heuristic
The estimated remaining distance to the goal is added to the length of each extended path. Only the path with the shortest total (length so far + estimated remaining distance) is extended until the goal is reached; all longer paths are set aside.
In the algorithm, instead of sorting the extensions to put the shortest paths first (which leads to many calculations), only test whether the shortest path reaches the goal.
In certain cases (especially ones not about maps), the admissible heuristic may lead to problems. The consistency heuristic uses a stronger condition: for any two nodes x and y, the estimated distances to the goal from x and from y may not differ, in absolute value, by more than the distance between x and y, i.e. |H(x) − H(y)| ≤ d(x, y).
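Branch & bound with an extended list and an admissible heuristic is essentially the A* algorithm, and can be sketched as follows. The graph and the underestimates in H are made up for illustration; H underestimates the true remaining distance everywhere, which is what makes it admissible.

```python
import heapq

GRAPH = {  # node -> [(neighbor, edge_length), ...]  (an illustrative "map")
    "S": [("A", 3), ("B", 7)],
    "A": [("C", 4)],
    "B": [("C", 2), ("G", 9)],
    "C": [("G", 4)],
    "G": [],
}
H = {"S": 8, "A": 6, "B": 6, "C": 3, "G": 0}  # admissible underestimates

def a_star(start, goal):
    # Queue entries: (length so far + estimated remaining, length so far, path).
    queue = [(H[start], 0, [start])]
    extended = set()                     # extended list: nodes already expanded
    while queue:
        _, length, path = heapq.heappop(queue)
        node = path[-1]
        if node == goal:                 # only test the shortest candidate
            return path, length
        if node in extended:
            continue                     # discard paths back to extended nodes
        extended.add(node)
        for neighbor, edge in GRAPH[node]:
            heapq.heappush(queue, (length + edge + H[neighbor],
                                   length + edge, path + [neighbor]))
    return None, float("inf")

path, length = a_star("S", "G")
```

The heap keeps the paths sorted by length-plus-estimate, so the pop at the top of the loop is exactly the "only test the shortest path" step from the notes.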
Search trees represent all the possibilities when searching for the quickest path, without going back over paths already explored. They are particularly used for finding the quickest path on maps with nodes (intersections), but not exclusively: they are primarily about choices, and about finding the best sequence of choices.
British Museum Algorithm = complete expansion of all paths
Depth First vs Breadth First Search Algorithms
The Depth First Search Algorithm goes down one level at a time, by convention starting from the left, until the goal is reached. If the goal is not reached, the program goes back up to the previous node, a process called “backing up” or “backtracking”.
Breadth First Search Algorithm expands the search level by level, exploring all possibilities of one level before going down.
Both the Depth First and Breadth First Algorithms use a queue of paths to extend toward the goal, and discard paths that lead to nodes that have already been extended.
Hill Climbing Search Algorithm
The search tree is always extended towards the node that is closer to the goal.
Hill Climbing is an informed search making use of certain heuristic information.
Beam Search Algorithm
A beam search algorithm limits the number of nodes extended at each level. It only extends the nodes that best move toward the goal, according to certain heuristics.
If the goal is not reached, the next nodes are enqueued; the algorithms proceed as follows:
Depth First: new nodes go to the front of the queue
Breadth First: new nodes go to the back of the queue
Hill Climbing: sort the new nodes so the closest to the goal comes first
Beam: keep only a small number of nodes in the queue
Best First Search Algorithm
Of all the enqueued nodes, it extends the one that is closest to the goal.
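These disciplines can all be sketched as a single en-queuing loop where only the queue policy changes. The graph and the distance-to-goal estimates below are illustrative, and the beam variant is simplified to keeping the w closest paths in the whole queue rather than per level.

```python
GRAPH = {  # node -> neighbors (illustrative)
    "S": ["A", "B"],
    "A": ["C"],
    "B": ["C", "G"],
    "C": ["G"],
    "G": [],
}
DIST_TO_GOAL = {"S": 8, "A": 6, "B": 4, "C": 3, "G": 0}  # heuristic estimates

def search(start, goal, discipline, beam_width=2):
    queue = [[start]]
    extended = set()
    while queue:
        path = queue.pop(0)
        node = path[-1]
        if node == goal:
            return path
        if node in extended:
            continue                     # already extended via another path
        extended.add(node)
        new_paths = [path + [n] for n in GRAPH[node] if n not in extended]
        if discipline == "depth":        # new paths to the front
            queue = new_paths + queue
        elif discipline == "breadth":    # new paths to the back
            queue = queue + new_paths
        elif discipline == "hill":       # sort only the new paths, put in front
            queue = sorted(new_paths, key=lambda p: DIST_TO_GOAL[p[-1]]) + queue
        elif discipline == "best":       # sort the whole queue by closeness
            queue = sorted(queue + new_paths, key=lambda p: DIST_TO_GOAL[p[-1]])
        elif discipline == "beam":       # simplified: keep the w closest paths
            queue = sorted(queue + new_paths,
                           key=lambda p: DIST_TO_GOAL[p[-1]])[:beam_width]
    return None
```

On this graph, depth-first happens to find S→A→C→G while the heuristic-guided disciplines head straight for S→B→G, which illustrates how the queue policy alone changes the answer.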
Search algorithms can be used in continuous spaces. However, Hill Climbing can encounter certain problems, such as:
getting blocked at a local maximum
the telephone pole problem: getting stuck on a flat area between high-rising parts of the space
especially in high-dimensional spaces, getting fooled by a particular configuration (a ridge) where every available step seems to lead away from the goal, preventing further extensions