This post gathers notes on the concepts presented in the Artificial Intelligence course of Prof. Patrick Winston, along with the tutorials and exercises, called “mega-recitations”, that apply them.

## 8. Constraints: Search, Domain Reduction

https://www.youtube.com/watch?v=d1KyYyLmGpA

## Domain Reduction Algorithm

### Vocabulary

- **Variable** *v*: something that can have an assignment
- **Value** *x*: something that can be assigned
- **Domain** *d*: the set of all different values
- **Constraint** *c*: a limit on variable values

With a depth-first search, the **domain reduction algorithm** backs up one node whenever it cannot satisfy a constraint. It reduces the domains of the variables at node *n−1* to make sure an unused value will still be available at the next node *n*.

By considering more variables (propagating to neighboring states in the US map coloring example), the **domain can be reduced to a single value** at a certain node.
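The propagation step above can be sketched in a few lines. This is a minimal illustration of domain reduction (forward checking) on a made-up three-region map; the region names, adjacency, and colors are invented for the example:

```python
# Minimal sketch of domain reduction for map coloring.
# Assigning a color to one region removes that color from each
# neighbor's domain; an emptied domain signals a dead end.

def reduce_domains(domains, neighbors, var, value):
    """Assign `value` to `var`, then propagate to its neighbors.
    Returns the reduced domains, or None if a domain becomes empty."""
    new = {v: set(d) for v, d in domains.items()}
    new[var] = {value}
    for n in neighbors[var]:
        new[n].discard(value)
        if not new[n]:
            return None  # dead end: back up and try another value
    return new

# Three mutually adjacent regions, three colors (made-up data).
neighbors = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B"}}
domains = {v: {"red", "green", "blue"} for v in neighbors}

reduced = reduce_domains(domains, neighbors, "A", "red")
print(reduced["B"])  # "red" has been removed from B's domain
```

Repeating the call for each assignment propagates the reduction; when a domain shrinks to a single value, that neighbor's color is forced.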

### Solving the problem faster

By working on the **most constrained nodes first**, the algorithm does not have to back up as much as when it starts with the least constrained ones.

Obviously, the main constraint of using only a small number of values (colors in the example) is the most limiting one. Being able to **increase the number of values** can drastically simplify the problem.

In case of **uncertainty** on the exact number of values, find a **range of values** that solve the problem (such as in the airplane assignment example).

## Resource allocation

- Work on the most constrained variables first
- Propagate until domains are reduced to a single value
- In the absence of a definite result, find a narrow range: the largest number of values for which it doesn’t work and the smallest for which it does

## 6. Search: Games, Minimax, and Alpha-Beta

## How a computer can play games (like Chess)

- How humans play chess: mixing analysis, strategy, and tactics to choose a move
- If-Then rules
- Look ahead and evaluate
- British museum algorithm

To evaluate the potential of the next board situations (methods 2 and 3), we can use a **linear scoring polynomial** that assigns a number to each situation so the best one can be chosen.

In the resulting tree of possibilities:

- the **branching factor** *b* is the number of branches at each level
- the **depth** *d* is the number of levels
- the **leaves or terminal nodes** are the resulting *b^d* possibilities

In Chess, there are about *10^120* leaf nodes, making it impossible to compute.

## Minimax

Method: using a scoring system for the final leaves of the tree, the algorithm simulates a **maximizing player**, who chooses higher values, playing against a **minimizing player**, who chooses lower values.

### Alpha-Beta

Alpha-Beta is a method to **discard useless branches** to reduce computations for Minimax.

From the terminal nodes, the algorithm evaluates the **best choice for the maximizing and minimizing players, going up one node at a time**. **It then cuts off branches that would not be chosen** by either player at the nodes where they play.
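The two players and the cutoff rule can be sketched as a short recursive function. This is a minimal illustration over a hand-made game tree (nested lists whose leaves are static scores), not a full game engine:

```python
# Minimax with alpha-beta cutoffs over a toy tree.
# alpha = best value the maximizer can already guarantee,
# beta  = best value the minimizer can already guarantee.

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if not isinstance(node, list):              # terminal node: its static score
        return node
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, best)
            if alpha >= beta:                   # minimizer would never allow this branch
                break
        return best
    best = float("inf")
    for child in node:
        best = min(best, alphabeta(child, True, alpha, beta))
        beta = min(beta, best)
        if alpha >= beta:                       # maximizer would never allow this branch
            break
    return best

tree = [[3, 5], [2, 9]]        # maximizer chooses between two minimizer nodes
print(alphabeta(tree, True))   # min(3, 5) = 3 beats the second branch, cut after seeing 2
```

In the second branch, seeing the leaf 2 is enough: the minimizer can force at most 2 there, which is worse for the maximizer than the 3 already guaranteed, so the leaf 9 is never evaluated.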

### Progressive Deepening

Depending on the game, the time available, and the branching factor of the tree, the tree can be **expanded not completely but only to a certain number of levels**, ensuring that a good answer can always be reached in time, even though it may not be the single best answer.
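The idea can be sketched as a loop that re-searches the tree with an increasing depth limit, keeping the answer from the last completed level. The `children` and `evaluate` callbacks here are hypothetical stand-ins for a real game's move generator and scoring polynomial:

```python
import time

# Sketch of progressive deepening: search to depth 1, then 2, then 3, ...
# until the time budget (or a depth cap) runs out. An answer from the
# last finished depth is always in hand.

def depth_limited_value(state, depth, maximizing, evaluate, children):
    kids = children(state)
    if depth == 0 or not kids:
        return evaluate(state)          # static score at the frontier
    values = [depth_limited_value(k, depth - 1, not maximizing, evaluate, children)
              for k in kids]
    return max(values) if maximizing else min(values)

def progressive_deepening(state, evaluate, children, budget_s=0.05, max_depth=10):
    best = evaluate(state)              # depth-0 answer: always available
    deadline = time.monotonic() + budget_s
    for depth in range(1, max_depth + 1):
        if time.monotonic() >= deadline:
            break
        best = depth_limited_value(state, depth, True, evaluate, children)
    return best
```

Because each level costs roughly *b* times the previous one, the wasted re-search of shallow levels is small compared to the deepest pass.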

## Deep Blue

Deep Blue essentially used all these methods, plus some extra improvements.

Deep Blue = Minimax + Alpha-Beta + Progressive Deepening + Parallel Computing + Opening Book + End Game Tricks + Uneven Tree Development

**Uneven Tree Development**: some situations may require extending the search further to ensure better choices (e.g. taking the opponent’s Queen)

## 5. Search: Optimal, Branch and Bound, A*

## Optimal search trees

**Finding the best possible sequence of choices.**

Getting closer to the goal is generally considered good, but it may lead to dead ends or non-optimal choices.

### Oracle

Knowing the minimum path length to the goal, the search algorithm records the length of each path already extended and always **extends the shortest path first**, until all remaining paths are longer than the known shortest length.

## Branch & Bound

With the same principle, if the shortest path is not yet known (no “Oracle”), paths are extended to the goal and their lengths recorded. **Other paths keep being extended until they are as long as or longer than the current shortest path to the goal.**

The algorithm extends the first (shortest) path, then re-sorts the paths by length.

### Branch & Bound + Extended List

In addition to the branch and bound algorithm, **new branches that lead back to a node previously reached by a shorter path are discarded**.

### Branch & Bound + Admissible Heuristic

The **estimated remaining distance** to the goal is added to the length of each extended path. Only the path with the shortest length + estimated remaining distance is extended, until the goal is reached. All longer paths are set aside.

### A* = Branch & Bound + Extended List + Admissible Heuristic

In the algorithm, **instead of fully sorting the extensions** to keep the shortest paths first (which requires many calculations), it is enough to **test whether the shortest path so far leads to the goal**.
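The three ingredients can be combined in a short sketch. The toy graph, edge costs, and heuristic values below are made up; the heuristic `h` is assumed admissible (it never overestimates the remaining distance):

```python
import heapq

# Minimal A* sketch: a priority queue ordered by f = g + h,
# plus an extended list so no node is expanded twice.

def a_star(graph, h, start, goal):
    frontier = [(h(start), 0, start, [start])]  # (f, path length g, node, path)
    extended = set()                            # the "extended list"
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if node in extended:                    # already reached via a shorter path
            continue
        extended.add(node)
        for nbr, cost in graph[node]:
            if nbr not in extended:
                heapq.heappush(frontier,
                               (g + cost + h(nbr), g + cost, nbr, path + [nbr]))
    return None, float("inf")

# Made-up graph: S -> A -> G and S -> B -> G, both of total length 6.
graph = {"S": [("A", 2), ("B", 5)], "A": [("G", 4)], "B": [("G", 1)], "G": []}
h = {"S": 4, "A": 3, "B": 1, "G": 0}.get
path, cost = a_star(graph, h, "S", "G")
print(path, cost)
```

The heap pops the path with the smallest length-plus-estimate first, so the first time the goal is popped, its path is optimal.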

### Consistency Heuristic

In certain cases (especially ones that are not maps), **the Admissible Heuristic may lead to problems.** The consistency heuristic uses a stronger condition: the estimated distance to the goal from a node *x*, minus the estimated distance to the goal from another node *y*, must in absolute value be less than or equal to the distance between *x* and *y*: |H(x) − H(y)| ≤ D(x, y).

## 4. Search: Depth-First, Hill Climbing, Beam

## Search trees

Search trees represent all the possibilities to search for the quickest path without coming back to previous paths. They are particularly used for quickest paths on maps with nodes (intersections), but not exclusively. They are primarily about choices, and **finding the best sequence of choices**.

**British Museum Algorithm** = complete expansion of all paths

## Depth First vs Breadth First Search Algorithms

**Depth First Search Algorithm** starts by going down one level at a time, by convention from the left, until the goal is reached. If the goal is not reached, the program goes back up to the previous node, a process called “**backtracking**”.

**Breadth First Search Algorithm** expands the search level by level, exploring all possibilities of one level before going down.

Both Depth First and Breadth First algorithms use an **enqueued list** to extend paths toward the goal and **discard paths that lead to nodes that have already been extended**.

## Hill Climbing Search Algorithm

The search tree is always **extended towards the node that is closer to the goal**.

Hill Climbing is an **informed search** making use of certain heuristic information.

## Beam Search Algorithm

A beam search algorithm **limits the number of paths kept** at each level of the tree (the beam width). It only keeps the paths that head toward the goal, according to a heuristic.

## Commands summary

Enqueuing the next nodes whenever the goal is not reached, the algorithms proceed as follows:

- **Depth First**: new nodes go to the front of the queue
- **Breadth First**: new nodes go to the back of the queue
- **Hill Climbing**: sort the new nodes, closest to the goal first
- **Beam**: keep only a small number of nodes in the queue
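The four disciplines above differ only in how the queue is updated, which one search loop can show. The toy graph and straight-line distances below are made up, and the beam variant is simplified (it trims the whole queue rather than working strictly level by level):

```python
# One search loop, four behaviors: only the queue discipline changes.

def search(graph, start, goal, discipline, dist=None, beam_width=2):
    queue = [[start]]                    # queue of paths; always pop the front
    while queue:
        path = queue.pop(0)
        node = path[-1]
        if node == goal:
            return path
        new = [path + [n] for n in graph[node] if n not in path]
        if discipline == "depth_first":        # new paths to the front
            queue = new + queue
        elif discipline == "breadth_first":    # new paths to the back
            queue = queue + new
        elif discipline == "hill_climbing":    # sort new paths, closest first
            queue = sorted(new, key=lambda p: dist(p[-1])) + queue
        elif discipline == "beam":             # keep only the best few paths
            queue = sorted(queue + new, key=lambda p: dist(p[-1]))[:beam_width]
    return None

# Made-up graph and straight-line distances to the goal G.
graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"], "G": []}
dist = {"S": 3, "A": 2, "B": 1, "G": 0}.get
print(search(graph, "S", "G", "depth_first"))
```

Swapping the `discipline` argument is enough to turn the same loop into any of the four searches.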

## Best First Search Algorithm

Of all enqueued nodes, it **extends the one that is closest to the goal**.

## Continuous space

Search algorithms can be used in continuous spaces. However, Hill Climbing can encounter certain problems, such as:

- getting **blocked at a local maximum**
- the **telephone pole problem**: getting confused on a flat area between high-rising parts of the space
- especially in **high-dimensional spaces**, getting fooled by a particular configuration of ridges where all paths seem to lead away from the goal, preventing extensions