15. Learning: Near Misses, Felicity Conditions

One-shot learning

One-shot learning is learning in a human-like way: learning something definite from each single example.

The evolving model

The evolving model starts from an initial example, a seed. By comparing the seed with a near miss or with another positive example, the model learns one important characteristic from each comparison.

The evolving model builds up a set of heuristics describing the seed: near misses specialize the model (reducing the potential matches), while positive examples generalize it (broadening the potential matches).

  • Require link heuristic: specialization
  • Forbid link heuristic: specialization
  • Extend set heuristic: generalization
  • Drop link heuristic: generalization
  • Climb tree heuristic: generalization
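
As a sketch of how these heuristics might operate, here is a toy feature-vector version of the classic arch example. The feature names and the constraint encoding are illustrative assumptions, not the original implementation.

```python
# A minimal sketch of one-shot learning from near misses.
# Features and the constraint encoding are illustrative assumptions.

def init_model(seed):
    """Every seed feature starts as a tentative 'must-be' constraint."""
    return {f: ("must-be", v) for f, v in seed.items()}

def specialize(model, near_miss):
    """Near miss: tighten the model. Each tentative feature where the near
    miss differs becomes a hard require-link (equivalently, a forbid-link
    on the near miss's value)."""
    new = dict(model)
    for f, (kind, v) in model.items():
        if kind == "must-be" and near_miss.get(f) != v:
            new[f] = ("require", v)
    return new

def generalize(model, example):
    """Positive example: loosen the model. Each tentative feature where the
    example differs is dropped (a drop-link)."""
    new = dict(model)
    for f, (kind, v) in model.items():
        if kind == "must-be" and example.get(f) != v:
            new[f] = ("any", None)
    return new

# Seed: two standing bricks supporting a lying brick.
seed = {"top_supported": True, "sides_touch": False, "top_shape": "brick"}
model = init_model(seed)

# Near miss: the sides touch -> specialization (forbid touching).
model = specialize(model, {"top_supported": True, "sides_touch": True,
                           "top_shape": "brick"})

# Example: a wedge on top is still an arch -> generalization (drop the link).
model = generalize(model, {"top_supported": True, "sides_touch": False,
                           "top_shape": "wedge"})
```

One comparison yields one definite change to the model: near misses shrink the set of matches, positive examples broaden it.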

Felicity conditions

The teacher and the learner must have a model of each other to achieve the best learning. The learner must talk to themselves to understand what they are doing.

How to package ideas better

To package ideas so that they communicate better and achieve better results, the following five characteristics make communication more effective.

  • Symbol: a visual handle that makes the idea easy to remember
  • Slogan: a phrase that focuses the idea
  • Surprise: something unexpected that catches attention
  • Salient: one thing that stands out
  • Story: stories help the idea spread to other people

5. Search: Optimal, Branch and Bound, A*

Optimal search trees

Finding the best possible sequence of choices.

Getting closer to the goal is generally considered good, but it may lead to dead ends or non-optimal choices.


If the minimum path length to the goal is known, the search algorithm records the length of each path already extended and always extends the shortest path first, until all remaining paths are longer than the known shortest length.

Branch & Bound

Using the same principle, if the shortest path is not known in advance (no “Oracle”), paths are extended until they reach the goal and their lengths are recorded. The other paths are extended until they are as long as or longer than the current shortest path to the goal.

The algorithm extends the first path in the queue, then sorts the queue so the shortest path comes first.
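
The idea can be sketched in a few lines of Python, using a priority queue of partial paths over a toy weighted graph (the graph and its distances are illustrative):

```python
import heapq

def branch_and_bound(graph, start, goal):
    """Always extend the shortest partial path first; stop once every
    remaining partial path is at least as long as the best complete one."""
    queue = [(0, [start])]            # (path length, path), shortest first
    best = None
    while queue:
        length, path = heapq.heappop(queue)
        if best is not None and length >= best[0]:
            break                     # nothing shorter can still be found
        node = path[-1]
        if node == goal:
            best = (length, path)     # record a complete path to the goal
            continue
        for neighbor, dist in graph[node].items():
            if neighbor not in path:  # no loops within a single path
                heapq.heappush(queue, (length + dist, path + [neighbor]))
    return best

graph = {                             # illustrative distances
    "S": {"A": 4, "B": 2},
    "A": {"S": 4, "B": 1, "G": 2},
    "B": {"S": 2, "A": 1, "G": 6},
    "G": {"A": 2, "B": 6},
}
```

Here `branch_and_bound(graph, "S", "G")` returns `(5, ["S", "B", "A", "G"])`: the direct-looking routes S–A–G (length 6) and S–B–G (length 8) both lose to the detour through B and A.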

Branch & Bound + Extended List

In addition to the branch and bound algorithm, new branches that lead back to a node already extended by a shorter path are discarded; the extended list records those nodes.

Branch & Bound + Admissible Heuristic

The estimated remaining distance to the goal is added to the length of each extended path. Only the path with the smallest total (path length so far + estimated remaining distance) is extended, until the goal is reached; paths whose total exceeds the best complete path are discarded. The heuristic is admissible when the estimate never overstates the true remaining distance.

A* = Branch & Bound + Extended List + Admissible Heuristic

In the algorithm, instead of fully sorting all the extensions to find the shortest paths (which would take many calculations), simply test whether the shortest path extended so far reaches the goal.
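
Putting the three ingredients together, a compact A* sketch might look like this. The toy graph and the heuristic values are illustrative; the heuristic never overestimates a true remaining distance, so it is admissible.

```python
import heapq

def a_star(graph, heuristic, start, goal):
    """Branch & bound + extended list + admissible heuristic: the queue is
    ordered by path length so far + estimated remaining distance."""
    queue = [(heuristic[start], 0, [start])]
    extended = set()                  # extended list: nodes already expanded
    while queue:
        _, length, path = heapq.heappop(queue)
        node = path[-1]
        if node == goal:
            return length, path       # first complete pop is the shortest
        if node in extended:
            continue                  # already reached by a shorter path
        extended.add(node)
        for neighbor, dist in graph[node].items():
            if neighbor not in extended:
                new_len = length + dist
                heapq.heappush(
                    queue, (new_len + heuristic[neighbor], new_len,
                            path + [neighbor]))
    return None

graph = {                             # illustrative distances
    "S": {"A": 4, "B": 2},
    "A": {"S": 4, "B": 1, "G": 2},
    "B": {"S": 2, "A": 1, "G": 6},
    "G": {"A": 2, "B": 6},
}
heuristic = {"S": 5, "A": 2, "B": 3, "G": 0}   # never overestimates
```

Here `a_star(graph, heuristic, "S", "G")` returns `(5, ["S", "B", "A", "G"])` while expanding each node at most once.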

Consistency Heuristic

In certain cases (typically not on maps), the admissible heuristic alone may lead to problems. The consistency heuristic uses a stronger condition: for any two nodes x and y, the absolute difference between the estimated distances to the goal from x and from y must not exceed the actual distance between them: |H(x) − H(y)| ≤ D(x, y).
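
The condition can be checked mechanically over every edge of a graph. A sketch, with illustrative distances and heuristic values:

```python
def is_consistent(graph, heuristic):
    """Check |H(x) - H(y)| <= D(x, y) for every edge (x, y)."""
    return all(abs(heuristic[x] - heuristic[y]) <= dist
               for x in graph
               for y, dist in graph[x].items())

graph = {                             # illustrative distances
    "S": {"A": 4, "B": 2},
    "A": {"S": 4, "B": 1, "G": 2},
    "B": {"S": 2, "A": 1, "G": 6},
    "G": {"A": 2, "B": 6},
}
h_good = {"S": 5, "A": 2, "B": 3, "G": 0}  # consistent
h_bad = {"S": 5, "A": 2, "B": 6, "G": 0}   # |H(B) - H(A)| = 4 > D(B, A) = 1
```

`is_consistent(graph, h_good)` is true, while `h_bad` fails on the B–A edge: overestimating at a single node is enough to break consistency.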

4. Search: Depth-First, Hill Climbing, Beam

Search trees

Search trees represent all the possibilities, so we can search for the quickest path without revisiting previous paths. They are particularly used for quickest paths on maps with nodes (intersections), but not exclusively. They are primarily about choices, and finding the best sequence of choices.

British Museum Algorithm = complete expansion of all paths

Depth First vs Breadth First Search Algorithms

The Depth First Search algorithm goes down one level at a time, by convention taking the left branch, until the goal is reached or the path dead-ends. If the goal is not reached, the program goes back up to the previous node and tries another branch, a process called “backing up” or “backtracking”.

Breadth First Search Algorithm expands the search level by level, exploring all possibilities of one level before going down.

Both Depth First and Breadth First algorithms maintain a queue of partial paths to extend toward the goal, and discard paths that lead to nodes that have already been extended.

Hill Climbing Search Algorithm

The search tree is always extended towards the node that is closest to the goal.

Hill Climbing is an informed search making use of certain heuristic information.

Beam Search Algorithm

A beam search algorithm limits the number of paths kept at each level (the beam width). It only extends the paths that seem to lead toward the goal, according to some heuristic.

Commands summary

Enqueuing the next nodes when the goal is not yet reached, the algorithms proceed as follows:

  • Depth First: new paths go to the front of the queue
  • Breadth First: new paths go to the back of the queue
  • Hill Climbing: sort the new paths, placing the one closest to the goal first
  • Beam: keep only a small fixed number of paths in the queue
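
The four policies differ only in how new paths enter the queue, which a single parameterized search can show. The toy graph and heuristic values are illustrative, and the beam branch is a simplification (real beam search prunes level by level):

```python
from collections import deque

def search(graph, heuristic, start, goal, method, beam_width=2):
    queue = deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        new_paths = [path + [n] for n in graph.get(path[-1], [])
                     if n not in path]
        if method == "depth":         # new paths go to the front
            queue.extendleft(reversed(new_paths))
        elif method == "breadth":     # new paths go to the back
            queue.extend(new_paths)
        elif method == "hill":        # sort new paths, closest to goal first
            new_paths.sort(key=lambda p: heuristic[p[-1]])
            queue.extendleft(reversed(new_paths))
        elif method == "beam":        # keep only the best few paths
            queue.extend(new_paths)
            queue = deque(sorted(queue,
                                 key=lambda p: heuristic[p[-1]])[:beam_width])
    return None

graph = {"S": ["A", "B"], "A": ["C", "D"], "B": ["G"]}
heuristic = {"S": 3, "A": 2, "B": 1, "C": 3, "D": 3, "G": 0}
```

All four methods find S → B → G on this graph, but depth-first and breadth-first explore the fruitless A branch first, while hill climbing and beam search use the heuristic to head straight for the goal.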

Best First Search Algorithm

Of all enqueued nodes, it extends the one that is closest to the goal.

Continuous space

Search algorithms can be used in continuous spaces. However, Hill Climbing can encounter certain problems, such as:

  • getting blocked at a local maximum
  • the telephone pole problem: getting confused at a flat level between high rising parts of the space
  • especially in high-dimensional spaces, getting fooled by a configuration of the space in which every available step seems to lead away from the goal, even though a path upward exists
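
The local-maximum problem is easy to demonstrate with a toy one-dimensional objective (the function and step size below are illustrative):

```python
import math

def f(x):
    # Two bumps: a small maximum near x = -2, a higher one near x = 2.
    return math.exp(-(x + 2) ** 2) + 2 * math.exp(-(x - 2) ** 2)

def hill_climb(x, step=0.1):
    """Move to the best neighbor until no neighbor improves on x."""
    while True:
        best = max([x, x - step, x + step], key=f)  # ties keep x: terminates
        if best == x:
            return x        # a maximum -- but possibly only a local one
        x = best

# Starting left of the small bump, the climber stops near x = -2 and
# never sees the higher peak near x = 2.
stuck = hill_climb(-3.0)
```

Started at 0.5 instead, the same climber reaches the global maximum near x = 2: where hill climbing ends up depends entirely on where it starts.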

2. Reasoning: Goal Trees and Problem Solving

Problem reduction

Take a complicated problem and transform it into a simpler one.

Start with safe transformations, the ones you are sure will work in any case. Then apply heuristic transformations, the ones that could work.

The problem simplification schema may create “and nodes“, where the problem forks into several subproblems that must all be solved, and “or nodes”, where the problem may be solved by either one transformation or another. The resulting schema is usually called a “problem reduction tree“, “and/or tree” or “goal tree“.

At an “or node”, it helps to consider the depth of functional composition (the number of transformations still to be applied under each “or” option of the branch) and how simple each option is to solve, in order to choose which branch to pursue.
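
A goal tree can be evaluated with a few lines of recursion. In this sketch, goals are nested tuples tagged “and” or “or”, and leaves are looked up in a set of already-solved subproblems; all the labels are illustrative:

```python
def solvable(goal, solved_leaves):
    """An 'and' node needs every subgoal solved; an 'or' node needs one."""
    if isinstance(goal, tuple):
        kind, *subgoals = goal
        if kind == "and":
            return all(solvable(g, solved_leaves) for g in subgoals)
        if kind == "or":
            return any(solvable(g, solved_leaves) for g in subgoals)
    return goal in solved_leaves     # a leaf: a directly solvable subproblem

# Solve the problem either by a transformation that forks into two
# subproblems (an and-node) or by a single table lookup (an or-option).
tree = ("or",
        ("and", "subproblem-1", "subproblem-2"),
        "table-lookup")
```

The and-node succeeds only when both subproblems are solved, while the or-node succeeds as soon as any one of its options does.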


Everything depends on the domain of the problem and the knowledge required to solve it. Knowledge about knowledge, meta-knowledge, is power to solve problems.

  1. Start by evaluating what kind of knowledge is involved.
  2. Understand how the knowledge is represented. Each category of knowledge has its own way of being represented.
  3. Know how the knowledge is used.
  4. Know how much knowledge is required to solve the problem.
  5. Know what exactly the knowledge does to solve the problem.