Continued from previous class
Event diagrams must always be arranged so that there are final nodes and no loops. Probabilities are recorded in tables for each event; the tables are filled by repeating the experiment, so as to learn the probability and the occurrences of each event.
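A minimal sketch of filling such a table by repetition. The two events and their rates are hypothetical, chosen only to show how counting repeated trials yields a joint probability table:

```python
import random

random.seed(0)

# Hypothetical experiment with two dependent binary events
# ("rain" and "wet grass"); the rates below are assumptions.
def run_trial():
    rain = random.random() < 0.3                     # assumed rate of rain
    wet = random.random() < (0.9 if rain else 0.1)   # wetness depends on rain
    return rain, wet

# Fill the probability table by counting occurrences over many repetitions.
N = 100_000
counts = {(r, w): 0 for r in (True, False) for w in (True, False)}
for _ in range(N):
    counts[run_trial()] += 1

table = {outcome: c / N for outcome, c in counts.items()}
```

The more trials are run, the closer the table entries get to the true joint probabilities.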
Several models can be drawn for a given set of events. To know which model is right, the Bayesian probability formulas can be used to confirm whether events are independent or not, make the probabilities easier to compute, and choose the more appropriate model.
- P(a|b) = P(a,b) / P(b)
- P(a|b) P(b) = P(a,b) = P(b|a) P(a)
- P(a|b) = P(b|a) P(a) / P(b)
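The three formulas can be checked numerically. The joint and marginal probabilities below are made-up values, used only to verify that the chain rule and Bayes' rule agree:

```python
# Assumed probabilities for two binary events a and b (illustrative values).
p_ab = 0.12          # P(a, b)
p_a = 0.30           # P(a)
p_b = 0.40           # P(b)

p_a_given_b = p_ab / p_b     # P(a|b) = P(a,b) / P(b)
p_b_given_a = p_ab / p_a     # P(b|a) = P(a,b) / P(a)

# Bayes' rule recovers P(a|b) from the reversed conditional:
assert abs(p_a_given_b - p_b_given_a * p_a / p_b) < 1e-12
```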
Defining a as a class and b as the evidence, the probability of the class given the evidence can be obtained through these formulas:
P(class|evidence) = P(evidence|class) P(class) / P(evidence)
Using the evidence gathered from experience, classes can be inferred by analyzing the results and the corresponding probabilities.
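A small sketch of that inference, in the diagnosis spirit of the lecture. The priors and likelihoods are invented numbers; the point is only the mechanics of turning P(evidence|class) into P(class|evidence):

```python
# Hypothetical priors and likelihoods (made-up numbers for illustration).
priors = {"flu": 0.1, "cold": 0.9}          # P(class)
likelihood = {"flu": 0.8, "cold": 0.2}      # P(fever | class)

# P(evidence) by total probability, then P(class | evidence) by Bayes' rule.
p_evidence = sum(likelihood[c] * priors[c] for c in priors)
posterior = {c: likelihood[c] * priors[c] / p_evidence for c in priors}

# The inferred class is the one with the highest posterior probability.
best = max(posterior, key=posterior.get)
```

Note that even though fever is much more likely given flu, the strong prior on cold still makes cold the more probable class.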
Given the data from experiment or simulation, the right model can be picked out as the one that better corresponds to the observed probabilities. This makes it possible to select between two existing models.
However, when many models can be created, their sheer number makes it impossible to compare them all. The solution is to compare two models at a time, recursively: at each trial, the losing model is modified in search of improvement, until some model meets the criteria for success.
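A toy sketch of that two-model tournament. Here a "model" is reduced to a single predicted probability for a binary event (a real structure search would rewire graph edges instead), and the data and mutation scheme are assumptions for illustration:

```python
import random
from math import log

random.seed(1)

# Simulated trial data: 70 successes, 30 failures.
data = [1] * 70 + [0] * 30

def score(p):
    # Log-likelihood of the data under a model predicting success probability p.
    return sum(log(p) if x else log(1 - p) for x in data)

def mutate(p):
    # "Modify the losing model": a small random tweak, clamped away from 0 and 1.
    return min(0.99, max(0.01, p + random.uniform(-0.1, 0.1)))

champion = 0.5
for _ in range(200):
    challenger = mutate(champion)          # the loser is rebuilt and retried
    if score(challenger) > score(champion):
        champion = challenger              # the winner stays in the tournament
```

After enough trials the surviving model's prediction drifts toward the observed rate of 0.7.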
A trick is to use the sum of the logarithms of the probabilities rather than their product, as large numbers of trials make the product too small to compute properly (floating-point underflow).
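The underflow is easy to demonstrate. Multiplying a few thousand probabilities collapses to exactly zero in double precision, while the log-sum remains an ordinary number:

```python
from math import log

p = 0.5      # probability of a single trial (illustrative value)
n = 2000     # number of trials

# Direct product of many probabilities underflows to 0.0...
product = 1.0
for _ in range(n):
    product *= p

# ...while the sum of logarithms stays perfectly usable for comparing models.
log_sum = n * log(p)
```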
To avoid getting stuck in local maxima, a radical rearrangement of the structure is launched after a certain number of trials.
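A minimal illustration of why the radical restart helps, using a one-dimensional score with an assumed local and global maximum in place of a real structure score:

```python
from math import exp
import random

random.seed(2)

# Assumed score: a local maximum near x = -2 and the global one near x = +2.
def score(x):
    return exp(-(x + 2) ** 2) + 2 * exp(-(x - 2) ** 2)

def hill_climb(start, steps=200):
    # Greedy local search standing in for incremental structure modification.
    best = start
    for _ in range(steps):
        cand = best + random.uniform(-0.2, 0.2)
        if score(cand) > score(best):
            best = cand
    return best

# A single run started at -2 stalls on the local maximum...
stuck = hill_climb(-2.0)

# ...but radical restarts from fresh random points can escape it.
best = max((hill_climb(random.uniform(-4, 4)) for _ in range(10)), key=score)
```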
This Bayesian structure discovery works quite well in situations where a diagnosis must be made: medical diagnosis, lie detection, diagnosing the symptoms of an aircraft or a program not working…