Computer vision is a key branch of artificial intelligence, critical to many applications: from robot motion to self-driving cars, and from medical imaging to product recognition in manufacturing plants. This MIT course presents the core problems of computer vision and how they are tackled with convolutional neural networks, together with current research directions and state-of-the-art architectures.
Machine learning relies on many mathematical formulas and relations to carry out the various tasks it can handle. Gathered in the following “cheat sheets” by Afshine and Shervine Amidi, the key concepts of supervised learning, unsupervised learning, and deep learning, together with machine learning tips and tricks and reminders on probabilities, statistics, algebra, and calculus, are all presented in detail with the underlying math.
Based on the Stanford course on Machine Learning (CS 229), the cheat sheets summarize the important concepts of each branch with simple explanations and diagrams, such as the following table covering underfitting and overfitting.
| | Underfitting | Just right | Overfitting |
|---|---|---|---|
| Symptoms | • High training error<br>• Training error close to test error<br>• High bias | • Training error slightly lower than test error | • Very low training error<br>• Training error much lower than test error<br>• High variance |
| Possible remedies | • Complexify model<br>• Add more features<br>• Train longer | | • Perform regularization<br>• Get more data |

*(The original table also includes a deep learning illustration for each regime.)*
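The symptom rows above can be turned into a rough diagnostic heuristic. This is a minimal sketch, not part of the cheat sheet: the `error_floor` and `gap_ratio` thresholds are illustrative assumptions that would need tuning per problem.

```python
def diagnose_fit(train_error, test_error, error_floor=0.05, gap_ratio=2.0):
    """Heuristic mirroring the table: compare training error to test error
    to flag high bias (underfitting) or high variance (overfitting).
    Thresholds are illustrative, not canonical."""
    if train_error > error_floor and test_error - train_error < error_floor:
        return "underfitting"   # high training error, train error close to test error
    if test_error > gap_ratio * max(train_error, 1e-12):
        return "overfitting"    # training error much lower than test error
    return "just right"

print(diagnose_fit(train_error=0.30, test_error=0.32))  # underfitting
print(diagnose_fit(train_error=0.01, test_error=0.15))  # overfitting
print(diagnose_fit(train_error=0.04, test_error=0.06))  # just right
```

The remedies column then follows from the diagnosis: a more complex model or longer training for underfitting, regularization or more data for overfitting.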
The main machine learning cheat sheets can be found here:
- Supervised Learning
Results about linear models, generative learning, support vector machines and kernel methods
- Unsupervised Learning
Formulas about clustering methods and dimensionality reduction
- Deep Learning
Main concepts around neural networks, backpropagation and reinforcement learning
- Machine Learning Tips and Tricks
Good habits and sanity checks to make sure that your model is trained the right way
Other mathematics and coding cheat sheets can be found here:
- Probabilities and Statistics
Formulas about combinatorics, random variables, main probability distributions, and parameter estimation
- Linear Algebra and Calculus
Matrix-vector notations as well as algebra and calculus properties
- Getting started with Matlab
Main features and good practices to adopt
The complete cheat sheets can also be found on GitHub.
Neural networks come in a wide range of shapes and functions, with diverse architectures and parameters for input, hidden, and output nodes, as well as convolutional or recurrent nodes.
Regrouped in a convenient summary by Fjodor van Veen, the most popular neural network architectures have been cataloged with a detailed description of each type. The complete post, with explanations of the use and goals of each network, can be found on the Asimov Institute’s “The Neural Network Zoo”.
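The simplest shape in this range, a plain feedforward network with input, hidden, and output nodes, can be sketched in a few lines. The layer sizes and random weights below are placeholders for illustration, not a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, w1, b1, w2, b2):
    """One hidden layer with tanh activation: input -> hidden -> output."""
    h = np.tanh(x @ w1 + b1)   # hidden nodes
    return h @ w2 + b2         # output nodes (linear)

x = rng.normal(size=(4, 3))                    # 4 samples, 3 input nodes
w1, b1 = rng.normal(size=(3, 5)), np.zeros(5)  # 5 hidden nodes
w2, b2 = rng.normal(size=(5, 2)), np.zeros(2)  # 2 output nodes
print(forward(x, w1, b1, w2, b2).shape)        # (4, 2)
```

Convolutional and recurrent networks in the zoo differ mainly in how nodes are connected (weight sharing across space or time), not in this basic forward-pass structure.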
This post gathers the tutorials and exercises, called “mega-recitations”, that apply the concepts presented in the Artificial Intelligence course of Prof. Patrick Winston.
Bayesian Story Merging
Using the probabilistic model discovery studied previously, concepts and ideas can be analyzed and merged when they are similar.
By analyzing the correspondences between clusters in two sets of data, regular correspondences between certain data subsets can be identified. According to Prof. Patrick Winston, this system of correspondences is very likely to be present in human intelligence.
Applications of AI
- Scientific approach: understanding how AI works
- Engineering approach: implementing AI applications
The most interesting applications are not those that replace people with artificial intelligence, but those that make it work in tandem with people.
Using large amounts of computing power and data is becoming more common, but an interesting question is how little information is needed to solve a given problem.
The system translates stories into an internal language in order to understand them and display them as diagrams. This makes it possible to read stories on different levels, and to use different “personas” that understand the same story differently.
Humans may not be intelligent enough to build a machine that is as intelligent as them.