Computer vision is a key area of artificial intelligence, critical to many applications: from robot motion to self-driving cars, and from medical imaging to product recognition in manufacturing plants. This MIT course presents the challenges of computer vision and how they are handled with Convolutional Neural Networks, together with the latest research domains and state-of-the-art architectures.
A hypothetical architecture built from existing machine learning algorithms to automate the self-sustained generation of new scientific and technological advances.
While many Artificial Intelligence researchers, companies and engineers are focusing their activities on the creation of Artificial General Intelligence with the hope of one day reaching the Technological Singularity, this paper takes a different approach. It presents a conceptual, speculative, yet pragmatic organization of concepts aimed directly at exponential scientific and technological progress. The paper is an attempt to design a process that automates the theory-generation part of research and development through existing artificial intelligence algorithms and supporting technologies.
Relying notably upon recent developments in the conceptual and technical environment of graph neural networks, as well as upon multiple other technologies, the protocol breaks down the generation of scientific and technological theories into simple steps and proposes existing candidate algorithms to handle each step. While providing multiple scientific, mathematical and algorithmic references for advanced readers, the paper nevertheless uses the simplest possible terms to remain accessible to a wider audience with minimal background in artificial intelligence.
Even though the process it describes is still purely speculative, the Singularity Protocol does present a credible, structured and detailed approach to generating new scientific and technological theories at scale. And though the protocol still needs to go through numerous adaptations, tests and computing challenges, it is built exclusively upon existing technologies, and it introduces a plan to gather and structure technical, financial and human resources so as to rapidly develop and implement an organization that could soon lead to the Technological Singularity.
Machine learning relies on many mathematical formulas and relations to implement the different tasks it can handle. Gathered in the following “cheat sheets” by Afshine and Shervine Amidi, the concepts for supervised and unsupervised learning and deep learning, together with machine learning tips and tricks and probability, statistics, algebra and calculus reminders, are all presented in detail with the underlying math.
Based on the Stanford course on Machine Learning (CS 229), the cheat sheets summarize the important concepts of each branch with simple explanations and diagrams, such as the following table covering underfitting and overfitting.
| | Underfitting | Just right | Overfitting |
| --- | --- | --- | --- |
| Symptoms | • High training error • Training error close to test error • High bias | • Training error slightly lower than test error | • Very low training error • Training error much lower than test error • High variance |
| Deep learning illustration | *(image)* | *(image)* | *(image)* |
| Possible remedies | • Complexify model • Add more features • Train longer | | • Perform regularization • Get more data |
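The symptoms in the table can be reproduced in a few lines. The sketch below (an illustrative example, not taken from the cheat sheets) fits polynomials of increasing degree to noisy cubic data: a low degree underfits (high training error, close to test error), a matching degree fits well, and a very high degree overfits (very low training error, much lower than test error).

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of a cubic function, split into train and test sets.
x = rng.uniform(-1, 1, 60)
y = x**3 - x + rng.normal(0, 0.05, x.size)
x_train, y_train = x[:40], y[:40]
x_test, y_test = x[40:], y[40:]

def errors(degree):
    """Mean squared train/test error of a polynomial fit of the given degree."""
    coeffs = np.polyfit(x_train, y_train, degree)
    mse = lambda xs, ys: np.mean((np.polyval(coeffs, xs) - ys) ** 2)
    return mse(x_train, y_train), mse(x_test, y_test)

for degree in (1, 3, 15):  # underfitting, just right, overfitting
    train_err, test_err = errors(degree)
    print(f"degree {degree:2d}: train={train_err:.4f}  test={test_err:.4f}")
```

Because higher-degree models nest lower-degree ones, the training error only decreases with degree, while the gap between training and test error widens in the overfitting regime.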
The main machine learning cheat sheets can be found here:
- Supervised Learning
Results about linear models, generative learning, support vector machines and kernel methods
- Unsupervised Learning
Formulas about clustering methods and dimensionality reduction
- Deep Learning
Main concepts around neural networks, backpropagation and reinforcement learning
- Machine Learning Tips and Tricks
Good habits and sanity checks to make sure that your model is trained the right way
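As a concrete taste of the supervised learning material, here is a minimal sketch (my own illustration, not code from the cheat sheets) of the gradient descent update rule θ ← θ − α∇J(θ) applied to linear regression with a mean squared error cost:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data generated from y = 2x + 1 plus Gaussian noise.
X = rng.uniform(0, 1, (100, 1))
y = 2 * X[:, 0] + 1 + rng.normal(0, 0.1, 100)

# Design matrix with an intercept column, parameters initialized at zero.
A = np.hstack([np.ones((100, 1)), X])
theta = np.zeros(2)
alpha = 0.5  # learning rate

# Batch gradient descent on the mean squared error J(theta).
for _ in range(2000):
    grad = A.T @ (A @ theta - y) / len(y)
    theta -= alpha * grad

print(theta)  # approaches roughly [1, 2], the intercept and slope of the data
```

The same update rule, with different cost functions, underlies many of the models in the supervised learning cheat sheet.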
Other mathematics and coding cheat sheets can be found here:
- Probabilities and Statistics
Formulas about combinatorics, random variables, main probability distributions, and parameter estimation
- Linear Algebra and Calculus
Matrix-vector notations as well as algebra and calculus properties
- Getting started with Matlab
Main features and good practices to adopt
The complete cheat sheets can also be found on GitHub.
This series of articles dives deeper into the actual applications of Machine Learning currently in use in many technological processes and devices.
Through these posts, entitled “Machine Learning is Fun!”, Adam Geitgey guides us step by step through the concepts, data, algorithms, code, results and pitfalls of machine learning applications, from image, face and speech recognition to language translation and more. The series also gathers several different sources for more details on each application and its development.
The series is dense with detailed code, but everything is explained very clearly, step by step, with detailed illustrations. It notably covers Convolutional Neural Networks (including Generative Adversarial Networks) and Recurrent Neural Networks, together with some of their most prominent applications in daily life. It is a real course not to be missed by any ML developer!
Here is the list of posts with direct links:
- Part 1: The world’s easiest introduction to Machine Learning
- Part 2: Using Machine Learning to generate Super Mario Maker levels
- Part 3: Deep Learning and Convolutional Neural Networks
- Part 4: Modern Face Recognition with Deep Learning
- Part 5: Language Translation with Deep Learning and the Magic of Sequences
- Part 6: How to do Speech Recognition with Deep Learning
- Part 7: Abusing Generative Adversarial Networks to Make 8-bit Pixel Art
- Part 8: How to Intentionally Trick Neural Networks
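The convolution operation at the heart of Part 3’s Convolutional Neural Networks can be illustrated in a few lines of plain NumPy (a minimal sketch of my own, not the series’ code): a small kernel slides over the image, and each output value is the sum of elementwise products under the current window.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution (strictly, cross-correlation, as in most CNN libraries)."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge detector applied to an image that is dark on the left, bright on the right.
image = np.zeros((5, 5))
image[:, 3:] = 1.0
kernel = np.array([[-1.0, 1.0]] * 3)  # responds to left-to-right brightness increases

print(conv2d(image, kernel))  # nonzero only in the column where the edge sits
```

In a real CNN the kernel values are not hand-designed like this edge detector: they are learned by backpropagation, which is exactly what Part 3 of the series walks through.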
Neural networks come in a wide range of shapes and functions, with diverse architectures and parameters for input, hidden and output nodes, as well as convolutional or recurrent nodes.
Regrouped in a convenient summary by Fjodor van Veen, the most popular neural network architectures have been cataloged with detailed descriptions of each type. The complete post, with explanations of the use and goals of each network, can be found on the Asimov Institute’s blog: “The Neural Network Zoo”.