Reinforcement Learning is one of the most exciting areas of Machine Learning and AI, as it allows for the programming of agents that make decisions in both virtual and real-world environments. This MIT course presents the theoretical background as well as the actual Deep Q-Network algorithm that powers some of the best Reinforcement Learning applications.
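As a purely illustrative companion to the course material, here is a minimal sketch of the tabular Q-learning update that Deep Q-Networks generalize with a neural network; the toy environment, state and action counts, and hyperparameter values are assumptions made for the example, not details from the course.

```python
import numpy as np

# Minimal tabular Q-learning sketch (the update rule that DQN approximates
# with a neural network). Environment details below are assumed for illustration.
n_states, n_actions = 16, 4          # hypothetical small gridworld
alpha, gamma, epsilon = 0.1, 0.99, 0.1

Q = np.zeros((n_states, n_actions))

def step(state, action):
    """Placeholder environment transition: returns (next_state, reward, done)."""
    next_state = (state + 1) % n_states   # dummy dynamics, for illustration only
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward, next_state == n_states - 1

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy action selection.
        if np.random.rand() < epsilon:
            action = np.random.randint(n_actions)
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward, done = step(state, action)
        # Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a').
        td_target = reward + gamma * np.max(Q[next_state]) * (not done)
        Q[state, action] += alpha * (td_target - Q[state, action])
        state = next_state
```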
Formalizing social interactions for robot fleets
The development of functional robots makes the management of robot fleets critically important for robotics and the broader field of artificial intelligence. The following study aims at formalizing a broad range of social interactions that can be observed in natural environments, standardizing them as examples for programming complex interactions between different robots.

Continue reading “Formalizing social interactions for robot fleets”
Computer vision and convolutional neural networks
Computer vision is a key aspect of artificial intelligence that is critical to many applications, from robot movements to self-driving cars and from medical imaging to product recognition in manufacturing plants. This MIT course presents the problems of computer vision and how they are handled with Convolutional Neural Networks, together with the latest research domains and state-of-the-art algorithm architectures.
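As a hedged illustration of the kind of model the course covers, the following is a minimal convolutional network sketch; the use of PyTorch, the layer sizes, and the 28x28 grayscale input with 10 classes are assumptions for the example, not details from the course.

```python
import torch
import torch.nn as nn

# Minimal convolutional network sketch: two conv/pool stages followed by a
# fully connected classifier. Input shape (1, 28, 28) and 10 classes are assumed.
class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # learn local filters
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 14x14 -> 7x7
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Sanity check on a random batch of four images.
logits = TinyConvNet()(torch.randn(4, 1, 28, 28))
print(logits.shape)  # torch.Size([4, 10])
```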
Continue reading “Computer vision and convolutional neural networks”
The Singularity may be nearer than General AI: introducing the Singularity Protocol
A hypothetical architecture of existing machine learning algorithms to automate the self-sustained generation of new scientific and technological advances.
Abstract
While many Artificial Intelligence researchers, companies and engineers are focusing their efforts on the creation of Artificial General Intelligence in the hope of one day reaching the Technological Singularity, this paper takes a different approach. It presents a conceptual, speculative yet pragmatic organization of concepts to directly reach exponential scientific and technological progress. This paper is an attempt to design a process that automates the theory-generation part of research and development through existing artificial intelligence algorithms and supporting technologies.
Notably relying upon recent developments in the conceptual and technical environment of graph neural networks, as well as upon multiple other technologies, the protocol breaks the generation of scientific and technological theories down into simple steps and proposes existing algorithm candidates to handle each step. With multiple scientific, mathematical and algorithmic references and sources for advanced readers, this paper nevertheless tries to use the simplest terms possible to facilitate comprehension by a wider audience with minimal background in artificial intelligence.
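Since the abstract leans on graph neural networks without detailing them, here is a minimal, purely illustrative message-passing layer; the mean-aggregation scheme, feature sizes, and random weights are assumptions for the sketch and are not part of the protocol itself.

```python
import numpy as np

# Minimal message-passing sketch of a graph neural network layer:
# each node's new features combine a transformation of its own features
# with a transformation of the mean of its neighbors' features.
def gnn_layer(node_feats, adjacency, W_self, W_neigh):
    # Average over each node's neighbors (degree clipped to avoid division by zero).
    degrees = adjacency.sum(axis=1, keepdims=True).clip(min=1)
    neighbor_mean = (adjacency @ node_feats) / degrees
    return np.tanh(node_feats @ W_self + neighbor_mean @ W_neigh)

# Toy graph with 4 nodes and 3-dimensional features, assumed for illustration.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
X = rng.normal(size=(4, 3))
H = gnn_layer(X, A, rng.normal(size=(3, 3)), rng.normal(size=(3, 3)))
print(H.shape)  # (4, 3)
```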
Even though this paper describes a process that is still purely speculative for now, the Singularity Protocol does present a credible, structured and detailed approach to generating new scientific and technological theories at scale. And though it still needs to go through numerous adaptations, tests and computing challenges, this protocol is built exclusively upon existing technologies, and it introduces a plan to gather and structure technical, financial and human resources so as to rapidly develop and implement an organization that could soon lead to the Technological Singularity.

Continue reading “The Singularity may be nearer than General AI: introducing the Singularity Protocol”
Conceptual and mathematical summary for machine learning
Machine learning makes use of multiple mathematical formulas and relations to implement the different tasks it can handle. Gathered in the following “cheat sheets” by Afshine and Shervine Amidi, the concepts for supervised learning, unsupervised learning and deep learning, together with machine learning tips and tricks and probability, statistics, algebra and calculus reminders, are all presented in detail with the underlying math.
Based on the Stanford course on Machine Learning (CS 229), the cheat sheets summarize the important concepts of each branch with simple explanations and diagrams, such as the following table covering underfitting and overfitting.
| | Underfitting | Just right | Overfitting |
| --- | --- | --- | --- |
| Symptoms | • High training error<br>• Training error close to test error<br>• High bias | • Training error slightly lower than test error | • Very low training error<br>• Training error much lower than test error<br>• High variance |
| Possible remedies | • Complexify model<br>• Add more features<br>• Train longer | | • Perform regularization<br>• Get more data |

(The original table also includes regression, classification and deep learning illustrations for each of the three regimes.)
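As a hedged illustration of the symptoms listed in the table, the following sketch compares training and test error for an over-parameterized polynomial fit with and without regularization; the use of scikit-learn, the polynomial degree, and the noise level are assumptions made for the example.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.metrics import mean_squared_error
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Toy 1-D regression problem: noisy sine wave with a small training set.
rng = np.random.default_rng(0)
X_train = rng.uniform(0, 3, size=(20, 1))
y_train = np.sin(2 * X_train[:, 0]) + rng.normal(scale=0.2, size=20)
X_test = rng.uniform(0, 3, size=(200, 1))
y_test = np.sin(2 * X_test[:, 0]) + rng.normal(scale=0.2, size=200)

# High-degree polynomial with no regularization tends to overfit:
# very low training error, much higher test error.
overfit = make_pipeline(PolynomialFeatures(degree=15), LinearRegression())
overfit.fit(X_train, y_train)

# Same features with L2 regularization (ridge) as one possible remedy.
regularized = make_pipeline(PolynomialFeatures(degree=15), Ridge(alpha=1.0))
regularized.fit(X_train, y_train)

for name, model in [("no regularization", overfit), ("ridge", regularized)]:
    train_err = mean_squared_error(y_train, model.predict(X_train))
    test_err = mean_squared_error(y_test, model.predict(X_test))
    print(f"{name}: train MSE = {train_err:.3f}, test MSE = {test_err:.3f}")
```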
The main machine learning cheat sheets can be found here:
- Supervised Learning
  Results about linear models, generative learning, support vector machines and kernel methods (a minimal gradient-descent sketch for linear models follows this list)
- Unsupervised Learning
  Formulas about clustering methods and dimensionality reduction
- Deep Learning
  Main concepts around neural networks, backpropagation and reinforcement learning
- Machine Learning Tips and Tricks
  Good habits and sanity checks to make sure that your model is trained the right way
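Referenced from the Supervised Learning item above, here is a minimal sketch of batch gradient descent for least-squares linear regression, one of the linear-model topics the cheat sheet summarizes; the data, learning rate and iteration count are assumptions for the example.

```python
import numpy as np

# Batch gradient descent for least-squares linear regression:
# minimize (1/2m) * ||X @ theta - y||^2 by following its gradient.
rng = np.random.default_rng(0)
m = 100
X = np.column_stack([np.ones(m), rng.uniform(-1, 1, size=m)])  # bias + one feature
true_theta = np.array([0.5, 2.0])
y = X @ true_theta + rng.normal(scale=0.1, size=m)

theta = np.zeros(2)
learning_rate = 0.5
for _ in range(1000):
    gradient = X.T @ (X @ theta - y) / m
    theta -= learning_rate * gradient

print(theta)  # should be close to [0.5, 2.0]
```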
Other mathematics and coding cheat sheets can be found here:
- Probabilities and Statistics
  Formulas about combinatorics, random variables, main probability distributions, and parameter estimation
- Linear Algebra and Calculus
  Matrix-vector notations as well as algebra and calculus properties
- Getting started with Matlab
  Main features and good practices to adopt
The complete cheat sheets can also be found on Github.