A hypothetical architecture of existing machine learning algorithms to automate the self-sustained generation of new scientific and technological advances.
Abstract
While many Artificial Intelligence researchers, companies and engineers are focusing their activities on the creation of Artificial General Intelligence with the hope of one day reaching the Technological Singularity, this paper takes a different approach. It presents a conceptual, speculative yet pragmatic organization of ideas intended to reach exponential scientific and technological progress directly. This paper is an attempt to design a process that automates the theory-generation part of research and development through existing artificial intelligence algorithms and supporting technologies.
Relying notably upon recent developments in the conceptual and technical environment of graph neural networks, as well as upon multiple other technologies, the protocol breaks down the generation of scientific and technological theories into simple steps and proposes existing algorithms as candidates to handle each step. While it provides scientific, mathematical and algorithmic references and sources for advanced readers, this paper nevertheless tries to use the simplest possible terms to facilitate the comprehension of a wider audience with minimal background in artificial intelligence.
Even though the process it describes is still purely speculative for now, the Singularity Protocol does present a credible, structured and detailed approach to generating new scientific and technological theories at scale. And though it still needs to go through numerous adaptations, tests and computing challenges, this protocol is built exclusively upon existing technologies, and it introduces a plan to gather and structure technical, financial and human resources so as to rapidly develop and implement an organization that could soon lead to the Technological Singularity.

The introduction, background and principles of the Singularity Protocol, reproduced hereafter, will give you a brief overview of the paper. Read the whole Singularity Protocol paper to get the complete details of each step, along with references to the relevant algorithms, mathematics and technologies.
The objective of this paper is to spark interest and discussions on the Technological Singularity, which already seems to be within reach, if not to actually launch the protocol’s deployment, provided the required human, financial and computing resources can be harnessed. Do not hesitate to contact the author if you are interested in collaborating towards this goal.
Introduction
Background: Artificial General Intelligence and the Technological Singularity
The creation of Artificial General Intelligence – AGI – is expected to initiate an irrevocable change that will transform humanity beyond anything we can imagine. Though it is difficult to estimate beforehand what AGI could bring about, one of the main expectations is that AGI will be able to develop science and create new technologies much more rapidly than humans ever could. This event is usually referred to as the “Singularity”, or more precisely the “Technological Singularity”: the beginning of exponential scientific and technological progress.
Whether AGI can ever be created, however, remains to be demonstrated. Though important progress has been made in the field of Artificial Intelligence – AI – in recent years, thanks to the availability of cheap computing power that now makes it possible to process large amounts of data, the creation of AGI still remains a distant prospect.
Many hurdles, such as natural language understanding, consciousness, intentionality, or even just properly defining and structuring human and machine intelligence themselves, need to be cleared, or skirted, to build an AGI. And that is without even mentioning building an ethical AGI: one that doesn’t launch the apocalyptic “Kill All Humans” scenario.
But, maybe, there could be another way: using existing AI technologies to attain the Singularity without first creating an Artificial General Intelligence.
Written with many precautions, ifs and coulds, and counting on major adaptations of existing technologies that have yet to be completed, the “protocol” (for lack of a better word) devised hereafter could put us on track towards exponential science and technology: the Singularity without AGI, or maybe before it.
This protocol could prove totally impractical because of the complexity of its source data, the necessary collaboration of people with conflicting interests or a number of other challenges. It could also be completely useless for now because of the enormous amounts of computational power it would require before any practical result can be reached. It may be quite obvious to many AI experts, or may even already be implemented in a similar form by certain AI companies. In any case, however, this protocol will at least have the merit of being written down in the present paper, and will hopefully spark a discussion or help others with their AI experiments.
The Singularity Protocol principles and outline
Many artists, philosophers and scientists, including Albert Einstein, who described how “combinatory play” is essential to productive thinking, have remarked that the creation of new ideas primarily, if not entirely, results from the combination of existing ideas, elements and concepts. New ideas, concepts, theories, etc. can be generated by associating and/or reorganizing existing ideas, sometimes from very diverse domains, into a new, original combination.
The entire protocol introduced here is thus built upon this central principle: new scientific and technological progress can be created by combining existing science and technologies.
This protocol is an attempt to break down the theoretical part of the research and development – R&D – process into a sequence of simple operations so that it can be processed by a machine. It proposes to match each operation with existing technologies and AI algorithms that appear to be good candidates to handle the task at hand. The protocol is also conceived to support a reinforcement learning system that aims at making it self-improving over time, thanks to a feedback loop that could be supported and enhanced by blockchain technology.
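To make this sequence of operations more tangible, here is a minimal sketch that chains the protocol’s main operations (detailed in the three tasks listed just below) into a single self-improving loop. Every function and data structure in it (generate_candidate_theories, utility_score, experimental_feedback, the toy database) is a hypothetical placeholder standing in for the graph generative models, learned scoring and laboratory or simulation feedback discussed in this paper, not an existing implementation.

```python
import random

def generate_candidate_theories(database):
    # Stand-in for a generative model that combines existing advances.
    return [f"combination of {a} and {b}"
            for a in database for b in database if a < b]

def utility_score(theory):
    # Stand-in for a learned estimate of a theory's desirability/practicality.
    return random.random()

def experimental_feedback(theory):
    # Stand-in for simulation or laboratory testing of a candidate theory.
    return random.random() > 0.9  # most candidate theories fail

database = {"advance A", "advance B", "advance C"}

for iteration in range(3):
    candidates = generate_candidate_theories(database)                   # generate
    shortlist = sorted(candidates, key=utility_score, reverse=True)[:2]  # rank
    confirmed = [t for t in shortlist if experimental_feedback(t)]       # test
    database.update(confirmed)  # confirmed theories grow the database
    # In the full protocol, this feedback would also retrain the generative
    # and scoring models: that is the reinforcement learning part.
```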
The protocol is articulated around three key operational tasks, corresponding to the three main sections of this paper:
- Structure and label a graph database of existing scientific and technological advances, detailing their specific features and utilities (a minimal illustrative sketch of such a database follows this list)
- Generate new theoretical advances by combining existing advances and/or their features from the database and prioritize these new theories for experimental testing
- Collect feedback from experimental testing for recurrent improvement and reward feedback providers to incentivize the use of the protocol
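To give a concrete flavour of the first task, the sketch below builds a tiny graph database of advances with the networkx library. The example advances, features, utility values and relation labels are invented for illustration only, and the actual schema of such a database remains an open design question.

```python
import networkx as nx

G = nx.Graph()

# Nodes: existing advances, labelled with their specific features and utilities.
G.add_node("lithium-ion battery",
           features=["energy storage", "rechargeable", "portable"],
           utility={"energy_density": 0.7, "cost": 0.5})
G.add_node("photovoltaic cell",
           features=["energy generation", "solar", "solid-state"],
           utility={"efficiency": 0.2, "cost": 0.6})

# Edges: known relations between advances (e.g. existing combinations).
G.add_edge("lithium-ion battery", "photovoltaic cell",
           relation="combined in off-grid solar installations")

print(G.nodes(data=True))
```

A graph of this kind is also the natural input and output format for the graph network and graph generation algorithms listed in the references.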
The heart of the protocol thus lies in the generation of a vast number of new theoretical innovations (see Figure 2) and the ordering of these theories according to their desirability and/or practicality, which would be summed up in a utility score (or vector). The most interesting potential advances would then be tested through simulation and/or laboratory experiment to confirm or invalidate these theories and their actual utility.
The main advantage the Singularity Protocol aims to provide is a drastic reduction of the time spent on the conceptual part of the scientific research and technological engineering pipeline. Researchers and engineers go through a very time-consuming process of gathering thoughts to build new theories before experimental trials can begin, a process that often leads nowhere. Automating this “combinatorial creativity”, the combination of ideas and concepts to form new theories, could greatly speed up and expand that conceptual work thanks to computer algorithms.
By making computers do the heavy lifting, crossing many features and advances together in a profusion of combinations and then assessing the potential of each combination to keep only the most interesting ones for experimental verification, machines could play a big role in R&D. This “brute force” capacity for combinatorial creativity could lead us toward an exponential development of science and technology.
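At its most naive, this brute-force crossing can be pictured as in the sketch below: enumerate pairings of features taken from different existing advances, score each pairing with a stand-in utility function, and keep only the top of the ranking for experimental verification. The feature sets and the scoring heuristic are invented placeholders, not the utility model the protocol would actually learn.

```python
from itertools import combinations

advances = {
    "lithium-ion battery": {"energy storage", "rechargeable", "portable"},
    "photovoltaic cell": {"energy generation", "solar", "solid-state"},
    "e-ink display": {"low power", "readable in sunlight", "portable"},
}

def utility_score(feature_pair):
    # Placeholder heuristic: reward pairings of dissimilar features,
    # as a crude proxy for novelty.
    a, b = feature_pair
    return len(set(a.split()) ^ set(b.split()))

# Cross features of distinct advances in a profusion of combinations...
candidates = [
    (f1, f2)
    for (name1, feats1), (name2, feats2) in combinations(advances.items(), 2)
    for f1 in feats1
    for f2 in feats2
]

# ...then keep only the most promising ones for experimental verification.
shortlist = sorted(candidates, key=utility_score, reverse=True)[:5]
print(shortlist)
```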
Obviously, combining all sciences and technologies together is no small task. In order to initiate the process and obtain the first practical results faster, the protocol could, and most probably should, be made more effective by focusing on a reduced domain. In the beginning, it should especially focus on areas of science and technology where simulation is easier and lab tests are not mandatory.
If the ideas presented here can ever be successfully implemented in mathematical and computer science labs, or in astrophysical simulators, it would be much easier in a subsequent stage to extend these results to other, more test-demanding fields such as chemistry, biology or medicine.
By reducing the domain upon which this protocol is tried, the initial testing and fine-tuning phase should be much simpler than an attempt to embrace too vast and diverse a set of subjects. Though this paper takes a general approach and ultimately aims at combining the most diverse fields, the initial scaling down of the scientific and technological domains to be considered is all the more appropriate as this limitation will reduce the protocol’s complexity, and therefore the resources required to develop it and rapidly reach a first success.
Though the protocol is nowhere close to any success yet, it should be emphasized that its final aim is to realize Humboldt’s expression and make “infinite use of finite means”. The goal of aggregating, substituting and reorganizing scientific theories, technologies, features and concepts is to obtain an ever-growing source of new technologies and to keep growing the original database. Besides this self-sustaining database growth, the protocol’s algorithms would also be programmed to be self-improving, so as to create better and better theories, faster and faster, and ultimately lead us towards the Technological Singularity.
Instead of developing an AGI and hoping for the Singularity rather than the Apocalypse (if an AGI can ever be created at all), we could rather, or in parallel, work on this protocol to aim directly at the Singularity. And though many of the existing technologies proposed here to handle the different operations still need improvements, adaptations and a lot of testing, the Singularity could be within our reach sooner if we can manage to make this protocol a reality.
Continue reading the main sections of the Singularity Protocol paper.
Before you proceed, make sure to understand the conceptual framework of Graph Networks, which is especially well summarized in the second reference hereafter.
References
- Albert Einstein (1954) “Ideas and Opinions” – see the 1945 letter written in response to the publication, the same year, of “An Essay on the Psychology of Invention in the Mathematical Field” by Jacques S. Hadamard
- Peter W. Battaglia, Jessica B. Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, Caglar Gulcehre, Francis Song, Andrew Ballard, Justin Gilmer, George Dahl, Ashish Vaswani, Kelsey Allen, Charles Nash, Victoria Langston, Chris Dyer, Nicolas Heess, Daan Wierstra, Pushmeet Kohli, Matt Botvinick, Oriol Vinyals, Yujia Li, Razvan Pascanu (2018) “Relational inductive biases, deep learning, and graph networks” arXiv:1806.01261
- Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, Jeffrey Dean (2013) “Distributed Representations of Words and Phrases and their Compositionality” arXiv:1310.4546 – See also TensorFlow Tutorial: Vector Representations of Words
- Hanjun Dai, Bo Dai, Le Song (2016) “Discriminative Embeddings of Latent Variable Models for Structured Data” arXiv:1603.05629
- Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, George E. Dahl (2017) “Neural Message Passing for Quantum Chemistry” arXiv:1704.01212
- Wengong Jin, Regina Barzilay, Tommi Jaakkola (2018) “Junction Tree Variational Autoencoder for Molecular Graph Generation” arXiv:1802.04364
- Yujia Li, Oriol Vinyals, Chris Dyer, Razvan Pascanu, Peter Battaglia (2018) “Learning Deep Generative Models of Graphs” arXiv:1803.03324
- Aleksandar Bojchevski, Oleksandr Shchur, Daniel Zügner, Stephan Günnemann (2018) “NetGAN: Generating Graphs via Random Walks” arXiv:1803.00816
- Jianmin Bao, Dong Chen, Fang Wen, Houqiang Li, Gang Hua (2017) “CVAE-GAN: Fine-Grained Image Generation through Asymmetric Training” arXiv:1703.10155
- Hanxiao Liu, Karen Simonyan, Oriol Vinyals, Chrisantha Fernando, Koray Kavukcuoglu (2017) “Hierarchical Representations for Efficient Architecture Search” arXiv:1711.00436
Below are some interesting exchanges that happened on Reddit after the initial post of the Singularity Protocol here: https://www.reddit.com/r/singularity/comments/9eydpw/the_singularity_may_be_nearer_than_general_ai/
DISCUSSION 1
/MercuriusExMachina
I’m only an interested bystander in all of this.
Please let me know if I understand correctly.
tl;dr: use narrow AI for scientific advance.
/MLpadawan
Yes, that would be the “core” of the protocol. The rest of the protocol aims at creating a graph database to train and feed the core generative algorithms, and at making the whole process self-sustaining and self-improving (via reinforcement learning and blockchain infrastructure support).
/redditrambler
Why would this not be general AI?
/MLpadawan
Because it relies on experimental feedback from R&D laboratories, scientists and engineers to fuel the Reinforcement Learning system that will lead to more scientific and tech innovations.
The protocol could however be a stepping stone towards General AI.
DISCUSSION 2
In response to a somewhat unrelated comment:
/boytjie
> “as a form of AGI made up of interconnected federations of narrow AI.”
I’ve always claimed that wouldn’t work. Will I have to eat my words?
/MysticAnarchy
Idk, why do you think it won’t work? I’m no AI/tech expert.
/boytjie
I worked extensively with ANI (Artificial Narrow Intelligence = expert systems) in the 1980s and early ’90s and developed quite a few (then AI) systems. There was a (super-duper weak) school of thought that advanced the theory that compiled ANI systems might result in AGI, but they weren’t taken seriously. If AGI is created from compiled ANI systems, I would be at the centre of a humble pie feast.
/MysticAnarchy
Lol, well you sound like you’re much better qualified than I am to comment on this subject, so forgive my ignorance, but why was this not taken seriously? It seems to me like the most logical way to progress towards AGI without having to develop a super advanced learning algorithm right out of the gate.
/boytjie
> why was this not taken seriously?
Intelligence is far more complex than a mashed together series of specialised and dedicated skills. IMO there is still a ‘secret ingredient’ above ML – so everyone that thinks AGI is imminent because of neural nets and ML is wrong.
/MysticAnarchy
So do you think that “secret ingredient” would be something along the lines of a general organising principle, similar to how consciousness interacts with neurochemistry? Or something entirely different?
I should probably specify, when using the term AGI, are you referring to general intelligence on par with human capacity, or intelligence that matches human intelligence in terms of understanding and deriving meaning from data? Because I definitely think we are a long way from the latter.
/boytjie
> So do you think that “secret ingredient”…
I feel there is a quantum dimension to intelligence. When we are more skilled in manipulating the quantum world, we will be able to add the “secret ingredient”.
AGI = probably better than human intelligence on all measurable dimensions. Robust and error free. No need to specify anything – there are no conditions whatsoever. It’s just flat-out equal or better than human intelligence (and that would be a genius human).
/MysticAnarchy
Very interesting, thanks for sharing your thoughts, pretty cool you’ve had experience in the field since the 80s. If you don’t mind me asking, what do you think about the Singularity or when an AGI reaches ASI and can improve and grow itself exponentially beyond our comprehension?
/boytjie
> If you don’t mind me asking, what do you think about the Singularity or when an AGI reaches ASI and can improve and grow itself exponentially beyond our comprehension?
Well… wrap your brain around this. Our thinking (thoughts) are resting on a set (one set of many) of assumptions that define our thinking. Riddle me this – AGI is a man-made term and AI (if it’s recursive) is approaching human incomprehension rapidly on its way to ASI. It will be AGI for an hour on fast hardware and a day on slow hardware. It will not stop at AGI. Why? It’s on the way to ASI and will keep improving until the laws of physics break down. That’s a showstopper! That’s the Singularity.
DISCUSSION 2′
/PresentCompanyExcl
Reinforcement learning exists, but not in a state where it will do what you describe.
/MLpadawan
Yes, much remains to be done (if it can be done).
DISCUSSION 3
/Thorium_troll
Blockchain, graph databases, reinforcement learning. You forgot quantum.
/MLpadawan
I guess you can dismiss it as a catalog of buzzwords.
To go deeper, however, implementing a reinforcement learning process on top of graph networks currently implies a lot of complexity (which is a problem in itself). If you have ideas to simplify everything, please share.
Nevertheless, the protocol’s core GN framework is clear: Relational inductive biases, deep learning, and graph networks
And narrow applications are already in use: Junction Tree Variational Autoencoder for Molecular Graph Generation
/Thorium_troll
Sounds like you have done more research than I thought.
The part that seemed silly to me was the inclusion of blockchain. The blockchain is inherently slower than a centralised database or file system; this is a tradeoff for the coordination needed for decentralisation. At least that’s my opinion, and current blockchain systems are extremely slow and bandwidth limited.
Reinforcement learning is data hungry (for example, OpenAI Five needed >100 years of training) and it is unstable, so it often needs to be reset. It is often unstable for data far outside its training set. All this means it’s not suitable for use on a blockchain. Maybe it will be someday, but you can say that about anything, and it’s not particularly useful to have more than one speculative component in a plan.
Overall it seems like you’re earnest about this, and I think that’s great. And when pressed, you’re honest that it’s a sketch and about the complexities and difficulties.
My constructive criticism is about presentation. It just comes off as a little manic and grandiose to a casual reader. Perhaps it would be better to present it as it is, a sketch of a hypothesis, and stay away from paper format and grand titles until there is enough technical substance to justify them in the eyes of a potential reader.
You could go further into the scientific method and predict ways it could be disproven, removing all non-essential elements and looking at how each remaining element could be disproven. You could also show the minimum viable way to prove that it, or the new ideas inside it, would work as needed. I guess that would be a variation on “Relational inductive biases, deep learning, and graph networks”, except that paper doesn’t seem to present any results, only ideas (correct me if I’m wrong though, I only had time to read the abstract).
/MLpadawan
The blockchain would be used to securely record the transfer of data to a centralized database, maybe as a form of timestamp, while the data themselves go directly to the centralized database (not through the blockchain). This also has the advantage of enabling a corresponding back transfer of a cryptocurrency reward. This model is convenient because feedback providers can then be considered, through their participation, somewhat like bitcoin miners.
The blockchain part may be especially fuzzy, but yes, everything is hypothetical, exponentially complex and probably beyond even the current computing capacities of Google.
Thanks for your constructive criticism. A future v2 should obviously appear less manic and grandiose to the casual reader. The detailed experimental methodology to build or break the protocol is under way, but it seems to be much more a problem of making it, and making it self-sustained, than of breaking it. The generative part is already in use in a focused, narrow and very spatially deterministic domain: molecular graph generation.
The Graph Networks paper is fundamental: it formalizes and consolidates the GN environment from numerous sources into a very solid system. DeepMind is supposed to provide a consolidated mathematical framework soon too. For now, all its sources provide ample, but not standardized, mathematical and algorithmic background.
/Thorium_troll
> The blockchain would be used to securely record the transfer of data to a centralized database, maybe as a form of timestamp, while the data themselves go directly to the centralized database (not through the blockchain). This also has the advantage of enabling a corresponding back transfer of a cryptocurrency reward.
That makes sense – just use it where security or decentralisation are essential. It would certainly help avoid reward hacking.
> A future v2 should obviously appear less manic and grandiose to the casual reader.
That’s just my impression though, maybe it’s just my bias. I can be overly critical :p
> The Graph Networks is a fundamental paper
I should have a read of that as it sounds interesting; I’ve never really understood the excitement behind graph networks, probably because I haven’t delved into it. DeepMind has a bunch of really promising papers out (the LSTM memory one, and MERLIN too, and probably more); it will be cool to see how they bring them all together (if they have a plan).
An excellent article summarizing methods and code for using data from Wikipedia and the web:
Wikipedia Data Science: Working with the World’s Largest Encyclopedia