A hypothetical architecture built from existing machine learning algorithms to automate the self-sustained generation of new scientific and technological advances.
While many Artificial Intelligence researchers, companies and engineers are focusing their efforts on the creation of Artificial General Intelligence, in the hope of one day reaching the Technological Singularity, this paper takes a different approach. It presents a conceptual, speculative yet pragmatic organization of ideas intended to reach exponential scientific and technological progress directly. It is an attempt to design a process that automates the theory-generation part of research and development using existing artificial intelligence algorithms and supporting technologies.
Relying notably upon recent developments in the conceptual and technical environment of graph neural networks, as well as upon multiple other technologies, the protocol breaks down the generation of scientific and technological theories into simple steps and proposes existing algorithms as candidates to handle each step. While it provides scientific, mathematical and algorithmic references for advanced readers, this paper nevertheless tries to use the simplest terms possible, to remain accessible to a wider audience with minimal background in artificial intelligence.
Even though the process it describes is still purely speculative, the Singularity Protocol does present a credible, structured and detailed approach to generating new scientific and technological theories at scale. And though it still needs to go through numerous adaptations, tests and computing challenges, the protocol is built exclusively upon existing technologies, and it introduces a plan to gather and structure technical, financial and human resources so as to rapidly develop and implement an organization that could soon lead to the Technological Singularity.
The introduction, background and principles of the Singularity Protocol, reproduced hereafter, give a brief overview of the paper. Read the whole Singularity Protocol paper for the complete details of each step, along with references to the relevant algorithms, mathematics and technologies.
The objective of this paper is to spark interest and discussions (don’t hesitate to leave your comments below!) on the Technological Singularity, which already seems to be within reach, if not to actually launch the protocol’s deployment, provided the required human, financial and computing resources can be harnessed. Do not hesitate to contact the author if you are interested in collaborating towards this goal.
Background: Artificial General Intelligence and the Technological Singularity
The creation of Artificial General Intelligence – AGI – is expected to initiate an irrevocable change that will transform humanity beyond anything we can imagine. Though it is difficult to estimate beforehand what AGI could bring about, one of the main expectations is that AGI will be able to develop science and create new technologies much more rapidly than humans ever could. This event is usually referred to as the “Singularity”, or more precisely the “Technological Singularity”: the beginning of exponential scientific and technological progress.
Whether AGI can ever be created, however, remains to be demonstrated. Though important progress has been made in the field of Artificial Intelligence – AI – in recent years, thanks to cheap computing power that now allows large amounts of data to be processed, the creation of AGI still remains a distant prospect.
Many hurdles, such as natural language understanding, consciousness, intentionality, or even just properly defining and structuring human and machine intelligence themselves, need to be overcome, or skirted, to build an AGI. And that is without even mentioning building an ethical AGI: one that doesn’t launch the apocalyptic “Kill All Humans” scenario.
But, maybe, there could be another way: using existing AI technologies to attain the Singularity without first creating an Artificial General Intelligence.
Written with many precautions, ifs and coulds, and counting on major adaptations of existing technologies that have yet to be completed, the “protocol” (for lack of a better word) devised hereafter could put us on track towards exponential science and technology: the Singularity without the AGI, or perhaps before the AGI.
This protocol may prove totally impractical because of the complexity of its source data, the necessary collaboration of people with conflicting interests, or any number of other challenges. It could also be useless for now because of the enormous amount of computational power it would require before any practical result could be reached. It may be obvious to many AI experts, or even already implemented in a similar form by certain AI companies. In any case, this protocol will at least have the merit of being written down in the present paper, and will hopefully spark a discussion or help others with their AI experiments.
The Singularity Protocol principles and outline
Many artists, philosophers and scientists, including Albert Einstein who described how “combinatory play” is essential to productive thinking, have remarked that the creation of new ideas primarily, if not entirely, results from the combination of existing ideas, elements and concepts. New ideas, concepts, theories, etc. can be generated by associating and/or reorganizing existing ideas, sometimes from very diverse domains, in a new, original combination.
The entire protocol introduced here is thus built upon this central principle: new scientific and technological progress can be created by combining existing science and technologies.
This protocol is an attempt to break down the theoretical part of the research and development – R&D – process into a sequence of simple operations, so that it can be processed by a machine. It proposes to match each operation with existing technologies and AI algorithms that appear to be good candidates for the task at hand. The protocol is also conceived to support a reinforcement learning system that aims to make it self-improving over time, thanks to a feedback loop that could be supported and enhanced by blockchain technology.
The protocol is articulated around three key operational tasks, corresponding to the three main sections of this paper:
- Structure and label a graph database of existing scientific and technological advances detailing their specific features and utilities
- Generate new theoretical advances by combining existing advances and/or their features from the database and prioritize these new theories for experimental testing
- Collect feedback from experimental testing for recurrent improvement and reward feedback providers to incentivize the use of the protocol
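The three tasks above can be sketched, in an extremely simplified form, as a single loop. The following is only an illustration of the control flow, not an implementation of the protocol: the sample database entries, the `combine` operation and the `utility_score` function are all hypothetical stand-ins for the far richer graph representations and learned models the paper discusses.

```python
import itertools

# Task 1 (toy version): a labeled database of existing advances,
# each described by the set of features it provides. Entries are invented.
advances = {
    "lithium_battery": {"energy_storage", "portability"},
    "solar_cell": {"energy_capture", "low_maintenance"},
    "graphene_sheet": {"conductivity", "strength", "portability"},
}

def combine(a, b):
    """Task 2: a candidate 'theory' is here just the union of two advances' features."""
    return advances[a] | advances[b]

def utility_score(features):
    """Toy utility: more distinct features -> higher score.
    The paper envisions a learned score (or vector) instead."""
    return len(features)

# Task 2: enumerate pairwise combinations and rank them by utility.
theories = {pair: combine(*pair) for pair in itertools.combinations(advances, 2)}
ranked = sorted(theories, key=lambda p: utility_score(theories[p]), reverse=True)

# Task 3: the top-ranked theory would be sent to simulation or lab testing,
# and the outcome fed back to improve the scoring function.
best = ranked[0]
print(best, utility_score(theories[best]))
```

In the real protocol, each of these three placeholders would be replaced by the graph-based algorithms referenced below: graph embeddings for the database, generative graph models for the combination step, and a learned utility model refined by experimental feedback.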
The heart of the protocol thus lies in the generation of a vast number of theoretical new innovations (see Figure 2) and the ordering of these theories according to their desirability and/or practicality, which would be summed up in a utility score (or vector). The most interesting potential advances should then be tested through simulation and/or laboratory experiment to confirm or invalidate these theories and their actual utility.
The main advantage the Singularity Protocol aims to provide is a drastic reduction of the time spent on the conceptual part of the scientific research and technological engineering pipeline. Researchers and engineers go through a very time-consuming process of gathering thoughts to build new theories before experimental trials can begin, and this process often leads nowhere; automating this “combinatorial creativity”, the combination of ideas and concepts to form new theories, could greatly speed up and expand it thanks to computer algorithms.
By making computers do the heavy lifting – crossing many features and advances together in a profusion of combinations, then assessing the potential of each combination to keep only the most interesting for experimental verification – machines can play a big role in R&D. This “brute force” capacity for combinatorial creativity could lead us toward an exponential development of science and technology.
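To give a sense of the scale this “brute force” implies, a quick back-of-the-envelope count shows why machine filtering is indispensable. The database size of 10,000 advances is an arbitrary assumption for illustration:

```python
from math import comb

n = 10_000  # hypothetical number of catalogued advances

# Number of candidate theories formed by combining 2 or 3 existing advances.
pairs = comb(n, 2)    # ~5.0e7 pairwise combinations
triples = comb(n, 3)  # ~1.7e11 three-way combinations

print(pairs, triples)
```

Even before considering combinations of individual features rather than whole advances, the candidate space is far beyond what human researchers could sift through, while remaining tractable for machine scoring and pruning.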
Obviously, combining all sciences and technologies together is no small task. In order to initiate the process and obtain the first practical results faster, the protocol could, and most probably should, be made more effective by focusing on a reduced domain. In the beginning, it should also especially focus on areas of science and technologies where simulation is easier and lab tests are not mandatory.
If the ideas presented here can ever be successfully implemented in mathematical and computer sciences labs, or astrophysical simulators, it would be much easier in a subsequent stage to expand upon these results to other, more test-demanding fields, such as chemistry, biology, medicine, etc.
By reducing the domain upon which this protocol is first tried, the initial testing and fine-tuning phase should be much simpler than if it attempted to embrace too vast and diverse a set of subjects. Though this paper takes a general approach and ultimately aims at combining the most diverse fields, initially scaling down the scientific and technological domains under consideration is all the more appropriate as this limitation will reduce the protocol’s complexity, and therefore the resources required to develop it and rapidly reach a first success.
Though the protocol is nowhere close to any success yet, it should be emphasized that its final aim is to realize Humboldt’s expression and make “infinite use of finite means”. The goal of aggregating, substituting and reorganizing scientific theories, technologies, features and concepts is to obtain an ever-growing source of new technologies and to grow the original database. Besides this self-sustaining database growth, the protocol’s algorithms would also be programmed to be self-improving, so as to create better and better theories, faster and faster, and ultimately lead us towards the Technological Singularity.
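In its simplest conceivable form, the self-improvement loop could resemble an online learning update: experimental outcomes nudge the weights a scorer assigns to features. This is only a sketch of the feedback principle, not the reinforcement learning system the paper envisions; the features, weights and learning rate are all invented.

```python
# Minimal sketch of the feedback loop: a linear utility scorer whose
# feature weights are adjusted by experimental outcomes (perceptron-style).
weights = {"conductivity": 1.0, "strength": 1.0, "portability": 1.0}

def score(features):
    """Utility of a candidate theory: sum of its features' learned weights."""
    return sum(weights.get(f, 0.0) for f in features)

def feedback(features, confirmed, lr=0.1):
    """Reward the features of confirmed theories, penalize those of failed ones."""
    delta = lr if confirmed else -lr
    for f in features:
        weights[f] = weights.get(f, 0.0) + delta

theory = {"conductivity", "portability"}
before = score(theory)
feedback(theory, confirmed=True)  # a lab test confirmed the theory
after = score(theory)
print(before, after)  # the theory's features now score higher
```

Over many iterations, confirmed theories would raise the scores of the combinations that produced them, steering the generator towards more fruitful regions of the search space, which is the self-improving behavior described above.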
Instead of developing an AGI while hoping for the Singularity rather than the Apocalypse – if an AGI can ever be created at all – we could rather (or in parallel) work on this protocol to aim directly at the Singularity. And though many of the existing technologies proposed here to handle the different operations still need improvements, adaptations and a lot of testing, the Singularity could be within our reach sooner if we can manage to make this protocol a reality.
Continue reading the main sections of the Singularity Protocol paper.
Before you proceed, make sure to understand the conceptual framework of Graph Networks, which is especially well summarized in the second reference hereafter.
- Albert Einstein (1954) “Ideas and opinions” – see letter from 1945 in response to the publishing in the same year of “An Essay on the Psychology of Invention in the Mathematical Field” by Jacques S. Hadamard
- Peter W. Battaglia, Jessica B. Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, Caglar Gulcehre, Francis Song, Andrew Ballard, Justin Gilmer, George Dahl, Ashish Vaswani, Kelsey Allen, Charles Nash, Victoria Langston, Chris Dyer, Nicolas Heess, Daan Wierstra, Pushmeet Kohli, Matt Botvinick, Oriol Vinyals, Yujia Li, Razvan Pascanu (2018) “Relational inductive biases, deep learning, and graph networks” arXiv:1806.01261
- Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, Jeffrey Dean (2013) “Distributed Representations of Words and Phrases and their Compositionality” arXiv:1310.4546 – See also TensorFlow Tutorial: Vector Representations of Words
- Hanjun Dai, Bo Dai, Le Song (2016) “Discriminative Embeddings of Latent Variable Models for Structured Data” arXiv:1603.05629
- Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, George E. Dahl (2017) “Neural Message Passing for Quantum Chemistry” arXiv:1704.01212
- Wengong Jin, Regina Barzilay, Tommi Jaakkola (2018) “Junction Tree Variational Autoencoder for Molecular Graph Generation” arXiv:1802.04364
- Yujia Li, Oriol Vinyals, Chris Dyer, Razvan Pascanu, Peter Battaglia (2018) “Learning Deep Generative Models of Graphs” arXiv:1803.03324
- Aleksandar Bojchevski, Oleksandr Shchur, Daniel Zügner, Stephan Günnemann (2018) “NetGAN: Generating Graphs via Random Walks” arXiv:1803.00816
- Jianmin Bao, Dong Chen, Fang Wen, Houqiang Li, Gang Hua (2017) “CVAE-GAN: Fine-Grained Image Generation through Asymmetric Training” arXiv:1703.10155
- Hanxiao Liu, Karen Simonyan, Oriol Vinyals, Chrisantha Fernando, Koray Kavukcuoglu (2017) “Hierarchical Representations for Efficient Architecture Search” arXiv:1711.00436
Share your thoughts in the comments below!