In the introductory discussion entitled ‘Technology Catalysts’ it was suggested that just six key areas of technology development, i.e. energy, AI, robotics, genetics, nanotechnology and space, might provide a sufficient framework in which to make some predictions as to how technology might change the human ecosystem. However, while speculation about each of these technologies might range far into the future, it has to be recognised that any prediction must become increasingly inaccurate, possibly to the point of being pointless, if extrapolated too far ahead. Therefore, as in the previous discussion of ‘energy developments’, the scope of any ‘predictions’ will generally be constrained to the next 100 years or so and, as such, may not necessarily be that profound in terms of technical evolution, as in the case of energy, although many will still disagree with the conclusions outlined below.
On the basis of a limited review, it is believed that energy will not present an insurmountable technical problem within the next 100 years, at least in the developed economies. This said, the poor in all economies might still struggle to afford the rising costs of energy, which may widen the divide between the ‘haves and have-nots’ and lead to further social disruption.
Today, the availability of plentiful energy is considered a fundamental necessity of modern urban life, especially within the developed economies. While it is believed that technical solutions can be reasonably extrapolated into the future to address the primary issues of concern, these solutions may not benefit everyone equally. Equally, whether any of these solutions fully resolve the issue of environmental pollution might be questioned, along with the wider impact of climate change. In contrast, it is believed that the combination of AI and robotics may have a far more profound impact on human society, although the exact scope of this change may depend on what you believe is possible. For example, Ray Kurzweil, who is Google’s Director of Engineering and a well-known futurist, predicts that a technological singularity will happen sometime in the next 30 years.
“The year 2029 is the consistent date I have predicted for when an AI will pass a valid Turing test and therefore achieve human levels of intelligence. I have set the date 2045 for the ‘Singularity’.”
In complete contrast, Miguel Nicolelis, a neuroscientist at Duke University, has a very different view, which we might characterise in terms of a composite of his quotes:
“Computers will never replicate the human brain and the technological singularity is a bunch of hot air. The brain is not computable and no engineering can reproduce it. You cannot predict whether the stock market will go up or down because you cannot compute it. You could have all the computer chips in the world and you will not create consciousness.”
Finally, there are others, such as Paul Allen, a co-founder of Microsoft, who have a more conservative assessment of progress towards AI, which might be articulated in the following somewhat paraphrased quote:
“Kurzweil’s reasoning rests on the Law of Accelerating Returns, which is not a physical law, but rather an assertion that past rates of progress can predict the future rate. Therefore, like other attempts to forecast the future from the past, these ‘laws’ will work until they don’t. More problematically for the singularity, these kinds of extrapolations derive much of their overall exponential shape from supposing that there will be a constant supply of increasingly more powerful computing capabilities. For the Law to apply and the singularity to occur circa 2045, the advances in capability have to occur not only in computer hardware but also in the software we create to run on these more capable computers. To achieve the singularity, it isn’t enough to just run today’s software faster; we would also need to build smarter and more capable software programs. Creating this kind of advanced software will require an understanding of the foundations of human cognition, and we are just scraping the surface of this.”
Given the scope of these different viewpoints, it would appear that some caution is required before simply assuming that the exponential extrapolation of Moore’s law will lead to some form of superintelligence, where humanity suffers the same fate as the Neanderthals. Of course, none of these positions necessarily means that AI and robotics will not have a profound effect on human society over the next 100 years. However, before we start to speculate further, we possibly need to outline some terms of reference, initially linked to the earlier discussions of Artificial Intelligence (AI) and AI Concepts, which use two distinct ideas of AI.
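To make Allen’s objection concrete, the sensitivity of such forecasts can be sketched with a little arithmetic. The sketch below is illustrative only, not drawn from Kurzweil’s or Allen’s own figures: it assumes an arbitrary baseline capability of 1.0 and simply varies the assumed doubling period.

```python
# Illustrative sketch (hypothetical figures): why exponential extrapolations
# of the kind underlying the 'Law of Accelerating Returns' are so sensitive
# to their assumptions.

def extrapolate(base: float, years: int, doubling_period: float) -> float:
    """Project a capability forward, assuming it doubles every
    `doubling_period` years (a Moore's-law-style assumption)."""
    return base * 2 ** (years / doubling_period)

# Projecting 30 years ahead from an arbitrary baseline of 1.0:
optimistic = extrapolate(1.0, 30, 2.0)   # doubling every 2 years -> 2^15
pessimistic = extrapolate(1.0, 30, 3.0)  # doubling every 3 years -> 2^10

# A seemingly modest change in the doubling period (2 -> 3 years)
# shrinks the 30-year projection by a factor of 32.
print(optimistic, pessimistic)
```

The point of the sketch is simply that the date of any projected ‘singularity’ moves dramatically with small changes to the assumed rate of progress, which is exactly why such ‘laws’ work until they don’t.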
- Weak AI is based on the reasonable assumption that some thinking-like features can be added to computers to make them more capable of undertaking a range of specific tasks, but where intelligence may be limited to coded software instructions.
- Strong AI is a more speculative assumption, somewhat similar in scope to the concept of an AI singularity, where some future computational system begins to think on a level at least equal to humans after which it may be capable of engineering its own future development.
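The weak AI idea can be sketched in a few lines of code. The example below is purely illustrative, with made-up rules and replies: its apparent ‘intelligence’ extends no further than the instructions its programmer explicitly coded.

```python
# A minimal sketch of 'weak AI': a thinking-like feature (answering
# questions) whose 'intelligence' is limited entirely to explicitly
# coded software instructions. The rules and replies are hypothetical.

RULES = {
    "hello": "Hello. How can I help?",
    "time": "I cannot tell the time; that rule was never coded.",
}

def respond(message: str) -> str:
    """Match the message against hand-written rules; no learning occurs."""
    for keyword, reply in RULES.items():
        if keyword in message.lower():
            return reply
    return "I have no coded rule for that."

print(respond("Hello there"))  # -> "Hello. How can I help?"
```

No matter how many such rules are added, the system never moves beyond what was explicitly programmed, which is precisely the boundary the weak AI definition draws.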
The definitions above are somewhat historic in scope in that they have been in use for over 50 years. However, over this time, the somewhat binary idea of weak or strong AI has been developed to describe a much wider range of different skills and cognitive abilities, which has led to an extended AI vocabulary.
- Machine learning might initially be seen as being linked to weak AI, but is now enhanced with statistics and mathematical optimizations, such that computer systems can ‘learn’ to improve their performance through exposure to data without the need to follow explicitly programmed instructions.
- Deep learning is a relatively new concept that utilizes a range of increasingly sophisticated algorithms to process information using layered neural networks. In just the last few years, these learning algorithms have proved successful in analysing vast amounts of data, such that they appear to exhibit a degree of ‘intelligence’ in the results they produce.
- Computer vision might be described as another application of neural networks, which analyses images by detecting the edges and textures of objects and then classifies the image against known objects.
- Natural language processing is another developing ability of computer systems incorporating neural networks, which can process text and language in order to derive meaning in a form that appears both natural and grammatically correct.
- Cognitive computing is another relatively new term that describes a system constructed from multiple AI subsystems, i.e. machine learning, deep learning, image processing and natural language processing etc., which allows a greater level of human-computer interaction that might appear increasingly natural.
- Robotic automation is another process where a computer software system might automatically capture and interpret existing tasks, which might previously have been restricted to human operators, after which they might be performed by a robotic system.
- Artificial general intelligence (AGI) might be described as a more modern interpretation of strong AI, where machines can perform any intellectual task currently undertaken by a human being. While a range of abilities might be assigned to AGI, some also attribute it with the ability of intuition, emotion and aesthetic judgement. However, it needs to be highlighted that such machines do not exist at this time.
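The machine learning entry above, i.e. improving performance through exposure to data rather than explicit programming, can be made concrete with a deliberately tiny sketch. The example below trains a single perceptron, one of the oldest learning algorithms, on the logical AND function; the training data and parameters are illustrative choices, not drawn from the original text.

```python
# A minimal sketch of 'machine learning' as defined above: the AND rule is
# never explicitly coded; the weights are adjusted purely by exposure to
# example data, using the classic perceptron update.

def train_perceptron(samples, epochs=50, lr=0.1):
    """Train a single perceptron; `samples` is a list of ((x1, x2), label)."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            predicted = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
            error = target - predicted          # 0 when prediction is right
            w1 += lr * error * x1               # nudge weights towards the
            w2 += lr * error * x2               # examples it got wrong
            b += lr * error
    return w1, w2, b

# Training data: the logical AND function, presented as examples only.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_perceptron(data)

# The learned weights now reproduce AND without it ever being coded:
for (x1, x2), target in data:
    assert (1 if (w1 * x1 + w2 * x2 + b) > 0 else 0) == target
```

Deep learning, computer vision and natural language processing as described above are, at heart, the same idea scaled up: many such units arranged in layers, with far more data and far more sophisticated update rules.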
What we might realise from these definitions is that the development of ever more cognitive AI systems could undoubtedly have a profound effect on human society long before strong AGI might appear on some future horizon. It might also be a reasonable assumption that cognitive AI will have a complementary impact on the development of more autonomous robotic systems, such that they become increasingly pervasive throughout the human ecosystem. However, one of the biggest and most immediate impacts may be in terms of jobs, where AI and robotic systems may become increasingly capable of doing not only manual blue-collar jobs but also professional white-collar jobs.
But what about the possibility of strong AGI in the next 100 years?
While there is some agreement with Paul Allen’s assessment above, i.e. that we are only just starting to understand the full complexity of human cognition and its underlying neurophysiology, it is possible that another path of development may be followed in this timeframe, described as the hybrid AI paradigm. However, it should be highlighted that this paradigm was originally developed in 2003, which predates some remarkable developments in cognitive AI that started to emerge after 2011. While these more recent developments might support Ray Kurzweil’s idea of a Law of Accelerating Returns, it is still far from certain that strong AGI will appear within the next 100 years. This said, there is an argument associated with the idea outlined as the ‘cognitive evolution’ of humanity, which suggests that humanity has already evolved beyond homo-sapiens, in that our cognitive ability has already been extended by a wide variety of computer systems. In this context, the hybrid AI paradigm also extends the ‘homo’ genus of natural evolution further using a series of fictitious names, where technology becomes increasingly integrated within human physiology, extending cognitive ability beyond that of the human brain in isolation. However, the scope of this discussion will not replicate the details of this earlier paradigm, which can be referenced via the following links.
- Natural Evolution: Homo-Sapiens
- Next 100 Years: Homo-Computerus, Homo-Optimus, Homo-Cyberneticus, Homo-Hybridus
- Future Speculation: Homo-Machinus, Homo-Primus
In terms of the current discussion, the main purpose of highlighting such possibilities is that it might suggest that significant incremental change, in what we currently understand as ‘humanity’, may start to occur within the next 100 years. The scope of these changes may also turn out to be quite profound if we first consider the impact of the Internet, the web and social media over the last 30 years, and then imagine the potential impact of future advances in cognitive AI, brain-computer interfaces and advanced prosthetics. In this context, considerable change is both possible and probable, irrespective of whether strong AGI is possible or not. While many may prefer to believe that this is simply another example of futurist speculation that will never occur, people may need to reflect on how much human society has already deviated from the idea of Darwinian natural selection in the last 100 years before they dismiss the scope of further man-made change in the next 100 years.