AI-Robotics in Perspective

While the previous discussion supports the argument that AI and robotic developments are happening and will continue to advance, it was possibly less certain about the timescales behind much of the speculation that surrounds this subject. This uncertainty might initially be characterised in terms of Amara's law, which states that we tend to overestimate the effect of a technology in the short term, but often underestimate its effect in the long term, a pattern we might attempt to explain in terms of a hype cycle.

While there has been much criticism of the hype cycle, possibly because it is not a cycle, but rather a sequence of stages, it might still be useful in illustrating that any idea can initially be subject to much marketing PR, or hype, before that idea is proved as a tangible reality. Of course, we might explain the initial need for this ‘hype’ in terms of the funding required to produce a proof-of-concept model or to develop a prototype, which then hopefully increases confidence for further investment. From a historical perspective, AI and robotic developments date back more than six decades, such that neither is a new concept and both have previously been over-hyped, many times, without living up to expectations. However, a degree of marketing hype around AI has understandably returned in recent years, following the 2011 success of IBM’s cognitive system Watson and DeepMind’s success with AlphaGo in 2015. Along the way, the ideas of machine learning and deep-learning neural networks have gone beyond being merely conceptual models by providing real-world examples of future potential, although caution concerning the rate of progress may still be necessary.

What type of intelligence and physical abilities are we really talking about?

We possibly need to put the success of Watson and AlphaGo into some wider perspective by considering the fact that neither of these systems walked into the room and sat down with their human opponents. In the case of Watson, it was originally a room-sized computer system with a natural-language interface that had access to 200 million pages of structured and unstructured content, requiring four terabytes (4 × 10¹² bytes) of disk storage. Although developments now suggest that Watson might fit into three pizza-sized boxes, it still does not have the ability to call a taxi and make its own way to the studio, as its human opponents did. Likewise, AlphaGo required the processing power of 48 CPUs and 8 GPUs, while its distributed version required 1,202 CPUs and 176 GPUs. However, the point of highlighting these issues is not to belittle the achievements of these AI systems, but rather to point out that the ‘embodiment’ of these systems lacks direct experience of, or interaction with, the physical everyday world, which their human opponents navigated simply on the way to just one very specific challenge. As such, we might realise that neither of these leading-edge AI systems can be described as ‘strong’ or ‘general’ in intelligence. Of course, it might reasonably be pointed out that neither of these systems was intended to have any robotic ability, although we might now have some perspective on the processing power required to perform a single task, i.e. win a game, which may be relevant to any wider assessment of robotics.

So what is the state of play in robotics?

Let us start with what may be the extreme end of marketing hype in robotics today, e.g. Sophia the Robot. This robotic system was designed in 2015 to be a social humanoid robot capable of displaying 62 facial expressions and, in 2017, was apparently declared a citizen of Saudi Arabia, presumably ahead of many human applicants. It uses AI and visual data processing to support a degree of facial recognition, which allows it to ‘imitate’ human gestures and facial expressions, answer ‘certain’ questions and make ‘simple’ conversation on ‘predefined’ topics, e.g. the weather. However, the system also uses voice recognition and AI to analyse conversations, such that it may improve its responses over time.

Note: The description above has deliberately avoided using the name ‘Sophia’ in order to highlight that this is not a person, despite the hype; it is AI technology operating a robotic platform that can engage in limited conversation and mimic certain facial expressions. As such, it is not an individual, it is not sentient and it has no gender.
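To put the idea of ‘simple’ conversations on ‘predefined’ topics into more concrete terms, the following is a minimal, purely hypothetical sketch of how such scripted dialogue can work in principle, i.e. keyword matching against canned responses. All names and responses below are invented for illustration; this is not Hanson Robotics’ actual software, which has not been published in this form.

    # A hypothetical, minimal 'predefined topic' dialogue system (illustration only).
    PREDEFINED_TOPICS = {
        "weather": "It looks like a nice day, although I cannot feel the rain.",
        "name": "I am a robotic platform running conversational software.",
        "hello": "Hello. What would you like to talk about?",
    }

    def respond(utterance: str) -> str:
        """Return a canned reply for the first recognised keyword, if any."""
        text = utterance.lower()
        for keyword, reply in PREDEFINED_TOPICS.items():
            if keyword in text:
                return reply
        # Anything outside the predefined topics falls back to a stock deflection.
        return "That is interesting. Tell me more."

    print(respond("What do you think of the weather today?"))  # canned reply
    print(respond("Do you dream?"))  # off-topic, so triggers the fallback

Even with voice recognition and statistical learning layered on top of such a scheme, the conversational scope remains bounded by whatever topics the designers have predefined, which is why off-script questions tend to produce evasive, generic replies.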

We are left to question why Hanson Robotics might want to market this robotic system in the way it has, presumably for its own commercial reasons, although it might also be argued that it is projecting a false image of the actual state of development to the public. As with Watson and AlphaGo, AI-robotics is not yet nearly advanced enough to be described as having any equivalence with human-level intelligence, which includes emotions, empathy and motor skills. While there is undoubtedly some impressive technology within this robotic system, there appears to be far too much anthropomorphic emphasis being placed on limited human-like conversations and gestures.

So what is the short-term requirement for robotic systems?

From a historical perspective, robotic systems have been designed to meet specific business requirements, mainly in manufacturing, which have not really required these systems to mimic human gestures or have any human-like features. Of course, if we want AI-robotics to address a much wider set of problems, then developing a human-like interface may be useful, although we may still need to question when this will prove cost-effective. Today, it might be argued that the requirements for most physical robotic systems are still oriented towards either performing repetitive tasks better than humans or working in physical environments that are either impossible or dangerous for humans. In these cases, AI-robotics has a clear and distinct advantage over humanity, such that there is a clear business case for future investment.

If so, what robotic applications might be prioritised?

Again, the commentary within the following examples is not intended to be negative, but simply to put the actual development of AI-robotic systems into better perspective. Today, none of these systems can really be described as autonomous in the sense of having any AI ability to independently solve problems.

  • Robotic arms are already extensively used in many manufacturing processes, where a disembodied arm is all that is required to execute numerous tasks, e.g. sorting, cutting, welding, lifting, painting and bending. Similar functions on a smaller scale are now being developed for the food industry.

  • In a wider physical environment, agricultural robots with no obvious human-like appearance are now being used and developed to harvest and collect crops, while there is also increasing use of more specialised robotic machinery to help feed and milk cows.

  • While human-like robots have been a subject of science fiction for many years, reality is closer to specific devices that can more intelligently address requirements such as home safety and security, along with other monitoring applications, e.g. energy consumption. Increasingly, these devices are using AI within the expanding concept of home automation.

  • Again, military robots are possibly better described as devices with varying degrees of autonomous action. While there are numerous designs for humanoid robots for military applications, it is not clear that any have been deployed beyond the training ground. Of course, this does not mean that developments are not continuing, e.g. see Atlas the robot, although we might still question the level of marketing hype around its actual ability for independent decision-making.

  • Robotic applications in space have also been in development for years, e.g. see NASA Robots, where Robonaut requires a somewhat humanoid form in order to work alongside human astronauts and to use the same equipment and tools, although it appears not to require legs. However, as far as is known, Robonaut is a telepresence device, remotely controlled by a human operator, and therefore presumably has little in the way of AI; a minimal sketch of such a teleoperation loop follows this list.
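To make the distinction between telepresence and autonomy more concrete, the following is a minimal, purely hypothetical sketch of a teleoperation loop. All function names and values are invented for illustration; this is not NASA’s actual control software. The point is simply that the ‘intelligence’ in such a loop sits with the human operator, not the robot.

    # A hypothetical, minimal teleoperation loop (illustration only).
    from dataclasses import dataclass

    @dataclass
    class JointCommand:
        joint: str        # e.g. "left_elbow"
        angle_deg: float  # target angle chosen by the human operator

    def read_operator_input() -> list:
        """Stand-in for reading the operator's glove or joystick hardware."""
        return [JointCommand("left_elbow", 45.0), JointCommand("right_wrist", -10.0)]

    def send_to_actuators(commands: list) -> None:
        """Stand-in for the robot-side motor controllers."""
        for cmd in commands:
            print(f"moving {cmd.joint} to {cmd.angle_deg} degrees")

    # No planning, perception or decision-making happens here: the robot
    # simply relays the operator's motions, which is why a telepresence
    # device needs comparatively little on-board AI.
    send_to_actuators(read_operator_input())

In a real system these commands would travel over a communications link, but the division of labour would remain the same: the operator decides, the robot executes.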

Of course, there are undoubtedly many other applications in development, e.g. in hospitals, disaster recovery and entertainment, where robotic devices may benefit from increasing AI autonomy. However, the current assessment is that we possibly need to be more realistic about the rate of progress, which is being driven primarily by investment that requires robotic systems to be commercially viable.