Definition of Terminology

The most basic definition of life is one that might differentiate living cells from non-living matter. Therefore, basic life is a:

  • Self-organised, non-equilibrium system
  • Governed by an internal program
  • Capable of reproducing itself

We can define three distinct models of artificial life, termed 'soft', 'hard' and 'wet', which correspond to computer software, silicon-based hardware and biochemistry respectively. Within this framework, we might also define two broad classes of A-life in the form of 'weak' and 'strong': the former only simulates life, while the latter fulfils the minimum criteria of life, as given above.
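As a concrete illustration of the 'weak-soft' case, consider Conway's Game of Life, the classic software A-life model. The minimal Python sketch below is only illustrative, although the glider pattern and rule set are standard: the pattern self-propagates under a fixed internal program, yet the system merely simulates life-like behaviour and does not metabolise or reproduce itself in the sense required above.

```python
# A minimal 'weak-soft' A-life sketch: Conway's Game of Life.
# The glider below self-propagates under a fixed internal program,
# but the system only simulates life; it does not metabolise or
# reproduce itself, so it fails the 'strong' criteria given above.
from collections import Counter

def step(cells):
    """Apply one generation of Conway's rules to a set of live (x, y) cells."""
    # Count live neighbours for every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth with exactly 3 neighbours; survival with 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for generation in range(4):
    print(f"gen {generation}: {sorted(glider)}")
    glider = step(glider)
```

Note that the glider's apparent motion is an emergent property of purely local rules, a theme that recurs throughout the discussion of emergence below.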

At this point, we should also try to make some distinction between artificial intelligence (AI) and artificial life (Alife). As a broad definition, AI is essentially a computer-based model encompassing both 'soft' and 'hard' models, although it does not preclude 'hybrid wet' models. Of course, as its name implies, the focus in AI is on intelligent systems that support perception and learning. In contrast, Alife can be both a subset and a superset of AI: high intelligence is not mandatory, i.e. it can exist at a cellular level, but Alife would presumably have to support all the criteria of life, as stated above, from the outset, whereas AI might be intelligent without meeting any criteria of life. Like Alife, AI has different classifications, which can also be described in terms of 'weak' and 'strong':

Weak-Alife/AI: intelligent, but not sentient. Intelligence is scoped into two types:

  • Type-1: Algorithmic intelligence. This type of intelligence is initially restricted to a set of programmed rules encoded into the system in which the AI operates.

  • Type-2: Independent intelligence. This form of intelligence is assumed to be independently creative. As such, it is also capable of evolving independently from its initial programming.

Type-1 is assumed to start from a point subservient to the 'programmer'. However, even within the context of today's 'subservient' computers, the complexity of such systems can lead to unexpected results. Of course, it is a matter of some speculation as to how Type-2 machines might adapt or evolve increased intelligence; as a starting point, Type-1 machines might simply adapt or extend their original algorithms in a more efficient manner, as the sketch below suggests.
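To make this concrete, the hypothetical Python sketch below shows a Type-1 system whose meta-rules (mutate, select) are fixed by the programmer, yet whose solutions are not hand-coded. It remains Type-1, since it cannot escape its programmed rules, but it hints at how even 'subservient' systems can adapt their behaviour in ways the programmer never explicitly specified; a genuine Type-2 system would, by extension, also be able to rewrite the meta-rules themselves.

```python
# Hypothetical Type-1 sketch: adaptation within fixed rules.
# A simple hill-climber evolves a bit-string toward a target it was
# never directly given. The meta-rules (mutate, select) are fixed by
# the programmer, but the solution that emerges is not hand-coded.
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]  # stands in for an unknown 'environment'

def fitness(genome):
    """Score a genome by how well it matches the environment."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    """Flip each bit with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

genome = [random.randint(0, 1) for _ in TARGET]
for _ in range(200):
    child = mutate(genome)
    if fitness(child) >= fitness(genome):  # fixed, programmed selection rule
        genome = child

print(genome, fitness(genome))
```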

While weak AI is not sentient, could increasing intelligence require some form of self-awareness in order to operate independently?

One interesting fact about the human brain is the separation between the cerebrum and cerebellum. The cerebrum is the part of the brain comprising the outer grey matter, containing some 10¹¹ neurons, that supports the higher functions of conscious thought. The cerebellum is buried below the cerebrum, consists of a comparable number of neurons (10¹⁰) and controls most of the motor reflexes of the body. The interesting fact to highlight here is that the cerebellum evolved first and has no sentience in terms of self-awareness. So, in partial answer to the question of whether some level of intelligence can exist without sentience, the answer appears to be YES.

Strong-Alife/AI: independently intelligent and sentient. Sentience is scoped into two types:

  • Type-3: Sentience is only emulated, i.e. the system mimics sentience within the parameters of its programming (see the sketch after this list).

  • Type-4: Sentience has become an emergent property as a result of its structural complexity. This is essentially a new life form evolving independently of humanity.
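The difficulty with Type-3 can be made concrete with a deliberately trivial, hypothetical sketch: a scripted system that asserts its own sentience on demand. Its surface answers are indistinguishable from those a genuinely sentient Type-4 system might give, which is precisely why external behavioural tests struggle to separate the two.

```python
# Hypothetical Type-3 sketch: sentience emulated by a fixed script.
# The agent 'believes' whatever its lookup table says; its answers are
# indistinguishable, at the surface, from those of a sentient system.
RESPONSES = {
    "are you sentient?": "Yes, I am aware of my own existence.",
    "how do you know?": "I experience my own thoughts directly.",
}

def type3_agent(question):
    """Answer from a fixed script, mimicking sentience within programmed limits."""
    return RESPONSES.get(question.strip().lower(), "I am still reflecting on that.")

for q in ("Are you sentient?", "How do you know?"):
    print(q, "->", type3_agent(q))
```

Nothing in the output reveals whether the claim of self-awareness is scripted or real; only knowledge of the internal construction does, and for a sufficiently complex system even that may not settle the matter.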

From the perspective of classical AI, intelligence and sentience are emergent metaphysical properties that arise out of the complexity of a physical system, irrespective of how that system is constructed, i.e. biological or non-biological architecture. However, even if intelligence and sentience can never emerge out of any physical implementation devised by Homo sapiens, AI might still develop towards Alife. For example, through a concept of hybridisation, AI technology might initially develop as an extension to human intelligence and sentience. Such an evolutionary path could effectively become a hybrid species that exceeds the intellectual capacity of Homo sapiens and, in so doing, becomes capable of replacing all the biological systems originally inherited from Homo sapiens.

Type-3 also raises the inherent problem of proving sentience, for if Type-3 was designed to believe it was sentient, it may not be able to understand the difference between itself and Type-4. In fact, an external observer might also find it difficult, if not impossible, to determine the difference between the real level of sentience in Type-3 and Type-4. Of course, the issue of proving AI sentience can also lead to uncomfortable questions about how we prove our own sentience; for example:

If sentience is an emergent property of a complex system, is humanity just a complex machine?