AI Approaches

In an attempt to break the problem space into smaller pieces, the definitions of weak and strong AI provide a practical separation between what may be achieved in the next 10-50 years and what may take the next 50-500 years. Much of the foundational technology for strong AI has yet to be laid down, and computing may still need to undergo several paradigm shifts; it cannot simply be assumed that faster computers will eventually lead to sentient intelligence. While weak AI can be pursued as a separate goal without reference to strong AI, it seems unlikely that strong AI can be developed without weak AI first being achieved. However, there is also an important philosophical distinction between these two modes of AI that should not be forgotten:

  • Weak AI is meant to be 'subservient' to human society
  • Strong AI may grow to be 'independent' of human society

Describing weak AI as subservient does not imply that a weak AI system cannot be considered intelligent, simply that it is not self-aware with the ability to define its own goals. However, given the undoubted complexity of future weak AI systems, it should not be assumed that humans would always be able to predict, or control, the output of such systems. Weak AI systems are the natural evolution of present-day computers that become capable of solving both general and specific problems. Some of these systems may initially support only limited sensory (I/O) interfaces with which to directly interact with the real world. However, developing the ability to solve problems is one of the essential prerequisites of intelligence, along with the ability to learn and understand. Therefore, this aspect of the AI problem space is critical.

Historically, some aspects of AI development appear to be more a matter of philosophical debate than of clear-cut physics. One school of thought, 'Symbolic AI', has long held that the higher intellectual processes of the brain depend on an ability to manipulate symbols. In contrast, another school, referred to as 'Connectionist AI', believes the brain is analogous to a neural network of connections that processes patterns. In reality, either approach in isolation may prove too polarised: intelligent systems may initially have to support both approaches, since each has inherent weaknesses.
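The contrast between the two schools can be made concrete with a minimal, purely illustrative Python sketch. The rule base and training data below are hypothetical teaching examples, not part of any real system: the symbolic side derives new facts by explicit symbol manipulation, while the connectionist side stores the same kind of knowledge implicitly as connection weights in a single artificial neuron.

```python
# Purely illustrative sketch of the two schools described above; the rules
# and training data are hypothetical teaching examples, not a real system.

def symbolic_infer(facts, rules):
    """Symbolic AI: derive new facts by explicit symbol manipulation,
    applying if-then rules until no new fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def train_perceptron(samples, epochs=20):
    """Connectionist AI: a single artificial neuron whose 'knowledge'
    is stored as connection weights rather than explicit symbols."""
    w1 = w2 = b = 0  # integer arithmetic keeps this example exact
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = target - out
            w1, w2, b = w1 + err * x1, w2 + err * x2, b + err
    return w1, w2, b

# Symbolic: chaining two explicit rules derives 'mortal' from 'human'.
rules = [({"human"}, "animal"), ({"animal"}, "mortal")]
derived = symbolic_infer({"human"}, rules)

# Connectionist: the same neuron learns a logical-AND pattern from examples.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_perceptron(and_data)
predict = lambda x1, x2: 1 if w1 * x1 + w2 * x2 + b > 0 else 0
```

Note the trade-off the sketch makes visible: the symbolic system's reasoning is transparent but brittle (it knows only what its rules state), while the perceptron generalises from examples but its learned weights carry no human-readable meaning, which is precisely why a combined approach is often argued for.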