Ever since the famous ‘Turing Test’, the debate over developing machines with human mind-like capabilities has only grown. In the test, a computer uses written communication to try to fool a human interrogator into believing it is a person. While today’s machines do imitate certain human actions, no computer has ever convincingly passed the Turing test.
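To make the setup concrete, the imitation game can be sketched as a toy program. This is a minimal sketch with hypothetical participants, not Turing's exact protocol: an interrogator questions two hidden respondents and must guess which one is the machine.

```python
def imitation_game(interrogator, machine, human, questions):
    """One hypothetical round of the imitation game: the interrogator
    reads anonymized answers and must name the machine."""
    # Hide which respondent is which behind anonymous labels.
    respondents = {"A": machine, "B": human}
    transcript = []
    for question in questions:
        for label, respond in respondents.items():
            transcript.append((label, question, respond(question)))
    # The interrogator studies the transcript and names the machine.
    guess = interrogator(transcript)
    return guess == "A"  # True means the machine was identified

# Hypothetical toy participants, for illustration only.
machine = lambda q: "I compute, therefore I am."
human = lambda q: "Let me think about that for a moment."
interrogator = lambda transcript: "A"  # naive: always guesses label A

caught = imitation_game(interrogator, machine, human,
                        ["How do you feel today?"])
```

A machine would "pass" the test when, over many such rounds, the interrogator can do no better than chance.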
Do We Understand Intelligence?
To understand what ‘intelligence’ goes into the term ‘artificial intelligence’, we need to delve into the fundamental concept of human intelligence first. Whatever the source of intelligence within us – a god, a higher artifact, or a sentient programmer – down the chain it is we who will infuse any sort of intelligence into machines. That not only imparts a certain level of responsibility but also makes us ponder the question of intelligence itself.
Do we understand intelligence in the first place? If yes, the discussion ends right here: I firmly believe that any intelligence we fully understand, we would also be able to implant in machines. Sadly, this proposition is far from the truth. We humans have not yet figured out what intelligence actually means. No doubt we have attached certain societal and cultural norms to it, but the fundamental concept remains elusive.
The Earlier Philosophers
According to Plato, perhaps the most famous philosopher who ever lived, there is a creator who designed both humans and the world, the two being distinct entities. While the world is a corruptible materialization of the creator’s ideas, our thoughts, i.e. human ideas, are true only to the extent that they are accurate copies of the creator’s ideas.
On the other hand, Aristotle, another famous philosopher, removed the notion of a creator and proposed that our thoughts resemble the objects of the world itself. These two ideas might well be the foundation of the thinking capability we would introduce into artificial general intelligence (AGI), also known as strong AI, in which machines develop and replicate human cognitive abilities.
The Modern Philosophers
Fast forward to the late 16th and early 17th centuries and we have Galileo Galilei, according to whom perception is an internal process. He believed that properties such as taste, smell, and color are no more than mere names so far as the objects in which we locate them are concerned. They instead reside in consciousness, and so these properties have no standalone existence. Also, according to him, philosophy is written in the language of mathematics, with its characters derived from geometry.
Later came Thomas Hobbes, an English philosopher who was highly influenced by Galileo. He extended Galileo’s notion by proposing that our thought, too, is made up of particles and lies at the mercy of manipulation by its thinker. René Descartes, after whom Cartesian coordinates are named, further developed the theory that thoughts themselves are symbolic representations. These ideas need to be taken into account in any attempt to fully replicate human cognition in software.
The Mind-Body Problem
We also have the term ‘dualism’, which we’ll discuss strictly in terms of the philosophy of our being. Dualism is the theory that mind and body are radically different entities. Humans have both mental and physical properties. While the former encapsulates consciousness, intentionality, and the notion of a self, the latter deals with properties such as size, weight, and motion through space and time.
The mind-body problem deals with the relationship between these two sets of properties and leads us to questions of ontology and causation. The ontological question asks: is the mental state a subclass of the physical state, or vice versa? Or are the two entirely distinct? The causal question asks: does either state influence the other? We are not sure how the concept of dualism would affect machines, but if it does in any way, it would have a profound impact on their cognitive capabilities.
Limit to Intelligence
As of now, we have no fixed measure of the intelligence we possess as a species, nor of the intelligence we intend to impart to machines. Going by the current understanding of our universe, where we do have certain limits – the speed of light, which nothing can exceed, and quantum limits such as the Planck length, below which a distance measurement has no meaning – it seems almost certain that intelligence is not infinite and has a boundary too.
Morals and Values
If we are going to permit computers to establish the norms and values of our society, to maintain law and order among living beings, and to mediate a profound relationship between the natural and the artificial, we certainly need to make sure that the intelligence fed into them has at least all the components of the intelligence we possess. But that points to a more fundamental question – what exactly are our values? It is quite natural for our morals to change over a lifespan, so we also need to reckon with the moral drift that machines would probably go through.
The Time Ahead
The aforementioned philosophies have to be, if not the only, at least the guiding forces on our path to embedding intelligence in machines. And who knows – perhaps in a future where there is no demarcation between humans and machines and a perfect harmony exists between them, we will shed our current interhuman differences as well and set a shining example of co-existence!