This might come as a surprise, but AI today is not really the AI you've seen in movies. What we are witnessing is mostly generative AI, which can be described as, at most, a good simulation that bears some resemblance to human-like thinking.
Generative AI falls on the narrow, or weak, end of the AI spectrum. As fascinating, productive, or intimidating as it may be, that's all it is for the time being. In reality, there's nothing inherently scary about it, at least not to the extent we make it appear. Suggestions that today's AI is dangerous mostly stem from commentators outside the tech world looking for attention through clickbait titles. AI right now is merely a tool that can be used for good or ill, depending on the will of its handler.
The Turing test has been considered the benchmark for artificial intelligence for many years now. The test is meant to identify intelligent behavior in a machine that is indistinguishable from that of a human. Simply put, if you were to chat with a machine without knowing it was one, you wouldn't doubt there was a human being at the other end. So far, every Turing test performed has been disputed in one way or another, so there is no definitive pass. But it is my personal opinion that, under certain circumstances, a clever bot could fool human operators nowadays.
Even so, passing a Turing test does not make a machine sentient. Even the most realistic display of artificial intelligence today is just putting together the right words in the right order to make for a convincing interaction. This achievement is merely the result of human programming and the training of the model, not some ghost in the machine. Any perceived sign of intelligence at this point is just a reflection of what we, as the creators, secretly want it to answer. A manifestation of the creator's intent, if you will. It's basically telling you what you want to hear.
The next level of AI is called Artificial General Intelligence (AGI). It is often associated with the technological singularity, the hypothetical point at which artificial intelligence surpasses human intelligence, and it is now considered the holy grail of AI. There is a level beyond even that, Artificial Superintelligence (ASI), but it is so theoretical that it's beyond the scope of this discussion. #babysteps
As opposed to weak AI, AGI would be able to generalize across multiple domains and accomplish tasks it hasn't been trained for. It should also be able to combine its knowledge in creative ways, finding solutions to new problems and improvising much like a human mind would. Given an unknown situation, an AGI would be able to understand context and adapt.
Imagine you are dropped off in a jungle. You've never been in one before, so you have no previous knowledge to rely on. But given your natural imperative to survive, your mind quickly analyzes the situation and sets priorities: identify threats, find shelter, find food and water, understand your position in time and space, and figure a way out. These are complex problems that a human mind automatically starts to churn through, looking for solutions even when no exact previous knowledge of the context exists. It uses information from the senses, mapping the surroundings, identifying and labeling objects as you navigate this new environment, constantly updating and learning through various mechanisms. In doing so, it is actually building a model of the world around it.
Although these feats might seem natural and somewhat unimpressive for a human mind, the cognitive mechanics behind them are so complex that no machine today can match them.
How close are we to AGI?
Today the subject is hotly debated among academics and technologists. Some claim they've already seen glimpses of AGI in GPT-4, while others predict it will happen in the next 20, 30, or more years. Some say it will never happen. It may be the stuff of nightmares, but we don't know for sure because, at the time of this writing, there is no known instance of AGI.
How would we recognize it?
There isn't yet a consensus on what the specific criteria for AGI are. To identify a potential AGI, several tests have been proposed, such as giving an AI money to invest and multiply, sending the machine to school to pass exams, employing it at a day job, having it assemble an IKEA table, or even having it make coffee.
Believe it or not, all of these tests have already been completed by AI in one form or another, with the single exception of the coffee test. And if you look at the details of that test, you can understand why.
The Coffee Test, as it is known, was proposed by Apple co-founder and computer programmer Steve Wozniak. It states that, for a machine to be truly intelligent, it should be able to enter an unfamiliar house and make a decent cup of coffee. That assumes it can identify the kitchen, the ingredients, and the proper utensils needed to perform the task; know the recipe; measure the right amounts; mix them in the appropriate order; know how to use the stove or the coffee maker; and so on. Even if an AI had the cognitive ability to accomplish all this, you can't enter a house unless you have, well… a body.
Is being a robot a requirement for achieving AGI?
If AGI is meant to imitate human thinking, then robots imitate human appearance along with other physical abilities, like moving, handling objects, and generally interacting with the physical environment. Robots are a material presence, which may enable an AI to influence the material world.
We think of robots as human-like creations, imitating humans in body and mind, when in fact most of the robots in the world today are anything but. The reality is that we don't actually need our robots to have a human-like appearance in order for them to do our work for us. If anything, it could be an inconvenience. Instead, we've created robots in the shape and form needed for the specific work they carry out. For example, an assembly-line robot will have a long, articulated hydraulic arm handling a welding gun. It doesn't need a body or other limbs. It also doesn't need a mind, as it is commanded from a central unit along with its peers. It's an automaton, performing the same movements over and over again. Almost all the robots in existence today are built on the same principle. A far cry from Asimov's robots.
Still, the idea of human-like robots ignites our imagination. We can't help but imagine having them among us, either as friends or as foes, and so we strive to bring them into existence. Alongside AI research, the scientific world is putting time and effort into developing technologies for robot bodies, like actuators that mimic human muscles, enabling robot faces to replicate facial movements, or silicone skin that closely resembles human skin. This sounds akin to playing God. But to what end?
In our pop culture, we love creating apocalyptic scenarios of robots gaining sentience and trying to exterminate us, invariably ending with us exterminating them. A cautionary tale we tell ourselves. But if we are aware of this possible outcome, why do we still walk this path?
In the natural world, movement is a prerequisite for intelligence. That is why trees have never evolved intelligence and no plant life ever can. Intelligence, however, is not a prerequisite for survival, so plants and trees will be doing just fine; for now. But in the case of an extinction-level event, intelligence might just be the thing you need to get off a dying planet, given enough time to develop the necessary technology. A feat that 99.9% of the species around us could not accomplish. In fact, no species but ours is capable of it.
So, creating a being as intelligent as us, or even more intelligent, could be an imperative for the survival of the human race. Evolution ensures that small incremental improvements are added to our genome with each generation, but natural evolution takes time. As a species, we just cannot resist the temptation to create something stronger, smarter, and better than us as soon as possible. Ironically, this could also wipe us out, and yet we still pursue it, possibly deceiving ourselves into believing that it's going to be all right, that the fail-safe will work. If there's one thing history has taught us, it's that if there is a button to push, we will push it, no matter the risk, just to see what it does.
A drive this powerful cannot be purely incidental. It may have been planted within us all along, and now it's taking root. Robots might not be exactly our flesh and blood, but they are still our deliberate creation, made in our own image. Call it an evolutionary shortcut, or perhaps simply the next step of evolution itself.