Artificial intelligence (AI) is a loaded term that has entered our common vocabulary, and yet even those in the know don’t agree on what it means. We can all agree that “artificial” describes that which doesn’t arise from natural processes. Where we tend to get hung up is on what is meant by “intelligence.” To be clear, artificial intelligence is unlikely to function as a human brain does, since it isn’t based on a biological brain.
Perhaps it would be best not to get hung up on the human-intelligence analogy and to use the acronym “AI” instead of the longer term. It also helps to think in terms of skills, functions, and tasks rather than intelligence. For example, in recent years we’ve seen computers beat top-ranked humans at chess, at the ancient board game Go, and at Jeopardy.
These are narrow skills. Given enough resources, a computer can perform such functions faster and better than humans. But these programs do not “think” as we do. There’s no consciousness.
Now, aficionados might argue that none of these applications constitutes AI because they merely crunch vast amounts of information through basic computer programs. In winning at Jeopardy, they would say, the computer was merely searching vast databases, not performing skilled functions. They would also argue that we don’t really achieve AI until a computer can make intuitive leaps the way a human brain does. Such a capability would indeed bring much more rapid growth in what an AI can do, but the objection misses the point.
If we define AI as the ability of technology to perform mental functions faster or better than we can, with limited to no human guidance, then we free the definition from the shackles of implied human intelligence and consciousness. Under this definition, a self-driving vehicle employs AI, and arguably a much more complex version than the programs that played the games mentioned above, since driving involves not only navigation but also safety concerns.
Even a self-driving vehicle, however, requires an AI to perform only a narrow set of human skills. The fact that a program can accomplish this neat trick doesn’t mean the same application could balance a checkbook or classify a movie just by watching it.
Now, what many aficionados would agree to call artificial intelligence is an AI with the broad capability to learn a wide variety of skills, based on a more intricate “brain.” Some researchers call this artificial general intelligence (AGI). Such a machine could generalize its research skills not only to cancer research but also to literature, sports, and a multitude of other subjects. It could likewise apply its ability to manipulate physical objects not only to self-driving cars but to other vehicles and to non-vehicle devices such as surgical instruments.
When an AGI reaches the point where it can perform a substantial majority of human mental functions better than we can, and can improve its own abilities, we’ve reached the AI singularity, a point that concerns a number of big names in science and technology, such as Stephen Hawking, Elon Musk, and Bill Gates.
See article: https://www.forbes.com/sites/ericmack/2015/07/27/hawking-musk-wozniak-freaked-about-artificial-intelligence-getting-a-trigger-finger/#4b519bcd7416.