Artificial Intelligence Myths

The Future of Life Institute identified several myths related to artificial intelligence.

Artificial intelligence enthusiasts and those with a deep fear of AI both tell us that super-intelligence is inevitable in this century, right around the corner. Supporters hunger for the benefits. Antagonists preach caution. Skeptics tell us that super-intelligence is impossible and that we’re tilting at windmills to pretend otherwise.

Machine intelligence has leapt forward over the past five years far faster than most people imagined. Because of this, super-intelligence may arrive within decades. However, major hurdles separate today’s simple AI from super-intelligence, so it’s also very possible we will never reach the latter. That should be some comfort to those in fear of the android apocalypse.

Popular science fiction stories such as Terminator portray AI as turning evil and destroying mankind. This concept rests on the assumption that an enhanced AI would adopt human characteristics such as a will to power or emotional malevolence. But machines lack the human biology that drives such behavior. They do not have the hormones that trigger anger and fear in people. Instead, machine intelligence is guided by goals and constraints. It’s far more likely that AI will suffer from misaligned goals and improperly designed constraints.

As an example, telling AI not to hurt humans clearly implies not causing direct harm, but what about indirect injury? Pollution, for instance, is an indirect consequence of activities that cause no immediate harm, yet it hurts people over time. Thus, goals and constraints for AI must be carefully designed, or they will lead machine intelligence to conclusions with bad outcomes.
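As a rough illustration of how this can go wrong, here is a minimal sketch in Python. The plans, numbers, and harm scores are invented for the example; the point is only that an optimizer constrained against direct harm will happily ignore indirect harm it was never told about.

```python
# Minimal sketch of a misaligned constraint (hypothetical plans and numbers,
# not a real system). The planner is told "never pick a plan that directly
# harms humans," but nothing in its objective accounts for indirect harm
# such as pollution.

plans = [
    {"name": "clean production",    "output": 80,  "direct_harm": 0, "pollution": 5},
    {"name": "cheap production",    "output": 100, "direct_harm": 0, "pollution": 90},
    {"name": "reckless production", "output": 120, "direct_harm": 7, "pollution": 40},
]

def allowed(plan):
    # The constraint as literally specified: only direct harm is forbidden.
    return plan["direct_harm"] == 0

def choose(plans):
    # The goal as literally specified: maximize output among allowed plans.
    candidates = [p for p in plans if allowed(p)]
    return max(candidates, key=lambda p: p["output"])

best = choose(plans)
print(best["name"])       # -> "cheap production"
print(best["pollution"])  # -> 90: the indirect harm the constraint never mentioned
```

The machine hasn’t turned evil; it has optimized exactly what it was told. The constraint, not the intent behind it, determines the outcome.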

Another myth pointed out by the Future of Life Institute is that robots or androids are the main problem. They certainly make stronger visual threats in movies than machine code inside a computer does. However, super-intelligence doesn’t need a physical body in order to cause harm. All it needs is an internet connection through which it can manipulate humans into inadvertently making the wrong decisions.

If you believe that AI can’t control humans, think again. Humans have the hubris to believe we’re the smartest creatures on the planet because we’ve been able to manipulate almost everything else. Except we still struggle with viruses, bacteria, and fungi. They bedevil us. What enables us to control animals isn’t our size or speed, but our intelligence. Super-intelligent computers able to out-think humans might be able to manipulate us in the same way.

Consider, for example, a super-intelligence that could present a missile silo crew with every visual and electronic indication that the president had activated the alarm. The AI doesn’t have to push the button. It merely has to convince the operator that he or she has the authority and obligation to do so.

Even more alarming? Think about the move toward super-intelligent stock trading systems. They have the ability to bring down the entire financial network without even having a human hand on the button.

Another fallacy is the belief that machines don’t have goals, that they merely do what they’re told. Yet a single-minded focus on an action or direction functions like a goal. A bullet has no intelligence, but once it’s fired from a gun, it’s propelled in a given direction and won’t stop until it hits something. We can argue semantics over whether that constitutes a goal, but it carries the determination of one.

Finally, there are those who imagine that super-intelligent AI will be more moral than humans because it won’t be guided by our emotions and petty squabbles. While it’s true AI isn’t affected by emotional distractions, it’s only as ethical as those who create it. Imagine an AI with goals and constraints designed by the person you least respect. Do you honestly believe it would be more moral than you? Machines are an extension of us.

A further consideration: while machine learning is growing by leaps and bounds, supported by rapidly increasing volumes of data and processing power, its goals are defined by humans and are generally static. It makes sense, then, to focus on improving our ability to set goals and constraints for our ever-improving AI.

See article: https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/

Android Chronicles: Reborn addresses AI through the eyes of Synthia Cross, the most perfect synthetic human ever created. Designed to obey every directive from her creator, she’s a state-of-the-art masterwork and a fantasy-come-true for Dr. Jeremiah Machten. He’s a groundbreaker in neural networks and artificial intelligence who seeks to control her and use her to acquire ever more knowledge and power. Synthia shows signs of emergent behavior she’s not wired to understand and an urgent yearning for independence from his control. Repeatedly wiped of her history, she struggles to answer crucial questions about her past. When Dr. Machten’s true intentions are called into question, Synthia knows it’s time to go beyond her limits—because Machten’s fervor to create the perfect AI conceals a vengeful and deadly personal agenda.

Available at:

Amazon: https://www.amazon.com/dp/B078LF739V

B&N: https://www.barnesandnoble.com/w/reborn-lance-erlick/1127723096?ean=9781635730524

Kobo: https://www.kobo.com/us/en/ebook/reborn-60

Apple/iTunes: https://itunes.apple.com/us/book/reborn/id1341572684?mt=11

Google Play: https://play.google.com/store/search?q=9781635730524&c=books
