Sometime over the last 20 years, artificial intelligence has gone from something we laughed about and feared in movies to something people now devote their lives to and view as the salvation (or the destruction) of mankind.
Tim Urban at Wait but Why has already written a tome on “The AI Revolution” (Part 1 and 2), which is a very good primer on AI. For a slightly longer read, consider Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies. This post is neither.
This post is meant to describe the acceleration of artificial intelligence development as seen through various news updates and published papers. In the process I hope to convey to you, the reader, how close we are to artificial general intelligence (AGI).
What is Artificial Intelligence?
For this post, we are limiting the topic to AGI, which Tim Urban describes as, “a computer that is as smart as a human across the board—a machine that can perform any intellectual task that a human being can.” The problem is that we may not have a good way of quantifying AGI: once we achieve any new threshold in AI, humans tend to move the bar higher.
“Once there is a program in existence that does the job, you are inclined to think it’s merely a formula and that isn’t thinking. The very success of me producing a program which exhibits thinking causes you to deny that that can be thinking.” -Dr. Richard W. Hamming (1915-1998) in a lecture from “The Art of Doing Science and Engineering: Learning to Learn: Artificial Intelligence – Part I” (April 7, 1995)
Peter H. Diamandis and Steven Kotler echo this sentiment in their book, Bold: How to Go Big, Create Wealth and Impact the World, which covers artificial intelligence as an “exponential technology”. It’s not only this human bias that hides artificial intelligence’s acceleration; sometimes AI is written about under other names, such as “natural computing”:
“Researchers at UCLA and the National Institute for Materials Science in Japan have developed a method to fabricate a self-organized complex device called an atomic switch network that is in many ways similar to a brain or other natural or cognitive computing device…The device we have created is capable of rapidly generating self-organization in a small chip with high speed…Experiments demonstrated that the atomic switch network exhibits emergent behavior…has the potential to process information at very high rates…We plan to move towards a hybrid morphic system using the best of conventional computation with our brain-like device capabilities…This would be a radical step in the real development of AI.” –Scientists develop atomic-scale hardware to implement natural computing, May 13, 2015 by Lisa Zyga
“Emergent behavior” is when a number of simple entities (agents) operating in an environment form more complex behaviors as a collective. This is similar to the cellular automata that Stephen Wolfram describes in A New Kind of Science.
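To make the idea concrete, here is a minimal sketch of one of Wolfram’s elementary cellular automata (Rule 110): each cell follows a trivial local rule, yet complex structure emerges across the whole row. The code below is purely illustrative and not taken from any of the research quoted above.

```python
def step(cells, rule=110):
    """One update of an elementary cellular automaton: each cell's next
    state depends only on itself and its two neighbors (wrapping edges).
    The 8-bit rule number encodes the output for each 3-cell neighborhood."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single "on" cell and watch structure emerge.
row = [0] * 31
row[15] = 1
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

No agent in this system “knows” about triangles or gliders, yet those patterns appear anyway; that gap between simple rules and complex collective behavior is what “emergent” means here.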
“RMIT University researchers have mimicked the way the human brain processes information with the development of an electronic long-term memory cell. Researchers at the MicroNano Research Facility (MNRF) have built one of the world’s first electronic multi-state memory cells, which mirrors the brain’s ability to simultaneously process and store multiple strands of information. The development brings them closer to imitating key electronic aspects of the human brain — a vital step towards creating a bionic brain.” –Nano memory cell can mimic the brain’s long-term memory, May 12, 2015
Sometimes, narrow AI technology works in ways that its creator did not intend or does not understand. “He can’t really explain all the reasons it does what it does. It’s started making decisions on its own,” which is a quote from George Hotz in “The First Person to Hack the iPhone Built a Self-Driving Car. In His Garage; George Hotz is taking on Google and Tesla by himself” by Ashlee Vance.
“He’d devoured the cutting-edge AI research and decided the technology wasn’t that hard to master. Hotz took a job at Vicarious, a highflying AI startup, in January to get a firsthand look at the top work in the field, and this confirmed his suspicions. “I understand the state-of-the-art papers,” he says. “The math is simple. For the first time in my life, I’m like, ‘I know everything there is to know.’” -George Hotz
The Economics of Artificial Intelligence
And then there is the economic aspect of AI. A narrow AI, such as the one in a driverless car, may need to buy gas (or electricity) and get maintenance. It can even make money for its owner by making deliveries or driving people around.
“The self-driving car then, calculating it has approximately 3.5 hours before it will be required by one of its owners again, logs in to Uber and makes itself available for a 3-hour block as a self-driving resource. It is immediately called out to a pickup, and after 3 hours has earned $180 in fees, which it puts away in its wallet.” -“The Death of Bank Products has been greatly under-exaggerated” by Brett King, an excerpt from the book, Augmented: Life in the Smart Lane.
What happens when AI starts making money, needs its own bank account and credit cards, and needs to be able to make transactions on its own? With artificial intelligence, that future is sooner than you might think.
“It’s theoretically possible to write an autonomous self-funded application on Ethereum that earns money to pay for its own execution, or rather its own existence. It might create value by enabling new kinds of markets, for example. Artificial intelligence might help optimize the value it delivers to ensure its own survival. In this case, it’s not just the network that’s unstoppable, but an autonomous agent operating within the network. Cue Skynet joke.” -“Ethereum: Rise of the World Computer” by Rick Seeger.
Skynet may not be a joke. The United States military is increasingly looking into how artificial intelligence can help supplement drones and fighter jets on the battlefield.
“One new project … is called Avatar, and calls for the Pentagon to pair high-tech “fifth-generation” fighter jets like the F-22 Raptor and F-35 Joint Strike Fighter with unmanned versions of older jets like the F-16 Fighting Falcon or F/A-18 Hornet, which would be flown without a pilot for the first time. The Avatar effort was previously called Skyborg by SCO and is known as “the Loyal Wingman” concept in the Air Force, Roper said. The program will require unmanned fighters to act with enough autonomy that the pilot in the manned jet doesn’t have to direct them all the time.” –Dan Lamothe
Why are Smart People So Afraid of AI?
You may have read that Elon Musk is wary of an AGI becoming an artificial superintelligence, which, according to Nick Bostrom, could happen minutes to weeks after AGI is established. Musk has read Bostrom’s work, and it’s part of why he helped create OpenAI, a collaborative effort to create an open-source AI.
The hope with OpenAI is to reduce the risk that one person, company, or country could create and control an AI that could cause unintended (or intended) negative consequences. But what types of negative consequences?
The Paperclip Maximizer
The “paperclip maximizer” is a thought experiment showing how an artificial general intelligence, even one designed competently and without malice, could ultimately destroy humanity. The thought experiment shows that AIs with apparently innocuous values could pose an existential threat.
An extremely powerful optimizer (a highly intelligent agent) could seek goals that are completely alien to ours (as humans), and as a side-effect destroy us by consuming resources essential to our survival. I thought it would have been better to use red staplers, but that’s just me. 🙂
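A toy simulation makes the thought experiment’s point visible. The numbers, resource names, and conversion rate below are all made up for illustration; the point is only that a single-objective optimizer has no concept of what else its resources might be worth.

```python
def maximize_paperclips(resources):
    """Greedily convert every reachable resource into paperclips.
    `resources` maps resource name -> tons available (hypothetical).
    The agent's sole objective is the paperclip count; nothing else
    in the world carries any value for it."""
    paperclips = 0
    for name in list(resources):
        # 1,000 clips per ton is an arbitrary, made-up rate.
        paperclips += resources.pop(name) * 1000
    return paperclips, resources

world = {"scrap metal": 5, "cars": 2, "farmland fences": 3}
clips, leftover = maximize_paperclips(world)
print(clips, leftover)
```

Nothing in the objective says “preserve the cars” or “leave the farmland alone”, so everything is consumed; the danger comes not from malice but from what the objective fails to mention.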
The Rise of Artificial Intelligence
When I was 14, I called to get a ‘free’ magazine subscription in the mail. The operator asked me what I was interested in and I said ‘artificial intelligence’. Immediately, an unknown man on the line burst out laughing, but quickly contained himself. No one is laughing now.