AI isn’t anything new—Prepare for what’s next!

I argue in my recent book, Computer Intelligence: For Us or Against Us?, that Artificial Intelligence is not a new invention, but simply the result of computing power reaching a tipping point that made known AI algorithms practical. In particular, the technology of machine learning using deep neural networks ("deep learning"), which has delivered most of the recent AI breakthroughs, has been around for decades. For example, the technique of "backpropagation" used to adjust the weights of a deep neural network so that it models a given data set was developed by Rumelhart, Hinton, and Williams in 1986. As faster computers made deep learning practical and its power became obvious, researchers made methodological improvements to the details of the process. No one claims, however, that they recently "discovered" deep learning.

The book uses the term "Computer Intelligence" (CI) to summarize the combination of computer processing power and the applications it enables at any given time. CI is expanding exponentially, not only because of the increasing number of transistors on digital chips, but through trends such as cloud computing services, specialized parallel-processing chips such as Graphics Processing Units (GPUs), and the increasing computing power available on devices connected to the cloud, e.g., smartphones. The collective power of this "computing universe" is expanding even more quickly than in the past due to these trends.
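To make the 1986 point concrete, the core of backpropagation fits in a few lines of code. The sketch below is purely illustrative (my example, not one from the book): it trains a tiny one-hidden-layer network on the classic XOR problem, and every name and hyperparameter in it is an arbitrary choice.

```python
# Illustrative sketch of 1986-style backpropagation in plain NumPy.
# All names and hyperparameters here are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

# Toy data set: XOR, a function no single linear layer can fit.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 sigmoid units, then one sigmoid output unit.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # learning rate
for step in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)    # hidden activations
    out = sigmoid(h @ W2 + b2)  # network output

    # Backward pass: propagate the error derivative layer by layer
    # (squared-error loss; note sigmoid'(z) = s * (1 - s)).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 3))  # typically approaches [0, 1, 1, 0]
```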

Deep learning is but one technology that CI's increasing power has enabled over time. Other innovations that have significantly impacted our lives, such as smartphones, follow the same pattern. Smartphones are, in some sense, small PCs with a wireless rather than a wired connection to the Internet; the core idea is not new. The breakthrough was computing power and communications technology reaching the point where PC-style applications could be put in a small device with an acceptable user interface, sufficient battery life, and a wireless connection to the Internet. We can expect additional impressive breakthroughs as computer intelligence continues to grow. In this sense, the impact of AI is just one result of CI passing a significant threshold.

Computers have always been able to outdo humans in some areas, most obviously in doing arithmetic and remembering data. Perhaps because we didn't view those activities as particularly "human" traits, few expressed fear that computers exceeding human capabilities in such areas threatened humanity. Yet we do hear warnings today that AI could be such a threat, e.g., by eventually taking over all jobs.

What's different about AI? Today's AI systems can do things that are strongly associated with being human, such as carrying on a conversation in human speech. They cannot converse as fluently or as broadly as humans, but they can interact in limited yet impressive ways, as we see with digital assistants like Apple's Siri, Google Assistant, and Amazon's Alexa. There are also increasingly powerful tools that let companies deliver customer service in human language, through speech or texting.

But deep learning can also do things no human could do. It can take huge amounts of data and summarize their implications, analyzing more information than a human could even examine in a lifetime. An example of this use of deep learning is Google substantially reducing power consumption in its data centers by analyzing huge amounts of data collected during the centers' operation.

Computers have long done such things, remembering and handling more data than humans possibly could, and we are used to it. We simply call it "statistics" when the results are summarized in applications such as surveys. But labeling AI as having "intelligence" has heightened our concerns about computers taking over tasks that previously required humans, even though deep learning is statistics at its core.

The analogy to statistics is important. Deep learning is in fact simply a particularly complex statistical approach, despite its mathematical model having been inspired by an analogy to how human neurons might operate. It is no more "intelligent" than classical statistical techniques such as linear regression, linear discriminant analysis, or principal components analysis, which you can find described in decades-old books on statistics and available in statistical software packages such as IBM's SPSS. (I used the original Statistical Package for the Social Sciences long ago, before IBM acquired it.)
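The analogy can be made concrete. In the illustrative sketch below (my example, using synthetic data), a neural network with no hidden layer and a linear output is literally linear regression: gradient descent recovers the same coefficients as the classical least-squares formula.

```python
# Illustrative check of the statistics analogy: a "network" with no
# hidden layer and a linear output IS linear regression. The data and
# settings below are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))                 # 200 samples, 3 features
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 3.0 + rng.normal(scale=0.1, size=200)

# Classical statistics: ordinary least squares.
Xb = np.column_stack([X, np.ones(len(X))])    # add an intercept column
w_ols, *_ = np.linalg.lstsq(Xb, y, rcond=None)

# "Neural network" view: one linear unit trained by gradient descent.
w = np.zeros(4)
for _ in range(5000):
    grad = Xb.T @ (Xb @ w - y) / len(y)       # gradient of mean squared error
    w -= 0.1 * grad

print(np.allclose(w, w_ols, atol=1e-4))       # True: same model, two framings
```

Adding hidden layers and nonlinear activations makes the model more expressive, but the fitting procedure remains statistical estimation from data.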

Some experts who have warned about AI dominating humanity admit that they are anticipating a new generation of AI, "Artificial General Intelligence" (AGI), that doesn't yet exist. The implication appears to be that the power of today's AI suggests (1) that future growth in computing power will lead to software behavior analogous to human consciousness and a sense of identity, (2) that such capabilities will lead a computing system to conclude that humans are a problem, and (3) that such a system will be given the power to act on that conclusion. This is a complex argument that I challenge in detail in my book. Suffice it to say here that AGI is a vague goal that is unlikely to result from commercial technology development.

Viewing AI as a result of computer intelligence (CI) passing a particular tipping point has ramifications for the future. CI will continue to expand exponentially, leading to further tipping points with major impacts. The continuation of today's trends, for example, will make digital assistants increasingly personalized, easier to use, and able to provide ever more information and services. Our access to knowledge will certainly be extended by an easy and intuitive connection to computers through human language wherever we are, going beyond the similar power of web search.

As personalized digital assistants become part of children's lives, reached through always-available mobile devices, such conversational "augmented intelligence" may be woven into everyday experience. Our tools can almost become part of us (shoes, cataract surgery, mobile phones…); a personal digital assistant that serves as a life companion may become part of what it means to be human.

Artificial Intelligence is a category of applications symptomatic of what is coming, driven by the long-term trend of increasing computer power. More generally, “computer intelligence” will impact our lives in increasingly surprising ways.
