
Can We Make Robots Think?

A common trope of science fiction is that soon we shall find robots walking around, acting nearly indistinguishably from us. How likely is that? Will it happen soon? Will it ever happen?

The Barrier to Meaning

Just over a year and a half ago I was at the Santa Fe Institute for a workshop on the topic of “AI and the Barrier of Meaning.” One of the questions was the one posed above, and the workshop examined progress to date on developing AI (and robot) general intelligence. Was there some barrier to meaning that might prevent it from ever happening, or, more optimistically, was some “singularity” of AI consciousness just around the corner?

SFI had assembled an all-star cast of AI scientists, several having worked in the field for thirty years. They included Rodney Brooks, MIT professor, scientist, author of several technical books and of Flesh and Machines: How Robots Will Change Us, and inventor of the Roomba. There were scientists from Facebook AI Research (FAIR; and no, obviously it is not “. . . Labs”), as well as their Google counterparts, and another three dozen prominent researchers. To my surprise, the group was not optimistic about continued breakthroughs. Rodney joked to me that it had surprised him to need a decade to teach a robot to sweep the floor. They worried that after the current buzz about deep learning, AI might descend into yet another “AI winter,” a perennial problem for the discipline. Why is the problem so hard?

The history is uneven. The first “winter” occurred in the early 1970s and was repeated in the late 1980s and again in the early 1990s. The term “AI winter” was coined in 1984 (a fitting year for it), and it describes a recurring cycle: soaring optimism among AI scientists, followed by pessimism and general disappointment in the hype, leading to funding cutbacks.

The problem is hard because AI scientists try one technique; it shows promise for a time; then it hits some barrier, and they are back at the drawing board, scratching their heads.

There is now significant excitement around “deep learning.” We attended a deep learning class at SFI to learn the basics, including such techniques as hierarchical network architectures, gradient descent, and backpropagation.
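To give a flavor of one of those techniques, here is a minimal, hypothetical sketch of gradient descent: fitting a single weight w in the model y = w·x by repeatedly nudging w downhill along the gradient of the error. (The data and learning rate are invented for illustration; real deep learning applies this same idea to millions of weights at once.)

```python
# Toy data: y is roughly 3 times x.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.1, 5.9, 9.2, 11.8]

w = 0.0    # initial guess for the weight
lr = 0.01  # learning rate (how big a step to take downhill)

for step in range(1000):
    # Gradient of the mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad  # step in the direction that reduces the error

print(round(w, 2))  # converges near 3, the slope hidden in the data
```

Backpropagation is, at heart, an efficient way to compute that same gradient when the model is a deep stack of layers instead of a single weight.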

Deep learning differs from the previous generation of AI, in which model developers completed a laborious process called feature extraction: they had to tell the computer which features of the data were most important. That step often led to wrong results. The AIs got lost when presented with real-world data and situations. The world is far messier than anyone can imagine.
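As a toy illustration of that older, hand-engineered approach, imagine classifying email as spam. In the feature-extraction era, a developer decided in advance which properties of the message mattered and coded them by hand (the features below are invented for illustration):

```python
def extract_features(email):
    """Hand-picked features: the human tells the computer what matters."""
    words = email.lower().replace(",", " ").split()
    return [
        len(words),                        # message length
        sum(w == "free" for w in words),   # how often "free" appears
        sum(w == "winner" for w in words), # how often "winner" appears
    ]

features = extract_features("Congratulations winner, claim your FREE prize")
print(features)  # [6, 1, 1]
```

If the chosen features miss what actually distinguishes spam, the model fails, which is exactly the brittleness described above. A deep learning system would instead be handed the raw text and left to discover its own features.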

Our Current Optimism—Deep Learning

Deep learning lets the machine sift through enormous datasets, looking for patterns. It extracts the features itself. Think of the result as a billion-dimensional database. We can only comfortably think in four dimensions, counting time (maybe five to seven for some). The statistical approach reveals weird connections for which humans cannot trace the logic.

There have been breakthroughs using deep learning in many research areas. Facebook and Google use these algorithms to classify faces, and that has led to facial recognition software used for good, and too often for evil, such as by authoritarian states to control their citizens. Google ditched their rules-based translation approach years ago and relies on a deep learning engine. Deep learning has replaced machine vision techniques in industrial robots.

Here is an odd future to imagine if this were the final answer to AI general intelligence. An example: Imagine an AI loaded with the entire USA medical database. Its algorithms can find the strangest correlations. It seems always to be right. (At least, it can say, “I have a 97% confidence level in this diagnosis of your medical condition.”) Then it tells you that you have a disease that will be fatal unless you allow surgical removal of your right leg. What do you do? Perhaps it cannot tell you how it arrived at that conclusion. (Or rather, it can, if you can think in a billion dimensions.) The AI cannot provide the basic logic behind its conclusion. It is no longer science, given our current definition (which requires transparency and the ability for other scientists to critique and to retest). It is then an oracle.

Is deep learning the “answer,” with a long runway of development leading to spectacular breakthroughs? That is possible. Is it likely? We do not know. Last year, in an MIT Technology Review article, researchers reported that an analysis of 16,625 scientific papers suggested that the era of deep learning may be coming to an end. If that prediction holds, perhaps this promise also fails, and another AI winter ensues.

Simple Robotics Problems versus Hard Problems

These examples, while promising a sci-fi future on our doorstep, hardly address the hard problems. They are steps toward developing AIs, and then robots, capable of walking around without injuring us. There is a chasm before the appearance of robots like Data in Star Trek. And there is a larger chasm to create a truly conscious robot. (Poor Data always seemed just shy of true human-like consciousness.) The philosopher David Chalmers coined the term the “hard problem” of consciousness. What is our consciousness? We must understand our own conscious minds before we can imagine creating a machine that can possess one.

The image at the top is of the “devil’s fork,” an optical illusion. (There is more about devils in my new novel.) We do not yet fully understand how our own minds work, or why they produce the illusion. If we were to create a conscious robot, would it have the same reaction to this image as we do? Would we care if it did?


In my novel, Unfettered Journey, Joe is thinking about this profound problem:

. . .

Yes, it’s true. I am thinking. There is a particular “I” that is having this thought now. That is me. I am thinking; therefore, I exist.

But what is thinking? It is forming relationships among various things in the world, and those relationships have meaning.

A bot can take inputs from the world and compute relations among those inputs. A bot can say, “I think; therefore I am,” but isn’t it just parroting back the constructs we’ve coded?

. . .

In between the fast-paced action, these are some of the questions I explore. If you ask yourself similar questions, then join me soon on this journey, to contemplate the future and to understand yourself.

Be well and stay calm.
