The passages quoted below are authored by an NYU professor of psychology and neural science.
(p. 6) Artificial Intelligence is colossally hyped these days, but the dirty little secret is that it still has a long, long way to go. Sure, A.I. systems have mastered an array of games, from chess and Go to “Jeopardy” and poker, but the technology continues to struggle in the real world. Robots fall over while opening doors, prototype driverless cars frequently need human intervention, and nobody has yet designed a machine that can read reliably at the level of a sixth grader, let alone a college student. Computers that can educate themselves — a mark of true intelligence — remain a dream.
Even the trendy technique of “deep learning,” which uses artificial neural networks to discern complex statistical correlations in huge amounts of data, often comes up short. Some of the best image-recognition systems, for example, can successfully distinguish dog breeds, yet remain capable of major blunders, like mistaking a simple pattern of yellow and black stripes for a school bus. Such systems can neither comprehend what is going on in complex visual scenes (“Who is chasing whom and why?”) nor follow simple instructions (“Read this story and summarize what it means”).
Although the field of A.I. is exploding with microdiscoveries, progress toward the robustness and flexibility of human cognition remains elusive. Not long ago, for example, while sitting with me in a cafe, my 3-year-old daughter spontaneously realized that she could climb out of her chair in a new way: backward, by sliding through the gap between the back and the seat of the chair. My daughter had never seen anyone else disembark in quite this way; she invented it on her own — and without the benefit of trial and error, or the need for terabytes of labeled data.
Presumably, my daughter relied on an implicit theory of how her body moves, along with an implicit theory of physics — how one complex object travels through the aperture of another. I challenge any robot to do the same. A.I. systems tend to be passive vessels, dredging through data in search of statistical correlations; humans are active engines for discovering how things work.
For the full commentary, see:
GARY MARCUS. “Gray Matter; A.I. Is Stuck. Let’s Unstick It.” The New York Times, SundayReview Section (Sun., JULY 30, 2017): 6.
(Note: the online version of the commentary has the date JULY 29, 2017, and has the title “Gray Matter; Artificial Intelligence Is Stuck. Here’s How to Move It Forward.”)
Another case in point: between them, Google, Tesla, and others have spent countless billions mapping the roads of the USA, easily $1000 per mile, including every last obscure Forest Service track. That should be more than enough to catalog everything down to the embossing style on every manhole cover. And yet a person can find the way to Grandma's new house from vague turn-by-turn directions, or from a rough line-sketch that shows no detail whatsoever about the road surface, the sidewalks, or the crosswalks. And a person will manage the task without needing, in advance, a finely detailed map of current construction projects, lane closures, and the like.
But that severe incompleteness won't stop morally posturing politicians from forcing autonomous cars onto the populace years or even decades before they are actually ready for unsupervised consumer use, which is essentially the only kind of use they will get in the real world. After all, politicians love to posture, to toady up to rent-seeking billionaires, and to be photographed gawking at shiny new tech gadgets.
Note that when signals were first installed on the Chicago El, the accident rate went up for a time, as trained motormen grew careless about watching where they were going. Not-so-trained consumers will be far too busy fiddling with their phones to be ready to take over at a split second's notice.