Millions of Dollars and 30 Years Later, A.I. Still Lacks Crucial Common Sense

(p. B6) SAN FRANCISCO — Microsoft’s co-founder Paul Allen said Wednesday [February 28, 2018] that he was pumping an additional $125 million into his nonprofit computer research lab for an ambitious new effort to teach machines “common sense.”
. . .
“To make real progress in A.I., we have to overcome the big challenges in the area of common sense,” said Mr. Allen, who founded the software giant Microsoft in the 1970s with Bill Gates.
. . .
In the mid-1980s, Doug Lenat, a former Stanford University professor, with backing from the government and several of the country’s largest tech companies, started a project called Cyc. He and his team of researchers worked to codify all the simple truths that we learn as children, from “you can’t be in two places at the same time” to “when drinking from a cup, hold the open end up.”
Thirty years later, Mr. Lenat and his team are still at work on this “common sense engine” — with no end in sight.
Mr. Allen helped fund Cyc, and he believes it is time to take a fresh approach, he said, because modern technologies make it easier to build this kind of system.
Mr. Lenat welcomed the new project. But he also warned of challenges: Cyc has burned through hundreds of millions of dollars in funding, running into countless problems that were not evident when the project began. He called them “buzz saws.”
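
To make concrete what "codifying" a simple truth might look like, here is a minimal, hypothetical sketch in Python. It is not Cyc's actual representation (Cyc uses its own logic language, CycL); the Fact format, the "located_in" predicate, and the single rule below are invented purely for illustration.

```python
# Hypothetical illustration of hand-coding one common-sense truth:
# "you can't be in two places at the same time." Not Cyc's CycL, just a sketch.
from dataclasses import dataclass

@dataclass(frozen=True)
class Fact:
    subject: str
    predicate: str
    obj: str

def violates_single_location(facts: list[Fact]) -> bool:
    """Return True if any subject is asserted to be in two places at once."""
    locations: dict[str, set[str]] = {}
    for f in facts:
        if f.predicate == "located_in":
            locations.setdefault(f.subject, set()).add(f.obj)
    return any(len(places) > 1 for places in locations.values())

if __name__ == "__main__":
    facts = [
        Fact("Alice", "located_in", "kitchen"),
        Fact("Alice", "located_in", "garden"),  # contradicts the rule above
    ]
    print(violates_single_location(facts))  # prints: True
```

The difficulty, as the excerpt above suggests, is scale: any one such rule is easy to write; writing down enough of them is what has kept the project going for thirty years with no end in sight.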

For the full story, see:
CADE METZ. “A.I.’s Greatest Challenge: Digitizing Common Sense.” The New York Times (Thursday, March 1, 2018): B6.
(Note: ellipses and bracketed date added.)
(Note: the online version of the article has the date Feb. 28, 2018, and has the title “Paul Allen Wants to Teach Machines Common Sense.”)

Virtual Reality Was Intended as a Complement to Physical Reality, Not as a Substitute

(p. A17) The illusion of presence is what drove Mr. Lanier from the start. He envisioned VR not as an alternative to physical reality but as an enhancement–a way to more fully appreciate the wonder of existence. More conventional individuals, their senses dulled by the day-to-day, may be drawn to virtual reality because it seems realer than real; he considered it a new form of communication. “I longed to see what was inside the heads of other people,” he writes. “I wanted to show them what I explored in dreams. I imagined virtual worlds that would never grow stale because people would bring surprises to each other. I felt trapped without this tool. Why, why wasn’t it around already?”
“Dawn of the New Everything” is full of such self-revelatory moments. The author grew up an only child in odd corners of the Southwest, first on the Texas-Mexico border, then in the desert near White Sands Missile Range. When he was nine, his mother, a Holocaust survivor, was killed in a car crash on the way home from getting her driver’s license. The tract house they’d bought burned down the day after construction was completed. The insurance money never came, so Jaron and his father lived in tents in the desert until they could afford to build a real home–which turned out to be a mad concoction of geodesic domes of Jaron’s own design. They called it Earth Station Lanier.
. . .
Lacking a degree from high school, never mind college, he nonetheless parlayed his virtual-reality obsession into a company, VPL Research, that for a few years in the late ’80s made VR seem real, if only in a lab setting. Then came board fights and bankruptcy, and VR disappeared from public view for more than 20 years.
What went wrong at VPL? Unfortunately, you won’t find out here. Mr. Lanier warns us he isn’t going to deliver a blow-by-blow; instead we get a disjointed sequence of half-remembered anecdotes. What does come through is his ambivalence about going into business at all, and his even deeper ambivalence toward writing about it.

For the full review, see:
Frank Rose. “BOOKSHELF; The Promise of Virtual Reality; The story of VR, the most immersive communications technology to come along since cinema, as told by two of its pioneers.” The Wall Street Journal (Tuesday, February 6, 2018): A17.
(Note: ellipsis added.)
(Note: the online version of the review has the date Feb. 5, 2018, and has the title “BOOKSHELF; Review: The Promise of Virtual Reality; The story of VR, the most immersive communications technology to come along since cinema, as told by two of its pioneers.”)

The book under review is:
Lanier, Jaron. Dawn of the New Everything: Encounters with Reality and Virtual Reality. New York: Henry Holt & Company, 2017.

Obstacles and Conflicts Were Too Much for Lanier’s “VPL Research” Startup

(p. 11) Lanier’s book is, . . . , intimate and idiosyncratic. He carries us through his quirky and fascinating life story, with periodic nerdy side trips through his early thinking on more technical aspects of virtual reality. If you liked Richard Feynman’s autobiographical “Surely You’re Joking, Mr. Feynman” but thought it was rather self-indulgent, this book will prompt similar reactions. You could almost say that Lanier’s vivid and creative imagination is a distinct character in this book, he discusses it so much. Midway through, Feynman himself makes an appearance, and it seems as if we’re meeting an old friend.
Lanier has been credited with inventing the term “virtual reality,” and he founded one of the original companies to produce it, VPL Research. He goes over the technology’s history in detail, outlining not only the obstacles to getting consistent hardware but some personalities and interpersonal conflicts that ultimately led to his company’s breaking up. He also demonstrates the role personal connections and interactions play in Silicon Valley.

For the full review, see:
CATHY O’NEIL. “Enter the Holodeck.” The New York Times Book Review (Sunday, February 4, 2018): 11.
(Note: ellipsis added.)
(Note: the online version of the review has the date JAN. 30, 2018.)

The book under review is:
Lanier, Jaron. Dawn of the New Everything: Encounters with Reality and Virtual Reality. New York: Henry Holt & Company, 2017.

Level of Loneliness About the Same as 70 Years Ago

(p. 8) . . . is loneliness, as many political officials and pundits are warning, a growing “health epidemic”?
. . .
The main evidence for rising isolation comes from a widely reported sociology journal article claiming that in 2004, one in four Americans had no one in their life they felt they could confide in, compared with one in 10 during the 1980s. But that study turned out to be based on faulty data, and other research shows that the portion of Americans without a confidant is about the same as it has long been. Although one of the authors has distanced himself from the paper (saying, “I no longer think it’s reliable”), scholars, journalists and policymakers continue to cite it.
The other data on loneliness are complicated and often contradictory, in part because there are so many different ways of measuring the phenomenon. But it’s clear that the loneliness statistics cited by those who say we have an epidemic are outliers. For example, one set of statistics comes from a study that counted as lonely people who said they felt “left out” or “isolated,” or “lacked companionship” — even just “some of the time.” That’s an exceedingly low bar, and surely not one we’d want doctors or policymakers to use in their work.
One reason we need to be careful about how we measure and respond to loneliness is that, as the University of Chicago psychologist John Cacioppo argues, an occasional and transitory feeling of loneliness can be healthy and productive. It’s a biological signal to ourselves that we need to build stronger social bonds.
Professor Cacioppo has spent much of his career documenting the dangers of loneliness. But it’s notable that he relies on more measured statistics in his own scientific papers than the statistics described above. One of his articles, from last year, reports that around 19 percent of older Americans said they had felt lonely for much of the week before they were surveyed, and that in Britain about 6 percent of adults said they felt lonely all or most of the time. Those are worrisome numbers, but they are quite similar to the numbers reported in Britain in 1948, when about 8 percent of older adults said they often or always felt lonely, and to those in previous American studies as well.

For the full commentary, see:
ERIC KLINENBERG. “Is Loneliness a Health Epidemic?” The New York Times, SundayReview Section (Sunday, February 11, 2018): 8.
(Note: ellipses added.)
(Note: the online version of the commentary has the date FEB. 9, 2018.)

Child Prodigies Seldom Excel as Adults

(p. 15) Child prodigies are exotic creatures, each unique and inexplicable. But they have a couple of things in common, as Ann Hulbert’s meticulous new book, “Off the Charts,” makes clear: First, most wunderkinds eventually experience some kind of schism with a devoted and sometimes domineering parent. “After all, no matter how richly collaborative a bond children forge with grown-up guides, some version of divorce is inevitable,” Hulbert writes. “It’s what modern experts would call developmentally appropriate.” Second, most prodigies grow up to be thoroughly unremarkable on paper. They do not, by and large, sustain their genius into adulthood.
. . .
The very traits that make prodigies so successful in one arena — their obsessiveness, a stubborn refusal to conform, a blistering drive to win — can make them pariahs in the rest of life. Whatever else they may say, most teachers do not in fact appreciate creativity and critical thinking in their own students. “Off the Charts” is jammed with stories of small geniuses being kicked out of places of learning. Matt Savage spent two days in a Boston-area Montessori preschool before being expelled. Thanks to parents who had the financial and emotional resources to help him find his way, he is now, at age 25, a renowned jazz musician.

For the full review, see:
AMANDA RIPLEY. “Gifted and Talented and Complicated.” The New York Times Book Review (Sunday, January 21, 2018): 15.
(Note: ellipsis added.)
(Note: the online version of the review has the date JAN. 17, 2018.)

The book under review is:
Hulbert, Ann. Off the Charts: The Hidden Lives and Lessons of American Child Prodigies. New York: Alfred A. Knopf, 2018.

Cognitive Abilities Highest After Waking in Morning

(p. A15) A raft of studies in disciplines ranging from medicine to economics have yielded all sorts of data on the science of timing. Daniel Pink, an author who regularly applies behavioral science to the realm of work, has handily distilled the findings in “When: The Scientific Secrets of Perfect Timing.”
. . .
For a slim book, “When” brims with a surprising amount of insight and practical advice. In amiable, TED-talk-ready prose, Mr. Pink offers scheduling tips for everything from workouts to weddings. Exercise, for example, is best done in the morning for those who hope to lose weight, build strength and boost their mood through the day.
. . .
Moods are not the only things that shift every 24 hours. Our cognitive abilities also morph in foreseeable ways. We are often sharpest in the hours after waking up, which makes morning the best time to take exams or answer logic problems. Researchers analyzing four years of test results for two million Danish schoolchildren found that students consistently scored higher in mornings than afternoons.

For the full review, see:
Emily Bobrow. “BOOKSHELF; Hacking The Clock; Exercise in the morning if you want to lose weight. But if you want to perform at your physical peak, plan a workout for the afternoon.” The Wall Street Journal (Wednesday, Jan. 10, 2018): A15.
(Note: ellipses added.)
(Note: the online version of the review has the date Jan. 9, 2018, and has the title “BOOKSHELF; Review: Hacking The Clock; Exercise in the morning if you want to lose weight. But if you want to perform at your physical peak, plan a workout for the afternoon.”)

The book under review is:
Pink, Daniel H. When: The Scientific Secrets of Perfect Timing. New York: Riverhead Books, 2018.

DeepMind Mastered “Go” Only After It Was Told the Score

(p. C3) To function well outside controlled settings, robots must be able to approximate such human capacities as social intelligence and hand-eye coordination. But how to distill them into code?
“It turns out those things are really hard,” said Cynthia Breazeal, a roboticist at the Massachusetts Institute of Technology’s Media Lab.
. . .
Even today’s state-of-the-art AI has serious practical limits. In a recent paper, for example, researchers at MIT described how their AI software misidentified a 3-D printed turtle as a rifle after the team subtly altered the coloring and lighting for the reptile. The experiment showed the ease of fooling AI and raised safety concerns over its use in real-world applications such as self-driving cars and facial-recognition software.
Current systems also aren’t great at applying what they have learned to new situations. A recent paper by the AI startup Vicarious showed that a proficient Atari-playing AI lost its prowess when researchers moved around familiar features of the game.
. . .
Google’s DeepMind subsidiary used a technique known as reinforcement learning to build software that has repeatedly beat the best human players in Go. While learning the classic Chinese game, the machine got positive feedback for making moves that increased the area it walled off from its competitor. Its quest for a higher score spurred the AI to develop territory-taking tactics until it mastered the game.
The problem is that “the real world doesn’t have a score,” said Brown University roboticist Stefanie Tellex. Engineers need to code into AI programs so-called “reward functions”–mathematical ways of telling a machine it has acted correctly. Beyond the finite scenario of a game, amid the complexity of real-life interactions, it’s difficult to determine what results to reinforce. How, and how often, should engineers reward machines to guide them to perform a certain task? “The reward signal is so important to making these algorithms work,” Dr. Tellex added.
. . .
If a robot needs thousands of examples to learn, “it’s not clear that’s particularly useful,” said Ingmar Posner, the deputy director of the Oxford Robotics Institute in the U.K. “You want that machine to pick up pretty quickly what it’s meant to do.”
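
Dr. Tellex’s point about reward functions, quoted above, can be illustrated with a small sketch. Everything here is hypothetical (the function names, weights, and task fields are invented for illustration, not DeepMind’s or anyone else’s actual code): in a Go-like game the score supplies a ready-made reward, while for a real-world chore the engineer must invent and hand-tune one.

```python
def game_reward(territory_before: int, territory_after: int) -> float:
    """In a Go-like game, reward a move by how much territory it gained:
    the game itself supplies the score to reinforce."""
    return float(territory_after - territory_before)

def household_reward(task_state: dict) -> float:
    """The real world has no built-in score, so an engineer must decide what to
    reinforce; these weights are arbitrary and would need careful tuning."""
    return (
        1.0 * task_state.get("dishes_cleaned", 0)
        - 5.0 * task_state.get("dishes_broken", 0)
        - 0.01 * task_state.get("seconds_elapsed", 0)
    )

if __name__ == "__main__":
    print(game_reward(territory_before=40, territory_after=43))   # 3.0
    print(household_reward({"dishes_cleaned": 4, "dishes_broken": 1,
                            "seconds_elapsed": 600}))             # -7.0
```

The asymmetry is the point: the first function falls out of the rules of the game, while every number in the second is a design decision about what “acting correctly” should mean.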

For the full commentary, see:
Daniela Hernandez. “Can Robots Learn to Improvise?” The Wall Street Journal (Sat., Dec. 16, 2017): C3.
(Note: ellipses added.)
(Note: the online version of the commentary has the date Dec. 15, 2017.)

The paper by the researchers at Vicarious is:
Kansky, Ken, Tom Silver, David A. Mely, Mohamed Eldawy, Miguel Lázaro-Gredilla, Xinghua Lou, Nimrod Dorfman, Szymon Sidor, Scott Phoenix, and Dileep George. “Schema Networks: Zero-Shot Transfer with a Generative Causal Model of Intuitive Physics.” Manuscript, 2017.

The paper mentioned above, by the researchers at MIT, is:
Athalye, Anish, Logan Engstrom, Andrew Ilyas, and Kevin Kwok. “Synthesizing Robust Adversarial Examples.” Working paper, Oct. 30, 2017.
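
For readers curious about the mechanics behind the turtle-as-rifle result, the general idea of an adversarial example can be sketched in a few lines: nudge the input in the direction that most increases the model’s error. This is not the method of the Athalye et al. paper above, which constructs physically robust 3-D objects; the toy linear classifier, random seed, and perturbation size below are assumptions made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# A toy linear classifier over 8 input features and 3 classes.
W = rng.normal(size=(3, 8))
b = np.zeros(3)

def predict(x):
    return int(np.argmax(W @ x + b))

x = rng.normal(size=8)   # the "clean" input
label = predict(x)       # treat the clean prediction as the true class

# For a linear+softmax model, the gradient of the cross-entropy loss with
# respect to the input is W^T (softmax(Wx + b) - onehot(label)).
p = softmax(W @ x + b)
onehot = np.zeros(3)
onehot[label] = 1.0
grad_x = W.T @ (p - onehot)

# FGSM-style step: move each feature slightly in the direction that increases
# the loss; with a large enough step the prediction often flips.
epsilon = 0.5
x_adv = x + epsilon * np.sign(grad_x)

print("clean prediction:    ", predict(x))
print("perturbed prediction:", predict(x_adv))
```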

For Jane Jacobs, “Self-Certainty” Was Better than a Doctorate

(p. 17) Like the critic Pauline Kael and the conservative activist Phyllis Schlafly, Jane Jacobs arrived to churn the fertile soil of American cultural ideology in the 1960s, brandishing a disciplined populist intellect and a comfort with courting enmity. All three were middle-aged mothers by the time they would shake things up. That Jacobs, née Butzner in 1916, would force a reconsideration of the nature and purpose of cities was an outcome her young adulthood would have hardly suggested. An unexceptional student at Central High in Scranton, Pa., she later studied at Columbia before failing to gain formal admission to Barnard and abandoning the pursuit of a degree entirely. These experiences, Robert Kanigel maintains in his biography “Eyes on the Street: The Life of Jane Jacobs,” left her with a distaste for the academy that she carried throughout her career.
Where others had doctorates, Jacobs had a self-certainty that was manifest early on. In a chronicling of her childhood so thorough it includes the number of times she was late for homeroom during her first semester of high school (seven), Kanigel recounts an incident in which Jane was expelled from third grade for urging her classmates to dismiss the entreaties of a hygiene instructor, who asked them to pledge to brush their teeth twice a day for the rest of their lives. In Jane’s view, the promise would be impossible to keep, making the request absurd.

For the full review, see:
GINIA BELLAFANTE. “Fighting the Power Broker.” The New York Times Book Review (Sunday, OCT. 9, 2016): 17.
(Note: the online version of the review has the date OCT. 7, 2016, and has the title “Two New Books About Jane Jacobs, Urban Visionary.”)

The book under review is:
Kanigel, Robert. Eyes on the Street: The Life of Jane Jacobs. New York: Alfred A. Knopf, 2016.

Sapolsky Wrong to Dismiss Hunter-Gatherer Violence

(p. 15) Sapolsky proposes 10 strategies for reducing violence, all reasonable but none that justify the notion that science is the basis for societal advances toward less violence and higher morality.
. . .
In this section Sapolsky becomes a partisan critic, including presenting a skeptical view about the supposed long-term decline of human violence claimed by Steven Pinker in “The Better Angels of Our Nature: Why Violence Has Declined.” Sapolsky asserts that Pinker’s calculations include elementary errors, and that low rates of violence among contemporary hunter-gatherers mean that warfare did not predate agriculture. His arguments here are unbalanced. He fails to note that data on hunter-gatherer violence is relevant only where they are neighbored by other hunter-gatherers, rather than by militarily superior farmers.

For the full review, see:
RICHARD WRANGHAM. “Brain Teasers.” The New York Times Book Review (Sunday, JULY 9, 2017): 15.
(Note: ellipsis added.)
(Note: the online version of the review has the date JULY 5, 2017, and has the title “Insights Into the Brain, in a Book You’ll Wish You Had in College.”)

The book under review is:
Sapolsky, Robert M. Behave: The Biology of Humans at Our Best and Worst. New York: Penguin Press, 2017.

Innovation Benefits from Constructive Arguments

(p. 7) When Wilbur and Orville Wright finished their flight at Kitty Hawk, Americans celebrated the brotherly bond. The brothers had grown up playing together, they had been in the newspaper business together, they had built an airplane together. They even said they “thought together.”
These are our images of creativity: filled with harmony. Innovation, we think, is something magical that happens when people find synchrony together. The melodies of Rodgers blend with the lyrics of Hammerstein. It’s why one of the cardinal rules of brainstorming is “withhold criticism.” You want people to build on one another’s ideas, not shoot them down. But that’s not how creativity really happens.
When the Wright brothers said they thought together, what they really meant is that they argued together. One of their pivotal decisions was the design of a propeller for their plane. They squabbled for weeks, often shouting back and forth for hours. “After long arguments we often found ourselves in the ludicrous position of each having been converted to the other’s side,” Orville reflected, “with no more agreement than when the discussion began.” Only after thoroughly decimating each other’s arguments did it dawn on them that they were both wrong. They needed not one but two propellers, which could be spun in opposite directions to create a kind of rotating wing. “I don’t think they really got mad,” their mechanic marveled, “but they sure got awfully hot.”
. . .
Wilbur and Orville Wright came from a wobbly family. Their father, a preacher, never met a moral fight he wasn’t willing to pick. They watched him clash with school authorities who weren’t fond of his decision to let his kids miss a half-day of school from time to time to learn on their own. Their father believed so much in embracing arguments that despite being a bishop in the local church, he had multiple books by atheists in his library — and encouraged his children to read them.
. . .
The Wright brothers weren’t alone. The Beatles fought over instruments and lyrics and melodies. Elizabeth Cady Stanton and Susan B. Anthony clashed over the right way to win the right to vote. Steve Jobs and Steve Wozniak argued incessantly while designing the first Apple computer. None of these people succeeded in spite of the drama — they flourished because of it. Brainstorming groups generate 16 percent more ideas when the members are encouraged to criticize one another. The most creative ideas in Chinese technology companies and the best decisions in American hospitals come from teams that have real disagreements early on. Breakthrough labs in microbiology aren’t full of enthusiastic collaborators cheering one another on but of skeptical scientists challenging one another’s interpretations.
If no one ever argues, you’re not likely to give up on old ways of doing things, let alone try new ones. Disagreement is the antidote to groupthink. We’re at our most imaginative when we’re out of sync. There’s no better time than childhood to learn how to dish it out — and to take it.

For the full commentary, see:
Grant, Adam. “Kids, Would You Please Start Fighting?” The New York Times, SundayReview Section (Sun., NOV. 5, 2017): 7.
(Note: ellipses added.)
(Note: the online version of the commentary has the date NOV. 4, 2017.)