Philosopher Argued Artificial Intelligence Would Never Reach Human Intelligence

(p. A28) Professor Dreyfus became interested in artificial intelligence in the late 1950s, when he began teaching at the Massachusetts Institute of Technology. He often rubbed shoulders with scientists trying to turn computers into reasoning machines.
. . .
Inevitably, he said, artificial intelligence ran up against something called the common-knowledge problem: the vast repository of facts and information that ordinary people possess as though by inheritance, and can draw on to make inferences and navigate their way through the world.
“Current claims and hopes for progress in models for making computers intelligent are like the belief that someone climbing a tree is making progress toward reaching the moon,” he wrote in “Mind Over Machine: The Power of Human Intuition and Expertise in the Era of the Computer” (1985), a book he collaborated on with his younger brother Stuart, a professor of industrial engineering at Berkeley.
His criticisms were greeted with intense hostility in the world of artificial intelligence researchers, who remained confident that success lay within reach as computers grew more powerful.
When that did not happen, Professor Dreyfus found himself vindicated, doubly so when research in the field began incorporating his arguments, expanded upon in a second edition of “What Computers Can’t Do” in 1979 and “What Computers Still Can’t Do” in 1992.
. . .
For his 2006 book “Philosophy: The Latest Answers to the Oldest Questions,” Nicholas Fearn broached the topic of artificial intelligence in an interview with Professor Dreyfus, who told him: “I don’t think about computers anymore. I figure I won and it’s over: They’ve given up.”

For the full obituary, see:
WILLIAM GRIMES. “Hubert L. Dreyfus, Who Put Computing In Its Place, Dies at 87.” The New York Times (Wednesday, May 3, 2017): A28.
(Note: ellipses added.)
(Note: the online version of the obituary has the date MAY 2, 2017, and has the title “Hubert L. Dreyfus, Philosopher of the Limits of Computers, Dies at 87.”)

Dreyfus’s last book on the limits of artificial intelligence was:
Dreyfus, Hubert L. What Computers Still Can’t Do: A Critique of Artificial Reason. Cambridge, MA: The MIT Press, 1992.

Happiness “Emerges from the Pursuit of Purpose”

(p. C7) The modern positive-psychology movement – . . . – is a blend of wise goals, good studies, surprising discoveries, old truths and overblown promises. Daniel Horowitz’s history deftly reveals the eternal lessons that underlie all its incarnations: Money can’t buy happiness; human beings need social bonds, satisfying work and strong communities; a life based entirely on the pursuit of pleasure ultimately becomes pleasureless. As Viktor Frankl told us, “Happiness cannot be pursued; it must ensue. One must have a reason to ‘be happy.’ ” That reason, he said, emerges from the pursuit of purpose.

For the full review, see:
Carol Tavris. “How Smiles Were Packaged and Sold.” The Wall Street Journal (Saturday, March 31, 2018): C5 & C7.
(Note: ellipsis added.)
(Note: the online version of the review has the date March 29, 2018, and has the title “‘Happier?’ and ‘The Hope Circuit’ Reviews: How Smiles Were Packaged and Sold.”)

The book under review is:
Horowitz, Daniel. Happier?: The History of a Cultural Movement That Aspired to Transform America. New York: Oxford University Press, 2017.

“A Litigious, Protective Culture Has Gone Too Far”

(p. A1) SHOEBURYNESS, England — Educators in Britain, after decades spent in a collective effort to minimize risk, are now, cautiously, getting into the business of providing it.
. . .
Limited risks are increasingly cast by experts as an experience essential to childhood development, useful in building resilience and grit.
Outside the Princess Diana Playground in Kensington Gardens in London, which attracts more than a million visitors a year, a placard informs parents that risks have been “intentionally provided, so that your child can develop an appreciation of risk in a controlled play environment rather than taking similar risks in an uncontrolled and unregulated wider world.”
This view is tinged with nostalgia for an earlier Britain, in which children were tougher and more self-reliant. It resonates both with right-wing tabloids, which see it as a corrective to the cosseting of a liberal nanny state, and with progressives, drawn to a freer and more natural childhood.
. . .
(p. A12) Britain is one of a number of countries where educators and regulators say a litigious, protective culture has gone too far, leaching healthy risks out of childhood. Guidelines on play from the government agency that oversees health and safety issues in Britain state that “the goal is not to eliminate risk.”

For the full story, see:
ELLEN BARRY. “In Britain, Learning to Accept Risk, and the Occasional ‘Owie’.” The New York Times, First Section (Sunday, March 11, 2018): A1 & A12.
(Note: ellipses added.)
(Note: the online version of the story has the date MARCH 10, 2018, and has the title “In Britain’s Playgrounds, ‘Bringing in Risk’ to Build Resilience.”)

Brain as Computer “Is a Bad Metaphor”

(p. A13) In “The Biological Mind: How Brain, Body, and Environment Collaborate to Make Us Who We Are,” Mr. Jasanoff, the director of the MIT Center for Neurobiological Engineering, presents a lucid primer on current brain science that takes the form of a passionate warning about its limitations. He argues that the age of popular neurohype has persuaded many of us to identify completely with our brains and to misunderstand the true nature of these marvelous organs.
We hear constantly, for example, that the brain is a computer. This is a bad metaphor, Mr. Jasanoff insists. Computers run on electricity, so we concentrate on the electrical activity within the brain; yet there is also chemical and hormonal signaling, for which there are no good computing analogies.

For the full review, see:
Steven Poole. “BOOKSHELF; Identify Your Self.” The Wall Street Journal (Friday, April 6, 2018): A13.
(Note: the online version of the review has the date April 5, 2018, and has the title “BOOKSHELF; ‘The Biological Mind’ Review: Identify Your Self.”)

The book under review is:
Jasanoff, Alan. The Biological Mind: How Brain, Body, and Environment Collaborate to Make Us Who We Are. New York: Basic Books, 2018.

Labor-Intensive Tinkering Can Advance Science

(p. A24) When John E. Sulston was 5 years old and growing up in Britain, the son of an Anglican priest, his parents sent him to a private school. There, he discovered, sports were his nemesis.
“I absolutely loathed games,” he said. “I was hopeless.”
When it came to schoolwork, he said, he was “not a books person.”
He had only one consuming interest: science. He liked to tinker, to figure out how things were put together.
. . .
The Nobel he received, shared with two other scientists, recognized the good data he amassed in his work on the tiny transparent roundworm C. elegans in an effort to better understand how organisms develop.
. . .
At the time, it was widely believed that the 558 cells the worm had when it hatched were all it would ever have. But Dr. Sulston noticed that, in fact, the worm kept gaining cells as it developed. And by tracing the patterns of divisions that gave rise to those new cells, he found, surprisingly, that the worm also lost cells in a predictable way. Certain cells were destined to die at a specific time, digesting their own DNA.
Dr. Sulston’s next major project was to trace the fate of every single cell in a worm. It was a task so demanding and labor-intensive that other scientists still shake their heads in amazement that he got it done.
Each day, bending over his microscope for eight or more hours, he would start with a worm embryo and choose one of its cells. He would then watch the cell as it divided and follow each of its progeny cells as, together, they grew and formed the organism. This went on for a total of 18 months.
In the end, he had a complete map of every one of the worm’s 959 cells (not counting sperm and egg cells).

For the full obituary, see:
GINA KOLATA. “John Sulston, 75; Tiny Worm Guided Him to Nobel.” The New York Times (Friday, March 16, 2018): A24.
(Note: ellipses added.)
(Note: the online version of the obituary has the date MARCH 15, 2018, and has the title “John E. Sulston, 75, Dies; Found Clues to Genes in a Worm.”)

Individualistic Cultures Foster Innovation

Source of graph: online version of the WSJ commentary quoted and cited below.

(p. B1) Luther matters to investors not because of the religion he founded, but because of the cultural impact of challenging the Catholic Church’s grip on society. By ushering in what Edmund Phelps, the Nobel-winning director of Columbia University’s Center on Capitalism and Society, calls “the age of the individual,” Luther laid the groundwork for capitalism.
. . .
(p. B10) Mr. Phelps and collaborators Saifedean Ammous, Raicho Bojilov and Gylfi Zoega show that even in recent years, countries with more individualistic cultures have more innovative economies. They demonstrate a strong link between countries that surveys show to be more individualistic, and total factor productivity, a proxy for innovation that measures growth due to more efficient use of labor and capital. Less individualistic cultures, such as France, Spain and Japan, showed little innovation while the individualistic U.S. led.
As Mr. Bojilov points out, correlation doesn’t prove causation, so they looked at the effects of country of origin on the success of second-, third- and fourth-generation Americans as entrepreneurs. The effects turn out to be significant but leave room for debate about how important individualistic attitudes are to financial and economic success.

For the full commentary, see:
James Mackintosh. “STREETWISE; What Martin Luther Says About Capitalism.” The Wall Street Journal (Friday, Nov. 3, 2017): B1 & B10.
(Note: ellipsis added.)
(Note: the online version of the commentary has the date Nov. 2, 2017, and has the title “STREETWISE; What 500 Years of Protestantism Teaches Us About Capitalism’s Future.” Where there are minor differences in wording in the two versions, the passages quoted above follow the online version.)

Macron Gives France Hope That “Tomorrow Can Be Better Than Today”

(p. A27) PARIS — When people used to ask me what I missed about America, I would say, “The optimism.” I grew up in the land of hope, then moved to one whose catchphrases are “It’s not possible” and “Hell is other people.” I walked around Paris feeling conspicuously chipper.
But lately I’ve had a kind of emotional whiplash. France is starting to seem like an upbeat, can-do country, while Americans are less sure that everything will be O.K.
. . .
The French haven’t become magically cheerful, but there’s a creeping sense that hope isn’t idiotic, and life can actually improve. As is common with a new president, there was a jump in optimism after Emmanuel Macron was elected last year. But this time, optimism has remained strong, and in January it hit an eight-year high.
It helps that France’s economy is finally growing more and that Mr. Macron has made good on promises ranging from overhauling the labor laws to shrinking class sizes at kindergartens in disadvantaged areas.
. . .
“The France of the optimists has won, and is dragging the other part of France toward its own side,” said Claudia Senik, an economist who heads the Well-Being Observatory, an academic think tank here.
The French are even taking an intellectual interest in this alien idea. There are optimism clubs, conferences and school programs, scholars of positivity and books like “50+1 Good Reasons to Choose Optimism.” In September Mr. Macron was a patron of the Global Positive Forum, a study group of “positive initiatives” in business and government. (“Tomorrow can be better than today,” the forum’s website insists.)

For the full commentary, see:
Pamela Druckerman. “The New French Optimism.” The New York Times (Friday, March 23, 2018): A27.
(Note: ellipses added.)
(Note: the online version of the commentary has the date March 22, 2018, and has the title “Are the French the New Optimists?”)

Patients Lower Blood Pressure Best When It Is Self-Monitored

(p. D4) The most effective way to monitor blood pressure may be to do it yourself.
British researchers randomly assigned 1,003 patients with hypertension to one of three groups.
. . .
The study was published in The Lancet.
“People who monitor their own blood pressure and share the readings with their physician get better control,” said the lead author, Dr. Richard J. McManus, a professor of primary care at the University of Oxford. “Seventy-five million Americans have hypertension. If a good proportion of those self-monitored, it would lead to a big reduction in stroke.”

For the full story, see:
NICHOLAS BAKALAR. “The Best Way to Monitor Your Blood Pressure? Do It Yourself.” The New York Times (Tuesday, March 13, 2018): D4.
(Note: ellipsis added.)
(Note: the online version of the story has the date MARCH 6 [sic], 2018, and has the title “The Best Way to Monitor Your Blood Pressure? Do It Yourself.”)

The Lancet study summarized above is:
McManus, Richard J., Jonathan Mant, Marloes Franssen, Alecia Nickless, Claire Schwartz, James Hodgkinson, Peter Bradburn, Andrew Farmer, Sabrina Grant, Sheila M. Greenfield, Carl Heneghan, Susan Jowett, Una Martin, Siobhan Milner, Mark Monahan, Sam Mort, Emma Ogburn, Rafael Perera-Salazar, Syed Ahmar Shah, Ly-Mee Yu, Lionel Tarassenko, F. D. Richard Hobbs, Brendan Bradley, Chris Lovekin, David Judge, Luis Castello, Maureen Dawson, Rebecca Brice, Bethany Dunbabin, Sophie Maslen, Heather Rutter, Mary Norris, Lauren French, Michael Loynd, Pippa Whitbread, Luisa Saldana Ortaga, Irene Noel, Karen Madronal, Julie Timmins, Peter Bradburn, Lucy Hughes, Beth Hinks, Sheila Bailey, Sue Read, Andrea Weston, Somi Spannuth, Sue Maiden, Makiko Chermahini, Ann McDonald, Shelina Rajan, Sue Allen, Brenda Deboys, Kim Fell, Jenny Johnson, Helen Jung, Rachel Lister, Ruth Osborne, Amy Secker, Irene Qasim, Kirsty William, Abi Harris, Susan Zhao, Elaine Butcher, Pauline Darbyshire, Sarah Joshi, Jon Davies, Claire Talbot, Eleanor Hoverd, Linda Field, Tracey Adcock, Julia Rooney, Nina Cooter, Aaron Butler, Naomi Allen, Maria Abdul-Wahab, Kathryn McNicholas, Lara Peniket, Kate Dodd, Julie Mugurza, Richard Baskerville, Rakshan Syed, Clare Bailey, Jill Adams, Paul Uglow, Neil Townsend, Alison Macleod, Charlotte Hawkins, Suparna Behura, Jonathan Crawshaw, Robin Fox, Waleed Doski, Martin Aylward, Christine A’Court, David Rapley, Jo Walsh, Paul Batra, Ana Seoane, Sluti Mukherjee, Jonathan Dixon, Peter Arthur, Karen Sutcliffe, Costas Paschallides, Richard Woof, Peter Winfrey, Matthew Clark, Roya Kamali, Paul Thomas, David Ebbs, Liz Mather, Andre Beattie, Karim Ladha, Larisa Smondulak, Surinder Jemahl, Peter Hickson, Liam Stevens, Tony Crockett, David Shukla, Ian Binnian, Paul Vinson, Nigel DeKare-Silver, Ramila Patel, Ivor Singh, Louise Lumley, Glennis Williams, Mark Webb, Jack Bambrough, Neetul Shah, Hergeven Dosanjh, Frank Spannuth, Carolyn Paul, Jude Ganesegaram, Laurie Pike, Vijaysundari Maheswaran, Farah Paruk, Stephen Ford, Vineeta Verma, Kate Milne, Farhana Lockhat, Jennifer Ferguson, Anne-Marie Quirk, Hugo Wilson, David Copping, Sam Bajallan, Simria Tanvir, Faheem Khan, Tom Alderson, Amar Ali, Richard Young, Umesh Chauhan, Lindsey Crockett, Louise McGovern, Claire Cubitt, Simon Weatherill, Abdul Tabassum, Philip Saunders, Naresh Chauhan, Samantha Johnson, Jo Walsh, Inderjit Marok, Rajiv Sharma, William Lumb, John Tweedale, Ian Smith, Lawrence Miller, Tanveer Ahmed, Mark Sanderson, Claire Jones, Peter Stokell, Matthew J. Edwards, Andrew Askey, Jason Spencer, Kathryn Morgan, Kyle Knox, Robert Baker, Crispin Fisher, Rachel Halstead, Neil Modha, David Buckley, Catherine Stokell, John Gerald McCabe, Jennifer Taylor, Helen Nutbeam, Richard Smith, Christopher MacGregor, Sam Davies, Mark Lindsey, Simon Cartwright, Jonathan Whittle, Julie Colclough, Alison Crumbie, Nicholas Thomas, Vattakkatt Premchand, Rafia Hamid, Zishan Ali, John Ward, Philip Pinney, Stephen Thurston, and Tina Banerjee. “Efficacy of Self-Monitored Blood Pressure, with or without Telemonitoring, for Titration of Antihypertensive Medication (TASMINH4): An Unmasked Randomised Controlled Trial.” The Lancet 391, no. 10124 (March 10, 2018): 949-59.

“Octopuses Try Hard to Escape from Captivity”

(p. A23) I can’t stop telling people about the factoids I learned from Amia Srinivasan’s book review essay “The Sucker, the Sucker!” in The London Review of Books about the personality of octopuses. An octopus’s arms have more neurons than its brain, so each arm can taste and smell on its own and exhibit short-term memory. An octopus can change color to mimic other animals, but it cannot itself see color. So how does it know which color to change into? Good question.
Octopuses are curious but sometimes ornery. When researchers tried to train an octopus to pull a lever to get food, the octopus kept breaking off the lever. Octopuses try hard to escape from captivity, waiting for those moments when they aren’t being watched. One octopus persistently shot jets of water at the nearby aquarium light bulbs, repeatedly short-circuiting the electricity supply until it was finally released into the wild.

For the full commentary, see:
David Brooks. “The Sidney Awards, Part I.” The New York Times (Tuesday, Dec. 26, 2017): A23.
(Note: the online version of the commentary has the date Dec. 25, 2017, and has the title “The 2017 Sidney Awards, Part I.” The online version says that the New York edition of the print version of the commentary appeared on Dec. 25, 2017 on p. A25. It appeared on Dec. 26 on p. A23 of my National edition.)

“Overblown” Worries that A.I. Will Make Humans Obsolete

(p. B3) SAN FRANCISCO — Apple has hired Google’s chief of search and artificial intelligence, John Giannandrea, a major coup in its bid to catch up to the artificial intelligence technology of its rivals.
. . .
Mr. Giannandrea, a 53-year-old native of Scotland known to colleagues as J.G., helped lead the push to integrate A.I. throughout Google’s products, including internet search, Gmail and its own digital assistant, Google Assistant.
He joined Google in 2010 when it purchased Metaweb, a start-up where he served as chief technology officer. Metaweb was building what it described as a “database of the world’s knowledge,” which Google eventually rolled into its search engine to deliver direct answers to users’ queries. (Try googling “How old is Steph Curry?”) During Mr. Giannandrea’s tenure, A.I. research became increasingly important inside Google, with its primary A.I. lab, Google Brain, moving into a space beside the chief executive, Sundar Pichai.
. . .
On the debate over whether humanity should be worried about the rapidly accelerating improvements in A.I., Mr. Giannandrea told MIT Technology Review in an interview last year that the concerns were overblown.
“What I object to is this assumption that we will leap to some kind of superintelligent system that will then make humans obsolete,” he said. “I understand why people are concerned about it but I think it’s gotten way too much airtime. I just see no technological basis as to why this is imminent at all.”

For the full story, see:
JACK NICAS and CADE METZ. “Lagging Rivals in A.I., Apple Adds A Top Google Executive to Its Team.” The New York Times (Wednesday, April 4, 2018): B3.
(Note: ellipses added.)
(Note: the online version of the story has the date APRIL 3, 2018, and has the title “Apple Hires Google’s A.I. Chief.”)