Courageous Grover Cleveland Belongs in “Entitlement Reform Hall of Fame”

(p. A11) Mr. Cogan has just written a riveting, massive book, “The High Cost of Good Intentions,” on the history of entitlements in the U.S., and he describes how in 1972 the Senate “attached an across-the-board, permanent increase of 20% in Social Security benefits to a must-pass bill” on the debt ceiling. President Nixon grumbled loudly but signed it into law. In October, a month before his re-election, “Nixon reversed course and availed himself of an opportunity to take credit for the increase,” Mr. Cogan says. “When checks went out to some 28 million recipients, they were accompanied by a letter that said that the increase was ‘signed into law by President Richard Nixon.’ ”
The Nixon episode shows, says Mr. Cogan, that entitlements have been the main cause of America’s rising national debt since the early 1970s. Mr. Trump’s pact with the Democrats is part of a pattern: “The debt ceiling has to be raised this year because elected representatives have again failed to take action to control entitlement spending.”
. . .
Mr. Cogan conceived the book about four years ago when, as part of his research into 19th-century spending patterns, he “saw this remarkable phenomenon of the growth in Civil War pensions. By the 1890s, 30 years after it had ended, pensions from the war accounted for 40% of all federal government spending.” About a million people were getting Civil War pensions, he found, compared with 8,000 in 1873, eight years after the war. Mr. Cogan wondered what caused that “extraordinary growth” and whether it was unique.
When he went back to the stacks to look at pensions from the Revolutionary War, he saw “exactly the same pattern.” It dawned on him, he says, that this matched “the evolutionary pattern of modern entitlements, such as Social Security, Medicare, Medicaid, food stamps.”
. . .
Who would feature in an Entitlement Reform Hall of Fame? Mr. Cogan’s blue eyes shine contentedly at this question, as he utters the two words he seems to love most: Grover Cleveland. “He was the very first president to take on an entitlement. He objected to the large Civil War program and thought it needed to be reformed.” Cleveland was largely unsuccessful, but was a “remarkably courageous president.” In his time, Congress had started passing private relief bills, giving out individual pensions “on a grand scale. They’d take 100 or 200 of these bills on a Friday afternoon and pass them with a single vote. Incredibly, 55% of all bills introduced in the Senate in its 1885 to 1887 session were such private pension bills.”

For the full interview, see:
Tunku Varadarajan. “THE WEEKEND INTERVIEW with John F. Cogan; Why Entitlements Keep Growing, and Growing, and . . ..” The Wall Street Journal (Sat., Sept. 9, 2017): A11.
(Note: ellipsis in title, in original; other ellipses added.)
(Note: the online version of the interview has the date Sept. 8, 2017, and has the title “THE WEEKEND INTERVIEW; Why Entitlements Keep Growing, and Growing, and . . ..”)

The Cogan book, mentioned above, is:
Cogan, John F. The High Cost of Good Intentions: A History of U.S. Federal Entitlement Programs. Stanford, CA: Stanford University Press, 2017.

“We Liberals” Oppose Diversity of Ideas

(p. 11) We liberals are adept at pointing out the hypocrisies of Trump, but we should also address our own hypocrisy in terrain we govern, such as most universities: Too often, we embrace diversity of all kinds except for ideological. Repeated studies have found that about 10 percent of professors in the social sciences or the humanities are Republicans.
We champion tolerance, except for conservatives and evangelical Christians. We want to be inclusive of people who don’t look like us — so long as they think like us.
I fear that liberal outrage at Trump’s presidency will exacerbate the problem of liberal echo chambers, by creating a more hostile environment for conservatives and evangelicals. Already, the lack of ideological diversity on campuses is a disservice to the students and to liberalism itself, with liberalism collapsing on some campuses into self-parody.
. . .
Whatever our politics, inhabiting a bubble makes us more shrill. Cass Sunstein, a Harvard professor, conducted a fascinating study of how groupthink shapes federal judges when they are randomly assigned to three-judge panels.
When liberal judges happened to be temporarily put on a panel with other liberals, they usually swung leftward. Conversely, conservative judges usually moved rightward when randomly grouped with other conservatives.
It’s the judicial equivalent of a mob mentality. And if this happens to judges, imagine what happens to you and me.
Sunstein, a liberal and a Democrat who worked in the Obama administration, concluded that the best judicial decisions arose from divided panels, where judges had to confront counterarguments.
Yet universities are often the equivalent of three-judge liberal panels, and the traditional Democratic dominance has greatly increased since the mid-1990s — apparently because of a combination of discrimination and self-selection. Half of academics in some fields said in a survey that they would discriminate in hiring decisions against an evangelical.
The weakest argument against intellectual diversity is that conservatives or evangelicals have nothing to add to the conversation. “The idea that conservative ideas are dumb is so preposterous that you have to live in an echo chamber to think of it,” Sunstein told me.

For the full commentary, see:
Kristof, Nicholas. “The Dangers of Echo Chambers on Campus.” The New York Times, SundayReview Section (Sun., Dec. 11, 2016): 11.
(Note: ellipsis added.)
(Note: the online version of the commentary has the date Dec. 10, 2016.)

Cass Sunstein’s research on the effect of political orientation on federal judges’ decisions, mentioned above, was most fully reported in:
Sunstein, Cass R., David Schkade, Lisa M. Ellman, and Andres Sawicki. Are Judges Political?: An Empirical Analysis of the Federal Judiciary. Washington, D.C.: Brookings Institution Press, 2006.

“Bankruptcies and Losses Concentrate the Mind on Prudent Behavior”

(p. A18) Allan H. Meltzer, an influential conservative economist who strongly opposed government bailouts and was credited with coining the anti-bailout slogan, “Capitalism without failure is like religion without sin,” died on Monday in Pittsburgh. He was 89.
. . .
In books like “Why Capitalism?” (2012), Dr. Meltzer promoted the view that countries and investors should suffer the consequences of their mistakes, whether flawed fiscal measures or bad lending decisions.
In coining the slogan “Capitalism without failure is like religion without sin,” he added another maxim: “Bankruptcies and losses concentrate the mind on prudent behavior.”
. . .
In recent years Mr. Meltzer found a new interest in law and regulation. He and other scholars were working on a book, “Regulation and the Rule of Law.”

For the full obituary, see:
Zach Wichter. “Allan H. Meltzer, Economist Averse to Bailouts, Dies at 89.” The New York Times (Sat., May 13, 2017): A18.
(Note: ellipses added.)
(Note: the online version of the obituary has the date May 12, 2017, and has the title “Allan H. Meltzer, Conservative Economist, Dies at 89.”)

Meltzer’s book on capitalism, mentioned above, is:
Meltzer, Allan H. Why Capitalism? New York: Oxford University Press, 2012.

“Many of Our Worst Behaviors Are in Retreat”

(p. A19) Mr. Sapolsky is one of those very few eminent scientists who are also eminent–or even coherent–when writing for the general public.
. . .
The author’s comprehensive approach integrates controlled laboratory investigation with naturalistic observations and study. To his immense credit, he doesn’t omit cultural norms, social learning, the role of peer pressure or historical tradition. He also has a delightfully self-deprecating sense of humor. Introducing a chapter titled “War and Peace,” he summarizes the chapter’s goals as: (a) to demonstrate that “many of our worst behaviors are in retreat, our best ones ascendant”; (b) to examine “ways to improve this further”; (c) to derive “emotional support for this venture”; and (d), “finally, to see if I can actually get away with calling this chapter ‘War and Peace.’ ” Earlier, after an especially abstruse sentence, he adds a footnote: “I have no idea what it is that I just wrote.”
. . .
It’s no exaggeration to say that “Behave” is one of the best nonfiction books I’ve ever read.

For the full review, see:
David P. Barash. “BOOKSHELF; How the Brain Makes Us Do It; Biology can explain but not excuse our worst behavior; Testosterone may drive a vicious warlord, but social triggers shape his actions.” The Wall Street Journal (Tues., May 2, 2017): A19.
(Note: ellipses added.)
(Note: the online version of the review has the date May 1, 2017.)

The book under review is:
Sapolsky, Robert M. Behave: The Biology of Humans at Our Best and Worst. New York: Penguin Press, 2017.

Inventor of Submarine “Was Shunted Aside”

(p. C6) There are very few wars in history that begin, dramatically, with a brand-new weapon displaying its transformative power, but one such case occurred in the southern North Sea in September 1914, when three large cruisers of the Royal Navy were torpedoed and swiftly sunk by a diminutive German U-boat, the U-9. At that moment, the age of the attack submarine was born, and the struggle for naval supremacy for a great part of both World War I and World War II was defined. The U-boat–shorthand for “Unterseeboot”–had come of age.
It is appropriate, then, that the historian Lawrence Goldstone begins “Going Deep” with a dramatic re-telling of the U-9’s exploit. It should be said immediately that his chronicle doesn’t present the whole history of submarine warfare but rather the story of the efforts of various American inventors and entrepreneurs–above all, an Irish-born engineer named John Philip Holland–to create a power-driven, human-directed and sub-marine vessel that could stalk and then, with its torpedoes, obliterate even the most powerful of surface warships.
. . .
“Going Deep” ends in 1914. By that time, the U.S. Navy was on its way to possessing some submarines–vessels equipped with torpedoes that were therefore capable, in theory, of sinking an enemy’s warships or his merchant marine, although in fact these boats were aimed at only coastal defense. And by 1914 American industry could boast of a nascent submarine-building capacity, especially in the form of the Electric Boat Co., which was to survive the capriciousness of the Navy Department’s “on-off” love affair with the submarine until World War II finally proved its undoubted power.
But these successes, limited though they were, were not John Philip Holland’s. He had played a major role–really, the greatest role–in developing the early submarine, grasping that it could transform naval warfare. He had grappled with and overcome most of the daunting technological obstacles in the way of making his vision a reality. Mr. Goldstone is surely right to give him such prominence. But eventually Holland was shunted aside by more ruthless entrepreneurs, diddled by business partners and denied Navy contracts. He passed away on Aug. 12, 1914, just as World War I was beginning. By then, feeling beaten and having retired, he was a quiet churchman and amateur historian. This part of Mr. Goldstone’s story is not a happy one.

For the full review, see:
Kennedy, Paul. “A Man Down Below; How an Irish-American engineer developed a Jules Verne-like wonder-weapon of the deep.” The Wall Street Journal (Sat., June 17, 2017): C6.
(Note: ellipsis added.)
(Note: the online version of the review has the date June 16, 2017.)

The book under review is:
Goldstone, Lawrence. Going Deep: John Philip Holland and the Invention of the Attack Submarine. New York: Pegasus Books Ltd., 2017.

“Splendid Tutorial” of Bitcoin, Distributed Ledgers, and Smart Contracts

(p. A13) “The future is already here–it’s just not very evenly distributed.” The aphorism coined by novelist William Gibson explains why Andrew McAfee and Erik Brynjolfsson’s tour of the technologies that are shaping the future of business, “Machine, Platform, Crowd: Harnessing Our Digital Future,” contains sights that are already familiar and others that are not. This is a book for managers whose companies sit well back from the edge and who would like a digestible introduction to technology trends that may not have reached their doorstep–yet.
. . .
In the penultimate chapter, the authors present a splendid tutorial on things that are too new for most civilians to have gained a good understanding of–cryptocurrencies like Bitcoin, distributed ledgers, and smart contracts. The authors present the theoretical possibility that conventional contracts and the human handling of disputes could be rendered obsolete by dense networks of sensors in the physical world and extremely detailed contracts anticipating all contingencies so that machines alone can handle enforcement. But they show that computing power, however much it grows, seems unlikely to replace the human component for dispute resolution.
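The review keeps these ideas at arm’s length, but the core mechanism of a distributed ledger is compact enough to sketch. Below is a minimal, hypothetical Python illustration (mine, not the book’s): an append-only chain in which each block commits to the SHA-256 hash of its predecessor, so that quietly rewriting history becomes detectable.

```python
import hashlib
import json

def hash_block(block):
    """Deterministic SHA-256 digest of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class Ledger:
    """Toy append-only ledger: each block stores the hash of the previous one."""
    def __init__(self):
        self.chain = [{"index": 0, "data": "genesis", "prev": "0" * 64}]

    def append(self, data):
        self.chain.append({"index": len(self.chain), "data": data,
                           "prev": hash_block(self.chain[-1])})

    def verify(self):
        """True only if every block still matches its successor's commitment."""
        return all(self.chain[i]["prev"] == hash_block(self.chain[i - 1])
                   for i in range(1, len(self.chain)))

ledger = Ledger()
ledger.append({"from": "alice", "to": "bob", "amount": 5})
ledger.append({"from": "bob", "to": "carol", "amount": 2})
print(ledger.verify())                   # True
ledger.chain[1]["data"]["amount"] = 500  # tamper with an old entry
print(ledger.verify())                   # False: the next block's commitment breaks
```

A real cryptocurrency adds consensus across many replicated copies of the chain, but this tamper-evidence is the kernel of the “distributed ledger” the authors describe.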

For the full review, see:
Randall Stross. “BOOKSHELF; The Future On Fast Forward; GE used ‘crowdfunding’ to gauge interest in a new ice maker. McDonald’s has begun adding self-service ordering in all its U.S. locations.” The Wall Street Journal (Thurs., July 6, 2017): A13.
(Note: ellipsis added.)
(Note: the online version of the review has the date July 5, 2017.)

The book under review is:
McAfee, Andrew, and Erik Brynjolfsson. Machine, Platform, Crowd: Harnessing Our Digital Future. New York: W. W. Norton & Company, 2017.

Who Was the Breakfast Cereal Innovator?

(p. A15) . . . , it turns out that the turn-of-the-last-century origin and evolution of the cereal industry was a very nasty and unpleasant bit of business, as Howard Markel chronicles in “The Kelloggs: The Battling Brothers of Battle Creek.”
. . .
The Kelloggs (and others) thought that an easily digestible corn cereal might solve all the problems. The birth of breakfast cereal is a tortured tale. Both Kellogg brothers would insist on having made the crucial innovations, as would others, including the most successful copycat, C.W. Post, who moved to Battle Creek to make his new Shredded Wheat. Shredded Wheat became a top seller after John failed to conclude a deal to buy Post’s company and, worse, refused to aggressively sell the Kellogg cereal because he thought it unseemly for a medical doctor, and his increasingly famous sanitarium (“the San”), to sell a commercial product.
Through it all, John’s younger brother, Will–a plump, colorless, diligent numbers man–served as his long-suffering factotum. “The doctor was the San’s showman and carnival barker,” Mr. Markel writes, “while Will kept the place running smoothly and served as a brake to his brother’s tendency to make poor and costly business decisions.” Mr. Markel’s portrayal of the sibling dynamic edges a bit into a Scrooge-and-Cratchit stereotype, though it is amply backed up by anecdotes, such as the many times poor Will was obliged to take dictation while John sat on the toilet.
In 1905, after 25 years of this, Will said “enough.” He made a deal with John to leave the San and start a cereal company of his own, which in time became a global conglomerate.

For the full review, see:
Bryan Burrough. “BOOKSHELF; The Battle of Battle Creek.” The Wall Street Journal (Mon., Aug. 14, 2017): A15.
(Note: ellipses added.)
(Note: the online version of the review has the date Aug. 13, 2017, and has the title “BOOKSHELF; The Birth of a Cereal Empire.”)

The book under review is:
Markel, Howard. The Kelloggs: The Battling Brothers of Battle Creek. New York: Pantheon, 2017.

Inventor Haber and Entrepreneur Bosch Created “an Inflection Point in History”

(p. C7) . . . , Mr. Kean’s narrative of scientific discovery jumps back and forth. The first episode narrated in detail is Fritz Haber and Carl Bosch’s conversion of nitrogen into ammonia, the crucial step in producing artificial fertilizer, which Mr. Kean characterizes as “an inflection point in history” that in the 20th century “transformed the very air into bread.” The process consumes 1% of the global energy supply, producing 175 million tons of ammonia fertilizer a year and generating half the world’s food. Haber and Bosch both won Nobel Prizes but were subsequently tainted by their involvement in developing chlorine gas for the German military.
The book’s middle section turns back the clock to steam power, the technology that launched the Industrial Revolution. James Watt was its master craftsman, though Mr. Kean confesses that, as “a sucker for mechanical simplicity,” he regards Watt’s pioneering engine, with its separate condenser, as “a bunch of crap cobbled together.” A more elegant application of gases was Henry Bessemer’s process for making steel, which used blasts of compressed air to make obsolete the laborious and energy-hungry mixing of liquid cast iron and carbon.

For the full review, see:
Mike Jay. “Adventures in the Atmosphere.” The Wall Street Journal (Sat., July 22, 2017): C7.
(Note: ellipsis added.)
(Note: the online version of the review has the date July 21, 2017.)

The book under review is:
Kean, Sam. Caesar’s Last Breath: Decoding the Secrets of the Air Around Us. New York: Little, Brown and Company, 2017.

“Shannon’s Principles of Redundancy and Error Correction”

(p. C7) There were four essential prophets whose mathematics brought us into the Information Age: Norbert Wiener, John von Neumann, Alan Turing and Claude Shannon. In “A Mind at Play: How Claude Shannon Invented the Information Age,” Jimmy Soni and Rob Goodman make a convincing case for their subtitle while reminding us that Shannon never made this claim himself.
. . .
The only one of the four Information Age pioneers who was also an electrical engineer, Shannon was practical as well as brilliant.
. . .
Wiener’s theory of information, drawing on his own background in thermodynamics, statistical mechanics and the study of random processes, was cloaked in opaque mathematics that was impenetrable to most working engineers.
. . .
“Before Shannon,” Messrs. Soni and Goodman write, “information was a telegram, a photograph, a paragraph, a song. After Shannon, information was entirely abstracted.” He derived explicit formulas for rates of transmission, the capacity of an ideal channel, ability to correct errors and coding efficiency that could be understood by anyone familiar with logarithms to the base 2.
Mathematicians use mathematics to understand things. Engineers use mathematics to build things. Engineers love logarithms as a carpenter loves a familiar tool. The electronic engineers who flooded into civilian life in the aftermath of World War II adopted Shannon’s theory as passionately as they had avoided Wiener’s, bringing us the age of digital machines.
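Two of those base-2 formulas are simple enough to compute directly. As a brief illustration (my own numbers, not the book’s): the entropy of a source is H = −Σ p·log₂(p), and the Shannon–Hartley capacity of a band-limited noisy channel is C = B·log₂(1 + S/N).

```python
import math

def entropy_bits(probs):
    """Source entropy H = -sum(p * log2 p), in bits per symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def capacity_bps(bandwidth_hz, snr):
    """Shannon-Hartley capacity C = B * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1 + snr)

print(entropy_bits([0.5, 0.5]))   # 1.0 bit: a fair coin is maximally informative
print(entropy_bits([0.9, 0.1]))   # ~0.47 bits: a biased source carries less news
print(capacity_bps(3000, 1000))   # ~29,900 bps: a 3 kHz line at 30 dB SNR
```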
. . .
Despite the progress of technology, we still have no clear understanding of how memories are stored in our own brains: Shannon’s principles of redundancy and error correction are no doubt involved in preserving memory, but how does the process work and why does it sometimes fail? Shannon died of Alzheimer’s disease in February 2001. The mind that gave us the collective memory we now so depend on had its own memory taken away.

For the full review, see:
George Dyson. “The Elegance of Ones and Zeroes.” The Wall Street Journal (Sat., July 22, 2017): C7.
(Note: ellipses added.)
(Note: the online version of the review has the date July 21, 2017.)

The book under review is:
Soni, Jimmy, and Rob Goodman. A Mind at Play: How Claude Shannon Invented the Information Age. New York: Simon & Schuster, 2017.

Employment Grows as Productivity Rises

(p. C3) In a recent paper prepared for a European Central Bank conference, the economists David Autor of MIT and Anna Salomons of Utrecht University looked at data for 19 countries from 1970 to 2007. While acknowledging that advances in technology may hurt employment in some industries, they concluded that “country-level employment generally grows as aggregate productivity rises.”
The historical record provides strong support for this view. After all, despite centuries of progress in automation and recurrent warnings of a jobless future, total employment has continued to increase relentlessly, even with bumps along the way.
More remarkable is the fact that today’s most dire projections of jobs lost to automation fall short of historical norms. A recent analysis by Robert Atkinson and John Wu of the Information Technology & Innovation Foundation quantified the rate of job destruction (and creation) in each decade since 1850, based on census data. They found that an incredible 57% of the jobs that workers did in 1960 no longer exist today (adjusted for the size of the workforce).
Workers suffering some of the largest losses included office clerks, secretaries and telephone operators. They found similar levels of displacement in the decades after the introduction of railroads and the automobile. Who is old enough to remember bowling alley pin-setters? Elevator operators? Gas jockeys? When was the last time you heard a manager say, “Take a memo”?
. . .
. . . , if artificial intelligence is getting so smart that it can recognize cats, drive cars, beat world-champion Go players, identify cancerous lesions and translate from one language to another, won’t it soon be capable of doing just about anything a person can?
Not by a long shot. What all of these tasks have in common is that they involve finding subtle patterns in very large collections of data, a process that goes by the name of machine learning.
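To make that concrete, here is a toy sketch of my own, not from the commentary: “finding subtle patterns” often reduces to geometry over data. The nearest-centroid classifier below “learns” two clusters of points and labels new ones by proximity, with no understanding of what the labels mean.

```python
import math

def centroid(points):
    """Component-wise mean of a list of feature vectors."""
    return [sum(xs) / len(points) for xs in zip(*points)]

def classify(x, centroids):
    """Assign x the label of the nearest class mean (Euclidean distance)."""
    return min(centroids, key=lambda label: math.dist(x, centroids[label]))

# Two made-up clusters of 2-D feature vectors -- just bundles of numbers.
cats = [[1.0, 1.2], [0.8, 1.0], [1.1, 0.9]]
dogs = [[3.0, 3.1], [2.9, 3.3], [3.2, 2.8]]
centroids = {"cat": centroid(cats), "dog": centroid(dogs)}

print(classify([1.0, 1.1], centroids))  # "cat"
print(classify([3.1, 3.0], centroids))  # "dog" -- pure geometry, no "knowing"
```

Scale the same idea up to millions of images and far richer features and you get face filters and cat detectors; the mechanism remains statistical pattern-matching, which is the author’s point.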
. . .
But it is misleading to characterize all of this as some extraordinary leap toward duplicating human intelligence. The selfie app in your phone that places bunny ears on your head doesn’t “know” anything about you. For its purposes, your meticulously posed image is just a bundle of bits to be strained through an algorithm that determines where to place Snapchat face filters. These programs present no more of a threat to human primacy than did automatic looms, phonographs and calculators, all of which were greeted with astonishment and trepidation by the workers they replaced when first introduced.
. . .
The irony of the coming wave of artificial intelligence is that it may herald a golden age of personal service. If history is a guide, this remarkable technology won’t spell the end of work as we know it. Instead, artificial intelligence will change the way that we live and work, improving our standard of living while shuffling jobs from one category to another in the familiar capitalist cycle of creation and destruction.

For the full commentary, see:
Kaplan, Jerry. “Don’t Fear the Robots.” The Wall Street Journal (Sat., June 22, 2017): C3.
(Note: ellipses added.)
(Note: the online version of the commentary has the date June 21, 2017.)

The David Autor paper, mentioned above, is:
Autor, David, and Anna Salomons. “Does Productivity Growth Threaten Employment?” Working Paper. (June 19, 2017).

The Atkinson and Wu report, mentioned above, is:
Atkinson, Robert D., and John Wu. “False Alarmism: Technological Disruption and the U.S. Labor Market, 1850-2015.” Information Technology & Innovation Foundation (May 8, 2017).

The author’s earlier book, somewhat related to his commentary quoted above, is:
Kaplan, Jerry. Artificial Intelligence: What Everyone Needs to Know. New York: Oxford University Press, 2016.