For Movies, Film Option Survives Digital Advance

(p. B1) Faced with the possible extinction of the material that made Hollywood famous, a coalition of studios is close to a deal to keep Eastman Kodak Co. in the business of producing movie film.
The negotiations–secret until now–are expected to result in an arrangement where studios promise to buy a set quantity of film for the next several years, even though most movies and television shows these days are shot on digital video.
Kodak’s new chief executive, Jeff Clarke, said the pact will allow his company to forestall the closure of its Rochester, N.Y., film manufacturing plant, a move that had been under serious consideration. Kodak’s motion-picture film sales have plummeted 96% since 2006, from 12.4 billion linear feet to an estimated 449 million this year. With the exit of competitor Fujifilm Corp. last year, Kodak is the only major company left producing motion-picture film.
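The 96% decline squares with the volumes cited. A quick back-of-envelope check (my own sketch, not from the story):

```python
# Sanity check on the cited decline in Kodak's motion-picture film sales.
peak_2006 = 12.4e9       # linear feet of film sold in 2006
estimate_now = 449e6     # estimated linear feet this year

decline = 1 - estimate_now / peak_2006
print(f"Decline since 2006: {decline:.1%}")  # roughly 96%, matching the story
```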
. . .
Film and digital video both “are valid choices, but it would be a tragedy if suddenly directors didn’t have the opportunity to shoot on film,” said Mr. Apatow, director of comedies including “Knocked Up” and “The 40-Year-Old Virgin,” speaking from the New York set of his coming movie “Trainwreck,” which he is shooting on film. “There’s a magic to the grain and the color quality that you get with film.”

For the full story, see:
BEN FRITZ. “Movie Film, at Death’s Door, Gets a Reprieve.” The Wall Street Journal (Weds., July 30, 2014): B1 & B8.
(Note: ellipsis added.)
(Note: the online version of the article was dated July 29, 2014.)

Challenging Videogame Improves Attention and Memory in Seniors

(p. R1) Neuroscientist Adam Gazzaley and his colleagues at the University of California, San Francisco, have found that playing a challenging videogame upgrades our ability to pay attention.
As reported in the journal Nature in 2013, the Gazzaley lab trained 60- to 85-year-old subjects on a game called NeuroRacer. The multitask version involves simulated driving along a winding road while quickly pressing keys or a game controller to respond to a green sign when it appears on the roadside. As a control, some subjects played a single-task version of the game that omits the winding road and involves only noticing and responding to the green sign. To ensure that subjects were genuinely challenged but not discouraged, the level of game difficulty was individualized.
After 12 hours of training spread evenly over a month, multitasking subjects were about twice as efficient at shifting attention as when they started, a huge improvement by any standard. Remarkably, their new scores were comparable to those of 20-year-olds not trained on NeuroRacer. The subjects still tested positive six months later.
The multitaskers also got an unexpected brain bonus. Their sustained concentration and working memory (briefly holding information such as a phone number) improved as well. The training had targeted neither of these functions, but the general benefits emerged nonetheless.

For the full commentary, see:
PATRICIA CHURCHLAND. “MIND AND MATTER; A Senior Moment for Videogames as Brain-Boosters.” The Wall Street Journal (Sat., Oct. 3, 2015): C2.
(Note: the online version of the commentary has the date Sept. 30, 2015, and the title “MIND AND MATTER: Videogames for Seniors Boost Brainpower.”)

The Gazzaley article mentioned above is:
Anguera, J. A., J. Boccanfuso, J. L. Rintoul, O. Al-Hashimi, F. Faraji, J. Janowich, E. Kong, Y. Larraburo, C. Rolle, E. Johnston, and Adam Gazzaley. “Video Game Training Enhances Cognitive Control in Older Adults.” Nature 501, no. 7465 (Sept. 5, 2013): 97-101.

Dogged Dreamers Developed Deadly Dirigibles

(p. C7) “Dirigibility” means the ability to navigate through the air by engine power, unlike balloon flight, which is captive to the wind. Beginning and ending with the Hindenburg vignette, C. Michael Hiam gives in “Dirigible Dreams” a concise but comprehensive history of the airship and its evolution. With style and some flair, Mr. Hiam introduces a cast of dogged visionaries, starting with Alberto Santos-Dumont, a Brazilian whose exploits from 1901 onward usually culminated in our hero dangling from a tree or a high building, shredded gas bags draped around him like a shroud. For all of these pioneers, problems queued up from the outset: Insurance companies, for example, refused to quote a rate for aerial liability. (Try asking your broker today.) And to inflate the craft the engineers were stuck with hydrogen, since non-flammable helium was too scarce and hot air has insufficient lifting force.
. . .
In 1929, British engineers pioneered a giant dirigible–at 133 feet in diameter, Mr. Hiam notes, it was “the largest object ever flown”–powered by six Rolls-Royce Condor engines. But too many died as the still-flimsy crafts plunged to the ground in flames. His Majesty’s secretary of state for air perished in a luxurious airship cabin on the way to visit the king’s subjects in India. One by one, nations gave up their dirigible dreams, especially after 35 souls burned to death on the Hindenburg in Lakehurst, N.J., one of the first transport disasters recorded on film. After that tragedy, commercial passengers never flew in an airship again, and by the start of World War II just two years later “the airship had become entirely extinct.”

For the full review, see:
SARA WHEELER. “Inflated Hopes; Early airship experimenters found that insurance companies refused to quote rates for aerial liability.” The Wall Street Journal (Sat., Oct. 18, 2014): C7.
(Note: ellipsis added.)
(Note: the online version of the review was updated on Oct. 23, 2014.)

The book under review is:
Hiam, C. Michael. Dirigible Dreams: The Age of the Airship. Lebanon, NH: ForeEdge, 2014.

The Cure for Technology Problems Is Better Technology

(p. B1) The real lesson in VW’s scandal — in which the automaker installed “defeat devices” that showed the cars emitting lower emissions in lab tests than they actually did — is not that our cars are stuffed with too much technology. Instead, the lesson is that there isn’t enough tech in vehicles.
In fact, the faster we upgrade our roads and autos with better capabilities to detect and analyze what’s going on in the transportation system, the better we’ll be able to find hackers, cheaters and others looking to create havoc on (p. B11) the highways.

. . .
“What happened at Volkswagen had to do with embedded software that’s buried deep in the car, and only the supplier knows what’s in it — and it’s a black box for everybody else,” said Stefan Heck, the founder of Nauto, a new start-up that is introducing a windshield-mounted camera that monitors road conditions for commercial fleets and consumers. The camera uses artificial intelligence to track traffic conditions; over time, as more vehicles use it, it could provide users with traffic and safety information plus data about mileage and other automotive functions.
The end goal for intelligent-car systems, said Dr. Heck, is to create an on-road network with data that is constantly being analyzed to get a sharper picture of what’s happening on the road. Sure, companies might still be able to cheat. But with enough independent data sources coming from different places on the road, it would become much more difficult.
He said there really isn’t any going back — software in cars is responsible not just for driver comforts like in-dash navigation, but also for critical safety and performance systems, many of which improve the car’s environmental footprint.

For the full commentary, see:
Farhad Manjoo. “STATE OF THE ART; Our Cars Need More Technology.” The New York Times (Thurs., Oct. 1, 2015): B1 & B11.
(Note: ellipsis added.)
(Note: the online version of the commentary has the date SEPT. 30, 2015, and the title “STATE OF THE ART; VW Scandal Shows a Need for More Tech, Not Less.”)

“Bring Prosperity to Billions of People”

(p. B1) If you’re feeling down about the world, the book, “Resource Revolution: How to Capture the Biggest Business Opportunity in a Century,” is an antidote. Mr. Rogers and Mr. Heck outline how emerging advances — among them 3-D printing, autonomous vehicles, modular construction systems and home automation — might in time alter some of the world’s largest industries and (p. B7) bring prosperity to billions of people.
They put forward a rigorous argument bolstered by mountains of data and recent case studies. And once you start looking at Silicon Valley their way, your mind reels at the far-reaching potential of the innovations now spreading through society.

For the full commentary, see:
Farhad Manjoo. “STATE OF THE ART; The Future Could Work, if We Let It.” The New York Times (Thurs., AUG. 28, 2014): B1 & B7.
(Note: the online version of the commentary has the date AUG. 27, 2014.)

The book praised in the commentary is:
Heck, Stefan, and Matt Rogers. Resource Revolution: How to Capture the Biggest Business Opportunity in a Century. New York: Melcher Media, 2014.

FCC Gains Arbitrary Power Over Internet Innovation

(p. A11) Imagine if Steve Jobs, Larry Page or Mark Zuckerberg had been obliged to ask bureaucrats in Washington if it was OK to launch the iPhone, Gmail, or Facebook’s forthcoming Oculus virtual-reality service. Ridiculous, right? Not anymore.
A few days before the Independence Day holiday weekend, the Federal Communications Commission announced what amounts to a system of permission slips for the Internet.
. . .
As the FCC begins to issue guidance and enforcement actions, it’s becoming clearer that critics who feared there would be significant legal uncertainty were right. Under its new “transparency” rule, for example, the agency on June 17 conjured out of thin air an astonishing $100 million fine against AT&T, even though the firm explained its mobile-data plans on its websites and in numerous emails and texts to customers.
The FCC’s new “Internet Conduct Standard,” meanwhile, is no standard at all. It is an undefined catchall for any future behavior the agency doesn’t like.
. . .
From the beginning, Internet pioneers operated in an environment of “permissionless innovation.” FCC Chairman Tom Wheeler now insists that “it makes sense to have somebody watching over their shoulder and ready to jump in if necessary.” But the agency is jumping in to demand that innovators get permission before they offer new services to consumers. The result will be less innovation.

For the full commentary, see:
BRET SWANSON. “Permission Slips for Internet Innovation; The FCC’s new Web rules are already as onerous as feared and favor some business models over others.” The Wall Street Journal (Sat., Aug. 15, 2015): A11.
(Note: ellipses added.)
(Note: the online version of the commentary has the date Aug. 14, 2015.)

“We Embrace New Technology”

(p. 2D) . . . , the first digital images created by the earliest digital cameras “were terrible,” Rockbrook’s Chuck Fortina said. “These were real chunky images made by big, clunky cameras.”
Viewing those results, some retailers dismissed the new digital technology and clung doggedly to film. But Rockbrook Camera began stocking digital cameras alongside models that used film, Fortina said.
“Film sales were great, but we just knew digital was going to take over,” Fortina said. As those cameras and their images improved, the retailer saw a huge opportunity. “Instead of thinking this is going to kill our business, we were thinking people are going to have to buy all new gear,” Fortina said of the switch from analog to digital.
“By 2000, film was over,” he said. Companies that didn’t refocus their business found themselves struggling or forced to close their doors.
Today, Rockbrook Camera is constantly scouring the Internet, attending trade shows and quizzing customers and employees in search of new technologies, Fortina said. “We embrace new technology,” he said.

For the full story, see:
Janice Podsada. “More Ready than Not for Tech Shifts; How 3 Omaha-area businesses altered course and thrived amid technological changes.” Omaha World-Herald (Sun., Sept. 27, 2015): 1D-2D.
(Note: ellipsis added.)
(Note: the online version of the story has the title “How 3 Omaha-area businesses altered course and thrived amid technological changes.”)

John Paul Stapp Thumbed His Nose at the Precautionary Principle

(p. C7) In the early 19th century, a science professor in London named Dionysius Lardner rejected the future of high-speed train travel because, he said, “passengers, unable to breathe, would die of asphyxia.” A contemporary, the famed engineer Thomas Tredgold, agreed, noting “that any general system of conveying passengers . . . [traveling] at a velocity exceeding 10 miles an hour, or thereabouts, is extremely improbable.”
The current land speed for a human being is 763 miles an hour, or thereabouts, thanks in large part to the brilliance, bravery and dedication of a U.S. Air Force lieutenant colonel named John Paul Stapp, a wonderfully iconoclastic medical doctor, innovator and renegade consumer activist who repeatedly put his own life in peril in search of the line beyond which human survival at speed really was “extremely improbable.”
. . .
Initial tests were carried out on a crash-test dummy named Oscar Eightball, then chimpanzees and pigs. There was plenty of trial and error–the term “Murphy’s Law” was coined during the Gee Whiz experiments–until Stapp couldn’t resist strapping himself into the Gee Whiz to experience firsthand what the cold data could never reveal: what it felt like. On May 5, 1948, for example, he “took a peak deceleration of an astounding twenty-four times the force of gravity,” the author writes. “This was the equivalent of a full stop from 75 miles per hour in just seven feet or, in other words, freeway speed to zero in the length of a very tall man.”
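The equivalence quoted there can be checked with basic kinematics. A back-of-envelope sketch of my own (assuming constant deceleration, which is not exactly how a sled stops): braking from 75 miles per hour over seven feet works out to roughly 27 times the force of gravity, in the same range as the 24 g peak reported.

```python
# Back-of-envelope check: stopping from 75 mph in 7 feet,
# assuming constant deceleration (a = v^2 / 2d).
MPH_TO_MS = 0.44704   # miles per hour to meters per second
FT_TO_M = 0.3048      # feet to meters
G = 9.80665           # standard gravity, m/s^2

v = 75 * MPH_TO_MS    # initial speed, m/s
d = 7 * FT_TO_M       # stopping distance, m
a = v**2 / (2 * d)    # deceleration, m/s^2

print(f"Deceleration: about {a / G:.0f} g")  # ~27 g, close to the 24 g peak cited
```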
Stapp endured a total of 26 rides on the Gee Whiz over the course of 50 months, measuring an array of physiological factors as well as testing prototype helmets and safety belts. Along the way he suffered a broken wrist, torn rib cartilage, a bruised collarbone, a fractured coccyx, busted capillaries in both eyes and six cracked dental fillings. Colleagues became increasingly concerned for his health every time he staggered, gamely, off the sled, but, according to Mr. Ryan, he never lost his sense of humor, nor did these ordeals stop Dr. Stapp from voluntarily making house calls at night for families stationed on the desolate air base.
. . .
After 29 harrowing trips down the track, Stapp prepared for one grand finale, what he called the “Big Run,” hoping to achieve 600 miles per hour, the speed beyond which many scientists suspected that human survivability was–really, this time–highly improbable. On Dec. 10, 1954, Sonic Wind marked a speed of 639 miles per hour, faster than a .45 caliber bullet shot from a pistol. Film footage of the test shows the sled rocketing past an overhead jet plane that was filming the event. The Big Run temporarily blinded Stapp, and he turned blue for a few days, but the experiment landed him on the cover of Time magazine as the fastest man on earth. The record stood for the next 30 years.
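The bullet comparison can also be roughed out. A sketch of my own, assuming a typical published .45 ACP muzzle velocity of about 830 feet per second (that figure is my assumption, not given in the review):

```python
# Rough comparison of the Sonic Wind sled's speed with a .45 caliber pistol bullet.
# The ~830 ft/s muzzle velocity is a commonly published figure for .45 ACP,
# assumed here for illustration only.
FPS_TO_MPH = 3600 / 5280   # feet per second to miles per hour

bullet_mph = 830 * FPS_TO_MPH
sled_mph = 639

print(f".45 bullet: ~{bullet_mph:.0f} mph; Sonic Wind: {sled_mph} mph")
```

On those numbers the sled at 639 mph does outrun the bullet's roughly 566 mph, consistent with the review's claim.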

For the full review, see:
PATRICK COOKE. “Faster Than a Speeding Bullet–Really.” The Wall Street Journal (Sat., Aug. 22, 2015): C7.
(Note: first ellipsis, and bracketed word, in original; other ellipses added.)
(Note: the online version of the review has the date Aug. 21, 2015.)

The book under review is:
Ryan, Craig. Sonic Wind: The Story of John Paul Stapp and How a Renegade Doctor Became the Fastest Man on Earth. New York: Liveright Publishing Corp., 2015.

Fire-Cooked Carbohydrates Fed Bigger Brains

(p. D5) Scientists have long recognized that the diets of our ancestors went through a profound shift with the addition of meat. But in the September issue of The Quarterly Review of Biology, researchers argue that another item added to the menu was just as important: carbohydrates, bane of today’s paleo diet enthusiasts. In fact, the scientists propose, by incorporating cooked starches into their diet, our ancestors were able to fuel the evolution of our oversize brains.
. . .
Cooked meat provided increased protein, fat and energy, helping hominins grow and thrive. But Mark G. Thomas, an evolutionary geneticist at University College London, and his colleagues argue that there was another important food sizzling on the ancient hearth: tubers and other starchy plants.
Our bodies convert starch into glucose, the body’s fuel. The process begins as soon as we start chewing: Saliva contains an enzyme called amylase, which begins to break down starchy foods.
Amylase doesn’t work all that well on raw starches, however; it is much more effective on cooked foods. Cooking makes the average potato about 20 times as digestible, Dr. Thomas said: “It’s really profound.”
. . .
Dr. Thomas and his colleagues propose that the invention of fire, not farming, gave rise to the need for more amylase. Once early humans started cooking starchy foods, they needed more amylase to unlock the precious supply of glucose.
Mutations that gave people extra amylase helped them survive, and those mutations spread because of natural selection. That glucose, Dr. Thomas and his colleagues argue, provided the fuel for bigger brains.

For the full story, see:
Carl Zimmer. “MATTER; For Evolving Brains, a ‘Paleo’ Diet of Carbs.” The New York Times (Tues., AUG. 18, 2015): D5.
(Note: ellipses added.)
(Note: the online version of the story has the date AUG. 13, 2015.)

The academic article summarized in the passages above is:
Hardy, Karen, Jennie Brand-Miller, Katherine D. Brown, Mark G. Thomas, and Les Copeland. “The Importance of Dietary Carbohydrate in Human Evolution.” The Quarterly Review of Biology 90, no. 3 (Sept. 2015): 251-68.

More Danger from Existing Artificial Stupidity than from Fictional Artificial Intelligence

(p. B6) In the kind of artificial intelligence, or A.I., that most people seem to worry about, computers decide people are a bad idea, so they kill them. That is undeniably bad for the human race, but it is a potentially smart move by the computers.
But the real worry, specialists in the field say, is a computer program rapidly overdoing a single task, with no context. A machine that makes paper clips proceeds unfettered, one example goes, and becomes so proficient that overnight we are drowning in paper clips.
In other words, something really dumb happens, at a global scale. As for those “Terminator” robots you tend to see on scary news stories about an A.I. apocalypse, forget it.
“What you should fear is a computer that is competent in one very narrow area, to a bad degree,” said Max Tegmark, a professor of physics at the Massachusetts Institute of Technology and the president of the Future of Life Institute, a group dedicated to limiting the risks from A.I.
In late June, when a worker in Germany was killed by an assembly line robot, Mr. Tegmark said, “it was an example of a machine being stupid, not doing something mean but treating a person like a piece of metal.”
. . .
“These doomsday scenarios confuse the science with remote philosophical problems about the mind and consciousness,” Oren Etzioni, chief executive of the Allen Institute for Artificial Intelligence, a nonprofit that explores artificial intelligence, said. “If more people learned how to write software, they’d see how literal-minded these overgrown pencils we call computers actually are.”
What accounts for the confusion? One big reason is the way computer scientists work. “The term ‘A.I.’ came about in the 1950s, when people thought machines that think were around the corner,” Mr. Etzioni said. “Now we’re stuck with it.”
It is still a hallmark of the business. Google’s advanced A.I. work is at a company it acquired called DeepMind. A pioneering company in the field was called Thinking Machines. Researchers are pursuing something called Deep Learning, another suggestion that we are birthing intelligence.
. . .
DeepMind made a program that mastered simple video games, but it never took the learning from one game into another. The 22 rungs of a neural net it climbs to figure out what is in a picture do not operate much like human image recognition and are still easily defeated.

For the full story, see:
QUENTIN HARDY. “The Real Threat Computers Pose: Artificial Stupidity, Not Intelligence.” The New York Times (Mon., JULY 13, 2015): B6.
(Note: ellipses added.)
(Note: the online version of the story has the date JULY 11, 2015, and has the title “The Real Threat Posed by Powerful Computers.”)