Human Intelligence Helps A.I. Work Better

(p. B3) A recent study at the M.I.T. Media Lab showed how biases in the real world could seep into artificial intelligence. Commercial software is nearly flawless at telling the gender of white men, researchers found, but not so for darker-skinned women.
And Google had to apologize in 2015 after its image-recognition photo app mistakenly labeled photos of black people as “gorillas.”
Professor Nourbakhsh said that A.I.-enhanced security systems could struggle to determine whether a nonwhite person was arriving as a guest, a worker or an intruder.
One way to parse the system’s bias is to make sure humans are still verifying the images before responding.
“When you take the human out of the loop, you lose the empathetic component,” Professor Nourbakhsh said. “If you keep humans in the loop and use these systems, you get the best of all worlds.”
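
As a rough illustration of the "humans in the loop" safeguard Professor Nourbakhsh describes, the short Python sketch below routes low-confidence camera classifications to a person instead of triggering an automatic response. The Detection class, handle_frame function, labels, and 0.9 review threshold are purely hypothetical assumptions for illustration, not drawn from the article, the study, or any real product.

# Illustrative sketch of a human-in-the-loop check for an A.I. security camera.
# All names and thresholds here are hypothetical placeholders.

from dataclasses import dataclass


@dataclass
class Detection:
    label: str         # e.g. "guest", "worker", "intruder"
    confidence: float   # model's confidence in the label, 0.0 to 1.0


def handle_frame(detection: Detection, review_threshold: float = 0.9) -> str:
    """Decide how to respond to a camera frame's classification."""
    if detection.confidence < review_threshold:
        # Low-confidence (and potentially biased) calls go to a person
        # rather than triggering an automatic response.
        return "escalate_to_human_review"
    if detection.label == "intruder":
        return "notify_homeowner"
    return "log_and_ignore"


if __name__ == "__main__":
    print(handle_frame(Detection(label="intruder", confidence=0.62)))
    # -> escalate_to_human_review: a person verifies the image before anything happens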

For the full story, see:
Paul Sullivan. “WEALTH MATTERS; Can Artificial Intelligence Keep Your Home Secure?” The New York Times (Saturday, June 30, 2018): B3.
(Note: the online version of the story has the date June 29, 2018.)

The “recent study” mentioned above is:
Buolamwini, Joy, and Timnit Gebru. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” Proceedings of Machine Learning Research 81 (2018): 1-15.
