It May Be Intelligent, But Is It Reproducible?
[Excerpts taken from the article, “Artificial Intelligence Confronts a ‘Reproducibility’ Crisis” by Gregory Barber, published at Wired.com]
“A few years ago, Joelle Pineau, a computer science professor at McGill, was helping her students design a new algorithm when they fell into a rut. …Pineau’s students hoped to improve on another lab’s system. But first they had to rebuild it, and their design, for reasons unknown, was falling short of its promised results. Until, that is, the students tried some “creative manipulations” that didn’t appear in the other lab’s paper. Lo and behold, the system began performing as advertised.”
“The lucky break was a symptom of a troubling trend, according to Pineau. Neural networks, the technique that’s given us Go-mastering bots and text generators that craft classical Chinese poetry, are often called black boxes because of the mysteries of how they work. Getting them to perform well can be like an art, involving subtle tweaks that go unreported in publications. The networks also are growing larger and more complex, with huge data sets and massive computing arrays that make replicating and studying those models expensive, if not impossible for all but the best-funded labs.”
“Pineau is trying to change the standards.”
The full article is available at Wired.com.