SAN FRANCISCO — IBM pitted a computer against two human debaters in the first public demonstration of artificial intelligence technology it’s been working on for more than five years.
The company unveiled its Project Debater in San Francisco on Monday, asking it to make a case for government-subsidized space research, a topic it hadn’t studied in advance but championed fiercely with just a few awkward gaps in reasoning.
“Subsidizing space exploration is like investing in really good tires,” argued the computer system, its female voice embodied in a 5-foot-tall machine shaped like a monolith with TV screens on its sides. Such research would enrich the human mind, inspire young people and be a “very sound investment,” it said, making it more important even than good roads, schools or health care.
The computer delivered its opening argument by pulling in evidence from its huge internal repository of newspapers, journals and other sources. It then listened to a professional human debater’s counter-argument and spent four minutes rebutting it.
After closing arguments, it moved on to a second debate about telemedicine.
An IBM research team based in Israel began working on the project not long after IBM’s Watson computer beat two human champions in a “Jeopardy!” challenge in 2011.
But rather than just scanning a giant trove of data in search of factoids, IBM’s latest project taps into several more complex branches of AI. Search engines such as Google and Microsoft’s Bing use similar technology to digest and summarize written content and compose new paragraphs. Voice assistants such as Amazon’s Alexa rely on listening comprehension to answer questions posed by people. Google recently demonstrated an eerily human-like voice assistant that can call hair salons or restaurants to make appointments.
But IBM says it’s breaking new ground by creating a system that tackles deeper human practices of rhetoric and analysis, and how they’re used to discuss big questions whose answers aren’t always clear.
“If you think of the rules of debate, they’re far more open-ended than the rules of a board game,” said Ranit Aharonov, who manages the debater project.
As expected, the machine tends to be better than humans at bringing in numbers and other detailed supporting evidence. It’s also able to latch onto the most salient and attention-getting elements of an argument, and can even deliver some self-referential jokes about being a computer.
But it lacks tact, researchers said. Sometimes the jokes don’t come out right. And on Monday, some of the sources it cited, such as a German official and an Arab sheikh, didn’t seem particularly germane.
“Humans tend to be better at using more expressive language, more original language,” said Dario Gil, IBM’s vice president of AI research. “They bring in their own personal experience as a way to illustrate the point. The machine doesn’t live in the real world or have a life that it’s able to tap into.”
There are no immediate plans to turn Project Debater into a commercial product, but Gil said it could be useful in the future in helping lawyers or other human workers make informed decisions.