Why does AI have to be nice? Researchers propose ‘Antagonistic AI’
When interacting with today’s large language models (LLMs), do you expect them to be surly, dismissive, flippant or even insulting? Of course not — but they should be, according to researchers from MIT and the University of Montreal. These academics have introduced the idea of Antagonistic AI: that is, AI systems that are purposefully combative, critical, rude and that even interrupt users mid-thought. Their work challenges the current paradigm of commercially popular but overly sanitized “vanilla” LLMs.

“There was always something that felt off about the tone, behavior and ‘human values’ embedded into AI — something that felt deeply ingenuine and out of touch with our real-life experiences,” Alice Cai, co-founder of Harvard’s Augmentation Lab and researcher at the MIT Center for Collective Intelligence, told VentureBeat.

She added: “We came into this project with …