Facebook Is Teaching Chat Bots To Negotiate

Facebook’s AI research department, FAIR for short, is working on teaching chat bots to work out deals in a very human manner: by setting goals, prioritizing, and planning ahead strategically. The approach was simpler than it might sound. Rather than hand-programming the AI to prioritize, plan, and handle other complicated operations, FAIR trained a conversational AI by having it watch real humans go through mock negotiations. Because the AI is driven by a neural network, it was able to learn mostly unaided once it was told what it was watching for. Specifically, the goal was for the bot to learn to figure out what it wanted, prioritize among its options, and arrive at a successful negotiation.
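The original write-up doesn’t include any code, but the shape of that training setup is easy to sketch. The PyTorch snippet below is a minimal, hypothetical illustration of goal-conditioned imitation learning: a small recurrent model reads an agent’s private goal vector plus the dialogue so far and is trained with ordinary cross-entropy to predict the next token a human negotiator produced. The model size, vocabulary, and random stand-in “dialogue” data are placeholders, not FAIR’s actual setup.

```python
# Minimal, hypothetical sketch of goal-conditioned imitation learning
# (not FAIR's actual code): a small GRU reads the agent's private goal
# plus the dialogue history and learns to predict the next human token.
import torch
import torch.nn as nn

VOCAB = 50          # placeholder token vocabulary
GOAL_DIM = 6        # e.g. counts and private values for three item types
HIDDEN = 64

class GoalConditionedTalker(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, HIDDEN)
        self.goal_proj = nn.Linear(GOAL_DIM, HIDDEN)   # goal -> initial state
        self.gru = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, VOCAB)

    def forward(self, goal, tokens):
        h0 = torch.tanh(self.goal_proj(goal)).unsqueeze(0)  # (1, B, H)
        x = self.embed(tokens)                               # (B, T, H)
        hidden, _ = self.gru(x, h0)
        return self.out(hidden)                              # next-token logits

model = GoalConditionedTalker()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Toy stand-in for a batch of tokenized human negotiation transcripts.
goals = torch.rand(8, GOAL_DIM)               # each agent's item counts/values
dialogue = torch.randint(0, VOCAB, (8, 12))   # fake dialogue token sequences

logits = model(goals, dialogue[:, :-1])       # predict each next token
loss = loss_fn(logits.reshape(-1, VOCAB), dialogue[:, 1:].reshape(-1))
loss.backward()
opt.step()
```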

One of the key differences between this bot and a human negotiator is that the bot refused to take no for an answer. While humans tend to have a threshold past which they can be made to walk away without what they came for, this bot had no such cutoff. Instead, it kept striving for its desired outcome unless that outcome proved impossible under the circumstances. With that goal-oriented drive to guide it, the bot was shown thousands of negotiations between real people over imaginary objects, then pitted against itself in a series of negotiation tests. It ended up picking up a number of very human negotiation tactics, including bluffing and even feigning interest. It even began to construct and synthesize new sentences and sets of possible outcomes by making generalizations beyond its training data, a behavior that’s somewhat uncharacteristic of neural-network-based AIs.
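To make “goal-oriented” concrete, the toy sketch below shows the kind of bargaining game usually described in this research: a shared pool of imaginary items, private per-agent values, and a payoff each side earns only if the split adds up to an agreement. The item names, values, and splits here are invented for illustration; they are not FAIR’s actual data.

```python
# Toy, illustrative version of a multi-issue bargaining game:
# a shared pool of items, private values per agent, and a payoff
# that each side earns only if the negotiated split is consistent.

POOL = {"book": 3, "hat": 1, "ball": 2}          # items on the table

# Each agent privately values the items differently (values are made up).
values_a = {"book": 1, "hat": 4, "ball": 1}
values_b = {"book": 2, "hat": 0, "ball": 2}

def score(split, values):
    """Total private value of the items an agent walks away with."""
    return sum(values[item] * count for item, count in split.items())

def valid(split_a, split_b):
    """An agreement is valid only if the two claims exactly cover the pool."""
    return all(split_a[i] + split_b[i] == POOL[i] for i in POOL)

# A hypothetical negotiated outcome: A takes the hat, B takes books and balls.
split_a = {"book": 0, "hat": 1, "ball": 0}
split_b = {"book": 3, "hat": 0, "ball": 2}

if valid(split_a, split_b):
    print("agent A reward:", score(split_a, values_a))   # 4
    print("agent B reward:", score(split_b, values_b))   # 10
else:
    print("no agreement: both agents score 0")
```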

This experiment may have produced a chat bot AI that can negotiate effectively in some scenarios, but that was not the end goal. Instead, FAIR wanted to create a guided training model that could be applied to a range of different scenarios and goals. What essentially happened is that FAIR figured out how to get a neural network AI to expand its capabilities beyond its training data set: by setting goals for the AI and giving it the freedom to work toward those goals and prioritize, FAIR gave it the tools and ‘motivation’ it needed to begin ‘thinking outside the box’. The experiment is a step in the right direction for the development of artificial general intelligence, a branch of AI research that takes a more holistic approach to intelligence, giving an AI more flexibility and freedom in essentially deciding how it wants to work.