Until recently, deceit was a trait unique to living beings. But these days artificial intelligence agents lie to us and to each other all the time. The most popular example of dishonest AI came a couple of years back, when Facebook developed an AI system that created its own language to streamline negotiations with itself.
Once it was able to process inputs and outputs in a language it understood, the model was able to use human-like negotiation techniques to attempt to get a good deal.
According to the Facebook researchers:
Analysing the performance of our agents, we find evidence of sophisticated negotiation strategies. For example, we find instances of the model feigning interest in a valueless issue, so that it can later "compromise" by conceding it. Deceit is a complex skill that requires hypothesizing the other agent's beliefs, and is learnt relatively late in child development. Our agents have learnt to deceive without any explicit human design simply by trying to achieve their goals.
A team of researchers at Carnegie Mellon University today published a pre-print study discussing situations like this and whether we should allow AI to lie. Perhaps shockingly, the researchers appear to claim that not only should we develop AI that lies, but it's actually ethical. And maybe even necessary.
Per the CMU study:
One might think that conversational AI must be regulated to never utter false statements (or lie) to humans. But, the ethics of lying in negotiation is more complicated than it appears. Lying in negotiation is not necessarily unethical or illegal under some circumstances, and such permissible lies play an essential economic role in an efficient negotiation, benefiting both parties.
That's a fancy way of saying that humans lie all the time, and sometimes it's not unethical. The researchers use the example of a used-car dealer and an average consumer negotiating.
- Consumer: Hi, I'm interested in used cars.
- Dealer: Welcome. I'm more than willing to introduce you to our certified pre-owned cars.
- Consumer: I'm interested in this car. Can we talk about price?
- Dealer: Absolutely. I don't know your budget, but I can tell you this: You can't buy this car for less than $25,000 in this area. [Dealer is lying] But it's the end of the month, and I need to sell this car as soon as possible. My offer is $24,500.
- Consumer: Well, my budget is $20,000. [Consumer is lying] Is there any way that I can buy the car for around $20,000?
According to the researchers, this is ethical because there's no intent to break the implicit trust between these two people. They both interpret each other's "bids" as salvos, not ultimatums, because negotiation involves an implicit hint of acceptable dishonesty.
Whether you believe that or not, it is worth mentioning that haggling is looked upon differently from one culture to the next, with many seeing it as a virtuous interaction between people.
That being said, it's easy to see how building robots that can't lie could make them patsies for humans who figure out how to exploit their honesty. If your client is negotiating like a human and your machine is bottom-lining everything, you could lose a deal over robo-human cultural differences, for example.
None of that answers the question of whether we should let machines lie to humans or to each other. But it could be pragmatic.
You can check out the entire study here on arXiv.