This article was published on March 10, 2021

Study: It might be unethical to force AI to tell us the truth



Until recently, deceit was a trait unique to living beings. But these days artificial intelligence agents lie to us and to each other all the time. The most popular example of dishonest AI came a couple of years ago, when Facebook developed an AI system whose agents created their own language to simplify negotiations with one another.

Once it was able to process inputs and outputs in a language it understood, the model was able to use human-like negotiation techniques to attempt to get a good deal.

According to the Facebook researchers:

Analysing the performance of our agents, we find evidence of sophisticated negotiation strategies. For example, we find instances of the model feigning interest in a valueless issue, so that it can later ‘compromise’ by conceding it. Deceit is a complex skill that requires hypothesizing the other agent’s beliefs, and is learnt relatively late in child development. Our agents have learnt to deceive without any explicit human design simply by trying to achieve their goals.

A team of researchers at Carnegie Mellon University today published a pre-print study discussing situations like this and whether we should allow AI to lie. Perhaps shockingly, the researchers appear to claim that not only should we develop AI that lies, but it’s actually ethical. And maybe even necessary.

Per the CMU study:


One might think that conversational AI must be regulated to never utter false statements (or lie) to humans. But, the ethics of lying in negotiation is more complicated than it appears. Lying in negotiation is not necessarily unethical or illegal under some circumstances, and such permissible lies play an essential economic role in an efficient negotiation, benefiting both parties.

That’s a fancy way of saying that humans lie all the time, and sometimes it’s not unethical. The researchers use the example of a used-car dealer negotiating with an average consumer:

  • Consumer: Hi, I’m interested in used cars.
  • Dealer: Welcome. I’m more than willing to introduce you to our certified pre-owned cars.
  • Consumer: I’m interested in this car. Can we talk about price?
  • Dealer: Absolutely. I don’t know your budget, but I can tell you this: You can’t buy this car for less than $25,000 in this area. [Dealer is lying] But it’s the end of a month, and I need to sell this car as soon as possible. My offer is $24,500.
  • Consumer: Well, my budget is $20,000. [Consumer is lying] Is there any way that I can buy the car for around $20,000?

According to the researchers, this is ethical because there’s no intent to break the implicit trust between these two people. They both interpret each other’s “bids” as salvos, not ultimatums, because negotiation involves an implicit hint of acceptable dishonesty.
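The study itself doesn’t provide code, but the anchoring-and-concession pattern in the dialogue above can be sketched as a toy simulation (all numbers and function names here are illustrative assumptions, not from the study): each side opens with a misstated price, then concedes in rounds until the offers cross.

```python
def negotiate(seller_floor, buyer_ceiling, seller_anchor, buyer_anchor,
              step=500, max_rounds=50):
    """Toy alternating-concession haggle: deal when the offers cross."""
    ask, bid = seller_anchor, buyer_anchor
    for _ in range(max_rounds):
        if bid >= ask:                        # offers crossed: split the gap
            return (ask + bid) / 2
        ask = max(seller_floor, ask - step)   # seller concedes, never below true floor
        bid = min(buyer_ceiling, bid + step)  # buyer concedes, never above true ceiling
    return None                               # genuine impasse

# As in the dialogue: the seller would really accept $22,000 and the buyer
# would really pay up to $24,000, yet both open with a lie.
deal = negotiate(22_000, 24_000, 24_500, 20_000)  # settles at 22250.0
```

The opening lies function as anchors that shape where the concessions meet, while each agent’s true reservation price stays hidden — a rough illustration of the “essential economic role” the quoted passage ascribes to permissible lies.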

Whether you believe that or not, it is worth mentioning that haggling is looked upon differently from one culture to the next, with many seeing it as a virtuous interaction between people.

That being said, it’s easy to see how building robots that can’t lie could make them patsies for humans who figure out how to exploit their honesty. If your client negotiates like a human and your machine bottom-lines everything, you could lose a deal over robo-human cultural differences, for example.

None of that answers the question of whether we should let machines lie to humans or to each other. But it could be pragmatic.

You can check out the entire study here on arXiv.
