Published October 22, 2021

Google channels Big Tobacco with dystopian research censorship

We’ve seen this before. It doesn’t end well.


In the wake of the firing of Timnit Gebru and other notable AI researchers at Google, Alphabet has circled the wagons and lawyered up. Reports flowing out of Mountain View depict teams of lawyers censoring scientific research and acting as unnamed collaborators and peer reviewers.

Most recently, Business Insider managed to interview several researchers who painted a startling and bleak picture of what it’s like to try to conduct research under such an anti-scientific regime.

Per the article, one researcher said:

You’ve got dozens of lawyers – no doubt, highly trained lawyers – who nonetheless actually know very little about this technology … and they’re working their way through your research like English undergrads reading a poem.

The problem here is that Google isn’t censoring research to keep, say, trade secrets from getting out. Its lawyers are targeting scientific research that makes the company look bad.

The person quoted above added that they were specifically talking about lawyers crossing out references to “fairness” and “bias,” and about scientists being told to change the results of their work. It’s not only unethical; it’s incredibly dangerous.

The tea: Google’s AI is broken. It might be a trillion-dollar company and the most cutting-edge AI outfit on Earth, but its algorithms are biased. And that’s dangerous.

No matter how you slice it, Google’s AI doesn’t work as well for people who don’t look like the vast majority of Google’s employees (white dudes) as it does for people who do. From Search conflating Black people with animals to the Pixel 6’s camera algorithms failing to properly process non-white skin tones, Google’s machine-learning woes are well-documented.

This is a big problem, and it isn’t easy to fix. Imagine building a car that didn’t work as well for Black people and women as it did for white guys, selling 200 million of them, and then watching people slowly learn their automobiles were racist.

There’d be a lot of strong feelings about what that would mean.

Google’s current situation is a lot like that. Its products are everywhere. It can’t just recall Search or put Google Ads on hold for a few days while it rethinks the entire world of deep learning to exclude bias. Why not fix world hunger and make puppies immortal while it’s at it?

So what do you do when you’re one of the richest companies in the world and you come up against a truth so awful that its existence makes your model seem evil?

You do what Big Tobacco did. You find people willing to say what’s in your company’s best interest, and you use them to stop the people telling the truth from sharing their research.

The National Institutes of Health published research in 2007 describing the role lawyers played during the Big Tobacco legal battles of the previous decades.

In the paper, titled “Tobacco industry lawyers as a disease vector,” the researchers attribute the spread of diseases associated with long-term tobacco use to the tactics employed by industry lawyers.

Some key takeaways from the paper include:

  • Despite their obligation to do so, tobacco companies often failed to conduct product safety research or, when research was conducted, failed to disseminate the results to the medical community and to the public.
  • Tobacco company lawyers have been involved in activities having little or nothing to do with the practice of law, including gauging and attempting to influence company scientists’ beliefs, vetting in-house scientific research, and instructing in-house scientists not to publish potentially damaging results.
  • Additionally, company lawyers have taken steps to manufacture attorney-client privilege and work-product cover to assist their clients in protecting sensitive documents from disclosure, have been involved in the concealment of such documents, and have employed litigation tactics that have largely prevented successful lawsuits against their client companies.

And we’re seeing the same potential in Google’s approach: the company is treating the scientific method as an optional component of research.

As researcher Jack Clark, formerly of OpenAI, pointed out on Twitter:

I like to collaborate with people in research and I do a huge amount of work on AI measurement/assessment/synthesis/analysis. Why would I try and collaborate with people at Google if I know that there’s some invisible group of people who will get inside our research paper?

Clark’s talking about accountability here: the researchers have their names on the papers, but the censors and lawyers don’t.

See, if a few years down the road Google’s failure to address bias or build fair algorithms turns out to be deadly at scale, no lawyers will be harmed in the ensuing lawsuits.

And that’s not fair. Billions of people put their trust in Google products every day. The AI we rely on is part of our lives and influences our decisions. Whatever Google’s lawyers are hiding could hurt us all.
