
This article was published on May 4, 2022

Meta’s free GPT-3 replica exposes the business benefits of AI transparency

For once, what's good for Zuck is good for society

Story by Thomas Macaulay, Writer at Neural by TNW

The notoriously secretive Meta has set a milestone for transparency.

The company this week offered the entire research community access to a fully trained large language model (LLM).

Named the Open Pretrained Transformer (OPT), the system mirrors the performance and size of OpenAI’s vaunted GPT-3 model.


This mimicry is deliberate. While GPT-3 has a stunning ability to produce human-like text, it also has a powerful capacity for biases, bigotry, and disinformation.

OPT’s creators said their system can reduce these risks:

Our aim in developing this suite of OPT models is to enable reproducible and responsible research at scale, and to bring more voices to the table in studying the impact of these LLMs.

In addition to sharing OPT for non-commercial use, Meta has released its pre-trained models, their underlying code, and a logbook of their development. No other company has ever provided this level of access to an LLM.

Such openness may appear uncharacteristic.

After all, Meta is often accused of concealing its algorithms and their harmful impacts. Yet the move may not be entirely altruistic.

Meta could benefit immensely from external experts probing OPT for flaws, uses, and fixes — without having to pay them.

The company’s public embrace of transparency could also dampen criticism of its secrecy.

Mutual benefits

Meta’s researchers acknowledge that OPT has major shortcomings.

They note that the system doesn’t work well with declarative instructions or point-blank interrogatives.

It also has a tendency to generate toxic language and reinforce harmful stereotypes — even when fed relatively innocuous prompts.

“In summary, we still believe this technology is premature for commercial deployment,” they wrote in their study paper.

Input from the broader research community could accelerate this maturation, and the benefits may extend well beyond Meta.

The move will hopefully show that businesses and society both benefit from transparency.

You can get the OPT open-source code and small-scale pre-trained models here. To try the full 175-billion parameter version, you need to request access here.

