Experts Want OpenAI to Open Up its AI Model Architecture
BY PYMNTS | JULY 20, 2023
The devil is in the details.

And as it relates to artificial intelligence (AI), the details are in the foundational data set — and frequently so is the business model.

This, as a new study from Stanford University and UC Berkeley adds fuel to the popular online claim that OpenAI’s ChatGPT generative AI product is getting, well, dumber.


The research paper, titled “How Is ChatGPT’s Behavior Changing over Time?” was published by Lingjiao Chen, Matei Zaharia and James Zou, and purports to show changes in GPT-4’s outputs over time while casting doubt on the consistent performance of OpenAI’s large language models (LLMs).

“No, we haven’t made GPT-4 dumber. Quite the opposite: we make each new version smarter than the previous one. Current hypothesis: When you use it more heavily, you start noticing issues you didn’t see before,” OpenAI VP of Product Peter Welinder tweeted.

“If you have examples where you believe it’s regressed, please reply to this thread and we’ll investigate,” he added.

And while the Stanford and Berkeley paper wasn’t conducted by the Stanford president who recently stepped down due to a history of manipulated research, certain experts have expressed doubts that the findings conclusively prove a decline in GPT-4’s performance.

Rather, experts believe OpenAI should be less opaque about its AI model architecture, with the lack of certainty around GPT’s degradation indicating a larger problem with how the popular AI company handles its model releases.

See also: It’s a ’90s Browser War Redux as Musk and Meta Enter AI Race

Closed Book or Open Source? It’s Not So Simple
The paper found that GPT-4 dropped from 97.6% accuracy at identifying prime numbers in March to just 2.4% in June.
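The paper’s prime-number test amounts to asking the model “Is N prime?” for a fixed list of numbers and scoring exact-match accuracy against ground truth. A minimal sketch of such a harness, with `ask_model` as a hypothetical stand-in for a real LLM API call (the actual study queried OpenAI’s API):

```python
def is_prime(n: int) -> bool:
    """Ground truth via simple trial division."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def ask_model(n: int) -> bool:
    """Hypothetical stand-in for an LLM API call.
    Here it answers correctly so the sketch runs offline;
    a real harness would parse the model's yes/no reply."""
    return is_prime(n)

def accuracy(numbers: list[int]) -> float:
    """Fraction of numbers the model classifies correctly."""
    correct = sum(ask_model(n) == is_prime(n) for n in numbers)
    return correct / len(numbers)

print(accuracy([2, 3, 4, 17, 91, 97, 561]))
```

With an offline stub the score is trivially 1.0; the paper’s point is that running the same harness against the same hosted model months apart produced wildly different scores.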

But observers, including Princeton computer science professor Arvind Narayanan, have noted that these alleged degradations could potentially be consistent with fine-tuning adjustments made by OpenAI.

Other hypotheses include OpenAI streamlining its models to reduce compute cost and conserve GPU resources, or simply that, as generative AI tools like ChatGPT top 100 million users, the novelty has worn off and what once seemed miraculous now seems mundane.

“The team is aware of the reported regressions and looking into it,” tweeted Logan Kilpatrick, OpenAI’s head of developer relations.

And because OpenAI, despite its name, runs on closed-source foundational models, critics say that is part of the problem: only OpenAI has the ability to “look into it.”

Read also: Meta Sharpens AI Strategy as Tech Giants Crowd Marketplace

Evolving Norms Around the Data for Foundational AI Models
Despite being founded as an open-source, nonprofit research organization, OpenAI closed up its models and corporate structure when Microsoft came knocking with billions of dollars to help commercialize the platform’s then-nascent generative AI technology.

Critics have increasingly targeted OpenAI’s now-closed approach to its AI models, and lamented the fact that nearly all of its peers do the same.

For GPT-4, the company did not reveal the source of its training materials, its source code or neural network weights, or deign to provide a paper describing the model’s architecture.

“Overall, our findings shows [sic] that the behavior of the ‘same’ LLM service can change substantially in a relatively short amount of time, highlighting the need for continuous monitoring of LLM quality,” the Stanford and Berkeley paper posited.

Many leading AI firms have kept their models a secret and therefore un-auditable, including public benefit corporation Anthropic, whose stated goal is to provide a safer AI alternative.

With closed, black-box AI models, research results are neither reproducible nor verifiable — and the companies behind the models may change them at any time without warning, or revoke access altogether.

That makes it difficult for current and future ecosystem innovators to confidently build dependable software atop platforms that may change in undocumented ways or pull the rug out from under them without a moment’s notice.

But it does make it easier for first-to-market players to ensure that would-be competitors don’t pilfer their all-important code.

However, the ecosystem paradigm may be changing.

Meta has decided to stand out from the crowd by open sourcing the just-released LLaMA 2, making it available free of charge for research and commercial use.

“Meta [is] supporting an open approach to provide increased access to foundational AI technologies to the benefits of businesses globally,” the company wrote in a blog post.

Meta reportedly hopes that by spurring greater experimentation atop its own foundation, it will be able to leapfrog competitors despite entering the AI race a few months late.

Meta’s AI joins Databricks as one of the few open source AIs on the market — at least so far.
