The dangerous downsides of “foundation” AI models

A new academic group is sounding a warning about powerful, if poorly understood, AI systems that are increasingly driving the field.

Why it matters: New models like OpenAI’s text-generating GPT-3 have proven so impressive that they’re serving as the foundation of further AI research, but that risks propagating the biases that may be built into these systems.


What’s happening: This morning, a group of more than 100 researchers released a new report on the “opportunities and risks” of foundation models, as part of the launch of a new group at Stanford University called the Center for Research on Foundation Models (CRFM).

  • The report warns that the very qualities that have made these models so exciting — and potentially so commercially valuable — create what Percy Liang, a Stanford computer science professor and the director of CRFM, calls “a double-edged sword.”

  • “We’re building AI infrastructure on a handful of models,” he adds, but our inability to fully understand how they work or what they might do “means that these models actually form a shaky foundation.”

Background: Liang notes that until recently, AI systems were built for specific purposes — if you needed machine translation, you built a machine translation model.

  • But that began to change in 2018, when Google introduced its BERT natural language processing (NLP) model.

  • BERT now plays a role in most of Google’s search functions, while Facebook researchers harnessed BERT as the basis for an even larger NLP model that the company uses for AI content moderation (a reuse pattern sketched in code after this list).

  • At the same time, companies like OpenAI and AI21 Labs have begun letting developers build commercial applications on top of their massive NLP systems.
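
For a concrete picture of that reuse, here is a minimal sketch in Python, assuming the Hugging Face transformers library, of the pattern described above: start from a pretrained BERT and attach a small classification head, rather than training a language model from scratch. The two moderation-style labels are hypothetical placeholders, not any company’s actual system.

    # A minimal sketch of building on a pretrained "foundation" model:
    # reuse BERT's pretrained language understanding and attach a small
    # classification head for a downstream task. The two labels below are
    # hypothetical placeholders, not Google's or Facebook's real setup.
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased",  # the pretrained foundation
        num_labels=2,         # e.g., "acceptable" vs. "flagged"
    )

    inputs = tokenizer("an example post to screen", return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    print(logits.argmax(dim=-1).item())  # head is untrained until fine-tuned

The point of the sketch: every application built this way inherits whatever the underlying model absorbed during pretraining, biases included.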

How it works: With these systems, “you just grab a ton of data, you build a huge model, and then you go in and discover what it can do,” says Liang (a step sketched in code after the points below).

  • As an AI scientist, he adds, he finds the power of these models “so cool,” but they also risk homogenizing the AI field.

  • Any bias in these models, or in the data they’re built upon, “risks being amplified in a way that everyone inherits,” says Liang.
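
As a rough illustration of that “go in and discover” step, here is a minimal sketch, again assuming the Hugging Face transformers library, that prompts one frozen pretrained model with unrelated tasks; the small gpt2 model stands in for far larger systems like GPT-3.

    # A minimal sketch of the workflow Liang describes: one big pretrained
    # model, no task-specific training, just probing what it can do.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    # The same frozen model is prompted with different tasks.
    prompts = [
        "Translate English to French: cheese =>",
        "Q: What is the capital of France?\nA:",
    ]
    for prompt in prompts:
        result = generator(prompt, max_new_tokens=20, num_return_sequences=1)
        print(result[0]["generated_text"])

Whatever capabilities, or biases, surface this way come from a single shared model, which is exactly the homogenization Liang warns about.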

The bottom line: The good news is that this foundation is still being built, so interdisciplinary groups like CRFM can study these defects and, ideally, correct them.
