
At Stanford's “foundation models” workshop, large language model debate resurfaces - Morning Brew

Last updated Monday, August 30, 2021 13:00 ET, Source: NewsService

This month, the Stanford Institute for Human-Centered Artificial Intelligence (HAI) announced a brand-new research arm, the Center for Research on Foundation Models (CRFM). Typically known as large language models, these powerful AI systems have been the subject of both intense scrutiny and lofty praise.

Last week, during the CRFM’s first-ever workshop, the debate bubbled up again.

Rewind: After a dispute over a research paper about large language models, Google fired its AI ethics co-leader Timnit Gebru in December 2020.

  • The decision prompted massive backlash and put a spotlight on the AI technique. These massive algorithms are trained on large swaths of the internet, enabling them to produce highly accurate approximations of natural language but also leading them to replicate some human biases.

The models already fuel a wide range of popular tools and services, including Gmail’s Smart Compose. An increasing number of startups are powered by OpenAI’s GPT-3, and Google recently announced it will double down on the use of large language models to underpin services like Search.

Names and frames

As commercialization of these algorithms moves full steam ahead, some leaders, researchers, and academics in the AI community continue to warn that the tech warrants a much more cautious approach. Some saw HAI’s decision to refer to large language models as “foundation models” as an attempt to grant the technology a clean slate of sorts, or as a reframing that gives the models too much credit.

“...



Read Full Story: https://www.morningbrew.com/emerging-tech/stories/2021/08/30/stanfords-foundation-models-workshop-large-language-model-debate-resurfaces
