Mark Zuckerberg Announces Meta's New Large Language Model AI 'LLaMA'

A new challenger has appeared in the user-facing AI game. Mark Zuckerberg's Meta has officially stepped into the fray with the Large Language Model Meta AI, charmingly known as "LLaMA."

On paper, LLaMA doesn't seem to be positioned as an outright competitor for OpenAI's ChatGPT, the Microsoft-backed conversational AI, or Google's own talkative algorithm, Bard. Per Meta's press release, the company expects LLaMA's main role to be an AI that improves AI. In Meta's own words, LLaMA "requires far less computing power and resources to test new approaches, validate others' work, and explore new use cases." LLaMA is designed to be a comparatively low-demand, efficient tool best suited to focused tasks.

That opens up a new field for AI development, at least in terms of customer-facing tools. Most user-friendly AIs to date have followed the chat assistant model, which requires a model powerful enough to handle any question a user feeds it. LLaMA promises to make that kind of development easier for other AI systems without adding to the compute burden itself.

Not a universal tool - and that's a good thing

Notable in Meta's press release was an explicit mention of LLaMA's limitations and the safeguards Meta employed in developing it. Meta suggests specific use cases, noting that small models trained on large bodies of text, like LLaMA, are "easier to retrain and fine-tune for specific potential product use cases." That deliberately narrows LLaMA's scope to targeted applications rather than general-purpose chatbots.

In addition, Meta will be limiting LLaMA's accessibility, at least at first, to "academic researchers; those affiliated with organizations in government, civil society, and academia; and industry research laboratories around the world." The company clearly takes the potential ethical ramifications of AI seriously, acknowledging that LLaMA, like all AI, shares the "risks of bias, toxic comments, and hallucinations" in its operation. Meta is working to counteract this by selecting users carefully, making its code available in full so users can check for bugs and biases, and releasing a set of benchmarks for evaluating those failure modes.

In short, if Meta seems to have gotten to AI school late, at least it's done the reading. As users grow increasingly anxious about the potential dark side of AI, Meta may well be modeling the appropriate response: progress tempered by due diligence aimed at ensuring ethical use.
