We are all aware of ChatGPT by now, and Microsoft's ChatGPT-powered Bing and Google's Bard are already using the technology to make their platforms easier to use. Now it has been reported that Meta has joined the AI chatbot race with its own state-of-the-art foundational large language model, designed to help researchers advance their work in the field of artificial intelligence.
However, Meta's Large Language Model Meta AI (LLaMA) is not like the ChatGPT-driven Bing: at the time of writing it cannot talk to humans, but it could help researchers.
In a statement, Meta stated: "Smaller, more performant models such as LLaMA enable others in the research community who don't have access to large amounts of infrastructure to study these models, further democratising access in this important, fast-changing field."
Meta is making LLaMA available in several sizes (7 billion, 13 billion, 33 billion, and 65 billion parameters).
Large language models -- natural language processing (NLP) systems with billions of parameters -- have shown new capabilities to generate creative text, solve mathematical theorems, predict protein structures, answer reading comprehension questions, and more.
"They are one of the clearest cases of the substantial potential benefits AI can offer at scale to billions of people," said Meta.
Smaller models trained on more tokens -- which are pieces of words -- are easier to retrain and fine-tune for specific potential product use cases.
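To see what "pieces of words" means in practice, here is a toy Python sketch that splits words into subword pieces with a greedy longest-match rule. The tiny vocabulary and the matching rule are simplified, hypothetical stand-ins for the much larger subword tokenizers real language models use; they are only meant to illustrate the idea.

    # Toy illustration of subword tokenization. The vocabulary and the
    # greedy longest-match rule are hypothetical simplifications, not
    # the tokenizer any real model uses.
    TOY_VOCAB = {"research", "re", "search", "er", "s", "train", "ing", "model"}

    def tokenize(word: str) -> list[str]:
        """Greedily split a word into the longest known vocabulary pieces."""
        pieces = []
        i = 0
        while i < len(word):
            # Try the longest remaining substring first, shrinking until a match.
            for j in range(len(word), i, -1):
                if word[i:j] in TOY_VOCAB:
                    pieces.append(word[i:j])
                    i = j
                    break
            else:
                # Unknown character: emit it as its own piece.
                pieces.append(word[i])
                i += 1
        return pieces

    print(tokenize("researchers"))  # ['research', 'er', 's']
    print(tokenize("training"))     # ['train', 'ing']

In this scheme, a model that has seen more tokens has seen more of these small, reusable pieces, which is part of why smaller models trained on more tokens can be easier to retrain and fine-tune for a specific use case.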
Meta has trained LLaMA 65 billion and LLaMA 33 billion on 1.4 trillion tokens.
"Our smallest model, LLaMA 7B, is trained on one trillion tokens," said the company.
Like other large language models, LLaMA works by taking a sequence of words as input and predicting the next word, recursively generating text.
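The Python sketch below shows that recursive loop in its simplest form. The predict_next_word function here is a hypothetical stand-in for a trained model such as LLaMA, which would instead score every token in its vocabulary and sample from that distribution; only the append-and-feed-back loop is the point.

    import random

    # Hypothetical stand-in for a trained language model: given the words
    # so far, pick a plausible next word from a hand-written table.
    def predict_next_word(context: list[str]) -> str:
        toy_distribution = {
            ("language",): ["models", "research"],
            ("models",): ["generate", "predict"],
        }
        candidates = toy_distribution.get(tuple(context[-1:]), ["text"])
        return random.choice(candidates)

    def generate(prompt: list[str], max_new_words: int = 5) -> list[str]:
        """Recursively extend the sequence: each new word is predicted from
        everything generated so far, then appended and fed back in."""
        sequence = list(prompt)
        for _ in range(max_new_words):
            sequence.append(predict_next_word(sequence))
        return sequence

    print(" ".join(generate(["large", "language"])))

Running this prints a short continuation such as "large language models generate text ...", illustrating how one predicted word at a time grows into a full passage.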
"To train our model, we chose a text from the 20 languages with the most speakers, focusing on those with Latin and Cyrillic alphabets," Meta stated.
To maintain integrity and prevent misuse, Meta said it is releasing the model under a noncommercial license focused on research use cases for now.