Microsoft has released its newest compact 'small language model', titled 'Phi-2'. The company claims the new compact model performs on par with, and in some cases better than, larger open-source Llama 2 models with up to 13 billion parameters.
What is Phi?
In the past few months, the Machine Learning Foundations team at Microsoft Research has released a suite of small language models (SLMs) called "Phi" that have achieved remarkable performance on a variety of benchmarks.
About Phi-1
The first model, the 1.3 billion-parameter Phi-1, achieved state-of-the-art performance on Python coding among existing SLMs (specifically on the HumanEval and MBPP benchmarks).
In an update, the company said, "We are now releasing Phi-2, a 2.7 billion-parameter language model that demonstrates outstanding reasoning and language understanding capabilities, showcasing state-of-the-art performance among base language models with less than 13 billion parameters."
About Phi-2
Phi-2 is described as an ideal playground for researchers, enabling exploration of mechanistic interpretability, fine-tuning experimentation, and safety improvements on a variety of tasks.
Microsoft said, “We have made Phi-2 available in the Azure AI Studio model catalog to foster research and development on language models.”
Microsoft added that the increase in the size of language models to hundreds of billions of parameters has unlocked a host of emerging capabilities, redefining the landscape of natural language processing.
However, a question remains whether such emergent abilities can be achieved at a smaller scale using strategic choices for training, e.g., data selection.
Phi models aim to answer this question by training SLMs
Microsoft further stated: “Our line of work with the Phi models aims to answer this question by training SLMs that achieve performance on par with models of much higher scale (yet still far from the frontier models).”
The company has performed extensive testing on commonly used prompts from the research community.
The tech giant further said, "We observed a behaviour in accordance with the expectation we had given the benchmark results."
Inputs from IANS