
‘No one knows what makes humans so much more efficient’: small language models modeled on Homo sapiens could help explain how we learn and improve AI efficiency — for better or for worse



Tech companies are shifting focus from building ever-larger large language models (LLMs) to developing small language models (SLMs) that can match or even outperform them.

Meta’s Llama 3 (400 billion parameters), OpenAI’s GPT-3.5 (175 billion parameters), and GPT-4 (an estimated 1.8 trillion parameters) are among the best-known large models, while Microsoft’s Phi-3 family ranges from 3.8 billion to 14 billion parameters, and Apple Intelligence “only” has around 3 billion parameters.
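Those raw parameter counts translate directly into hardware requirements, which is why the size gap matters. As a rough back-of-envelope sketch (the calculation below is illustrative, not from the article), storing a model's weights at 16-bit precision takes about two bytes per parameter, before counting any working memory for activations or context:

```python
# Rough sketch: memory needed just to store model weights at 16-bit
# precision (2 bytes per parameter). Parameter counts are those cited
# in the article; real deployments need additional memory for
# activations and context, so treat these as lower bounds.

BYTES_PER_PARAM_FP16 = 2

models = {
    "Apple Intelligence (on-device)": 3e9,
    "Microsoft Phi-3 (smallest)": 3.8e9,
    "Microsoft Phi-3 (largest)": 14e9,
    "OpenAI GPT-3.5": 175e9,
    "Meta Llama 3": 400e9,
    "OpenAI GPT-4 (estimated)": 1.8e12,
}

for name, params in models.items():
    gigabytes = params * BYTES_PER_PARAM_FP16 / 1e9
    print(f"{name}: ~{gigabytes:,.0f} GB of weights")
```

On these numbers, Apple's roughly 3-billion-parameter model needs on the order of 6 GB, small enough to run on a phone, while a 1.8-trillion-parameter model needs multiple terabytes of weights spread across server-grade hardware.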




Wayne Williams
