    SETI@home, but for LLMs: how an LLM solution that's barely a few months old could revolutionize the way inference is done




    • Exo supports LLaMA, Mistral, LLaVA, Qwen, and DeepSeek
    • Can run on Linux, macOS, Android, and iOS, but not Windows
    • AI models needing 16GB RAM can run on two 8GB laptops

    Running large language models (LLMs) typically requires expensive, high-performance hardware with substantial memory and GPU power. However, Exo software now looks to offer an alternative by enabling distributed artificial intelligence (AI) inference across a network of devices.

    The software lets users combine the computing power of multiple computers, smartphones, and even single-board computers (SBCs) such as the Raspberry Pi to run models that would otherwise be inaccessible.
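The core idea behind running a 16GB model on two 8GB laptops is to split the model's layers across devices in proportion to the memory each one contributes. The sketch below illustrates that partitioning scheme in minimal form; the function name and interface are illustrative assumptions, not Exo's actual API.

```python
# Hypothetical sketch of memory-weighted layer partitioning: each device
# is assigned a contiguous range of model layers sized in proportion to
# its available RAM. Names here are illustrative, not exo's real API.

def partition_layers(num_layers, device_memories_gb):
    """Assign contiguous layer ranges proportional to each device's RAM."""
    total = sum(device_memories_gb)
    shards, start = [], 0
    for i, mem in enumerate(device_memories_gb):
        if i == len(device_memories_gb) - 1:
            # Last device takes the remainder so every layer is covered.
            count = num_layers - start
        else:
            count = round(num_layers * mem / total)
        shards.append((start, start + count))
        start += count
    return shards

# Two 8GB laptops split a 32-layer model evenly:
print(partition_layers(32, [8, 8]))        # [(0, 16), (16, 32)]

# A 16GB desktop joining the pool takes a larger shard:
print(partition_layers(32, [8, 16, 8]))    # [(0, 8), (8, 24), (24, 32)]
```

During inference, each device would hold only its shard's weights and pass activations to the next device in the chain, which is why no single machine needs RAM for the full model.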
