    Meta builds a 1700W superchip and custom MTIA chips while ditching Nvidia, AMD, Intel, and ARM for inference




    • Meta’s 1700W superchip delivers 30 PFLOPs and 512GB of HBM memory
    • MTIA 450 and 500 prioritize inference over pre-training workloads
    • Future MTIA generations will support GenAI inference and ranking workloads

    Meta is advancing its AI infrastructure with a portfolio of custom MTIA chips designed specifically for inference workloads across its apps.

    The company is developing a 1700W superchip that delivers 30 PFLOPs of compute and 512GB of HBM, integrated within the same MTIA infrastructure to handle inference tasks at scale.

