Multiverse launches compressed version of OpenAI language model designed to cut memory needs and lower AI infrastructure costs



Spanish AI company Multiverse Computing has released HyperNova 60B 2602, a compressed version of OpenAI’s gpt-oss-120B, and published it for free on Hugging Face.

The new version cuts the original model’s memory footprint from 61GB to 32GB, and Multiverse says it retains near-parity tool-calling performance despite the roughly 50% reduction in size.

In theory, this means a model that once required heavy infrastructure can run on far less hardware. For developers with tighter budgets or energy constraints, that’s a potentially huge advantage.

[Figure: Multiverse Computing HyperNova 60B 2602 performance. Image credit: Multiverse Computing]



By Desire Athow (desire.athow@futurenet.com)
