
    ChatGPT is getting smarter, but its hallucinations are spiraling




    • OpenAI’s latest AI models, o3 and o4-mini, hallucinate significantly more often than their predecessors
    • The models’ more complex step-by-step reasoning may be producing more confident inaccuracies
    • The high error rates raise concerns about AI reliability in real-world applications

    Brilliant but untrustworthy people are a staple of fiction (and history). The same correlation may hold for AI, according to OpenAI’s own testing, reported by The New York Times. Hallucinations, invented facts, and outright falsehoods have been part of AI chatbots since their creation. Improvements to the models should, in theory, make them appear less often.

    OpenAI’s latest flagship models, o3 and o4-mini, are meant to mimic human logic. Unlike their predecessors, which mainly focused on fluent text generation, o3 and o4-mini were built to think things through step by step. OpenAI has boasted that o1 could match or exceed the performance of PhD students in chemistry, biology, and math. But OpenAI’s report highlights harrowing results for anyone who takes ChatGPT responses at face value.




    Eric Hal Schwartz
