
I’m an AI engineer, but I don’t trust artificial intelligence yet: here’s what we should do to change that



LLMs have been plagued by hallucinations from the start. Developers are pouring enormous amounts of money and time into improving these models, yet the problem persists: hallucinations are rife. In fact, some of the newest models hallucinate even more than their predecessors, as OpenAI acknowledged at the recent launch of o3 and o4-mini.

Not only do these programs hallucinate, but they also remain essentially ‘black boxes’. Hallucinations are hard to defend against because they arise from chance: a model samples each token from a probability distribution, so an answer only needs to seem plausible, not be correct. That serves some basic use cases but demands extensive human oversight, and the errors remain imperceptible to anyone who is not a subject-matter expert.
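To make that ‘chance’ point concrete, here is a minimal sketch in Python. It is not any vendor’s actual decoder, and the prompt and logit values are invented for illustration; it only shows the mechanism the paragraph describes: each next token is drawn at random from a softmax distribution, so a fluent wrong answer can be nearly as likely as the right one.

# Illustrative sketch of temperature sampling, not a real model's decoder.
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Softmax over the logits, then draw one token at random."""
    scaled = {tok: score / temperature for tok, score in logits.items()}
    max_s = max(scaled.values())
    exps = {tok: math.exp(s - max_s) for tok, s in scaled.items()}  # numerically stable softmax
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    r = random.random()
    cumulative = 0.0
    for tok, p in probs.items():
        cumulative += p
        if r < cumulative:
            return tok
    return tok  # fallback for floating-point rounding at the boundary

# Hypothetical logits for the prompt "The capital of Australia is ...":
# the wrong answer "Sydney" scores nearly as high as the right one, so the
# sampler will sometimes assert it with the same fluent confidence.
logits = {"Canberra": 2.1, "Sydney": 1.9, "Melbourne": 0.4}
picks = [sample_next_token(logits) for _ in range(10_000)]
print({tok: picks.count(tok) for tok in logits})  # roughly 50% / 41% / 9%

Lowering the temperature sharpens the distribution and makes the output more deterministic, but it does not make the underlying probabilities any more grounded in fact, which is why hallucinations survive every such tuning knob.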
