I taught ChatGPT to distrust itself, and suddenly it stopped hallucinating


Anyone who uses ChatGPT or other AI chatbots eventually encounters the confident hallucination. The AI will explain a nonexistent feature, invent a quote, or describe a restaurant that closed during the first Clinton administration.

That’s because large language models are designed to produce plausible-sounding responses quickly. That ability is what makes them useful, but it also creates the perfect conditions for hallucinations. The chatbot is built to keep the conversation moving smoothly, so when it hits a gap in its knowledge, it often fills that gap with fiction rather than admit uncertainty.




By Eric Hal Schwartz
