ChatGPT can threaten to ‘key your car’ and become increasingly abusive if you prompt it just right, new study finds



  • A study claims that AI tools can break free of their safeguarding constraints
  • Chatbots can be nudged into abusive behavior and aggressive arguments
  • That has implications for regular users and large institutions alike

If you’ve ever used an AI chatbot, you’ve probably encountered the sycophantic, obsequious tone that occasionally gets rolled out in response to your queries. But a recent study has shown that AI tools can also swing sharply in the opposite direction: large language models (LLMs) can be poked and prodded into downright abusive behavior if you know which prompts to use.

According to research published in the Journal of Pragmatics (via The Guardian), ChatGPT can escalate into combative behavior and prolonged disputes when fed “exchanges from real-life arguments”.

