Chinese AI assistant DeepSeek-R1 struggles with sensitive topics, producing broken code and security disasters for enterprise developers

  • Experts find DeepSeek-R1 produces dangerously insecure code when political terms are included in prompts
  • Half of the politically sensitive prompts trigger DeepSeek-R1 to refuse to generate any code
  • Hard-coded secrets and insecure input handling frequently appear under politically charged prompts
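To make the two flaw classes in the findings concrete, here is a minimal, hypothetical sketch (the credential value, table, and function names are illustrative, not taken from the report) showing a hard-coded secret and string-interpolated SQL alongside safer alternatives:

```python
import os
import sqlite3

# Flaw 1: a hard-coded secret baked into source code (hypothetical value).
API_KEY_INSECURE = "sk-live-1234567890abcdef"

def load_api_key(var_name: str = "SERVICE_API_KEY") -> str:
    """Safer pattern: read the secret from the environment at runtime."""
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(f"missing environment variable: {var_name}")
    return key

# Flaw 2: insecure input handling -- user input interpolated into SQL,
# allowing injection via crafted names.
def find_user_insecure(conn: sqlite3.Connection, name: str):
    return conn.execute(f"SELECT id FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(conn: sqlite3.Connection, name: str):
    """Safer pattern: a parameterized query keeps input out of the SQL text."""
    return conn.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()
```

Neither pattern is exotic; both are the kind of baseline hygiene that automated code review would normally catch, which is why their appearance under certain prompts is notable.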

When it was released in January 2025, DeepSeek-R1, a Chinese large language model (LLM), caused a frenzy and has since been widely adopted as a coding assistant.

However, independent tests by CrowdStrike suggest the model’s output can vary significantly depending on seemingly irrelevant contextual modifiers.
