Researchers poison their own data so that, when stolen for AI use, it ruins the results
  • Researchers from China and Singapore proposed AURA (Active Utility Reduction via Adulteration) to protect GraphRAG systems
  • AURA deliberately poisons proprietary knowledge graphs so stolen data produces hallucinations and wrong answers
  • Correct outputs require a secret key; tests showed ~94% effectiveness in degrading stolen KG utility
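The summary above does not spell out how AURA's keyed adulteration works internally, so the snippet below is only a hypothetical sketch of the general idea, not the paper's method, and every function and variable name in it is invented for illustration. It derives a secret permutation of the knowledge graph's entity vocabulary from a key, rewrites each triple's object through that permutation (so a stolen graph answers queries with the wrong entities), and lets only the key holder invert the mapping to recover correct triples:

```python
import random

def key_permutation(entities, key):
    """Derive a secret permutation of the entity vocabulary from the key."""
    shuffled = list(entities)
    random.Random(key).shuffle(shuffled)  # key acts as the RNG seed
    return dict(zip(entities, shuffled))

def adulterate(kg, entities, key):
    """Poison the graph: swap every object entity for its key-permuted stand-in."""
    sigma = key_permutation(entities, key)
    return {(s, p, sigma[o]) for s, p, o in kg}

def purify(poisoned, entities, key):
    """Only the key holder can invert the permutation and recover true triples."""
    sigma = key_permutation(entities, key)
    inverse = {fake: real for real, fake in sigma.items()}
    return {(s, p, inverse[o]) for s, p, o in poisoned}
```

Because the permutation is injective and derived deterministically from the key, poisoning is lossless for the owner (a round trip through `adulterate` and `purify` restores the original graph) while an attacker without the key retrieves plausible-looking but wrong triples.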

Researchers from universities in China and Singapore have come up with a creative way to deter the theft of data used in generative AI.

Among other things, two elements are central to today's large language models (LLMs): training data and retrieval-augmented generation (RAG).

