Channels

Sticky Video Player with Ad Breaks
📺 WATCH US NOW!

AI models can acquire backdoors from surprisingly few malicious documents

Scraping the open web for AI training data can have its drawbacks. On Thursday, researchers from Anthropic, the UK AI Security Institute, and the Alan Turing Institute released a preprint research paper suggesting that large language models like the ones that power ChatGPT, Gemini, and Claude can develop backdoor vulnerabilities from as few as 250 corrupted documents inserted into their training data.

That means someone tucking certain documents away inside training data could potentially manipulate how the LLM responds to prompts, although the finding comes with significant caveats.

The research involved training AI language models ranging from 600 million to 13 billion parameters on datasets scaled appropriately for their size. Despite larger models processing over 20 times more total training data, all models learned the same backdoor behavior after encountering roughly the same small number of malicious examples.

Read full article

Comments

Content Accuracy: Keewee.News provides news, lifestyle, and cultural content for informational purposes only. Some content is generated or assisted by AI and may contain inaccuracies, errors, or omissions. Readers are responsible for verifying the information. Third-Party Content: We aggregate articles, images, and videos from external sources. All rights to third-party content remain with their respective owners. Keewee.News does not claim ownership or responsibility for third-party materials. Affiliate Advertising: Some content may include affiliate links or sponsored placements. We may earn commissions from purchases made through these links, but we do not guarantee product claims. Age Restrictions: Our content is intended for viewers 21 years and older where applicable. Viewer discretion is advised. Limitation of Liability: By using Keewee.News, you agree that we are not liable for any losses, damages, or claims arising from the content, including AI-generated or third-party material. DMCA & Copyright: If you believe your copyrighted work has been used without permission, contact us at dcma@keewee.news. No Mass Arbitration: Users agree that any disputes will not involve mass or class arbitration; all claims must be individual.

Sponsored Advertisement