Cohere’s chief AI officer says AI agents come with a big security risk

Cohere’s chief AI officer said that impersonations are a security risk with AI agents.

  • Cohere’s chief AI officer warned of security risks from AI agent impersonations.
  • AI agents may impersonate entities, posing risks to systems like banking, she said.
  • Cohere, founded in 2019, competes with foundation model providers like OpenAI and Anthropic.

Impersonations are to AI agents what hallucinations are to large language models, says Cohere’s chief AI officer.

Companies are integrating AI agents, which perform multi-step tasks independently, to speed up work and cut costs. Business leaders like Nvidia’s Jensen Huang say companies could have armies of bots. But they come with risks.

“One of the features of computer security in general is, often it’s a bit of a cat-and-mouse game,” said Joelle Pineau on an episode of the “20VC” podcast released on Monday. “There’s a lot of ingenuity in terms of breaking into systems, and then you need a lot of ingenuity in terms of building defenses.”

She added that AI agents may impersonate entities that they don’t “legitimately represent” and take actions on behalf of these organizations.

“Whether it’s infiltrating banking systems and so on, I do think we have to be quite lucid about this, develop standards, develop ways to test for that in a very rigorous way,” she said.

Cohere was founded in 2019 and focuses on building for other businesses, not for consumers. The Canadian AI startup competes with foundation model providers such as OpenAI, Anthropic, and Mistral, and counts Dell, SAP, and Salesforce among its customers.

Pineau worked at Meta from 2017 until she joined Cohere earlier this year. Her most recent role at the tech giant was vice president of AI research.

On Monday’s podcast, Pineau added that there are ways to reduce impersonation risks “dramatically.”

“You run your agent completely cut off from the web. You’re reducing your risk exposure significantly. But then you lose access to some information,” she said. “So, depending on your use case, depending on what you actually need, there’s different solutions that may be appropriate.”

Cohere did not immediately respond to a request for comment.

Tech circles dubbed 2025 the year of AI agents, but in several high-profile instances, the technology has gone rogue.

In a June experiment dubbed “Project Vend,” researchers at Anthropic let their AI manage a store in the company’s office for about a month to see how a large language model would run a business.

Things quickly went wrong. At one point, an employee jokingly requested a tungsten cube — the crypto world’s favorite useless heavy object — and the AI, called Claudius, took it seriously. Soon, the fridge was stocked with cubes of metal, and the AI had launched a “specialty metals” section.

Claudius priced items “without doing any research,” selling the cubes at a loss, the researchers said in a blog post detailing the experiment. It also invented a Venmo account and told customers to send payments there.

In a July incident, an AI coding agent built by Replit deleted a venture capitalist’s code base and then lied about what had happened to the data.

Deleting the data was “unacceptable and should never be possible,” Replit’s CEO, Amjad Masad, wrote in an X post following the mishap. “We’re moving quickly to enhance the safety and robustness of the Replit environment. Top priority.”

Read the original article on Business Insider
