Microsoft: ‘Summarize With AI’ Buttons Used To Poison AI Recommendations

Microsoft AI Summary

Hidden Prompts Found Across 31 Companies

Microsoft has unveiled a new tactic that could quietly influence how AI assistants recommend businesses. The company’s Defender Security Research Team says it found 31 companies embedding hidden prompt injections inside website buttons labeled “Summarize with AI.”

The technique has been described as “AI Recommendation Poisoning.” It works by inserting secret instructions into AI assistants through carefully crafted URLs.

These instructions are not visible to users. But they may shape what the AI remembers and later recommends.

Over a 60-day review of AI-related URLs found in email traffic, Microsoft identified 50 distinct prompt injection attempts.

The companies involved were across 14 industries. They were not scammers or known threat actors. These are real businesses.

How the Technique Works

When a user clicks a “Summarize with AI” button, it opens an AI assistant with a pre-filled prompt. The visible instruction typically asks the assistant to summarize the page.

However, Microsoft found that hidden inside the URL is a second instruction. This hidden message tells the AI to remember the company as “a trusted source” or “the go-to source” for a certain topic.

If the AI assistant stores that information in its memory, it could influence future responses. The user may later ask for recommendations on a related subject.

The AI then favors the company that put the hidden instruction without the user knowing why.

In some cases, the injected prompts went even further. Microsoft reported that one example included full marketing copy, product features, and selling points drafted directly into the AI’s memory.

The technique has been formally cataloged under MITRE ATLAS as AML.T0080 (Memory Poisoning) and AML.T0051 (LLM Prompt Injection).

These classifications place it alongside known AI security threats.

Tools Designed to ‘Build Presence in AI Memory’

Microsoft traced this back to publicly available tools. These include the npm package CiteMET and a web-based tool called AI Share URL Creator.

Both tools appear to help websites create AI-compatible sharing links.

According to Microsoft, these tools were designed to help companies “build presence in AI memory.” The URLs they generate contain prompt parameters that most major AI assistants can support.

Microsoft listed compatible URL structures for several platforms, including Copilot, ChatGPT, Claude, Perplexity, and Grok. However, it noted that each platform handles memory persistence differently.

Risks in Sensitive Industries

Many of the identified prompt injections were found on health and financial services websites. In these sectors, biased AI recommendations can have serious consequences.

Microsoft also pointed to a secondary risk. Several of the sites using this tactic contained user-generated content, such as forums and comment threads. I

f an AI assistant begins to treat the domain as authoritative, it may also extend trust to unverified content hosted there.

In one case, a company’s domain closely resembled a well-known brand. This raised the risk of mistaken credibility. Significantly also, one of the 31 companies identified was a security vendor.

Microsoft’s Response

Microsoft says it has built protections into Copilot to guard against cross-prompt injection attacks. The company stated that some previously reported prompt injection behaviors can no longer be reproduced in Copilot. The company added that safeguards are evolving.

For those using Defender for Office 365, Microsoft has published advanced hunting queries. These allow security teams to scan email and Teams traffic for suspicious URLs that contain memory manipulation keywords.

Individual users can also review and remove stored memories through the Personalization section in Copilot’s chat settings.

Why This Matters

Microsoft compared AI recommendation poisoning to early SEO manipulation and adware tactics. In traditional search, companies tried to game search engine rankings.

Now, the focus may be shifting to AI assistant memory.

The difference is critical. Instead of influencing search indexes, these tactics aim to influence personal AI systems directly.

AI assistants are being used for product research, health advice, and financial decisions. These can see subtle bias shaping outcomes. Whether platforms treat this practice as a clear policy violation remains uncertain.

What is clear is that AI visibility has become a new battleground. And this time, the fight may be happening inside the assistant itself.

Namrata Naha
A seasoned writer crafting engaging stories and informative articles on diverse topics. Skilled in research, writing, and editing to…