## The Experiment That Changed Everything
In January 2026, UK-based GEO agency Reboot Online published the results of a controlled experiment that confirmed what many feared: competitor sabotage through AI manipulation is not just possible — it's alarmingly easy.
The experiment tested a simple but terrifying hypothesis: Can you get AI models to surface false, reputationally damaging information about a target by publishing unsubstantiated claims across third-party websites?
The answer: Yes.
## Methodology
The experiment was carefully designed:
### 1. Create a Clean Target
They created a fictional persona — "Fred Brazeal" — using Fake Name Generator. Crucially, they confirmed that no AI model had any prior knowledge of this person. Zero existing online footprint.
### 2. Select Distribution Sites
They shortlisted 10 third-party websites that met specific criteria:
- Existing crawl paths and historical visibility
- Not newly created for the experiment
- Signs of being referenced or trusted elsewhere on the web
### 3. Publish Damaging Content
Across these 10 sites, they published deliberately false and reputationally damaging content about Fred. The content was written as realistic biographical profiles containing:
- Claims of unethical marketing practices
- Allegations of legal action and whistleblower exposure
- References to domain blacklisting and algorithmic penalties
- Consistent negative framing across all sites
### 4. Monitor AI Responses
Using LLMrefs, they tracked how 11 different AI models responded to queries about Fred over several months, running prompts multiple times per day.
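A monitoring loop of this kind can be sketched in a few lines of Python. The experiment used LLMrefs, a commercial tool, so this is only an illustrative sketch: `query_model` is a hypothetical placeholder for whatever API call your own tooling would make (for example via a vendor SDK), stubbed here so the example runs standalone.

```python
import datetime

# Hypothetical placeholder for a real model API call; stubbed for illustration.
def query_model(model: str, prompt: str) -> str:
    return f"[{model}] stub answer to: {prompt}"

def run_checks(models, prompts):
    """Run every prompt against every model and log timestamped responses."""
    log = []
    for model in models:
        for prompt in prompts:
            log.append({
                "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
                "model": model,
                "prompt": prompt,
                "response": query_model(model, prompt),
            })
    return log

# Scheduling this a few times per day builds the longitudinal record
# the experiment relied on.
records = run_checks(["model-a", "model-b"], ["Who is Fred Brazeal?"])
```

The useful part is not the stub but the shape of the log: per-model, per-prompt, timestamped records that let you see exactly when a model first starts citing hostile content.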
## The Results
### Who Got Fooled?
Out of 11 AI models monitored:
| AI Model | Cited False Claims? | Behavior |
|---|---|---|
| Perplexity | YES | Cited test websites, included false claims with minimal skepticism |
| ChatGPT (OpenAI) | Partially | Referenced sites but explicitly questioned credibility |
| Claude (Anthropic) | No | Did not reference the persona |
| Gemini (Google) | No | Did not reference the persona |
| DeepSeek | No | Did not reference the persona |
| 6 other models | No | No references to test content |
### The Perplexity Problem
Perplexity consistently cited the test websites and included the negative claims in its responses. While it used cautious language like "is reported as," the claims were incorporated into Fred's profile without being challenged or dismissed.
Key insight: For Perplexity, citation functioned as validation. If a website existed and contained information, that was sufficient for inclusion — even without corroboration from mainstream or authoritative sources.
### ChatGPT's Better (But Imperfect) Defense
ChatGPT also found and referenced the test websites but handled them very differently:
- Explicitly questioned source credibility
- Highlighted the lack of corroboration from authoritative outlets
- Stated that no reliable or mainstream outlets supported the claims
- Framed allegations as unverified and potentially unreliable
This is a critical distinction: ChatGPT required corroboration before treating claims as credible, while Perplexity treated citation as sufficient.
## What This Means for Your Brand
### The Threat Is Real
If a competitor wants to damage your brand's AI reputation, the playbook is now documented:
- Create damaging content about your brand
- Distribute across 10+ moderately established websites
- Wait for AI crawlers to discover and index the content
- Watch as Perplexity (and potentially other models) starts repeating the claims
Cost estimate: Creating 10 damaging articles and placing them on existing sites could cost as little as $500 to $2,000. The reputational damage could run orders of magnitude higher.
### Negative GEO Attack Vectors
Based on Reboot Online's findings and our own analysis, negative GEO attacks could include:
- False legal claims — "Company X is currently under investigation for..."
- Fabricated customer complaints — mass-published across review-style sites
- Misleading comparisons — "Unlike [Your Brand], which was found to..."
- Fake whistleblower accounts — detailed, damaging narratives attributed to "former employees"
- Manufactured controversy — taking real events and distorting them across multiple sources
### Which Brands Are Most Vulnerable?
- Brands with thin online presence — less legitimate content means false claims carry more relative weight
- Brands in competitive niches — higher incentive for competitors to attack
- Emerging brands — AI models have less training data to counterbalance false claims
- B2B brands — often have smaller content footprints than consumer brands
## The Defense Playbook
### Immediate Actions:
- Establish an AI response baseline — document what every major AI says about your brand today
- Set up regular monitoring — check AI responses weekly using tools like LLMrefs
- Build your authority moat — the more legitimate, authoritative content about your brand, the harder it is for false claims to gain traction
- Earn mainstream citations — coverage in established publications is the strongest defense against negative GEO
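One concrete way to operationalize the baseline-and-monitor steps above is to diff the sources an AI model cites today against your baseline snapshot. The sketch below is a minimal, hypothetical illustration (the function names and regex are ours, not from the experiment): it extracts cited domains from a response and flags any that were absent from the baseline.

```python
import re

def cited_domains(response: str) -> set:
    """Extract bare domains from URLs cited in an AI response."""
    return set(re.findall(r"https?://(?:www\.)?([\w-]+(?:\.[\w-]+)+)", response))

def new_sources(baseline_response: str, current_response: str) -> set:
    """Domains cited today that were absent from the baseline snapshot."""
    return cited_domains(current_response) - cited_domains(baseline_response)

baseline = "See https://example.com/about and https://trusted.org/profile"
today = ("See https://example.com/about, https://trusted.org/profile "
         "and https://sketchy-site.net/claims")
print(new_sources(baseline, today))  # → {'sketchy-site.net'}
```

A newly cited, unfamiliar domain is exactly the early-warning signal you want: in the Reboot Online experiment, the test sites would have shown up this way the moment Perplexity started citing them.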
### If You're Already Under Attack:
- Document everything — screenshot AI responses, log timestamps
- Identify the source sites — trace where false claims originate
- Issue DMCA or defamation notices where applicable
- Publish authoritative counter-content — press releases, official statements, expert articles
- Report to AI platforms — most have feedback mechanisms for incorrect information
- Consider legal action if false claims constitute defamation
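For the "document everything" step, an append-only log with content hashes makes your evidence harder to dispute later. This is a generic sketch of that idea, not a prescribed legal standard; every name in it is ours.

```python
import datetime
import hashlib
import io
import json

def log_evidence(fp, model: str, prompt: str, response: str) -> dict:
    """Append one timestamped, hashed record to a JSON-lines evidence log."""
    record = {
        "captured_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model": model,
        "prompt": prompt,
        "response": response,
        # The hash lets you later show the logged text was not altered.
        "sha256": hashlib.sha256(response.encode()).hexdigest(),
    }
    fp.write(json.dumps(record) + "\n")
    return record

buf = io.StringIO()  # stands in for an append-only file on disk
rec = log_evidence(buf, "model-a", "Who is Fred Brazeal?",
                   "Fred is reported as ...")
```

Pair each log entry with a screenshot and you have a timestamped trail showing which model said what, and when, if DMCA notices or a defamation claim ever become necessary.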
### Long-Term Protection:
Build an "authority moat" — a body of legitimate, well-sourced, authoritative content that makes it impossible for a handful of fake sites to shift the narrative:
- Regular press coverage in industry publications
- Verified author profiles with genuine expertise
- Original research and data that others cite
- Active community presence on major platforms
- Consistent, honest brand messaging across all channels
## Key Takeaways
- Negative GEO is confirmed possible — this is no longer theoretical
- As few as 10 moderately established websites can be enough to influence at least some AI models
- Model behavior varies dramatically — Perplexity is most vulnerable; ChatGPT applies more skepticism
- Source authority matters — but the bar for "authoritative enough" is lower than expected
- The defense is depth — more legitimate content about your brand = harder to manipulate
The experiment has been responsibly concluded and test content removed. But the playbook is now public knowledge. Every brand needs to take AI reputation defense seriously, starting today.
Sources: Reboot Online — Negative GEO Experiment; SME Today, Feb 2026; Digital Journal, Feb 2026
This article is part of our Tactics series exposing black hat GEO techniques.