From Google to GPT: The Quiet Rise of AI Manipulation Tactics
The New SEO Race to Influence AI
Hello Riddlers!
📌 Your support keeps this newsletter alive —> [☕Buy Me a Coffee☕]💡 Want more from me? [SEO Strategy Course] | [Build Your Own SEO Tools (with Python & Agentic AI)] | [Premium SEO Mastermind]
Let’s be real
It was only a matter of time before SEOs started trying to game LLMs.
We’re an industry that still pulls off blackhat and spammy tactics in search, no matter how many systems are built to stop it, we adjust and innovate!
Which leaves you thinking, what chance do those new AI models have against the SEO spam? 😂
Through personal observation, reading papers, testing and regular discissions with industry peeps through my monthly SEO townhall style meetup [signup for next one here] I've been collecting the manipulation tactics that are already in play. Here's what I've found in no specific order.
The summary button
This one surprised me. Some sites now have a "Summarize with ChatGPT" button on their blogs.
Seems helpful, right?
You click it, it opens ChatGPT with a pre-written prompt to summarize the post. Harmless.
Except — every click trains ChatGPT to read your content. More clicks = more exposure = higher chance ChatGPT recommends or cites your blog later. 🤷
Listicles
Everyone knows about this. Google’s AI Overviews and chatgpt love listicles. For a while, this was the easiest way to get cited.
Well I tried getting listed on few of the main sources used for the keyword “best SEO newsletter”.
Did it work? Nop, you know why?
because I noticed Google was changing the cited sources over and over… it seems they were making adjustments to their systems to separate value bringing listicles and ones that are made specifically for ranking and getting cited.
These may also overlap with many of the “ranking volatility” you hear barry schwarz reporting on. Something was changing in how listicles are selected in search.
Does this tactic still work? It does, but only if your listicle delivers real value. If it’s just self-promotional fluff, don’t bother. Garbage in, garbage out.
Poisoning Attacks on LLMs
Anthropic has been putting out some great research lately, and this one is no exception "Poisoning Attacks on LLMs Require a Near-Constant Number of Poison Samples."
In this study, researchers wanted to find out how easy it is to sneak harmful content into AI training data.
What they discovered was pretty alarming: it only takes about 250 bad documents to compromise an AI model, no matter how big the model is or how much data it was trained on.
Even models trained on 20x more clean data were just as vulnerable.
Think of it like this: whether you’re poisoning a glass of water or an entire swimming pool, it takes roughly the same amount of poison to be dangerous. That’s a problem.
In SEO terms? All you need to influence an LLM is 250 mentions…
🎤 a mic drop moment.
Injecting Prompts
Remember this ancient antique SEO tactic called cloaking, where you add a white text on the page for ranking purposes but users cannot see it because it’s white?
Well, it evolved with times, and now some people embed hidden prompts in their content.
Modern AI reads text, images, and audio. Each one is a door for manipulation. Hide white text on a white background? The AI reads it. Shrink font to zero? The AI reads it. Embed a prompt inside an image? The AI reads that too. You’d never notice — but the AI processes every bit of it.
RAG Poisoning
LLM bots like ChatGPT use RAG (Retrieval Augmented Generation) which is just a fancy way of saying they search sources like Google or Bing to make their responses more accurate.
Well, researchers behind the “PoisonedRAG” paper found that you can trick a RAG system by injecting fake text into the sources the AI pulls from. They treated it like a math problem and figured out the exact wording that makes the AI give a specific wrong answer to a specific question.
Here’s how it works, the fake text is crafted to do two things at once:
First, it looks relevant enough that the system actually retrieves it.
Second, once the AI reads it, it steers the response toward whatever the attacker wants.
Think of it like slipping a fake document into a library. The title and keywords match the topic, so the librarian pulls it off the shelf. But the content inside is completely made up to push a specific narrative.
The scary part? Just five fake texts in a database of millions was enough to trick the AI 90% of the time. And when the researchers tested existing defenses? None of them held up.
Now think about this from an SEO perspective — many RAG systems pull from public sources that anyone can edit like Wikipedia. If five pieces of text can manipulate the output, imagine what a coordinated effort could do.
Reddit
LLM bots and Google AIO rely a lot on reddit in their citations and recommendations, and this of course opened another possibility for SEOs... “spam reddit”...
You may see some “success stories” circulating here and there, but the reality is, most of these attempts fail!
And That’s a Wrap (Almost 😄)
Obviously I just scratched the surface on this topic, but you know what I was thinking the whole time I’m writing this?
We're trying to figure out how to make LLMs harder to manipulate, when we still haven't figured that out for humans yet. 🤷🤷🤷
That’s that for today folks and see you next newsletter!
Support the Riddler!
Sign up for my newsletter if you’re not already.
Share the newsletter and invite your friends to signup. Help me reach 2k signups by end of 2025 please 🙂
Provide feedback on how I can make this newsletter better!!!
If you’re an SEO tool or an SEO service provider, consider sponsoring my newsletter. I’m also open to other partnership ideas as well.
Disclaimer: LLMs were used to assist in wording and phrasing this blog.


