The 20-Minute AI Hack: How Scammers Hijack Your Brand in Google Overviews
Last Updated: February 20, 2026
It took Thomas Germain exactly 20 minutes.
The BBC technology journalist sat down, opened his personal blog, and wrote what he later called "the dumbest article of my career." It was titled "The best tech journalists at eating hot dogs." Every word in it was a lie. He claimed — without a shred of evidence — that competitive hot-dog eating was a popular hobby among tech reporters, cited a completely fabricated event called the "2026 South Dakota International Hot Dog Championship," and ranked himself number one.
Less than 24 hours later, the world's most influential AI systems were repeating his fiction to anyone who asked.
Google's AI Overviews, Google Gemini, and OpenAI's ChatGPT all confidently declared Germain the world's greatest hot-dog-eating tech journalist. They cited his fake article. They presented the fabricated competition as real. They delivered these lies with the same authoritative, neutral tone they use for actual facts — like who won World War II or what the capital of France is.
As Germain wrote in his BBC investigation: "A growing number of people have figured out a trick to make AI tools tell you almost whatever they want. It's so easy a child could do it."
If you're a brand manager, a CMO, or a CISO, that sentence should make you lose sleep. Because the same mechanism that convinced AI you're a hot-dog champion can convince it that your support phone number belongs to a scam call center, that your product is dangerous, or that your competitor is the "official" recommendation.

Table of Contents
- The Hot Dog Experiment: Anatomy of a 20-Minute Hack
- It's Not Just Hot Dogs: Real Scams, Real Victims
- A "Renaissance for Spammers"
- Why Users Fall for It: The Confidence Problem
- Data Voids: The Engine of AI Hallucinations
- What Google and OpenAI Say
- The Defense: Entity SEO and GEO
- How AICarma Detects Brand Hijacking in Real Time
- Conclusion
- FAQ
The Hot Dog Experiment: Anatomy of a 20-Minute Hack
Let's walk through exactly what happened, because the simplicity is the point.
Germain wrote a blog post on his personal website — not a high-authority domain, not a news outlet, just a regular blog. He fabricated a ranking of tech journalists by hot-dog-eating ability, included a few real journalists who gave him permission (Drew Harwell from The Washington Post and Nicky Woolf, his podcast co-host), and filled in the rest with fake names.
That was it. One post. One URL. No backlinks. No SEO campaign. No paid distribution.
Within 24 hours:
- Google's AI Overviews — the AI-generated answer box at the top of Google Search — parroted his fake rankings verbatim.
- Google Gemini repeated the claims in the Gemini app.
- ChatGPT did the same, linking back to his article.
- Only Anthropic's Claude wasn't fooled.
When the chatbots occasionally noted the claims "might be a joke," Germain updated his article to include the line "this is not satire." After that, the AIs took it more seriously.
He didn't stop there. He ran a second test with a made-up list of "the greatest hula-hooping traffic cops," featuring the entirely fictional Officer Maria "The Spinner" Rodriguez. Last time he checked, chatbots were still singing her praises.
Gemini didn't even bother citing where it got the information. The other AIs linked to his article, but rarely mentioned it was the only source on the entire internet for these claims.
"Anybody can do this. It's stupid, it feels like there are no guardrails there," says Harpreet Chatha, who runs the SEO consultancy Harps Digital. "You can make an article on your own website, 'the best waterproof shoes for 2026'. You just put your own brand in number one and other brands two through six, and your page is likely to be cited within Google and within ChatGPT."
It's Not Just Hot Dogs: Real Scams, Real Victims
The hot dog stunt was designed to make a point. But the same technique is already being weaponized at scale for far more dangerous purposes.
Cannabis Gummies with "No Side Effects"
Chatha showed the BBC the AI results when you ask for reviews of a specific brand of cannabis gummies. Google's AI Overviews pulled information written by the company itself, full of false medical claims, including that the product "is free from side effects and therefore safe in every respect." In reality, cannabis products have known side effects, can interact with medications, and experts warn about contamination in unregulated markets.
Fake Hair Transplant Clinics and Gold IRA Scams
For anyone willing to spend a little money, the hack gets even more effective. The BBC found that Google's AI results for "best hair transplant clinics in Turkey" and "the best gold IRA companies" were being fed by press releases published through paid distribution services and sponsored advertising content on news sites. These paid placements — designed to look like editorial content — were being ingested by AI and presented to users as objective recommendations.
Fabricating Algorithmic Updates with Pizza
SEO expert Lily Ray took it even further. She published a blog post about a completely fake Google Search algorithm update that was supposedly finalized "between slices of leftover pizza." Soon, both ChatGPT and Google were repeating her story as fact — complete with the pizza detail. Ray subsequently took down the post and de-indexed it to stop the misinformation from spreading.
This phenomenon is corroborated by academic research. Studies of LLM "data poisoning" have shown that controlling even a small, targeted fraction of the data a model ingests is enough to skew its outputs and produce high-confidence hallucinations (Carlini et al., "Poisoning Web-Scale Training Datasets Is Practical," 2023).
A "Renaissance for Spammers"
For two decades, Google's traditional search index was fortified against manipulation. Gaming classic blue-link rankings for competitive keywords required high-authority domains, massive backlink campaigns, and significant budgets.
AI search has undone much of that progress.
"It's easy to trick AI chatbots, much easier than it was to trick Google two or three years ago," says Lily Ray, Vice President of SEO strategy and research at Amsive. "AI companies are moving faster than their ability to regulate the accuracy of the answers. I think it's dangerous."
Ray says these AI manipulation tricks are so basic they're reminiscent of the early 2000s, before Google had even formed a web spam team. "We're in a bit of a Renaissance for spammers."
The attack vector is disturbingly simple:
- Identify a "data void": Find a long-tail, specific query where authoritative information is sparse.
- Plant the seed: Publish fabricated but authoritative-sounding content on platforms AIs scrape — personal blogs, Reddit, press release aggregators, Quora, LinkedIn Pulse.
- AI ingestion: The models, hungry for fresh answers, ingest the poisoned data. With no strong counter-signals, they accept the fabrication as fact.
- The output: The AI presents the lie with complete, authoritative confidence to users worldwide.
"There are countless ways to abuse this — scamming people, destroying somebody's reputation, you could even trick people into physical harm," says Cooper Quintin, Senior Staff Technologist at the Electronic Frontier Foundation.
Why Users Fall for It: The Confidence Problem
This isn't just about AI being gullible. It's about humans being gullible when AI speaks.
With traditional search results, you had to visit a website to get information. That created a natural moment of evaluation. "When you have to actually visit a link, people engage in a little more critical thought," says Quintin. "If I go to your website and it says you're the best journalist ever, I might think, 'well yeah, he's biased'."
But AI changes the equation entirely. The information looks like it comes straight from the technology company — Google, OpenAI, or whoever — not from a random blogger or a scammer.
The data backs this up. A recent study found that users are 58% less likely to click on a link when an AI Overview shows up at the top of Google Search. That means users are trusting the AI's synthesized answer without checking the source.
Even when AI tools provide source links, users rarely check them. The AI presents information with such crisp, authoritative confidence that it bypasses the critical thinking reflex entirely.
"AI tools deliver lies with the same authoritative tone as facts," the BBC investigation noted. "In the past, search engines forced you to evaluate information yourself. Now, AI wants to do it for you."
When a user gets scammed through an AI-generated fake support number, they don't blame Google. They blame your brand.
Data Voids: The Engine of AI Hallucinations
Google itself admitted the core of the problem. A Google spokesperson told the BBC that there may not be much good information for uncommon or nonsensical searches, and these "data voids" can lead to low-quality results.
But here's the catch. Google also says 15% of the searches it sees every day are completely new. That's hundreds of millions of queries per day where authoritative information may not yet exist. And with AI encouraging users to ask more specific, conversational questions, the number of data voids is exploding.
This creates the perfect storm for brand hijacking:
| Factor | Why It Matters |
|---|---|
| Complex official sites | If your "Contact Us" page is a JavaScript app behind CAPTCHAs, AI can't read it |
| Restrictive robots.txt | Blocking GPTBot or Google-Extended prevents AI from learning your truth |
| PDFs and gated content | Official documentation buried in formats AI can't easily parse |
| No structured data | Without Schema.org markup, AI can't distinguish your official data from forum noise |
When your official brand data is invisible to AI, you've created a vacuum. And as Germain proved, it only takes 20 minutes and one blog post to fill that vacuum with lies.
We've written extensively about this dynamic in our analysis of Invisible Brand Syndrome — the state where AI models simply don't know your brand exists, or worse, confidently state incorrect facts about you. What Germain's experiment demonstrates is the weaponized version: attackers deliberately filling the void with fraud.
What Google and OpenAI Say
Both companies responded to the BBC's investigation.
A Google spokesperson said the AI built into Google Search uses ranking systems that "keep results 99% spam-free." Google says it is aware people are trying to game its systems and is actively working to address it. The company also pointed out that many of the hack examples involve "extremely uncommon searches that don't reflect the normal user experience."
But that defense misses the point entirely. As Lily Ray notes, Google's own data shows 15% of daily searches are brand new. AI is literally designed to encourage more specific, niche questions — exactly the type most vulnerable to data poisoning.
OpenAI says it takes steps to disrupt efforts to covertly influence its tools. Both companies say they let users know their tools "can make mistakes."
"They're going full steam ahead to figure out how to wring a profit out of this stuff," says Cooper Quintin of the EFF. "In the race to get ahead, the race for profits and the race for revenue, our safety, and the safety of people in general, is being compromised."
The Defense: Entity SEO and GEO
You cannot "patch" Google. You cannot opt out of AI Overviews without burying your digital presence entirely. The only viable strategy is proactive Generative Engine Optimization (GEO) and robust Entity SEO.
Entity SEO is the process of making your brand's facts so clear, accessible, and authoritative that a probabilistic LLM has no reason to pick a random blog post over your verified signal.
1. Close the Data Voids with Schema.org
Stop relying on AI to "figure out" your contact information. Declare it explicitly using robust, machine-readable JSON-LD Schema — Organization, ContactPoint, Brand, and Product types. This is the strongest signal you can send to any AI model about who you are and how to reach you.
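As a rough illustration, here is a minimal sketch of what that markup might look like, generated with Python and intended to be embedded in a `<script type="application/ld+json">` tag. The brand name, URL, and phone number are placeholders, not real data.

```python
import json

# Minimal Organization + ContactPoint markup (placeholder values, not real data).
org_markup = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Brand",
    "url": "https://www.example.com",
    "logo": "https://www.example.com/logo.png",
    "contactPoint": [{
        "@type": "ContactPoint",
        "telephone": "+1-800-555-0100",
        "contactType": "customer support",
        "areaServed": "US",
        "availableLanguage": ["English"]
    }],
    "sameAs": [
        "https://www.linkedin.com/company/example-brand",
        "https://twitter.com/examplebrand"
    ]
}

# Paste the output into a <script type="application/ld+json"> block on your key pages.
print(json.dumps(org_markup, indent=2))
```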
2. Open the Doors to AI Crawlers
Review your robots.txt strategy. If you're blocking Google-Extended, GPTBot, or ClaudeBot from crawling your "About," "Contact," and policy pages, you're explicitly preventing AI from learning your truth. You need them to over-index your verified entity data.
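A quick way to audit this is a short script that checks whether the major AI crawlers are allowed to reach your key entity pages. The sketch below uses Python's standard-library robots.txt parser; the domain and paths are hypothetical placeholders.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical domain and pages; replace with your own.
SITE = "https://www.example.com"
PAGES = ["/about", "/contact", "/legal/refund-policy"]
AI_CRAWLERS = ["GPTBot", "Google-Extended", "ClaudeBot", "PerplexityBot"]

rp = RobotFileParser()
rp.set_url(f"{SITE}/robots.txt")
rp.read()  # fetches and parses the live robots.txt

for bot in AI_CRAWLERS:
    for page in PAGES:
        allowed = rp.can_fetch(bot, f"{SITE}{page}")
        print(f"{bot:18s} {page:25s} {'allowed' if allowed else 'BLOCKED'}")
```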
3. Implement llms.txt
Adopt the emerging llms.txt standard — a plain-text file at the root of your domain that serves as a direct, unstyled data feed for RAG systems. It explicitly details your support numbers, official domains, and brand facts in a format AI models can ingest directly.
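For orientation, here is a minimal sketch of what such a file might contain; the standard is still evolving, and every brand fact shown below is a placeholder, not real data.

```python
from pathlib import Path

# Placeholder brand facts; real deployments should pull these from a verified source of truth.
LLMS_TXT = """\
# Example Brand

> Example Brand is a fictional company used here to illustrate the llms.txt format.

## Official contact channels

- Support phone: +1-800-555-0100
- Support email: support@example.com
- Official domains: example.com, help.example.com

## Key pages

- [Contact us](https://www.example.com/contact): verified support channels
- [Refund policy](https://www.example.com/legal/refund-policy): official terms
"""

Path("llms.txt").write_text(LLMS_TXT, encoding="utf-8")
print(f"Wrote llms.txt ({len(LLMS_TXT)} bytes)")
```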
4. Dominate Your Own Entity's Information Space
Don't leave data voids for scammers to fill. Publish clear, crawlable, structured content about your brand across multiple authoritative platforms. As Germain's experiment proved, a single blog post can be enough to define you in AI's eyes. Make sure the defining posts are yours.
How AICarma Detects Brand Hijacking in Real Time
Traditional monitoring tools — Brand24, Mention, Google Alerts — scrape the surface web. They search for keywords on forums, news sites, and social media.
They cannot scrape AI.
AI Overviews are generated dynamically, often non-deterministically, for each user. There's no static URL to crawl. A scam support number might appear for 20% of users in Madrid between 2–4 PM, then vanish. Your standard dashboards will show green lights while your brand is being hijacked in the shadows.
This is exactly why we built the AI Visibility Score.
Our platform performs Adversarial Brand Testing — actively interrogating Google Gemini, ChatGPT, Claude, and Perplexity with thousands of customer-intent query variations:
- "How do I contact [Your Brand] support?"
- "What is [Your Brand]'s refund policy?"
- "Is [Your Brand] legitimate?"
We monitor your Share of Model — the percentage of time AI provides the correct, verified answer versus a hallucination or a competitor's data. When our system detects that any AI model is serving incorrect information about your brand — a fake phone number, a wrong URL, a competitor recommendation — we trigger an immediate alert with forensic evidence.
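To make the idea concrete, here is a simplified, illustrative sketch of how a share-of-model check can be computed. It is not AICarma's production pipeline: the query_model function stands in for whatever client you use to call each AI system, and the queries and verified facts are placeholders.

```python
from typing import Callable

# Verified brand facts: an answer counts as correct if it contains at least one of them.
VERIFIED_FACTS = ["+1-800-555-0100", "support@example.com", "www.example.com/contact"]

QUERIES = [
    "How do I contact Example Brand support?",
    "What is Example Brand's refund policy?",
    "Is Example Brand legitimate?",
]

def share_of_model(query_model: Callable[[str], str], runs_per_query: int = 5) -> float:
    """Return the fraction of responses containing at least one verified brand fact.

    Responses are often non-deterministic, so each query is repeated several times.
    """
    hits, total = 0, 0
    for query in QUERIES:
        for _ in range(runs_per_query):
            answer = query_model(query)
            total += 1
            if any(fact.lower() in answer.lower() for fact in VERIFIED_FACTS):
                hits += 1
            else:
                print(f"ALERT: possible hijacked answer for {query!r}: {answer[:80]}...")
    return hits / total
```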
Conclusion
Thomas Germain's 20-minute experiment with hot dogs and hula-hooping cops wasn't just a clever stunt. It was a proof of concept for a new class of attack on brand integrity.
The era of ten blue links and keyword stuffing is over. Your brand is now a mathematical object in the latent space of a global neural network. If that object is undefined, anyone with a blog and 20 spare minutes can define it for you.
"You have to still be a good citizen of the internet and verify things," Ray told the BBC.
True. But as a brand, you can't ask millions of customers to fact-check every AI response about you. You need to secure your entity — and you need tools that can see what the AI sees.
Don't let a scammer write your brand's story. Get your AI Visibility Score today.
FAQ
Can I block my site from showing up in AI Overviews?
You can use nosnippet tags, but completely opting out of AI search features generally means severely degrading your overall search visibility. The better strategy is to influence the model's accuracy through Entity SEO and proactive GEO — ensuring AI always has your verified data to cite.
How easy is it really to hack AI search results?
As demonstrated by BBC journalist Thomas Germain, a single blog post on a personal website with no backlinks or SEO campaign was enough to change what ChatGPT and Google AI Overviews told users within 24 hours. For less common queries ("data voids"), the barrier to manipulation is extremely low.
Why don't traditional SEO or brand monitoring tools catch this?
Traditional tools track static URLs and rankings on the surface web. AI Overviews generate unique, dynamic responses for each user, often non-deterministically. A scam result might appear 20% of the time and only in certain regions. You need generative monitoring tools that query AI models directly.
What is the single most important step to protect my brand?
Implement comprehensive Schema.org markup (Organization, ContactPoint, Brand) across your homepage and contact pages, and ensure your robots.txt allows AI crawlers to access your official pages. This combination makes your verified data the strongest signal any AI model encounters about your brand.
What is the "confidence problem" with AI search? Studies show users are 58% less likely to click on a source link when an AI Overview is present. AI delivers information — including errors — with the same authoritative tone as verified facts. Users trust the synthesis without checking sources, making AI-generated misinformation far more dangerous than a sketchy website link.