
Fluent in Bullsh*t: Why AI Sounds Smart but Can’t Be Trusted
Understanding LLM Hallucinations
Large Language Models (LLMs) are powerful content generators, but they can sometimes “hallucinate,” outputting false or made-up information as if it were true. An AI hallucination occurs when a model produces text that sounds plausible and confident but is factually incorrect or unfounded. This isn’t a software bug per se; it stems from how LLMs work. These models are trained to predict the most likely next word in a sentence based on patterns in their training data, not to verify facts. As a result, if an LLM lacks knowledge or context on a topic, it may “fill in the gaps” with invented details that fit the prompt, confidently fabricating information to keep the answer fluent.
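To make that mechanism concrete, here is a minimal, purely illustrative sketch using the small open-source GPT-2 model via Hugging Face’s transformers library (an assumption chosen for demonstration, not one of the models discussed in the research below). It shows what “predicting the most likely next word” looks like in practice: a probability ranking over possible continuations, with no step that checks whether any of them are true.

```python
# Illustrative sketch: an LLM ranks possible next tokens by probability, not by truth.
# Uses the small open-source GPT-2 model purely for demonstration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Our product's most popular feature is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every vocabulary token at each position

# Probability distribution over the *next* token only.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    # Whatever scores highest gets generated, whether or not it is true of your product.
    print(f"{tokenizer.decode(int(token_id))!r:>12}  p={prob.item():.3f}")
```

The model will happily complete the sentence with a fluent-sounding feature name; nothing in this loop knows or cares whether that feature exists.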
For example, an AI might convincingly write that your product has a feature it actually doesn’t, or cite a nonexistent study to support a marketing claim. Such hallucinations pose obvious risks in any domain where accuracy matters, including marketing and business communications.
What Recent Research Reveals
New research underscores just how prevalent and tricky AI hallucinations can be. In a 2025 study, researchers tested six advanced LLMs with hundreds of clinical-style prompts that each contained one planted false detail. The results were eye-opening: the models fell for the fabricated detail between 50% and 82% of the time, often elaborating on the falsehood as if it were real (Source).
Even the best-performing model (a version of GPT-4o) still hallucinated the fake detail in about half of its responses under normal settings. This shows that even cutting-edge LLMs are highly susceptible to accepting and generating wrong information when it’s subtly embedded in a prompt.
By adding a special instruction prompt (a form of prompt engineering) telling the AI to be cautious and only use validated information, researchers reduced the hallucination rate from about 66% of responses to 44% on average. In the best case (GPT-4o), hallucinations dropped from 53% to 23% of outputs with the mitigation prompt. This demonstrates that careful prompt design can significantly reduce errors. However, hallucinations were not eliminated: even with the mitigation prompt, the best model still went along with the planted false detail in roughly one out of four answers.
Interestingly, a more technical fix – forcing the model to be deterministic by setting temperature to 0 (to remove randomness) – offered no real improvement in accuracy.
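In practice, both levers are a few lines of code. Here is a minimal sketch, assuming the OpenAI Python client and an illustrative guardrail prompt (not the exact wording used in the study), showing where a mitigation prompt and a temperature setting plug in:

```python
# Minimal sketch of the two mitigations discussed above, assuming the OpenAI Python client.
# The system-prompt wording is illustrative, not the exact prompt from the study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

MITIGATION_PROMPT = (
    "Only use information that is explicitly provided or that you can verify. "
    "If a detail in the user's message cannot be validated, say so instead of "
    "elaborating on it. Do not invent facts, citations, or product features."
)

response = client.chat.completions.create(
    model="gpt-4o",
    temperature=0,  # deterministic decoding; the study found this alone did not help
    messages=[
        {"role": "system", "content": MITIGATION_PROMPT},  # the prompt-level guardrail did help
        {"role": "user", "content": "Summarize the attached product notes for a launch email."},
    ],
)
print(response.choices[0].message.content)
```

The system-level guardrail is what moved the numbers in the study; setting temperature to 0 merely makes the output more repeatable, which is why it did little for accuracy on its own.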
Why AI Hallucinations Pose Risks to Businesses
For marketers and business owners, the implications of AI hallucinations are serious. If you deploy LLMs to generate marketing copy, blog posts, product descriptions, or chatbot responses, a hallucination can translate into misinformation published under your brand’s name.
Such errors can undermine your credibility and trust with customers or partners. As one AI security expert puts it, when an LLM outputs a plausible but false statement, it can directly damage the reputation of the organization using it, potentially leading to lost customer trust and market share (Source).
Examples of Real-World Consequences
- Legal Risk: In a 2023 case, attorneys submitted a ChatGPT-generated court brief that cited six non-existent legal cases. The lawyers were sanctioned (Source).
- Customer Service Error: Air Canada’s chatbot gave incorrect refund information to a customer, and the company was ordered to compensate them (Source).
- Brand Reputation: Google’s Bard hallucinated a fact about the James Webb Space Telescope in a promotional demo, and Alphabet’s market value fell by roughly $100 billion in the aftermath (Source).
- Adversarial Exploits: Prompt injection attacks can deliberately cause chatbots to respond with offensive or false information. This opens new doors for malicious manipulation (Source).
Keeping the Human Element in AI Content Creation
Given the risks, human oversight is not just nice to have – it’s essential. AI-driven content generation should augment human creativity and efficiency, not replace human judgment. Every piece of content an AI produces should be treated as a first draft – one that requires expert review.
Marketing experts agree. Hawke’s guide to GEO emphasizes the need for rigorous editing and human fact-checking as a safeguard (Source).
Strategies to Minimize AI Hallucinations in Content Generation
Here are some actionable ways marketers can reduce AI hallucination risks:
- Engineer Prompts Thoughtfully: Specific, fact-rich prompts give the model less room to improvise and significantly reduce hallucinations.
- Use Retrieval-Augmented Generation (RAG): Connect your LLM to a verified content source, such as your own knowledge base (https://huggingface.co/blog/rag). This gives the model a reference library to draw from (see the sketch after this list).
- Apply Mitigation Prompts: Add instructions to avoid speculation, such as: “Only use provided data. If unsure, say so.”
- Human Fact-Check Before Publishing: Never post unedited AI-generated content. Verify all facts, citations, and statements.
- Monitor and Audit Regularly: Review chatbot outputs and AI-generated content periodically to catch issues early.
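To show how the retrieval and guardrail bullets fit together, here is a deliberately simplified sketch of a retrieval-augmented prompt. The tiny keyword-overlap “retriever,” the Acme CRM snippets, and the instruction wording are all made up for illustration; a production setup would use a real vector store and embedding search, as in the Hugging Face RAG guide linked above.

```python
# Toy retrieval-augmented prompt builder. The "knowledge base" and keyword-overlap
# retriever are illustrative stand-ins for a real vector store and embedding search.

KNOWLEDGE_BASE = [
    "Acme CRM offers email automation, lead scoring, and a Shopify integration.",
    "Acme CRM does not currently offer phone support; support is via chat and email.",
    "The Pro plan costs $49 per user per month, billed annually.",
]

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank snippets by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_grounded_prompt(question: str) -> str:
    """Prepend retrieved facts and an anti-speculation instruction to the question."""
    snippets = retrieve(question, KNOWLEDGE_BASE)
    context = "\n".join(f"- {s}" for s in snippets)
    return (
        "Answer the question using ONLY the facts below. "
        "If the facts do not contain the answer, reply 'I don't know.'\n\n"
        f"Facts:\n{context}\n\nQuestion: {question}"
    )

print(build_grounded_prompt("Does Acme CRM include phone support?"))
```

Grounding the prompt in your own verified facts gives the model something concrete to repeat, and the “I don’t know” instruction gives it explicit permission not to improvise, which is the same principle behind the mitigation prompt in the study.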
Checklist: Safer AI Content Creation
- Feed AI with accurate source material
- Use structured, detailed prompts
- Include guardrails against speculation
- Fact-check all outputs manually
- Test, monitor, and review live AI systems regularly
Final Thoughts
LLMs are transformative tools, but hallucination risk is real and persistent. As shown in the Omar et al. 2025 study (https://www.nature.com/articles/s43856-025-01249-9), even best-in-class models hallucinate in up to half of outputs under normal conditions.
Prompt engineering, mitigation prompts, and grounding the model in trusted data all help. But human oversight remains your most effective defense.
Treat AI as your intern, not your CMO. Empower it with good instructions, but verify its work before putting it in front of your customers.
Related Hawke Media Resources:
- https://hawkemedia.com/blog/marketing-ai-guide
- https://hawkemedia.com/blog/how-to-use-chatgpt-for-seo
- https://hawkemedia.com/blog/black-friday-ai-marketing-checklist