How to Avoid 'AI Hallucinations' in Client Briefs

We've all seen them: AI videos where hands have seven fingers, cats morph into sofas, and the laws of physics take a holiday.
In the industry, we call these "Hallucinations."
Clients often blame the technology ("This AI isn't ready yet!"). But 90% of the time, the problem isn't the AI. It’s the Input. An ambiguous brief forces the AI to guess. And when AI guesses, it guesses weird.
As an operator, your job isn't just to prompt—it's to interrogate the client until the brief is hallucination-proof. Here is the operational guide to getting better inputs.
The "T-Shirt Test" for Concepts
Clients love abstract buzzwords.
- Bad Brief: "Show a video representing synergy and future-forward innovation."
If you feed "synergy" into Runway or Kling, you will get a terrifying morphing blob of handshakes and circuit boards. This is a hallucination caused by abstraction.
The Fix: Apply the T-Shirt Test. Ask the client: "If we printed a single still frame of this video on a t-shirt, what exactly would I see?"
- Client Answer: "Oh, I guess it would be two people high-fiving in a glass office."
- Good Brief: "A cinematic medium shot of a man and woman high-fiving in a modern glass office. 4k, energetic lighting."
Rule: AI cannot animate concepts. It can only animate nouns and verbs. Force the client to pick them.
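The T-Shirt Test can even be automated as a first-pass filter. Below is a minimal sketch (the buzzword list and function name are my own illustrative assumptions, not part of any real tool) that flags abstract concepts in a brief before anyone starts prompting:

```python
import re

# Illustrative starter list -- extend with whatever buzzwords your clients favor.
ABSTRACT_BUZZWORDS = {"synergy", "innovation", "future-forward", "disruption", "empowerment"}

def fails_tshirt_test(brief: str) -> list[str]:
    """Return abstract concepts found in the brief (empty list = pass)."""
    words = re.findall(r"[a-z-]+", brief.lower())
    return sorted(w for w in set(words) if w in ABSTRACT_BUZZWORDS)

bad = "Show a video representing synergy and future-forward innovation."
good = "A cinematic medium shot of a man and woman high-fiving in a glass office."
print(fails_tshirt_test(bad))   # flags the abstractions
print(fails_tshirt_test(good))  # [] -- concrete nouns and verbs only
```

If the list comes back non-empty, send the brief back with the t-shirt question before generating anything.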
The Anchor Image Strategy

Words are slippery. "Cinematic lighting" means something different to a filmmaker than it does to a CEO. If you rely solely on text briefs, you are gambling.
The Fix: Require an Anchor Image. Before generating a single frame of video, generate (or find) 1 static image that serves as the "Anchor."
- Generate 4 Midjourney options based on the brief.
- Send them to the client.
- Client picks Option B.
- Option B is now your Image-to-Video input for the entire project.
By starting with pixels instead of text, you eliminate 80% of the movement hallucinations because the AI already knows what the subject looks like.
The "Negative Constraint" List
AI models are like eager puppies—they want to give you everything. Sometimes they give you too much. If a client wants a clean, minimal video, you need to explicitly forbid the chaos.
The Fix: Include a "Negative Constraints" section in your standard brief form. Ask the client to check boxes for what they don't want:
- No text/gibberish overlays.
- No morphing transitions.
- No camera movement (static tripod shot).
- No human faces (hands/objects only).
Pro Tip: For B-roll, "No human faces" is the single best way to avoid uncanny valley hallucinations.
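The checkbox form above maps naturally onto a negative prompt. A minimal sketch, assuming a form with these (illustrative) field names; translate the phrases into whatever negative-prompt syntax your chosen video model actually supports:

```python
# Map each checkbox on the brief form to the phrases it should forbid.
CONSTRAINT_PHRASES = {
    "no_text": "text, captions, gibberish overlays",
    "no_morphing": "morphing, melting transitions",
    "static_camera": "camera movement, pans, zooms",
    "no_faces": "human faces",
}

def build_negative_prompt(checked: list[str]) -> str:
    """Join the phrases for every box the client ticked."""
    return ", ".join(CONSTRAINT_PHRASES[c] for c in checked)

# Client ticked "no text" and "no human faces" on the brief form:
print(build_negative_prompt(["no_text", "no_faces"]))
# -> "text, captions, gibberish overlays, human faces"
```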
Breaking the "One-Shot" Myth
Clients often write briefs like this:
- "A man walks into a store, buys a soda, drinks it, smiles, and flies away."
They want one continuous 10-second AI generation to do all of that. It won't work. Current models lose coherence after 3-4 seconds of complex action. The man will morph into the soda can by second 5.
The Fix: The Scene-Splitter Method. Train your clients to break the brief into "shots," just like a storyboard.
- Shot 1 (3s): Man walks into store.
- Shot 2 (2s): Close up of hand grabbing soda.
- Shot 3 (3s): Man drinks and smiles.
- Shot 4 (2s): VFX shot of man flying.
Explain to the client: "We assemble the story in the edit, not in the generation."
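The Scene-Splitter check is easy to make mechanical. This sketch flags any shot whose action runs past the coherence window described above (the four-second ceiling and all names here are illustrative assumptions, not model specifications):

```python
from dataclasses import dataclass

@dataclass
class Shot:
    description: str
    seconds: float

MAX_COHERENT_SECONDS = 4.0  # rough ceiling before current models drift

def too_long(shots: list[Shot]) -> list[str]:
    """Return descriptions of shots that exceed the coherence window."""
    return [s.description for s in shots if s.seconds > MAX_COHERENT_SECONDS]

storyboard = [
    Shot("Man walks into store", 3),
    Shot("Close-up of hand grabbing soda", 2),
    Shot("Man drinks and smiles", 3),
    Shot("VFX shot of man flying", 2),
]
one_shot = [Shot("Man walks in, buys soda, drinks, smiles, flies away", 10)]

print(too_long(storyboard))  # [] -- every shot fits the window
print(too_long(one_shot))    # the 10-second brief gets flagged
```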
Summary Checklist for Client Briefing
Refuse to start a project until you have:
- Nouns & Verbs (No abstract concepts).
- Anchor Image (Approved visual style).
- Negative Constraints (What to avoid).
- Shot List (Action broken down into 3-4 second chunks).
If you enforce this workflow, you stop fighting the AI and start directing it.
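The checklist above doubles as a pre-flight gate. A minimal sketch, assuming a brief stored as a dict with these (hypothetical) field names from your own intake form:

```python
# Every required input from the checklist; empty or missing means "do not start".
REQUIRED_FIELDS = ["nouns_and_verbs", "anchor_image", "negative_constraints", "shot_list"]

def missing_fields(brief: dict) -> list[str]:
    """Return the checklist items the client has not yet supplied."""
    return [f for f in REQUIRED_FIELDS if not brief.get(f)]

brief = {
    "nouns_and_verbs": "man, woman, high-five, glass office",
    "anchor_image": "anchor_option_b.png",
    "negative_constraints": ["no_text", "no_faces"],
    "shot_list": [],  # empty -- client has not split the action yet
}
print(missing_fields(brief))  # ['shot_list'] -- send the brief back
```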
Vladimir Terekhov
Founder, Viralix
Scaling creative output with the world's best AI-Video artists. Vladimir is the founder of Viralix marketplace. He is also co-founder & CEO of Attract Group and co-founder of Kira-AI.


