The secret to high-quality AI output

Bridging the Divide Between Human and AI

If you’re only telling your AI what to do, you’re missing 65% of what it could deliver. Most AI prompts rely on one brain. Here’s how to activate both.

You spend 2 hours crafting the perfect AI prompt for an important report, yet the output still needs heavy editing. Your customer service team writes detailed instructions for their AI assistant, but responses sound robotic or off-brand. Sound familiar?

AI success isn’t just about what model you use. It’s about how you talk to it.

If you’re a business leader working with AI tools, you’ve probably seen mixed results. Sometimes it nails the task. Other times, it misses the mark or delivers output that looks fine but isn’t actually useful.

This isn’t just inconsistent behaviour – it’s a sign you’re only using part of the system’s capabilities. New research from Meta and NYU¹ shows there’s a fix, and it’s surprisingly simple.

Before we dive in, let’s be clear: this isn’t about your casual AI chats or one-off questions. This is about the structured, repeatable AI tasks that run your business – the customer responses, report generation, and content workflows you depend on daily. If you’re using AI for consistent business processes, this research will transform your results.

It’s Not the Model – It’s Your Method

Researchers tested how AI responds to two types of input:

  • Instructions (e.g. “Summarise this article focusing on key financial metrics”)
  • Demonstrations (e.g. showing what a good financial summary looks like)

What they found was striking: the model processes instructions and examples through largely separate internal mechanisms, with only 20-35% overlap between the two pathways.

🧠 The Key Insight That Changes Everything

AI has two separate “brains” for understanding tasks:

  1. Brain 1: Activated by demonstrations (examples)
  2. Brain 2: Activated by instructions (telling it what to do)

➡️ Only 20-35% overlap between them

➡️ Using both = Full AI potential unlocked

Translation: You’ve been running your AI at half capacity.

This matters because if you’re only using one method, you’re only unlocking part of the system’s potential. The real boost comes from combining them.

When both instructions and examples are used together, performance improves – often significantly. For your business, this could mean the difference between 70% and 95% accuracy in automated tasks.

When This Matters Most (And When It Doesn’t)

This research breakthrough applies primarily to structured prompting and repeatable AI tasks – the kind that keep your business running. Think:

  1. Customer service responses
  2. Report generation
  3. Data analysis templates
  4. Content creation workflows
  5. Email responses

For casual ChatGPT conversations or creative brainstorming? The impact is minimal.

But for the AI tasks you automate to run 10, 50, or 100 times daily? This is where you’ll see a 20-40% improvement.

The rule of thumb: If you’re doing it more than 5 times a week, optimise it with hybrid prompting.

Understanding Examples vs Demonstrations: A Critical Distinction

Not all examples are demonstrations, and this distinction matters enormously for AI performance – potentially saving your team hours of revision time.

In AI terms, a “demonstration” is a specific type of example that shows the full input-output pattern you want the AI to follow. It’s the difference between:

  • Saying “Here’s what good customer service looks like” (a general example)
  • Versus “Customer: ‘I need this urgently’ → Response: ‘I’ll prioritise this for you and have it ready by 3pm today.’” (a demonstration)

Demonstrations activate pattern-matching. General examples just give context.

Here’s another one:

  • General example: “We value friendly service.”
  • Demonstration: “Customer: ‘This product broke after one day’ → Response: ‘I’m sorry to hear that. I’ll arrange a replacement today.’”

And for product copy:

  • Example: “Our descriptions should be engaging.”
  • Demonstration: “Product: Wireless headphones → Description: ‘Crystal-clear audio meets all-day comfort in these lightweight, noise-cancelling headphones that last 30 hours per charge.’”
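The distinction can be made concrete in code. Here’s an illustrative sketch (the prompt wording is hypothetical, reusing the headphones example above) contrasting a general-example prompt with a demonstration-style prompt:

```python
# Illustrative only: the same task prompted two ways.

# A general example gives context but no pattern to complete.
general_example_prompt = (
    "Write a product description for a portable power bank. "
    "Our descriptions should be engaging."
)

# A demonstration shows the full input→output shape, then leaves
# the next output slot open for the model to complete.
demonstration_prompt = (
    "Write a product description following this pattern:\n\n"
    "Product: Wireless headphones\n"
    "Description: Crystal-clear audio meets all-day comfort in these "
    "lightweight, noise-cancelling headphones that last 30 hours per charge.\n\n"
    "Product: Portable power bank\n"
    "Description:"
)

print(demonstration_prompt)
```

The demonstration version ends mid-pattern, which is what invites the model to match the structure rather than improvise one.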

Why this matters: When your team understands this distinction, you stop wondering why AI outputs are hit-or-miss. You start crafting prompts that reliably produce results. For a company processing 100 customer queries daily, this could mean cutting manual review time from 30 minutes to 10 minutes for every hour of AI output.

The Power of Precision: Why Fewer, Better Demonstrations Win

The principle “three sharp demonstrations beat ten vague ones” reflects a crucial insight into how AI learns patterns.

When you give a demonstration, you’re not just offering an example – you’re teaching the AI a specific behaviour to replicate. If that demonstration is vague, inconsistent, or messy, you’re giving the model mixed signals.

Sharp, precise demonstrations – ones that follow a consistent structure and show the exact response you want – are far more effective.

Think of it like onboarding a new hire. Three well-executed sales calls teach more than ten that vary wildly in tone and structure. It’s the same with AI.

Here’s an example:

  • Weak: Ten email replies that vary in tone, structure, and resolution strategy
  • Strong: Three replies that follow a clear pattern: acknowledge the problem, offer a specific fix, and provide a timeline

This matters because good demonstrations activate consistent pathways in the AI’s internal systems. Poor ones activate conflicting patterns that interfere with one another.

For SMEs, this is good news. You don’t need dozens of examples or huge datasets. Just three to five excellent demonstrations that clearly show what good looks like. This approach has helped businesses reduce prompt engineering time by 60% while improving output quality.

Quality trumps quantity every time.

Why This Explains So Much

If you’ve been frustrated with AI outputs, this research validates your experience.

Maybe you’re giving detailed instructions and getting bland responses. Or you’re providing examples, but the system fails to apply them properly. Either way, you’re only triggering one part of the model.

It’s like training a new team member. If you only tell them what to do, or only show them examples without the task structure, they’ll struggle. AI is the same.

Practical Steps

You don’t need to become an AI specialist to fix this. Just shift how you and your team write prompts.

➡️ 1. Audit current prompt styles

Look at how your team is prompting today. Are they relying on instructions or examples? Most teams lean one way.

➡️ 2. Create hybrid templates

Side-by-Side Comparison:

Instruction-only prompt:

“Write a proposal for our social media audit services.”

Result: Generic, lacks company voice, misses key selling points

Hybrid prompt:

“Write a proposal for our social media audit services. Here’s our usual format:

Client: Social media audit
Our approach: We’ll review platform performance, content engagement, and audience trends.
Timeline: 4 weeks
Investment: €6,000

Client: Email marketing strategy
Our approach: [Let the AI continue the pattern]”

Result: Matches company style, includes key elements, maintains consistency
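The hybrid prompt above can be generated from a small reusable template. This is a minimal sketch – the function and field names are illustrative, not a fixed API:

```python
# Minimal sketch: one instruction plus past examples in a fixed format,
# ending mid-pattern so the model continues it.

def build_proposal_prompt(instruction: str, examples: list[dict], new_client: str) -> str:
    """Combine an instruction with formatted demonstrations, then the new case."""
    parts = [instruction, "Here's our usual format:", ""]
    for ex in examples:
        parts += [
            f"Client: {ex['client']}",
            f"Our approach: {ex['approach']}",
            f"Timeline: {ex['timeline']}",
            f"Investment: {ex['investment']}",
            "",
        ]
    # Leave the final entry open – the AI continues the pattern.
    parts += [f"Client: {new_client}", "Our approach:"]
    return "\n".join(parts)

examples = [{
    "client": "Social media audit",
    "approach": "We'll review platform performance, content engagement, and audience trends.",
    "timeline": "4 weeks",
    "investment": "€6,000",
}]

prompt = build_proposal_prompt(
    "Write a proposal for our email marketing strategy services.",
    examples,
    "Email marketing strategy",
)
print(prompt)
```

Once the helper exists, adding a new proposal type is just a matter of appending another example dict – the structure stays consistent across the whole team.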

➡️ 3. Run a comparison test

Choose 3–5 typical tasks. One week, use current prompts. The next, use hybrid prompts. Measure:

  • Quality (% needing major edits: expect 30-50% reduction)
  • Consistency (standard deviation in output quality: expect 40% improvement)
  • Time saved (average minutes per task: expect 20-40% reduction)
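The three metrics are easy to compute from a simple task log. A hypothetical sketch, with invented sample numbers – substitute your own measurements:

```python
# Hypothetical comparison of one week of current prompts vs one week of
# hybrid prompts. Each entry: (needed_major_edits, quality_score_1_to_5, minutes_spent).
from statistics import mean, stdev

current_week = [(True, 3, 25), (True, 2, 30), (False, 4, 18), (True, 3, 28)]
hybrid_week  = [(False, 4, 15), (False, 5, 12), (True, 4, 20), (False, 4, 14)]

def summarise(tasks):
    edits = sum(t[0] for t in tasks) / len(tasks) * 100   # % needing major edits
    quality_sd = stdev(t[1] for t in tasks)               # consistency (lower is better)
    avg_minutes = mean(t[2] for t in tasks)               # time per task
    return edits, quality_sd, avg_minutes

for label, week in [("current", current_week), ("hybrid", hybrid_week)]:
    edits, sd, minutes = summarise(week)
    print(f"{label}: {edits:.0f}% major edits, quality SD {sd:.2f}, {minutes:.1f} min/task")
```

Even a spreadsheet works for this – the point is to log every task during both weeks so the comparison is data, not impressions.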

For a 100-person company using AI 20 times daily, a 30% improvement in output quality could save 15 hours per week – roughly €30,000 annually in productivity gains.

➡️ 4. Train across functions

This isn’t just for IT. Anyone using AI needs to understand how to guide it. Share simple instructions:

  • Give clear direction
  • Add 2–3 demonstrations
  • Use consistent formatting

Business impact: Companies report 25% faster AI adoption when all departments understand these principles.

➡️ 5. Build a prompt library

Document effective hybrid prompts. Note which tasks need more structure (e.g. reporting) versus more patterns (e.g. customer replies).
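One possible shape for such a library is a plain JSON file the whole team shares. A minimal sketch – the task names and entries are illustrative placeholders:

```python
# Illustrative prompt library: each task records its instruction,
# demonstrations, and a note on whether it is structure- or pattern-heavy.
import json

prompt_library = {
    "customer_reply": {
        "instruction": "Acknowledge the problem, offer a specific fix, give a timeline.",
        "demonstrations": [
            {"input": "This product broke after one day",
             "output": "I'm sorry to hear that. I'll arrange a replacement today."},
        ],
        "notes": "Pattern-heavy task: demonstrations matter more than structure.",
    },
    "weekly_report": {
        "instruction": "Summarise the week's metrics in three bullet points.",
        "demonstrations": [],
        "notes": "Structure-heavy task: a detailed instruction matters most.",
    },
}

# Persist so everyone works from the same file.
with open("prompt_library.json", "w", encoding="utf-8") as f:
    json.dump(prompt_library, f, indent=2, ensure_ascii=False)
```

A flat file like this is deliberately low-tech: anyone can read it, version it, and add entries without tooling.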

A Quiet Advantage for Smaller Teams

Larger companies may have more powerful models. But if your team knows how to communicate better with AI, you can outperform them in output quality and speed.

Real example: A 10-person consultancy using standard AI with optimised hybrid prompts produced higher-quality proposals 40% faster than a 100-person competitor using advanced AI with basic prompting. The smaller firm won 3 out of 4 competitive bids where both companies pitched.

This matters for smaller teams because:

  • You can implement best practices faster – training 10 people takes days; training 100 takes months
  • You’re more agile when testing new approaches
  • You can standardise across roles quickly

Pitfalls to Watch Out For

  • Too many weak examples: Three sharp demonstrations beat ten vague ones
  • Vague instructions: Be specific about what success looks like
  • Inconsistent structure: Use formatting the AI can follow

Final Thought: It’s About Communication, Not Code

You don’t need to understand how AI is built – just how to work with it.

Think of prompting as a leadership skill. The better you brief your team – or your AI – the better your results.

Start with one business task this week. Rework the prompt using both instructions and demonstrations. See what happens.

It’s not about choosing between showing and telling – it’s about using both, strategically.

Having taught AI literacy to hundreds of business leaders and guided dozens of businesses through AI adoption, I’ve seen one pattern repeatedly: the biggest wins come from understanding how to communicate with AI effectively.

This research validates what the most successful implementers do intuitively. The best part? You can apply these insights today with the AI tools you already have.

¹ Davidson, G., Gureckis, T. M., Lake, B. M., & Williams, A. (2024). “Do different prompting methods yield a common task representation in language models?” FAIR at Meta & New York University. The research examined multiple language models ranging from 1B to 8B parameters, demonstrating that instruction-based and demonstration-based prompting activate distinct neural pathways within AI systems.

Written by Alastair McDermott

I help business leaders and employees use AI to automate repetitive tasks, increase productivity, and drive innovation, all while keeping a Human First approach. This enables your team to achieve more, focus on strategic initiatives, and make your company a more enjoyable place to work.
