AI Frustrations: Why the Hype Rarely Matches Reality

Bridging the Divide Between Human and AI

Everyone says AI is improving – but what if it’s not? What if the reality is far less impressive than the headlines suggest?

A university educator recently voiced a frustration that many professionals share:

“I feel like I’m going insane. I work at a university, and the number of people so catastrophically pilled with this tech keep telling me, ‘It’s getting better and better.’ And I HAVEN’T seen a single improvement in output quality for more than a year and a half!”

I’ve heard this sentiment from many people. In fact, at my training workshops I always ask who the AI sceptics are in the room – they’re often the best at understanding how to use AI effectively.

AI marketing tells one story, but the reality for many users is something else entirely.

Many component parts of AI aren’t evolving as fast as the hype suggests – and that gap is frustrating.

Why AI Feels Revolutionary to Some, But Useless to Others

AI opinions are sharply divided. Some find it transformative, while others – especially in education, creative industries, and technical fields – remain unimpressed.

A key reason? The gap between claimed progress and actual, measurable improvements in quality.

Another educator explained how AI tools were forced into their university’s workflow without consulting the people who actually use them. The result? More frustration than innovation.

The Real Issue: AI Is Built for Some Tasks, But Not Others

Christopher Penn offers a useful way to explain why AI seems brilliant in some areas but terrible in others. He highlights the distinction between two types of tasks:

➡️ Probabilistic tasks – Creative, open-ended, multiple possible “right” answers (brainstorming, first drafts, idea generation).
➡️ Deterministic tasks – Structured, rule-based, with clear right and wrong outcomes (final drafts, fact-checking, technical precision).

AI excels at probabilistic tasks. It’s great at generating ideas, summarising information, and automating repetitive work. But when it comes to deterministic tasks – where precision, nuance, and accuracy matter – it falls short.

This distinction helps explain why proper assessment before implementation is so crucial. When working with clients, I’ve found that adoption succeeds far more often when teams understand which specific tasks AI handles well and where human expertise remains essential. It’s about finding the right balance rather than wholesale replacement.

As Christopher puts it:

“We used to say, ‘Write drunk, edit sober.’ Now, it’s ‘AI writes, humans edit.’”

This explains why many academics and professionals feel frustrated. Their work isn’t just about creating something plausible – it’s about getting it right. AI isn’t built for that.
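The probabilistic/deterministic split above can be made concrete as a simple triage rule. This is a minimal, hypothetical sketch – the task names and routing labels are illustrative, not from Penn’s work – showing how a team might decide who leads on each task type:

```python
# Hypothetical sketch: route tasks based on whether they are
# probabilistic (open-ended, many acceptable answers) or
# deterministic (precision-critical, one right answer).
# Task names and labels below are illustrative assumptions.

PROBABILISTIC = {"brainstorming", "first_draft", "idea_generation", "summarising"}
DETERMINISTIC = {"final_draft", "fact_checking", "citation_audit", "technical_review"}

def route_task(task: str) -> str:
    """Return who should lead: AI drafts, humans verify."""
    if task in PROBABILISTIC:
        return "AI drafts, human reviews"
    if task in DETERMINISTIC:
        return "human leads, AI assists at most"
    return "assess before automating"

print(route_task("first_draft"))    # AI drafts, human reviews
print(route_task("fact_checking"))  # human leads, AI assists at most
```

The point of the sketch is the default in the last branch: anything not clearly probabilistic gets assessed by a human before it is automated, which mirrors the “AI writes, humans edit” division of labour.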

Is AI Actually Improving? The Reality Check

Has AI improved in the last 18 months? The answer is yes and no – depending on what you measure.

AI Advancements:

✔️ Multimodal capabilities (text, image, audio, video)
✔️ Code generation
✔️ Following complex instructions
✔️ Mathematical reasoning
✔️ Custom models for specific industries

Areas with Minimal Progress:

❌ Factual accuracy
❌ Deep understanding of specialised fields
❌ Consistent adherence to style guidelines
❌ Subtle error detection
❌ Adaptability to unique educational needs

But beyond performance issues, AI faces another major challenge: the quality of the data it learns from.

The Training Data Problem

AI models are only as good as the data they’re trained on. And right now, that presents two major issues:

1️⃣ Quality and Accuracy Issues

Many AI models are trained on data that is:
➡️ Incomplete or outdated
➡️ Filled with biases and inaccuracies
➡️ Lacking crucial context in specialised fields
➡️ Simply wrong in many instances

When AI confidently presents misinformation, the results range from mildly annoying to dangerously misleading. This is especially problematic in academic and professional settings, where precision isn’t optional.

2️⃣ Ethical Concerns About Intellectual Property

Many AI models have been trained on massive datasets – including copyrighted books, articles, and creative works – without proper permission or compensation.

For those in creative and academic fields, this isn’t just a technical issue – it’s an ethical one. Universities, for example, promote academic integrity, yet many AI tools they adopt are built on scraped content from researchers, authors, and artists without consent.

As an author myself, I’ve seen one of my books appear in a repository of stolen IP like LibGen without my permission. The thought of AI being trained on my work – without credit or compensation – by corporations worth billions is frustrating. And I know I’m not alone.

These aren’t side issues; they’re central to how AI is developed and deployed. If organisations don’t address them, they risk undermining their own values.

A Smarter Way to Use AI: Human-First, Not Tech-First

The real problem isn’t AI itself – it’s how it’s being implemented.

Too often, organisations introduce AI simply because it’s trendy, without considering whether it actually helps. A better approach:

1️⃣ Start with real human needs – AI should solve a problem, not create one.
2️⃣ Be honest about limitations – No tool is magic. AI has strengths and weaknesses.
3️⃣ Use AI selectively – Let it handle the probabilistic tasks where it shines.
4️⃣ Keep humans in charge – AI can assist, but it shouldn’t replace human judgment.
5️⃣ Design systems that combine AI + human expertise – The best results come from balance, not blind adoption.

When AI is forced into environments where it doesn’t fit – like an education system that requires precision and context – it’s no surprise people feel let down.

This human-first approach focuses on specific areas where AI genuinely creates value: improving productivity, extending capabilities, enhancing decision-making, and accelerating learning. The key is starting with actual business needs rather than technology for technology’s sake.

Finding the Balance

I understand why educators and professionals are frustrated. AI tools are often introduced without considering real-world challenges, and their capabilities are oversold.

At the same time, when used appropriately, AI can be a game-changer. Despite the frustrations, I’ve seen firsthand how powerful AI can be when applied correctly.

  • In one project, I used coding-focused AI tools to complete in a single day what would have previously taken me 2-3 weeks of manual development.
  • I’ve seen AI sift through vast amounts of text data, uncovering insights that would have taken months or years to document.
  • I helped a blood testing laboratory implement AI automation that cut processing time by 92% and saved over €250,000 annually – all with a tiny initial investment and without disrupting their scientific work.

I’ve also seen that the risk of waiting too long to adopt AI can often outweigh the risk of measured adoption itself.

But this is only true when that adoption follows a thoughtful, human-centered approach – not the hasty deployment without proper assessment that we’re seeing in many settings.

The key is knowing where to apply it.

Undeniable Progress & The “Jagged Edge”

Let’s be honest on two points:

  1. AI has made remarkable strides in the past 18 months. Models today can handle tasks that were impossible just two years ago. They’re more coherent, more capable, and more adaptable than their predecessors.
  2. Frustration with AI is entirely valid, but it might be field-specific. In some types of work like coding and data analysis, today’s AI tools vastly outperform those from even a year ago.

The real issue isn’t that AI isn’t improving – it’s that we’re experiencing the jagged edge of AI improvement. Progress isn’t smooth or uniform; it’s dramatically uneven across domains, use cases, and technologies. And the pace of actual improvement doesn’t match the hyperbolic claims of AI marketers and fanboys.

Understanding this disconnect – between genuine progress and exaggerated claims – is key to forming realistic expectations about what AI can do for you today.

What’s Your Experience?

Have AI tools actually helped you, or are you still waiting for the ‘big breakthrough’?

If you’re navigating the challenges of AI implementation in your organisation, you might find my Complete AI Toolkit valuable. This free resource requires no email opt-in, and it includes three practical guides:

🧭 Opportunity and AI Adoption – Helps you identify where AI can genuinely add value versus where it’s just hype
⚠️ Risk and AI Adoption – Provides a framework for understanding and mitigating the risks we’ve discussed
🗺️ AI Adoption Roadmap – Offers a structured approach to implementation that puts human needs first

The toolkit directly addresses many of the frustrations outlined in this post and provides concrete steps for more successful AI implementation. Download it here.

Written by Alastair McDermott

I help business leaders and employees use AI to automate repetitive tasks, increase productivity, and drive innovation, all while keeping a Human First approach. This enables your team to achieve more, focus on strategic initiatives, and make your company a more enjoyable place to work.
