Where Do You Draw the Line on AI and Stolen Work?

Bridging the Divide Between Human and AI

My book was scraped to train AI. Yet I still use these tools every day.

The fact that big tech corporations have stolen the intellectual property of small creators makes many people – including me – uncomfortable.

One of my books was used to train large language models without my permission or compensation. I’m just one of tens of thousands of creators this happened to. I did not agree to it. I do not like it. And I don’t expect a cheque from Big Tech. That window has closed.

At the same time, I use these tools every day. They help me think, draft, organise, and test ideas faster than I could alone.

Is that contradictory? Possibly. But to me it feels more honest than pretending I can step outside technology that is already reshaping how we all work.

We still have to decide how we respond.

From where I sit, there are three options.

  1. We can refuse to engage with AI and keep our hands clean. That choice has a cost. Competitors move faster. Teams experiment. Markets shift. We stay “pure” and increasingly irrelevant.
  2. We can engage without thinking. We use the tools, ignore where they came from, and brush off the ethical questions as someone else’s problem. That choice has a cost too. It strips creators of agency and trust.
  3. Or we can engage deliberately. We use these tools because they are useful, while staying alert to the tensions they introduce and speaking honestly about them.

I choose that third path.

This is not a new position for me.

Since long before AI entered the mainstream, I’ve been talking about the value of sharing your knowledge freely.

I firmly believe that demonstrating your expertise publicly does not reduce your value:


A mechanic can post a video explaining how to change your oil. You do not suddenly become an expert mechanic by watching it. What you do learn is that they know their craft. You see the complexity involved. You can even decide to do the oil change yourself, using the video as a guide.

But quite often you’ll decide that rather than getting dirty, buying the right type of oil, raising the front of the car, opening the sump nut, disposing of six litres of used motor oil, and risking hurting yourself, it’s easier to pay $100 for a professional to handle it in 45 minutes while you have a coffee and read a great book.

Awareness and application have never been the same thing.

Knowledge creates understanding. Application requires judgement, context, and experience. That gap is where professional value lives.

AI has not erased it.

The AI debate brings an old tension into sharper focus.

My IP was taken without consent. That matters.

I benefit from tools partly built on that work. That also matters.

I am not convinced those facts can be fully reconciled.

And the complexity doesn’t end there.

When I use these tools, I’m not just navigating the theft of my own content.

I’m also benefiting from other experts’ work – some shared freely, some taken without their consent too.

Meanwhile, I’m actively encouraging AI to train on my public content, hoping it recommends me.

And that’s not even addressing the US and UK governments signalling support for training on copyrighted materials without fair compensation – a position welcomed by Big Tech and opposed by creators everywhere.

The layers keep unfolding.

What I am convinced of is this: pretending the question does not exist helps no one.

So I work with principles.

I use AI extensively as a collaborator. I use it for drafts, brainstorming, structure, refinement, even final copy in “low stakes” environments. Human judgement stays in the loop. Final sign-off is always mine.

Am I transparent about AI use? Yes, within reason. I don’t hide it. But I also don’t perform disclosure for its own sake: the work is mine because the responsibility is mine. If it’s wrong, I own that. The tool doesn’t.

This approach will not satisfy everyone. Some people want a boycott. Others want silence. I think both miss the reality on the ground.

AI processes information. Humans make decisions. The responsibility for those decisions has not disappeared. If anything, it has increased.

Your work product is your work product, regardless of the tools you used – you are professionally responsible for it.

This matters now because the norms are still being set.

How we talk about this tension shapes what comes next.

So that is where I land. Acknowledging the discomfort. Using the tools anyway. Staying honest about the trade-offs.

Where do you land on this? Do you draw a hard line, or are you trying to work inside the mess too?

Written by Alastair McDermott

I help leadership teams adopt AI the right way: people first, numbers second. I move you beyond the hype, designing and deploying practical systems that automate busywork – the dull bits – so humans can focus on the high-value work only they can do.

The result is measurable capacity: cutting processing times by 92% and unlocking €55,000 per month in extra productivity.
