Should You Disclose AI Use?


AI Disclosure: A Practical Path Forward

Artificial intelligence in content creation has sparked plenty of debate. Many of us are trying to figure out how AI fits into our work – and more importantly, how transparent we should be about using it.

I was recently chatting with a friend about the nuances of disclosing AI use. Both of us have worked on developing AI tools and use them regularly in our own work. We’ve seen firsthand how they’re reshaping our creative processes.

The question is: When and how should we disclose AI’s involvement in what we create? This issue is not as straightforward as many think.

It’s More Nuanced than “Yes or No”

Transparency matters. You want to be upfront about how your content is made. But here’s the catch: oversimplifying disclosure can actually mislead people or even undermine the value of your work.

Describing how AI really fits into content creation today is nuanced. The AI tools we have right now are far from magic machines that churn out finished work on their own – work that would pass review as if it were created by an authoritative human subject matter expert.

The Quality Threshold: Where Do You Draw the Line?

AI is a tool – a powerful one, yes – but it very much supports the process rather than replacing human creativity. If you have a low bar for quality, that’s a genuine issue. But for professionals creating substantive content, AI augments rather than replaces human judgment, expertise, and accountability.

Ask yourself: Are you creating content by AI or with AI?

For me, it’s definitely the latter. When I recently worked on a white paper about AI risk, AI tools were involved throughout the process. They helped with research, suggested structure ideas, and refined my language. But the core analysis, the critical thinking, and the final decisions? All human.

This distinction – AI-assisted vs. AI-generated – is often ignored in the disclosure debate.

Is Blanket AI Disclosure Actually Helpful?

Some people argue for blanket AI disclosure – where everything touched by AI must be flagged. But is that really necessary? And more importantly, is it accurate?

Consider this: You’ve probably used AI-driven tools for years without thinking twice. Microsoft introduced grammar checking in Word 97. Have you ever felt the need to disclose that you used it? Of course not – it’s just part of writing.

So why the sudden demand for disclosure now, just because the tools have become more advanced?

Considering Different Perspectives

Not everyone advocates for a nuanced approach to AI disclosure. For instance, Christopher Penn of Trust Insights – someone I have huge respect for in the field of AI – argues for mandatory disclosure of all AI-generated content. In his blog post and accompanying video, Penn presents three main arguments for universal disclosure:

First, he cites the EU AI Act as creating a legal requirement for transparency around AI-generated content, suggesting organizations should “get ahead of the law” rather than retroactively disclosing AI use.

Second, Penn frames it as an ethical obligation: “You shouldn’t claim work that you didn’t actually do. For example, if you use AI to write a blog post, you didn’t write the post—a generative AI did.”

Third, he emphasizes copyright protection: “By disclosing what is AI-made, you are also protecting what is NOT AI-made,” since AI-generated content cannot be copyrighted in jurisdictions like the US.

While these points deserve consideration, they tend to treat AI as a binary factor in content creation rather than acknowledging the spectrum of AI involvement that characterizes professional workflows. This all-or-nothing approach may inadvertently reinforce misconceptions about how AI actually functions in expert-led content development, where tools support rather than replace human judgment.

The Risk of Undermining Your Work

Here’s something you might not expect: Overly simplistic disclosure could damage how your work is perceived.

Research backs this up. A 2024 study by Altay and Gilardi in PNAS Nexus found that when content is labelled “AI-generated,” people are less likely to believe or share it – even when it’s factually accurate or mostly human-made.

This effect happens because readers often assume AI involvement means full automation with no human oversight. The impact is significant – smaller than labelling something outright false, but still enough to damage trust in high-quality, human-led work that happens to involve AI.

Additional Research Supports the Perception Gap

Beyond the 2024 Altay and Gilardi study, additional research confirms how AI disclosure impacts perception. A September 2024 study in the Journal of Communication led by Professor Haoran Chu found fascinating results when the researchers swapped labels between human-written and AI-generated stories.

Participants who believed they were reading AI-generated content reported being less engaged and more critical—even when unknowingly reading human-written stories. While AI content could be logical and persuasive, readers reported it lacked that crucial ability to “transport” them into the narrative.

As Chu notes: “AI is good at writing something that is consistent, logical and coherent. But it is still weaker at writing engaging stories than people are.” More importantly, this perception gap existed regardless of who actually created the content.

These findings further highlight the risks of simplistic AI disclosure—the mere suggestion of AI involvement can significantly impact how audiences engage with and value the work.

So, What’s a Smarter Way to Handle AI Disclosure?

Rather than flagging every piece of content, a better solution could be organisational AI usage statements.

This would be a clear, external policy outlining how your organisation uses AI. It could cover your ethical principles, your commitment to quality, and the general types of AI tools you use.

Why take this route?

➡️ It’s transparent without cluttering every piece of content.
➡️ It gives your audience context about your approach to AI.
➡️ It shifts focus back to the quality of your work – not just the tools behind it.

Another smart strategy? Focus on your process, not just the tools.

Instead of saying, “This was created using AI assistance,” try something like:
“This report was developed using a rigorous research process, incorporating AI-assisted data analysis and expert human review.”

This approach highlights the depth and thoughtfulness behind the work, rather than making AI sound like a mysterious force behind the scenes.

A Nuanced Approach Is Essential

Disclosure should match the level of AI involvement. A grammar checker? No disclosure necessary. AI-generated first drafts? That might warrant a more detailed explanation of your process.

Different industries will also need different disclosure standards:

➡️ Finance, healthcare, law: Stricter rules and transparency requirements.
➡️ Creative industries: Protecting and highlighting human creativity.
➡️ News media: Emphasising factual accuracy and editorial responsibility.
➡️ Education: Addressing concerns around authorship and assessment.

Regulations and Disclosure

We must also consider the evolving regulatory landscape around AI disclosures. The EU AI Act, whose transparency requirements apply from 2026, introduces mandatory disclosure obligations for certain AI systems. Similar legislation is being considered in other jurisdictions, from the UK’s approach to Canada’s proposed Artificial Intelligence and Data Act.

These regulations often require clear disclosure when people are interacting with AI systems, particularly in high-risk contexts. However, even these frameworks acknowledge that disclosure needs are contextual – varying based on the type of content, potential impact, and intended audience. Organisations will need to balance compliance with these emerging regulations while maintaining effective communication with their audiences.

Ethics Matter Just as Much as Transparency

Beyond practical concerns, there’s an ethical layer here.

Who’s responsible for AI-assisted content? Is it the human creator, the organisation, the AI developer – or some combination of all three? There are big questions around accountability, authenticity, and potential deception that need serious consideration.

It’s a balancing act between the audience’s right to know, organisations’ right to innovate, and society’s need for trust in information.

How I’m Rethinking My Own AI Disclosure

In my own work, I’m aiming for a more thoughtful approach. For upcoming white papers, I plan to include a methodology section that naturally outlines how AI tools supported the process.

For books, I’ve previously used disclaimers like: “This was generated by a human in conjunction with AI.” But I think there’s room for improvement. A better phrasing might be:
“This book is the result of a collaborative process, combining human expertise with AI tools to enhance research and clarity. The final content has been rigorously reviewed to ensure accuracy and insight.”

Moving Forward: No One-Size-Fits-All Solution

AI disclosure is an evolving challenge – and it’s clear that blanket statements aren’t the answer.

We need practical, nuanced, and transparent approaches that don’t devalue the role of human expertise. Organisational AI statements, clearer methodology explanations, and adapting disclosure to fit the level of AI involvement are better ways forward.

Acknowledgment

I want to acknowledge that my thinking on this topic has been influenced by Christopher Penn’s post above, and by discussions with Frank Prendergast. Frank’s thoughtful email on AI disclosure is worth a read. We arrived at similar conclusions about the impracticality of blanket disclosure and the need for more nuanced approaches, and Frank’s fantastic three-point plan for responsible transparency helped crystallise my own perspective on balancing transparency with practicality.

How are you thinking about AI disclosure in your own work? Are you leaning toward full transparency, or do you think nuance matters more?

Written by Alastair McDermott

I help business leaders and employees use AI to automate repetitive tasks, increase productivity, and drive innovation, all while keeping a Human First approach. This enables your team to achieve more, focus on strategic initiatives, and make your company a more enjoyable place to work.
