The Right Way to Integrate AI Into Your Organisation

When Your Staff Are Already Using AI: A Practical Approach to Managing Change

A recent BBC article highlighted how one major law firm is navigating the challenges of AI governance.

Hill Dickinson is an international law firm with over a thousand employees in the UK. The firm discovered something that I suspect many organisations would find if they looked: their staff are already extensively using AI tools.

According to the BBC’s reporting, the firm identified significant usage of various AI tools among their staff – something that speaks to the growing role of AI in professional services. As someone who works with organisations on AI implementation, I believe their experience offers an opportunity to explore how we might all approach AI adoption thoughtfully and safely.

BBC headline: Law firm restricts AI after 'significant' staff use

The numbers are striking: “more than 32,000 hits to the popular chatbot ChatGPT over a seven-day period,” plus “3,000 hits to the Chinese AI service DeepSeek” and “almost 50,000 hits to Grammarly.”
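For rough context – and assuming those hits were spread across the firm's thousand-plus UK staff – 32,000 ChatGPT hits in seven days works out at around 4,500 a day, or four to five per person, every single day of the week.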

The firm’s response was to restrict access, noting that “much of the usage was not in line with its AI policy.”

What caught my attention, though, was the UK Information Commissioner’s Office’s response. They said something that perfectly aligns with what I’ve been seeing in my consulting & training work:

“With AI offering people countless ways to work more efficiently and effectively, the answer cannot be for organisations to outlaw the use of AI and drive staff to use it under the radar.”

What’s Really Happening Here?

I think this situation perfectly illustrates what I’m encountering in my AI consulting work: employees are already using AI tools, whether we have formal policies or not. They’re finding ways to be more efficient and innovative, which is brilliant! However, I understand the firm’s concerns – particularly in a regulated industry like law, where client confidentiality is paramount.

The Challenge We’re All Facing

The law firm’s approach reflects the careful balance many regulated organisations must strike.

As they told the BBC, they are “aiming to positively embrace the use of AI tools to enhance our capabilities while always ensuring safe and proper use by our people and for our clients.”

Their AI policy already includes “guidance that prohibits the uploading of client information and requires staff to verify the accuracy of the large language models’ responses.”

These are exactly the right concerns to have. And they’re not alone – according to the article, a survey of 500 UK solicitors found that 62% anticipated an increase in AI usage over the following 12 months. Law firms across the UK are already using the technology for tasks like “drafting documents, reviewing or analysing contracts and legal research.”

The challenge, I think, isn’t whether to use AI – that ship has sailed. It’s how to embrace it safely and effectively. As Ian Jeffery, chief executive of the Law Society of England and Wales, noted in the article, AI “could improve the way we do things a great deal” but these tools “need human oversight.”

A Different Approach to Consider

I think there’s a more productive way forward. Here’s what I’d suggest for any organisation facing similar challenges:

1. Start with Understanding

Rather than monitoring usage to restrict it, why not survey your staff to understand:

  • How they’re currently using AI
  • What problems they’re solving with it
  • Where they see the biggest potential benefits

I’ve found that when organisations do this, they often discover incredibly innovative uses they’d never have thought of themselves.

2. Create Clear Guidelines (Not Just Restrictions)

In my experience, staff want to do the right thing. They’re not trying to breach security – they’re trying to work more efficiently. I think we need to give them clear, practical guidelines that show what safe use looks like – which tools are approved, and what data can and can’t be shared – rather than a blanket list of restrictions.

3. Build a Sharing Culture

I’ve seen this work brilliantly in other organisations:

  • Set up regular “AI success story” sharing sessions
  • Create a platform – even a simple Slack or Teams chat channel – for sharing tips (see the sketch after this list)
  • Consider small rewards for innovative, secure AI use
  • Encourage management to share their own AI experiences
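On the platform point, the barrier to entry is genuinely low. Here’s a hypothetical sketch of posting a tip to a Slack channel through an incoming webhook – the URL is a placeholder you’d generate in your own workspace, and Teams offers an equivalent through its own connectors.

```python
import json
import urllib.request

# Hypothetical sketch: share an AI tip in a team Slack channel via an
# incoming webhook. The URL below is a placeholder, not a real endpoint.
WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"

def share_ai_tip(author: str, tip: str) -> None:
    """Post a formatted tip message to the shared channel."""
    payload = {"text": f"*AI tip from {author}:* {tip}"}
    request = urllib.request.Request(
        WEBHOOK_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)

share_ai_tip("Alastair", "Ask the model to critique its own draft before you review it.")
```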

Enterprise AI Solutions

What makes this situation particularly interesting is that many commercial AI providers already offer robust security features and enterprise-grade controls. Companies like Microsoft, Anthropic, and OpenAI provide business versions of their AI tools that include:

  • Private instances that don’t train on your data
  • Enhanced security protocols
  • Audit trails of usage
  • Team management features
  • Data handling compliance tools
  • Integration with existing security systems

These enterprise solutions are specifically designed to address the concerns that many organisations have about AI usage. While they often come with a cost, this investment typically pays for itself through increased productivity and reduced risk.
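To make the audit-trail idea a little more concrete, here’s a minimal sketch of the kind of pre-flight check an organisation might run before a prompt leaves its network for an external AI service. Everything in it – the log file, the patterns, the user IDs – is a hypothetical illustration rather than any vendor’s actual product; a real deployment would lean on a proper data loss prevention service and the provider’s own enterprise tooling.

```python
import json
import logging
import re
from datetime import datetime, timezone

# Hypothetical sketch only: a pre-flight check run before a prompt is
# forwarded to an external AI service. Patterns and filenames are
# illustrative assumptions, not a real product's configuration.
logging.basicConfig(filename="ai_usage_audit.log", level=logging.INFO)

# Naive examples of content that should never leave the network.
# A real firm would use a dedicated DLP (data loss prevention) service.
BLOCKED_PATTERNS = [
    re.compile(r"client\s*ref(erence)?\s*[:#]", re.IGNORECASE),
    re.compile(r"\bprivileged\s+and\s+confidential\b", re.IGNORECASE),
]

def preflight_check(user_id: str, prompt: str) -> bool:
    """Log the request for the audit trail and block prompts that
    appear to contain client-identifying information."""
    allowed = not any(p.search(prompt) for p in BLOCKED_PATTERNS)
    logging.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "prompt_chars": len(prompt),
        "allowed": allowed,
    }))
    return allowed

print(preflight_check("staff-042", "Summarise the key points of this public judgment."))  # True
print(preflight_check("staff-042", "Client ref: 4471 - draft a settlement letter."))      # False
```

The specific patterns don’t matter much. The point is that an approved, logged route to a capable model is far safer than staff pasting text into whatever free tool they can reach.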

One concern I have when organisations restrict AI access is that staff might end up using less capable tools that lack enterprise-grade security features. The services built on the most advanced AI models (known as “frontier models”) often provide much better security controls and compliance features than basic or consumer-grade tools. Without access to these more sophisticated options, staff might resort to whatever alternatives they can find – potentially increasing rather than decreasing risk.

This is why choosing the right AI tools is crucial. Different models offer different capabilities and security features. The key is matching the tool to your specific needs and compliance requirements while ensuring staff have access to appropriately secure and capable solutions.

Making It Work in Practice

Here’s what I think this might look like in practice:

  1. Start Small: Perhaps begin with a pilot group using approved AI tools for specific, non-sensitive tasks.
  2. Learn and Adapt: Use the pilot group’s experiences to develop practical guidelines that work in the real world.
  3. Provide the Right Tools: Consider implementing private AI instances for sensitive work. Yes, there’s a cost, but I think it’s worth weighing against the productivity benefits and risk mitigation.
  4. Train and Support: In my workshops, I find that people need about 10-14 hours of hands-on experience to really get comfortable with AI tools. Consider providing structured learning opportunities.

How Do We Control This? Or Should We?

I think the key is to shift our thinking from “how do we control this?” to “how do we harness this safely?” In my experience, when organisations take this approach, they often find their staff become partners in ensuring safe, effective AI use rather than trying to work around restrictions.

What You Can Do Today

If you’re facing similar challenges, here are some practical first steps:

  • Survey your staff anonymously about their AI use
  • Create a small working group to develop initial guidelines
  • Identify a few low-risk areas where AI could be officially piloted
  • Start building your knowledge-sharing platform

The law firm’s situation isn’t unique – I’m seeing this across all sectors. What’s important is how we respond. I think there’s a fantastic opportunity here to harness the enthusiasm and innovation our staff are already showing, while ensuring we maintain appropriate safeguards.

I’d love to hear your thoughts on this. How is your organisation approaching AI adoption? What challenges are you facing? Let me know on LinkedIn.

If you’d like to learn more about implementing AI safely and effectively in your organisation, check out my AI workshops and AI training programmes, or schedule your free consultation below to chat with me about your specific needs.

Written by Alastair McDermott

I help business leaders and employees use AI to automate repetitive tasks, increase productivity, and drive innovation, all while keeping a Human First approach. This enables your team to achieve more, focus on strategic initiatives, and make your company a more enjoyable place to work.
