The Right Way to Integrate AI Into Your Organisation


When Your Staff Are Already Using AI: A Practical Approach to Managing Change

A recent BBC article highlighted how one major law firm is navigating the complexities of AI governance.

Hill Dickinson is an international law firm with over a thousand employees in the UK. The firm discovered something that I suspect many organisations would find if they looked: their staff are already extensively using AI tools.

According to the BBC’s reporting, the firm identified significant usage of various AI tools among their staff – something that speaks to the growing role of AI in professional services. As someone who works with organisations on AI implementation, I believe their experience offers an opportunity to explore how we might all approach AI adoption thoughtfully and safely.

BBC headline: Law firm restricts AI after 'significant' staff use

The numbers are striking: “more than 32,000 hits to the popular chatbot ChatGPT over a seven-day period,” plus “3,000 hits to the Chinese AI service DeepSeek” and “almost 50,000 hits to Grammarly.”

The firm’s response was to restrict access, noting that “much of the usage was not in line with its AI policy.”

What caught my attention, though, was the UK Information Commissioner’s Office’s response. They said something that perfectly aligns with what I’ve been seeing in my consulting & training work:

“With AI offering people countless ways to work more efficiently and effectively, the answer cannot be for organisations to outlaw the use of AI and drive staff to use it under the radar.”

What’s Really Happening Here?

I think this situation perfectly illustrates what I’m encountering in my AI consulting work: employees are already using AI tools, whether we have formal policies or not. They’re finding ways to be more efficient and innovative, which is brilliant! However, I understand the firm’s concerns – particularly in a regulated industry like law, where client confidentiality is paramount.

The Challenge We’re All Facing

The law firm’s approach reflects the careful balance many regulated organisations must strike.

As they told the BBC, they are “aiming to positively embrace the use of AI tools to enhance our capabilities while always ensuring safe and proper use by our people and for our clients.”

Their AI policy already includes “guidance that prohibits the uploading of client information and requires staff to verify the accuracy of the large language models’ responses.”

These are exactly the right concerns to have. And they’re not alone – according to the article, a survey of 500 UK solicitors found that 62% anticipated an increase in AI usage over the following 12 months. Law firms across the UK are already using the technology for tasks like “drafting documents, reviewing or analysing contracts and legal research.”

The challenge, I think, isn’t whether to use AI – that ship has sailed. It’s how to embrace it safely and effectively. As Ian Jeffery, chief executive of the Law Society of England and Wales, noted in the article, AI “could improve the way we do things a great deal” but these tools “need human oversight.”

A Different Approach to Consider

I think there’s a more productive way forward. Here’s what I’d suggest for any organisation facing similar challenges:

1. Start with Understanding

Rather than monitoring usage to restrict it, why not survey your staff to understand:

  • How they’re currently using AI
  • What problems they’re solving with it
  • Where they see the biggest potential benefits

I’ve found that when organisations do this, they often discover incredibly innovative uses they’d never have thought of themselves.

2. Create Clear Guidelines (Not Just Restrictions)

In my experience, staff want to do the right thing. They’re not trying to breach security – they’re trying to work more efficiently. I think we need to:

  • Explain clearly which tools are approved and for which kinds of tasks
  • Spell out what must never be shared – client and confidential information above all
  • Require that staff verify the accuracy of AI outputs before relying on them

3. Build a Sharing Culture

I’ve seen this work brilliantly in other organisations:

  • Set up regular “AI success story” sharing sessions
  • Create a platform – even a simple Slack or Teams chat channel – for sharing tips
  • Consider small rewards for innovative, secure AI use
  • Encourage management to share their own AI experiences

Enterprise AI Solutions

What makes this situation particularly interesting is that many commercial AI models already offer robust security features and enterprise-grade controls. Companies like Microsoft, Anthropic, and OpenAI provide business versions of their AI tools that include:

  • Private instances that don’t train on your data
  • Enhanced security protocols
  • Audit trails of usage
  • Team management features
  • Data handling compliance tools
  • Integration with existing security systems

These enterprise solutions are designed specifically to address the concerns many organisations have about AI usage. They come at a cost, but that investment can pay for itself through increased productivity and reduced risk.
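For readers who want to see what this can look like in practice, here’s a minimal sketch of an internal gateway that forwards staff prompts to an approved enterprise endpoint while keeping an audit trail. To be clear, the endpoint URL, headers, and payload shapes below are placeholders rather than any specific vendor’s API – your provider’s documentation will give you the real details – but the pattern of logging usage metadata (never client content) is the point.

```python
"""Minimal sketch of an internal AI gateway with an audit trail.

Assumption: the endpoint URL, headers, and payload/response shapes are
placeholders, not any specific vendor's API - swap in the details from
your enterprise provider's documentation.
"""
import json
import logging
from datetime import datetime, timezone
from urllib import request

# The audit log records who used AI and when - metadata only, no client content.
logging.basicConfig(filename="ai_audit.log", level=logging.INFO)

ENTERPRISE_ENDPOINT = "https://ai-gateway.example.internal/v1/chat"  # placeholder
API_KEY = "load-from-your-secrets-manager"  # never hard-code a real key


def ask_ai(user_id: str, prompt: str) -> str:
    """Forward a prompt to the approved enterprise endpoint and log the usage."""
    logging.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "prompt_chars": len(prompt),  # log size, not the text itself
    }))
    payload = json.dumps({"prompt": prompt}).encode("utf-8")
    req = request.Request(
        ENTERPRISE_ENDPOINT,
        data=payload,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )
    with request.urlopen(req) as resp:  # hypothetical internal endpoint
        return json.loads(resp.read().decode("utf-8")).get("answer", "")
```

Even a lightweight wrapper like this gives an organisation visibility into AI usage that blanket blocking of consumer tools never will.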

One concern I have when organisations restrict AI access is that staff may end up using less capable tools that lack enterprise-grade security features. The providers of the most advanced AI models (known as “frontier models”) typically offer far stronger security controls and compliance features than basic or consumer-grade tools. Without access to these more capable options, staff may resort to whatever alternatives they can find – potentially increasing rather than decreasing risk.

This is why choosing the right AI tools is crucial. Different models offer different capabilities and security features. The key is matching the tool to your specific needs and compliance requirements while ensuring staff have access to appropriately secure and capable solutions.

Making it Work in Practice

Here’s what I think this might look like in practice:

  1. Start Small: Perhaps begin with a pilot group using approved AI tools for specific, non-sensitive tasks.
  2. Learn and Adapt: Use the pilot group’s experiences to develop practical guidelines that work in the real world.
  3. Provide the Right Tools: Consider implementing private AI instances for sensitive work. Yes, there’s a cost, but I think it’s worth weighing against the productivity benefits and risk mitigation.
  4. Train and Support: In my workshops, I find that people need about 10-14 hours of hands-on experience to really get comfortable with AI tools. Consider providing structured learning opportunities.

How Do We Control This? Or Should We?

I think the key is to shift our thinking from “how do we control this?” to “how do we harness this safely?” In my experience, when organisations take this approach, they often find their staff become partners in ensuring safe, effective AI use rather than trying to work around restrictions.

What You Can Do Today

If you’re facing similar challenges, here are some practical first steps:

  • Survey your staff anonymously about their AI use
  • Create a small working group to develop initial guidelines
  • Identify a few low-risk areas where AI could be officially piloted
  • Start building your knowledge-sharing platform

The law firm’s situation isn’t unique – I’m seeing this across all sectors. What’s important is how we respond. I think there’s a fantastic opportunity here to harness the enthusiasm and innovation our staff are already showing, while ensuring we maintain appropriate safeguards.

I’d love to hear your thoughts on this. How is your organisation approaching AI adoption? What challenges are you facing? Let me know on LinkedIn.

If you’d like to learn more about implementing AI safely and effectively in your organisation, check out my AI workshops and AI training programmes, or schedule your free consultation below to chat with me about your specific needs.

Written by Alastair McDermott

I help business leaders and employees use AI to automate repetitive tasks, increase productivity, and drive innovation, all while keeping a Human First approach. This enables your team to achieve more, focus on strategic initiatives, and make your company a more enjoyable place to work.
