When Your Staff Are Already Using AI: A Practical Approach to Managing Change
A recent BBC article highlighted how one major law firm is navigating the complexities of AI governance.
Hill Dickinson is an international law firm with over a thousand employees in the UK. The firm discovered something that I suspect many organisations would find if they looked: their staff are already extensively using AI tools.
According to the BBC’s reporting, the firm identified significant usage of various AI tools among their staff – something that speaks to the growing role of AI in professional services. As someone who works with organisations on AI implementation, I believe their experience offers an opportunity to explore how we might all approach AI adoption thoughtfully and safely.
The numbers are striking: “more than 32,000 hits to the popular chatbot ChatGPT over a seven-day period,” plus “3,000 hits to the Chinese AI service DeepSeek” and “almost 50,000 hits to Grammarly.”
The firm’s response was to restrict access, noting that “much of the usage was not in line with its AI policy.”
What caught my attention, though, was the UK Information Commissioner’s Office’s response. They said something that perfectly aligns with what I’ve been seeing in my consulting & training work:
“With AI offering people countless ways to work more efficiently and effectively, the answer cannot be for organisations to outlaw the use of AI and drive staff to use it under the radar.”
What’s Really Happening Here?
I think this situation perfectly illustrates what I’m encountering in my AI consulting work: employees are already using AI tools, whether we have formal policies or not. They’re finding ways to be more efficient and innovative, which is brilliant! However, I understand the firm’s concerns – particularly in a regulated industry like law, where client confidentiality is paramount.
The Challenge We’re All Facing
The law firm’s approach reflects the careful balance many regulated organisations must strike.
As they told the BBC, they are “aiming to positively embrace the use of AI tools to enhance our capabilities while always ensuring safe and proper use by our people and for our clients.”
Their AI policy already includes “guidance that prohibits the uploading of client information and requires staff to verify the accuracy of the large language models’ responses.”
These are exactly the right concerns to have. And they’re not alone – according to the article, a survey of 500 UK solicitors found that 62% anticipated an increase in AI usage over the following 12 months. Law firms across the UK are already using the technology for tasks like “drafting documents, reviewing or analysing contracts and legal research.”
The challenge, I think, isn’t whether to use AI – that ship has sailed. It’s how to embrace it safely and effectively. As Ian Jeffery, chief executive of the Law Society of England and Wales, noted in the article, AI “could improve the way we do things a great deal” but these tools “need human oversight.”
A Different Approach to Consider
I think there’s a more productive way forward. Here’s what I’d suggest for any organisation facing similar challenges:
1. Start with Understanding
Rather than monitoring usage to restrict it, why not survey your staff to understand:
- How they’re currently using AI
- What problems they’re solving with it
- Where they see the biggest potential benefits
I’ve found that when organisations do this, they often discover incredibly innovative uses they’d never have thought of themselves.
2. Create Clear Guidelines (Not Just Restrictions)
In my experience, staff want to do the right thing. They’re not trying to breach security – they’re trying to work more efficiently. I think we need to:
- Clearly outline what information can and can’t go into public AI tools
- Provide appropriate alternatives for sensitive work
- Create simple decision trees for staff to follow (a minimal sketch of one follows this list)
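To make that last point a little more concrete, here is a minimal sketch of what a staff-facing decision tree might look like if you wrote it down explicitly. The data categories, thresholds, and wording are hypothetical placeholders, not a recommendation for any particular product or policy:

```python
# Hypothetical decision helper: "which AI route, if any, can I use for this task?"
# The categories and guidance text below are illustrative placeholders only.

from enum import Enum


class DataSensitivity(Enum):
    PUBLIC = "public"              # e.g. already-published material
    INTERNAL = "internal"          # e.g. internal process notes, no client data
    CONFIDENTIAL = "confidential"  # e.g. client, personal, or privileged information


def choose_ai_route(sensitivity: DataSensitivity, output_will_be_verified: bool) -> str:
    """Return guidance based on data sensitivity and whether the output will be checked."""
    if sensitivity is DataSensitivity.CONFIDENTIAL:
        return "Stop: do not use public AI tools. Use the approved private instance or ask the AI lead."
    if not output_will_be_verified:
        return "Pause: plan how you will verify the model's output before relying on it."
    if sensitivity is DataSensitivity.INTERNAL:
        return "Use an approved enterprise AI tool only, and avoid pasting names or identifiers."
    return "Public material: approved AI tools are fine; follow the standard usage guidelines."


if __name__ == "__main__":
    print(choose_ai_route(DataSensitivity.INTERNAL, output_will_be_verified=True))
```

In practice most organisations publish this as a one-page flowchart rather than code, but writing it out this explicitly is a useful way to surface the edge cases before they surface themselves.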
3. Build a Sharing Culture
I’ve seen this work brilliantly in other organisations:
- Set up regular “AI success story” sharing sessions
- Create a platform – even a simple Slack or Teams chat channel – for sharing tips
- Consider small rewards for innovative, secure AI use
- Encourage management to share their own AI experiences
Enterprise AI Solutions
What makes this situation particularly interesting is that many commercial AI models already offer robust security features and enterprise-grade controls. Companies like Microsoft, Anthropic, and OpenAI provide business versions of their AI tools that include:
- Private instances that don’t train on your data
- Enhanced security protocols
- Audit trails of usage
- Team management features
- Data handling compliance tools
- Integration with existing security systems
These enterprise solutions are specifically designed to address the concerns that many organisations have about AI usage. While they often come with a cost, this investment typically pays for itself through increased productivity and reduced risk.
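To make the "audit trails" point a bit more concrete: even before investing in an enterprise platform, a thin internal wrapper around whichever model you approve can record who used AI, and for what. The sketch below is a minimal, hypothetical illustration; `call_approved_model` stands in for your vendor's actual client library, and the logged fields are assumptions rather than a compliance recommendation:

```python
# Minimal sketch of an internal audit-logging wrapper around an approved AI model.
# call_approved_model() is a hypothetical stand-in for the real vendor client.

import hashlib
import json
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("ai_audit")
logging.basicConfig(level=logging.INFO)


def call_approved_model(prompt: str) -> str:
    """Placeholder for the real vendor call (OpenAI, Anthropic, Azure, etc.)."""
    return "(model response would appear here)"


def ask_ai(user_id: str, purpose: str, prompt: str) -> str:
    """Call the approved model and record an audit entry containing metadata only.

    A hash of the prompt is logged rather than the prompt itself, so the audit
    trail does not become a second copy of potentially sensitive text.
    """
    response = call_approved_model(prompt)
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "purpose": purpose,
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "response_chars": len(response),
    }))
    return response


if __name__ == "__main__":
    print(ask_ai("jsmith", "draft internal summary", "Summarise these meeting notes ..."))
```

Commercial enterprise tiers typically provide this kind of logging out of the box; the point of the sketch is simply that visibility, rather than blanket restriction, is what makes safe usage auditable.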
One concern I have when organisations restrict AI access is that staff might end up using less capable tools that don’t have enterprise-grade security features. The most advanced AI models (known as “frontier models”) often provide much better security controls and compliance features compared to basic or consumer-grade AI tools. Without access to these more sophisticated tools, staff might resort to using whatever alternatives they can find – potentially increasing rather than decreasing risk.
This is why choosing the right AI tools is crucial. Different models offer different capabilities and security features. The key is matching the tool to your specific needs and compliance requirements while ensuring staff have access to appropriately secure and capable solutions.
Making it Work in Practice
Here’s what I think this might look like in practice:
- Start Small: Perhaps begin with a pilot group using approved AI tools for specific, non-sensitive tasks.
- Learn and Adapt: Use the pilot group’s experiences to develop practical guidelines that work in the real world.
- Provide the Right Tools: Consider implementing private AI instances for sensitive work. Yes, there’s a cost, but I think it’s worth weighing against the productivity benefits and risk mitigation.
- Train and Support: In my workshops, I find that people need about 10-14 hours of hands-on experience to really get comfortable with AI tools. Consider providing structured learning opportunities.
How Do We Control This? Or Should We?
I think the key is to shift our thinking from “how do we control this?” to “how do we harness this safely?” In my experience, when organisations take this approach, they often find their staff become partners in ensuring safe, effective AI use rather than trying to work around restrictions.
What You Can Do Today
If you’re facing similar challenges, here are some practical first steps:
- Survey your staff anonymously about their AI use
- Create a small working group to develop initial guidelines
- Identify a few low-risk areas where AI could be officially piloted
- Start building your knowledge-sharing platform
The law firm’s situation isn’t unique – I’m seeing this across all sectors. What’s important is how we respond. I think there’s a fantastic opportunity here to harness the enthusiasm and innovation our staff are already showing, while ensuring we maintain appropriate safeguards.
I’d love to hear your thoughts on this. How is your organisation approaching AI adoption? What challenges are you facing? Let me know on LinkedIn.
If you’d like to learn more about implementing AI safely and effectively in your organisation, check out my AI workshops and AI training programmes, or schedule your free consultation below to chat with me about your specific needs.