The more data you give AI, the smarter it gets – and the riskier it becomes.
In this post, I’ll cover:
- Why sharing data makes AI more useful – and more dangerous
- Real-world examples of AI-related privacy breaches
- Practical steps to protect yourself while using AI tools
In security, there’s a well-known trade-off: security vs convenience.
If something is more secure, it’s less convenient.
For example, a door with 12 dead-bolts is a pain to lock and unlock, but it's very secure. Add a password, a retina scanner, a physical key and a thumbprint, and it becomes even more secure – and even less convenient, especially if one of those methods fails.
In AI, we face a similar trade-off.
AI is more useful when we give it access to more of our data.
Specifically, it’s more useful to use personally when we give it more access to our personal data – our email, our calendar, our documents, our computers, our credit cards, our medical information.
Why this trade-off exists
The more personal data an AI system has access to, the more tailored, effective and useful it becomes.
It can:
- Draft emails that sound like you
- Spot calendar clashes before you do
- Summarise and connect documents from across years of work
That can save you real time and energy.
But every new source of data adds risk. More exposure means more potential for leaks, misuse, or sharing with third parties you didn’t expect. In effect, you trade one form of friction (manual work) for another (privacy anxiety).
This fundamental tension isn’t accidental. Most AI companies make money by improving their models with your data. Even when they promise not to train on your specific inputs, the competitive pressure to gather more data is enormous. The better their AI performs, the more customers they attract.
I’ve seen this trade-off up close
Before working in AI and business strategy, I spent years in computer security. I was part of the Solaris Networking Security team at Sun Microsystems, where we built systems designed to protect critical infrastructure. That experience taught me a simple truth:
Every added layer of access is a potential vulnerability.
Back then, we focused on protecting systems from external attacks. Today, the challenge is more nuanced. We’re giving access willingly – sometimes without fully understanding the long-term implications.
The old principle still applies: convenience comes at a cost.
In the world of AI, that cost is often your privacy.
Recent breaches show the real risks
These aren’t theoretical concerns. Major AI companies have suffered serious privacy failures that demonstrate exactly what can go wrong.
23andMe’s multiple genetic privacy issues: Hackers used credential-stuffing attacks to access the genetic profiles of 6.9 million users, and the stolen data was sold on dark web forums. The company initially blamed users rather than taking responsibility, settled for $30 million in September 2024, and then filed for Chapter 11 bankruptcy – raising questions about what happens to its massive database of genetic data. The potential sale is raising alarms, because a new owner would inherit that sensitive data while potentially not being bound by the terms and conditions the original users agreed to.
This shows how “anonymised” data becomes a target for discrimination and profiling.
OpenAI’s string of security failures:
- March 2023: A Redis library bug exposed chat histories and payment info of 1.2% of ChatGPT Plus users
- 2023: An undisclosed breach of OpenAI’s internal employee forums, where staff discussed AI developments (the company chose not to inform law enforcement)
- July 2024: ChatGPT’s macOS app stored user conversations in plain text in unprotected locations
Microsoft Copilot vulnerabilities:
- CVE-2024-38206: A server-side request forgery flaw allowed access to Microsoft’s internal cloud infrastructure
- March 2024: US House of Representatives banned Copilot for congressional staff due to data leak concerns
Samsung banned employees from using ChatGPT after they accidentally leaked source code and meeting notes. This shows how even sophisticated companies struggle with the privacy trade-off.
AI agents raise the stakes dramatically
Until recently, most AI tools were read-only. They helped you make decisions – but they didn’t act on your behalf.
That’s changing fast.
AI agents can now take action for you, including:
- Booking flights or hotels
- Making purchases with saved credit card info
- Sending emails and messages
- Accepting or rescheduling meetings
- Replying to customer queries in your voice
These aren’t hypothetical features. They’re real, and already rolling out across platforms.
And this is where the privacy trade-off gets sharper.
It’s one thing for AI to read your calendar. It’s another for it to confirm a £500 flight because it thinks you’re free that weekend.
To act on your behalf, agents need broad access and permission. That includes sensitive systems like:
- Your email and messaging apps
- Your payment accounts
- Your file storage, CRM, and calendar tools
Once connected, that access often stays open. And if you’re not watching closely, an error or misfire could cause real damage – financial, reputational, or legal.
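If you’re wiring an agent up yourself, one simple defence is to treat access as an explicit allow-list rather than a default yes. Here’s a minimal, library-free Python sketch of that idea – the scope names and the grant_access helper are hypothetical, purely for illustration:

```python
# Hypothetical sketch of least-privilege access for an AI agent.
# Scope names and the connector are illustrative, not a real API.

ALLOWED_SCOPES = {
    "calendar.read",   # the agent may read the calendar...
    "email.read",      # ...and read email,
}                      # but nothing that lets it act (send, pay, delete).

def grant_access(requested_scopes: set[str]) -> set[str]:
    """Return only the scopes we explicitly allow; refuse anything broader."""
    refused = requested_scopes - ALLOWED_SCOPES
    if refused:
        print(f"Refusing scopes: {sorted(refused)}")
    return requested_scopes & ALLOWED_SCOPES

# Example: an agent asks for far more than it needs.
granted = grant_access({"calendar.read", "email.send", "payments.charge"})
print(f"Granted: {sorted(granted)}")   # only calendar.read survives
```

The same principle applies when you’re only clicking through a consent screen: grant the narrow, read-only option and refuse anything the tool can’t justify.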
The core tension: usefulness vs privacy
Here’s the core equation:
- More data shared → More AI power
- More AI power → Higher privacy risk
And when the AI can take action, the stakes are no longer theoretical. You’re trusting it not just to analyse data, but to make decisions and take steps on your behalf.
Would you let an AI agent reschedule your meetings without review? Would you give it access to your Stripe or PayPal account for automatic client billing?
There’s no right answer.
But there is a right question: what level of risk are you comfortable with, given the benefit offered?
Can you trust your AI provider?
Different AI providers have very different approaches to privacy. Before you share anything sensitive, ask:
➡️ What data do they collect?
Is it everything on your system, or just what you explicitly choose?
➡️ How long do they keep it?
Some delete raw data after model training. Others hang on indefinitely.
➡️ Do they share it?
That includes partners, advertisers, or law enforcement.
➡️ How transparent are they?
Are their policies clear and accessible – or buried in legal jargon?
➡️ How do they handle breaches?
Do they notify users quickly and take responsibility, or blame users like 23andMe did?
Comparing big players like OpenAI, Google and Microsoft shows how varied these policies are. Some offer granular controls and audits. Others assume consent by default. It’s worth digging into the details before you connect your tools.
The regulatory landscape is shifting
Governments are taking notice. The EU’s GDPR enforcement is getting more aggressive with AI companies. Italy alone has fined OpenAI €15 million and temporarily banned ChatGPT. The US government is expressing serious concerns about AI and national security.
When the US House of Representatives bans an AI tool for congressional staff, that’s a signal about where regulatory attitudes are heading.
This is important because stricter rules are coming. Companies that don’t get their privacy practices sorted now will face bigger problems later.
Red flags to watch for
Based on recent breaches, here are warning signs:
🚩 AI tools that don’t clearly explain where your data goes
If they can’t tell you in plain English, that’s a problem.
🚩 Companies that change privacy policies after breaches
Like 23andMe blocking class action lawsuits in their terms of service.
🚩 Tools that require broad access but only use it for narrow tasks
Why does it need access to everything when it only reads your calendar?
🚩 Providers who blame users for security failures
Professional companies take responsibility when things go wrong.
🚩 Unreported breaches
OpenAI didn’t tell anyone about their 2023 internal breach. That’s not transparency.
A checklist before you connect
Before you give an AI tool access to your inbox, calendar, or card statements, run through a few quick checks:
➡️ Do I actually need this feature?
Start with minimal access and build up.
➡️ Can I limit the scope?
Look for folder-level or read-only permissions.
➡️ What’s the worst-case scenario?
Imagine a leak, breach, or legal demand for your data.
➡️ Is there an audit log?
You should be able to see who accessed what, and when – there’s a simple sketch of this after the checklist.
➡️ Can I revoke access easily?
One click – not a customer support ticket.
These don’t take long. But they could save you from major problems down the road.
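To make the audit-log check concrete: if a tool doesn’t offer one, you can keep your own lightweight record of what each connected tool touches. Here’s a minimal Python sketch under that assumption – the tool names, file name and log format are made up for illustration, not any provider’s real API:

```python
# Minimal sketch of a personal audit log for AI tool access.
# Tool and resource names are illustrative only.

import csv
from datetime import datetime, timezone

LOG_FILE = "ai_access_log.csv"

def record_access(tool: str, resource: str, action: str) -> None:
    """Append one line: when, which tool, what it touched, and what it did."""
    with open(LOG_FILE, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.now(timezone.utc).isoformat(), tool, resource, action]
        )

def review_log() -> None:
    """Print the log so a monthly or quarterly review takes minutes, not hours."""
    with open(LOG_FILE, newline="") as f:
        for when, tool, resource, action in csv.reader(f):
            print(f"{when}  {tool:<15} {action:<6} {resource}")

record_access("calendar-agent", "work calendar", "read")
record_access("email-agent", "inbox", "send")
review_log()
```

Even a rough log like this turns a quarterly access review from guesswork into a five-minute read-through.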
A practical approach to AI privacy
You don’t need to avoid AI. You just need to be thoughtful.
- Start small – Grant minimal access first
- Review regularly – Monthly or quarterly access audits
- Keep backups – Encrypted local copies of key data
- Educate your team – Make sure they understand what’s at stake
- Plan for breaches – Because they will happen
That’s how you get the benefits of AI without handing over the keys to your digital life.
Final thoughts
The AI privacy paradox isn’t going away – it’s intensifying. AI agents are crossing the line from helpful to autonomous. And the tools you connect today may evolve in ways you didn’t anticipate tomorrow.
But I’m not anti-AI. Quite the opposite. I think these tools can be transformative. But we need better instincts – and clearer boundaries – when it comes to access and action.
The same principles I used in enterprise security apply here: minimise access, monitor activity, and don’t rely on blind trust.
The companies that get this balance right will thrive. Those that don’t will become cautionary tales like 23andMe.
Your turn: do you use AI agents to take actions on your behalf? What steps do you take to keep that safe and sensible?
Share your thoughts with me on LinkedIn – I’d love to hear how others are thinking about this shift.
PS: If you’re mapping out your AI stack, try breaking it into three zones:
- Read-only vs. full access
- Suggest vs. act
- Manual review vs. auto-execute
It’s a helpful lens for deciding where to draw the line.
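If you want to make that concrete, here’s a rough Python sketch of what such a map might look like – the tool names and classifications are invented examples, not recommendations:

```python
# Rough sketch: classify each connected AI tool along the three zones.
# Tool names and their classifications are examples only.

ai_stack = {
    "meeting-summariser": {"access": "read-only", "mode": "suggest", "review": "manual"},
    "email-drafter":      {"access": "read-only", "mode": "suggest", "review": "manual"},
    "calendar-agent":     {"access": "full",      "mode": "act",     "review": "manual"},
    "billing-agent":      {"access": "full",      "mode": "act",     "review": "auto"},
}

# Flag anything that can both act and auto-execute: that's where errors cost money.
for tool, zones in ai_stack.items():
    risky = zones["mode"] == "act" and zones["review"] == "auto"
    print(f"{tool:<20} {'review this first' if risky else 'ok for now'}")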