AI is a power-hungry tool – there’s no denying that.
It can also be a problem-solver for energy efficiency.
There’s a lot of talk about the environmental cost of AI.
🔥 On one side, people are convinced that AI is burning the planet.
🌤️ On the other, some seem to think it’s just another tech cycle – nothing to panic about.
The impact is real. But how we build and use AI matters more than ever.
The truth is in the middle
AI models, especially large ones, use a ludicrous amount of energy compared with regular computing.
By 2027, AI could consume 85–134 terawatt-hours a year – roughly the annual electricity use of Argentina or the Netherlands.
That’s not nothing. But it’s also not the full picture.
Most projections assume AI systems run at full capacity, all the time. That’s rarely the case. Businesses have strong financial incentives to be efficient – nobody wants sky-high bills.
Also – let’s be honest – we were doing a pretty good job burning the world even before AI came along. Fossil fuels, inefficient systems, short-term thinking… AI didn’t start the fire, although it certainly seems to be accelerating it right now.
The question now is whether – and how – it can help us solve the problem too.
You may not control the industry, but you shape the outcomes
Whether or not you or I use AI personally won’t move the dial all that much – it’s here to stay. Big tech has already committed billions upon billions. That trajectory won’t change overnight.
So the better question is: how do we use this technology responsibly?
You and I can influence how AI is applied – in our teams, our projects, and our companies.
Here’s how I approach it in practice:
➡️ Look at the impact
- Start with cost and benefit
- Assess each use case individually
- Keep up with efficiency research and tools
➡️ Build with restraint
- Use models sized for the job
- Fine-tune existing models when possible
- Prefer cloud providers using renewables
➡️ Prioritise net-positive outcomes
- Choose projects that reduce waste or resource use
- Help teams measure energy impact (see the sketch after this list)
- Adjust or retire systems that keep doing more than the job actually requires
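To make "measure energy impact" concrete, here's a minimal sketch using the open-source codecarbon library to log a rough emissions estimate for a single job. The project name and the score_batch function are placeholders – swap in whatever your pipeline actually runs.

```python
# Minimal sketch: wrap a workload with codecarbon's EmissionsTracker
# to get a rough kg-CO2eq estimate per run. score_batch() is a
# placeholder for whatever job your team actually executes.
from codecarbon import EmissionsTracker

def score_batch():
    # Stand-in for the real workload (inference, training, ETL, ...)
    return sum(i * i for i in range(1_000_000))

tracker = EmissionsTracker(project_name="lab-workflow-demo")  # hypothetical project name
tracker.start()
try:
    score_batch()
finally:
    emissions_kg = tracker.stop()  # estimated kg CO2eq for the tracked span

print(f"Estimated emissions for this run: {emissions_kg:.6f} kg CO2eq")
```

Logged per run, numbers like this put the environmental price tag in the same dashboard as cost and latency – which is what makes the trade-offs visible.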
Here’s something that might surprise you:
AI can actually help cut emissions – when used deliberately.
For example, one of my medical lab clients used AI to automate their testing workflow. Processing time dropped by 92%, while capacity jumped 12x. The energy savings from more efficient operations far outweighed the energy used by the AI system.
We’re seeing similar results in logistics, infrastructure, and manufacturing – wherever there’s waste to remove or systems to optimise.
But only if we’re intentional about it.
Let me ask you this:
When you start an AI project, do you factor in the environmental cost? Or is it still an afterthought?
I’ve been bringing this issue into client conversations from day one because this matters.
What tools can help?
Platforms like Hugging Face, OpenAI, and Google Cloud are starting to surface energy usage metrics and sustainability dashboards. Research communities are also publishing benchmarks – from model size to estimated carbon output.
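As one example of what's already available: many model cards on the Hugging Face Hub declare a co2_eq_emissions field in their metadata. A hedged sketch of reading it, assuming the models you're comparing publish that field (many don't yet) – the repo id below is a placeholder:

```python
# Sketch: read the co2_eq_emissions metadata (if any) from a model card
# on the Hugging Face Hub. Not every model publishes this field, so
# treat a missing value as "unknown", not "zero".
from huggingface_hub import HfApi

api = HfApi()
info = api.model_info("your-org/your-model")  # placeholder repo id – substitute a real one
card = info.card_data.to_dict() if info.card_data else {}

emissions = card.get("co2_eq_emissions")  # may be a number or a dict with value/source
print(f"Reported training emissions: {emissions if emissions is not None else 'not reported'}")
```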
It’s not perfect. But it’s improving quickly.
Final thought: This isn’t about picking sides
AI isn’t good or bad. It’s a tool. It reflects how we use it.
The environmental debate around AI doesn’t need more drama – it needs better decisions. And let’s be clear: sometimes the best decision might be not to use AI for a particular task.
We need to be honest about both the costs and benefits. AI will consume significant energy. That’s unavoidable. But with thoughtful implementation, we can ensure it delivers value that justifies those costs – and in some cases, helps reduce our overall environmental impact.
The uncomfortable truth is that we’re making tradeoffs. Every AI implementation has an environmental price tag. The question is whether the benefits – efficiency gains, waste reduction, or new capabilities – are worth it.
I believe we can build AI that supports both people and the planet, but only if we approach it with eyes wide open to the real challenges involved. By putting humans – and humanity itself – at the centre of our decisions about AI, considering their needs, their work, and their future, we naturally create more thoughtful, efficient systems that respect both people and resources. It won't happen by accident.
Let’s make this human-centered, responsible approach to AI the default.
How are you factoring sustainability into your AI work?
I’d love to hear what you’re doing – or what’s been hard to figure out.
PS: If you’re building AI solutions and want to make them cleaner and smarter, feel free to reach out. I’m happy to share what’s worked.