Quick answers
What are the main types of AI?
Six categories cover most of what business leaders meet in 2026: large language models (text), image/audio/video generation, computer vision, predictive models (classical machine learning), reinforcement learning, and agents. Neural networks and deep learning aren't separate types - they're the underlying architecture for most of the categories above.
What's the difference between LLMs and predictive AI?
LLMs (like ChatGPT and Claude) generate text by predicting the next chunk of language - they're flexible but can confidently produce things that aren't true. Predictive models forecast or classify from structured data (sales numbers, transactions, customer attributes) - they're narrower but more measurable, cheaper to run, and easier to explain to a regulator. For numerical forecasting, a classical predictive model is usually a better fit than an LLM.
What is an AI agent?
An AI agent is a system that doesn't just produce output - it takes actions. It uses tools, browses the web, runs code, fills in forms, and executes multi-step tasks. Agent modes are now built into Claude, ChatGPT, and Gemini directly. They're powerful but still unreliable for high-stakes work, so the right pilot is something where the cost of getting it wrong is low and supervision is acceptable.
Are neural networks a type of AI?
No - they're the architecture that most modern AI is built on, not a separate category. LLMs are neural networks. Image generators are neural networks. Modern computer vision is neural networks. "Deep learning" just means a neural network with many layers. When someone says a product "uses neural networks," that tells you about the underlying engineering, not what the system actually does.
Why does the AI category matter for business decisions?
Because different categories fail in different ways. A predictive model might give a wrong forecast - that's a calibration problem you can measure and manage. A generative model might confidently produce something untrue - that's a verification problem, which is a different beast entirely. The questions you ask, how you score the project, and the controls you put in place all depend on which type of system you're dealing with.
Most of the conversations I have about AI start the same way: a senior leader in pharma, manufacturing, or professional services wants to make sensible decisions about AI for their organisation without having to become an engineer first. They've used ChatGPT. They've sat through a vendor demo. They've heard the word "agentic" four times this week and they're not entirely sure what it means.
That's reasonable. ChatGPT is the most accessible AI tool that's ever existed, and for a lot of people, it's the on-ramp into the whole field. But it's one window onto a much wider room. If you're making decisions about where AI fits in your organisation, you need to be able to see the rest of it.
This isn't about becoming technical. It's about being able to recognise what's in front of you when a vendor pitches you an "AI-powered platform," when a team member proposes a pilot, or when a tool you've never heard of starts showing up in someone's workflow. The category tells you what kind of thing you're looking at, and what kind of risk and opportunity comes with it.
Here's a working map of the AI categories worth knowing in 2026.
At a Glance
| Category | What It Does | Where You Meet It |
|---|---|---|
| Large Language Models | Generates and works with text | ChatGPT, Claude, Copilot, Gemini |
| Image, Audio, and Video Generation | Creates new visual and audio content | Midjourney, DALL-E, Sora, ElevenLabs |
| Computer Vision | Recognises what's in images and video | Quality inspection, OCR, security |
| Predictive Models | Forecasts and classifies from data | Demand planning, churn, fraud detection |
| Reinforcement Learning | Learns by trial and error | Robotics, dynamic pricing, model training |
| Agents | Takes multi-step actions, not just outputs | Computer-use agents, agentic workflows |
Large Language Models
The chatbots. ChatGPT, Claude, Gemini, Microsoft Copilot, and the open-weight models like Llama, Mistral, DeepSeek, and Qwen. They're built on an architecture called the transformer, and what they do, fundamentally, is predict the next chunk of text. Very, very well.
You meet LLMs in drafting, summarising, extracting structured information from messy documents, brainstorming, and answering questions over your own files. They're already in someone's workflow in your organisation, whether IT knows about it or not. Reasoning models like OpenAI's o1 sit in this category too, trained to spend more compute on harder questions before answering.
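The core objective - predict the next chunk of text - can be illustrated with a toy that is nothing like a real transformer but shares the same idea. This sketch counts which word most often follows each word in a tiny made-up corpus and uses that to "predict"; real LLMs learn the same kind of statistical pattern, just over trillions of tokens with a vastly more capable architecture.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word, which words follow it and how often."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(model, word):
    """Return the most frequently seen next word, or None if unseen."""
    counts = model.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

# Hypothetical miniature corpus, purely for illustration
corpus = (
    "the model predicts the next word "
    "the model learns from text "
    "the next word depends on the last word"
)
model = train_bigram(corpus)
print(predict_next(model, "next"))   # -> "word"
print(predict_next(model, "zebra"))  # -> None (never seen in training)
```

The toy also shows why these systems can be confidently wrong: the prediction is whatever was statistically common in training, not a checked fact.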
Image, Audio, and Video Generation
DALL-E, Midjourney, Flux, Sora, ElevenLabs, Suno, Runway. Most modern image and video tools are now built on diffusion models. The previous generation used GANs, which still get a lot of mentions in older articles, but diffusion has largely taken over for high-quality output. Video models have moved on quickly in the last 18 months - what was clearly synthetic this time last year now passes a casual glance.
You meet these in marketing imagery, product mockups, design exploration, internal training materials, and increasingly in voice and video for things like overdubs and synthetic narration.
Honest note: copyright and authenticity questions are still live, and the legal position varies by jurisdiction. Treat the output as a draft to review, not a finished asset to publish.
Computer Vision
Recognising what's in an image or video. This includes quality inspection on a manufacturing line, document parsing, optical character recognition (OCR), security and surveillance, and medical imaging.
Computer vision is the unhyped success story of AI. It's been working reliably in narrow tasks for over a decade. If you're in manufacturing, pharma, or anything involving physical inspection or document handling, this is often where AI pays back the fastest, because the use cases are well-defined and the failure modes are well-understood.
Predictive Models (Classical Machine Learning)
The unglamorous workhorses. Demand forecasting, customer churn prediction, fraud detection, recommendation systems, credit scoring. The maths underneath - regressions, decision trees, gradient-boosted models - is decades old and well understood.
If you're a mid-sized or larger organisation, predictive models have been running in your business for years. They're often dressed up as "AI" now in vendor pitches, but the underlying technology hasn't changed dramatically. Worth knowing because for problems involving structured numerical data - like forecasting next quarter's demand - a classical model is often a better fit than an LLM, cheaper to run, and easier to explain to a regulator. The hard part is rarely picking the model; it's orchestrating it into the rest of your operation.
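To make "decades-old, well-understood maths" concrete, here is a minimal sketch of the simplest classical model of all: an ordinary least-squares trend line fitted to a short sales history, then extended one period ahead. The numbers are invented, and real demand planning would use richer models (seasonality, gradient boosting, external drivers), but the principle - fit a pattern to structured numbers, extrapolate, measure the error - is the same.

```python
def fit_trend(values):
    """Ordinary least-squares line through (0, v0), (1, v1), ...
    Returns (slope, intercept)."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

# Hypothetical quarterly unit sales
sales = [100, 110, 121, 128, 142]
slope, intercept = fit_trend(sales)

# Forecast the next quarter (x = 5)
forecast = slope * len(sales) + intercept
print(round(forecast, 1))  # -> 150.8
```

Unlike a generative model, this forecast comes with a measurable track record: you can compare every prediction against what actually happened and know exactly how wrong the model tends to be.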
Reinforcement Learning
Systems that learn by trial and error, optimising towards a goal through repeated feedback. Famous for AlphaGo and game-playing AI. Practical applications include dynamic pricing, robotic control, and route optimisation. It's also a key part of how the latest reasoning models from OpenAI, Anthropic, and Google are trained to produce more useful answers.
You're less likely to encounter reinforcement learning directly as a leader than the other categories on this list, but it's worth recognising the term. It's a learning approach rather than a use case - the same approach can train a robot to walk and a chatbot to be more helpful.
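The trial-and-error idea can be sketched with the classic multi-armed bandit: the system repeatedly picks an option (here, a hypothetical price point), observes whether it paid off, and gradually shifts towards whatever works best. The epsilon-greedy strategy below mostly exploits its current best guess while occasionally exploring alternatives. The conversion rates are invented, and real reinforcement learning (robotics, model training) is far more complex, but the feedback loop is the same.

```python
import random

def epsilon_greedy(true_rates, steps=5000, epsilon=0.1, seed=0):
    """Learn which arm pays off best by trial and error.
    Returns the running-average reward estimate for each arm."""
    rng = random.Random(seed)
    counts = [0] * len(true_rates)
    values = [0.0] * len(true_rates)
    for _ in range(steps):
        if rng.random() < epsilon:            # explore: try a random arm
            arm = rng.randrange(len(true_rates))
        else:                                 # exploit: current best estimate
            arm = values.index(max(values))
        # Simulated feedback: success with the arm's true (hidden) rate
        reward = 1.0 if rng.random() < true_rates[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
    return values

# Three hypothetical price points with hidden conversion rates
estimates = epsilon_greedy([0.1, 0.8, 0.3])
best = estimates.index(max(estimates))
print(best)  # the middle arm, with the highest true rate
```

Note that the system is never told which arm is best - it discovers it purely from the rewards, which is what separates reinforcement learning from models trained on labelled examples.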
Agents
The category that's moved fastest over the past twelve months. Agents are AI systems that don't just produce output - they take actions. They use tools, browse the web, run code, fill in forms, and execute multi-step tasks. In April 2025 this was experimental. Now agent modes are built into Claude, ChatGPT, and Gemini directly, and you'll see "agentic workflows" in customer service, software development, research, and operations.
This is where most enterprise AI investment is heading right now, and where you'll see the most product launches over the coming year.
Honest note: agents are still unreliable for high-stakes work. Treat them like a capable junior employee who needs supervision and clear constraints, not a finished product. The right pilot for an agent is something where the cost of getting it wrong is low and the cost of supervision is acceptable.
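The "takes actions, not just outputs" pattern reduces to a loop: decide on a tool call, execute it, record the result, repeat. The sketch below is a deliberately stripped-down illustration - a real agent uses an LLM to choose each step, while here the plan is scripted and the only tool is a restricted calculator - but it shows the control flow and the kind of supervision record (a transcript of every action) you should expect any agent pilot to produce.

```python
def calculator(expression):
    """Toy tool: evaluate a basic arithmetic expression.
    Restricted character set as a guardrail - never eval
    untrusted input in production."""
    allowed = set("0123456789+-*/(). ")
    if not set(expression) <= allowed:
        raise ValueError("unsupported characters")
    return str(eval(expression))

# Tool registry: real agents expose many more (web search,
# code execution, form filling), each with its own guardrails.
TOOLS = {"calculator": calculator}

def run_agent(plan):
    """Execute a list of (tool_name, argument) steps, logging each
    result to a transcript. In a real agent an LLM would choose the
    next step from the transcript; here the plan is scripted so the
    loop itself is visible."""
    transcript = []
    for tool_name, argument in plan:
        result = TOOLS[tool_name](argument)
        transcript.append((tool_name, argument, result))
    return transcript

log = run_agent([("calculator", "12 * 7"), ("calculator", "84 / 4")])
for step in log:
    print(step)
```

The transcript is the point: an agent whose every action is logged and reviewable is exactly the "junior employee under supervision" setup the note above recommends.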
A Note on Neural Networks
You'll see "neural networks" mentioned a lot in AI conversations. They're not a separate category of AI alongside the ones above. They're the architecture under most of what's listed here. LLMs are neural networks. Image generators are neural networks. Modern computer vision is neural networks. Neural networks are part of the foundation that supports the items in this list, not a separate item on it.
The same goes for "deep learning." That just means a neural network with many layers. It's a description of the system's structure, not a different kind of system.
What This Means for Your Decisions
Why does any of this matter for a leader who isn't going to write the code?
Because the category tells you what kind of risk and what kind of opportunity you're looking at. A predictive model and a generative model fail in completely different ways. A predictive model might give you a wrong forecast - that's a calibration problem you can usually measure and manage over time. A generative model might confidently produce something untrue - that's a verification problem, which is a different beast entirely. The questions you ask, the way you score the project (we use the RATES framework for this in workshops), and the controls you put in place all depend on which type of system you're dealing with.
You don't need to be a machine learning engineer. You do need a working mental map. If you can put the thing in front of you into one of these categories, you can ask sharper questions, set better expectations, and make better decisions about whether to pilot it, scale it, or walk away.
If you want to go deeper on the practical side - opportunity, risk, and the adoption roadmap - the AI Toolkit covers all three in short, free whitepapers. No opt-in.
If you're working through this for your own organisation and want a sounding board, I run a 25-minute Focus Call where we look at where AI is most likely to save you time and where the real risks sit for your business. I also run cohort programmes for leadership teams in Irish and UK industry where we work through this material together. Book a Focus Call - free, no obligation, no hard sell.