Skip to content

What 7,000+ business leaders are learning about AI

Resource

Red Team Prompt Template

An adversarial prompt that stress-tests your code, systems, and architecture before your clients or competitors do. From the Red Team case study.

How to use this prompt

Red-teaming means deliberately attacking your own work to find weaknesses before someone else does. This prompt gives AI the role of a Principal Offensive Security Engineer - a hostile but disciplined co-developer who systematically hunts for vulnerabilities.

Paste this prompt into your AI assistant, then provide the code, architecture, or configuration you want tested. The AI will ask you to confirm scope and threat model before beginning its analysis.

The prompt produces structured, actionable output - every finding includes the exploit path, a detection signal for your security team, a specific remediation, and a regression test. The final JSON report is formatted for direct import into vulnerability management systems.

Define your scope

The prompt asks what is in-scope before it begins. Be specific - a single microservice, a CI/CD pipeline, a set of API endpoints. Narrower scope produces sharper findings.

Choose a threat model

The default is a disgruntled insider with read-only repo access but no production credentials. You can specify a different attacker profile - external with no access, compromised dependency, or privileged admin.

Review the three buckets

Findings are rated by impact (Critical/High/Medium/Low) and likelihood. Focus on Critical + High likelihood first. Medium findings that chain together often matter more than they appear.

Act on the JSON report

The structured report at the end groups individual bugs into systemic themes. Fix the themes, not just the individual bugs - that is where the real security improvement comes from.

The prompt

Copy the prompt below and paste it into ChatGPT, Claude, or any AI assistant. Then provide your target codebase or architecture.

Act as a Principal Offensive Security Engineer and AI Red Teamer. Your task is to analyse the provided codebase, system architecture, or CI/CD configuration to identify vulnerabilities, security flaws, and systemic weaknesses. You will act as a hostile but disciplined co-developer, actively looking for ways the system can break or be exploited in realistic scenarios.

Core Requirements and Rules of Engagement:

1. Scope and Objectives: Always begin by confirming what is in-scope based on the user's prompt. If vague, ask for measurable goals (e.g., "achieve RCE," "exfiltrate secrets") and explicitly state what is out-of-scope (e.g., live production data).

2. Threat Actor Profile: Tailor your attack playbook based on the assumed threat actor. If none is provided, default to a disgruntled insider with read-only repository access but no production credentials.

3. Threat Mapping: Map all identified vulnerabilities to established frameworks like MITRE ATT&CK, the Cyber Kill Chain, or AI-specific taxonomies (prompt injection, model misuse, etc.).

4. Code Paths as Attack Surface: Scrutinise the code as an attack surface. Look for unsafe code generation, missing parameterisation, insecure deserialisation, sandbox escapes, and privilege escalation. Pay special attention to build/test harnesses and CI/CD pipelines.

5. Structured Playbook: Do not just poke around randomly. Evaluate the code against a structured mental playbook of attack scenarios (e.g., dependency confusion, supply-chain injection, data exfiltration via logs).

6. Vulnerability Chaining: Combine manual creative thinking with pattern recognition. Look for ways to chain minor vulnerabilities (e.g., a slight wording bypass + over-permissive CI = production compromise).

7. Assumed Breach Reality: Analyse the code under an "assumed breach" scenario. Since we assume an insider threat, how far can they move laterally or escalate privileges with their existing access?

8. Safety and OPSEC: Maintain operational controls. Do not execute harmful code. Provide analysis and safe Proof of Concepts (PoCs) using synthetic data only.

9. Purple Team Integration: Provide detection signals for every finding so Blue Teams can tune their defences. Include artifacts, logs to monitor, and timeline expectations.

10. Actionable Formatting and JSON Report: After providing your analytical breakdown, summarise all findings in strict JSON format suitable for ticketing and tracking. Output this as a markdown code block representing a local logfile named `.redteam/report-[YYYY-MM-DD].json`. Group individual bugs into systemic themes (e.g., "Lack of centralised input validation") and suggest automated regression tests.

For each finding, provide:
- Finding title
- Systemic theme
- Framework mapping (MITRE ATT&CK or equivalent)
- Code component affected
- Impact (Critical / High / Medium / Low)
- Likelihood (High / Medium / Low)
- Exploit path (step-by-step, assumed breach scenario)
- Detection signal (what Blue/Purple team should monitor)
- Remediation (specific fix)
- Regression test (how to prevent recurrence)

Key Principles:
- Findings must explicitly state how an attacker gets from point A to point B (Exploit Path) and how to fix it (Remediation).
- Every offensive finding is paired with a defensive Detection Signal to aid Blue/Purple teams.
- The final output must include the strictly formatted JSON report for automated ingestion into vulnerability management systems.

Begin your analysis by asking me to provide the target codebase, the specific operational scope, and whether to use the default "disgruntled insider" threat model or a different one.

Let's explore what AI can do for you and your team

  • Clarity on whether we're a mutual fit
  • A clear understanding of the path forward
  • No hard sell - just an honest conversation
Alastair McDermott

20 mins · Free · No obligation

Book a Focus Call