Capturing AI Threats with the MITRE ATLAS Matrix

Read time: 15 mins
Last Updated on June 23, 2025
Published May 17, 2025

AI and machine learning are everywhere now, which means attackers are paying attention too. To keep these systems safe, security teams need a clear map of how adversaries operate. That’s exactly what the MITRE ATLAS Matrix provides—it’s basically a cheat sheet of the tactics and techniques bad actors use when going after AI.

Think of it like the classic MITRE ATT&CK framework, but with an AI twist. The Matrix lays things out in columns (the hacker’s big-picture goals) and rows (the specific moves they use to get there). It’s a way to turn the chaos of AI threats into something you can actually understand—and defend against.
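The columns-and-rows layout maps naturally onto a simple data structure. Here is a minimal sketch in Python—the two tactics and the handful of techniques shown are just an illustrative subset, not the full official matrix:

```python
# Sketch of the matrix layout: each key is a tactic (an attacker's
# big-picture goal), each value is the list of techniques (the
# specific moves) filed under it. Illustrative subset only.
atlas_matrix = {
    "Reconnaissance": [
        "Active Scanning",
        "Search Publications",
    ],
    "Initial Access": [
        "Phishing",
        "Valid Accounts",
    ],
}

# Walking the columns gives a quick per-tactic summary.
for tactic, techniques in atlas_matrix.items():
    print(f"{tactic}: {len(techniques)} techniques listed")
```

Modeling the matrix this way makes it easy to ask questions of it programmatically, such as which tactics a given technique appears under.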

16 Main Tactics

The matrix features 16 main tactics – here’s a quick list:

  1. Reconnaissance (8 techniques)
  2. Resource Development (7 techniques)
  3. Initial Access (8 techniques)
  4. AI Model Access (6 techniques)
  5. Execution (6 techniques)
  6. Persistence (5 techniques)
  7. Privilege Escalation (5 techniques)
  8. Defense Evasion (8 techniques)
  9. Credential Access (5 techniques)
  10. Discovery (9 techniques)
  11. Lateral Movement (4 techniques)
  12. Collection (4 techniques)
  13. AI Attack Staging (3 techniques)
  14. Command and Control (8 techniques)
  15. Exfiltration (7 techniques)
  16. Impact (8 techniques)

Here’s how each tactic breaks down into its individual techniques.

Reconnaissance (8 techniques)

  • Acquire Training Data

  • Active Scanning

  • Better Vector Identity

  • Search Model Information

  • Search Open LLM Vulnerability

  • Search Open Technical Information

  • Search Publications

  • Website Domain Discovery

Resource Development (7 techniques)

  • Acquire Infrastructure

  • Acquire LLM Capabilities

  • Establish Infrastructure

  • LLM Prompt Crafting

  • Poison Training Dataset

  • Staging Environment Setup

  • Supply Chain Compromise

Initial Access (8 techniques)

  • AI Supply Chain Compromise

  • AI Enabled Product Service Access

  • Exploit API Vulnerability

  • Exploit Publishing Platform Vulnerability

  • Phishing

  • Prompt Injection via Access

  • Training Application Access

  • Valid Accounts

AI Model Access (6 techniques)

  • AI Model Inference Access

  • AI-Enabled Product Service Compromise

  • Exploit LLM Prompt Vulnerability

  • Poison Training Environment

  • Prompt Injection via Model Execution

  • Prompt Injection via User Interface

Execution (6 techniques)

  • AI Agent Tool Imposing

  • Exploit LLM Prompt Vulnerability

  • Modify LLM Agent Behavior

  • Model Execution

  • Poison Training Environment

  • Prompt Injection via Model Execution

Persistence (5 techniques)

  • AI Agent Tool Imposing

  • AI Agent Configuration Modification

  • Modify LLM Agent Behavior

  • RAG Poisoning

  • Valid Accounts

Privilege Escalation (5 techniques)

  • AI Agent Tool Imposing

  • Exploit LLM Jailbreak

  • Exploit RAG

  • False Role Entry Injection

  • Impersonation

Defense Evasion (8 techniques)

  • Corrupt AI Configuration

  • Data Exfiltration via LLM Response

  • Exploit RAG

  • Exploit Role Entry Injection

  • Impersonation

  • LLM Jailbreak

  • LLM Trusted Output Modification

  • Masquerading

Credential Access (5 techniques)

  • Credentials from Cloud Configuration

  • Credentials from LLM Configuration

  • Credentials from RAG

  • LLM Jailbreak

  • Reverse Proxying

Discovery (9 techniques)

  • Credential Discovery

  • Discover AI Model Configuration

  • Discover AI Model Deployment

  • Discover AI Model Outputs

  • Discover AI Model Platform/Service

  • Discover AI Model Structure

  • Discover LLM Configuration

  • Discover LLM Discovery

  • Discover RAG

Lateral Movement (4 techniques)

  • Low Alternative Authentication Method

  • Network Service Discovery

  • Remote Replication

  • Valid Accounts

Collection (4 techniques)

  • AI Artifact Acquisition

  • Exfiltration over Alternate Service

  • Output Information Gathering

  • Transfer from Local System

AI Attack Staging (3 techniques)

  • Craft Adversarial Data

  • Reverse Proxy Model Access

  • Verify Attack Success

Command and Control (8 techniques)

  • Data Exfiltration via LLM Response

  • Data Exfiltration via Training Dataset

  • Impersonation

  • LLM Response Modification

  • Output Information Gathering

  • Reverse Shell

  • Supply Chain Compromise

  • Transfer from Local System

Exfiltration (7 techniques)

  • Exfiltration via AI Model

  • Exfiltration via AI UI

  • Exfiltration via LLM Response

  • Exfiltration via Training Dataset

  • Manipulate AI Output

  • Reverse Proxying

  • Verify Attack Success

Impact (8 techniques)

  • Data Destruction

  • Denial of Service

  • Exploit LLM Integrity

  • Inhibit AI Model Integrity

  • Inhibit Data Set Integrity

  • Inhibit Local System

  • Poison Training Dataset

  • System Shutdown

Dividing them into 4 phases

Now that you know which technique belongs to which tactic, we can simplify things even further by dividing them into 4 phases:

  1. Preparation, Access, and Environment Exploitation
  2. Execution, Control, and Evasion
  3. Internal Discovery and Data Movement
  4. Final Stages and Objective Achievement

With that grouping in place, we can neatly sum it all up in 4 tables.
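The phase grouping itself is just a mapping from phase to tactics, which can be encoded directly. The sketch below uses the 16 tactic names and 4 phase names from this article:

```python
# The 16 ATLAS tactics grouped into the four phases described above.
phases = {
    "Preparation, Access, and Environment Exploitation": [
        "Reconnaissance", "Resource Development",
        "Initial Access", "AI Model Access",
    ],
    "Execution, Control, and Evasion": [
        "Execution", "Persistence",
        "Privilege Escalation", "Defense Evasion",
    ],
    "Internal Discovery and Data Movement": [
        "Credential Access", "Discovery",
        "Lateral Movement", "Collection",
    ],
    "Final Stages and Objective Achievement": [
        "AI Attack Staging", "Command and Control",
        "Exfiltration", "Impact",
    ],
}

# Reverse lookup: which phase does a given tactic belong to?
tactic_to_phase = {t: p for p, ts in phases.items() for t in ts}
print(tactic_to_phase["Discovery"])  # → Internal Discovery and Data Movement
```

The reverse lookup is handy when triaging an observed technique: find its tactic, then its phase, and you know roughly how far along the attack is.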

Phase 1: Preparation, Access, and Environment Exploitation

These tactics focus on the initial reconnaissance, setting up the necessary infrastructure, and gaining the first foothold into the AI system or its environment.

Tactic | Techniques (Attacks)
Reconnaissance (8 techniques) | Acquire Training Data, Active Scanning, Better Vector Identity, Search Model Information, Search Open LLM Vulnerability, Search Open Technical Information, Search Publications, Website Domain Discovery
Resource Development (7 techniques) | Acquire Infrastructure, Acquire LLM Capabilities, Establish Infrastructure, LLM Prompt Crafting, Poison Training Dataset, Staging Environment Setup, Supply Chain Compromise
Initial Access (8 techniques) | AI Supply Chain Compromise, AI Enabled Product Service Access, Exploit API Vulnerability, Exploit Publishing Platform Vulnerability, Phishing, Prompt Injection via Access, Training Application Access, Valid Accounts
AI Model Access (6 techniques) | AI Model Inference Access, AI-Enabled Product Service Compromise, Exploit LLM Prompt Vulnerability, Poison Training Environment, Prompt Injection via Model Execution, Prompt Injection via User Interface

Phase 2: Execution, Control, and Evasion

Once initial access is established, the adversary seeks to execute their objective, maintain control, and avoid detection.

Tactic | Techniques (Attacks)
Execution (6 techniques) | AI Agent Tool Imposing, Exploit LLM Prompt Vulnerability, Modify LLM Agent Behavior, Model Execution, Poison Training Environment, Prompt Injection via Model Execution
Persistence (5 techniques) | AI Agent Tool Imposing, AI Agent Configuration Modification, Modify LLM Agent Behavior, RAG Poisoning, Valid Accounts
Privilege Escalation (5 techniques) | AI Agent Tool Imposing, Exploit LLM Jailbreak, Exploit RAG, False Role Entry Injection, Impersonation
Defense Evasion (8 techniques) | Corrupt AI Configuration, Data Exfiltration via LLM Response, Exploit RAG, Exploit Role Entry Injection, Impersonation, LLM Jailbreak, LLM Trusted Output Modification, Masquerading

Phase 3: Internal Discovery and Data Movement

These tactics focus on gathering intelligence within the compromised environment, moving laterally, and collecting high-value assets (like training data, configurations, or sensitive output).

Tactic | Techniques (Attacks)
Credential Access (5 techniques) | Credentials from Cloud Configuration, Credentials from LLM Configuration, Credentials from RAG, LLM Jailbreak, Reverse Proxying
Discovery (9 techniques) | Credential Discovery, Discover AI Model Configuration, Discover AI Model Deployment, Discover AI Model Outputs, Discover AI Model Platform/Service, Discover AI Model Structure, Discover LLM Configuration, Discover LLM Discovery, Discover RAG
Lateral Movement (4 techniques) | Low Alternative Authentication Method, Network Service Discovery, Remote Replication, Valid Accounts
Collection (4 techniques) | AI Artifact Acquisition, Exfiltration over Alternate Service, Output Information Gathering, Transfer from Local System

Phase 4: Final Stages and Objective Achievement

The final stages involve staging the attack, establishing communication channels for command, exfiltrating stolen data, and achieving the final malicious impact.

Tactic | Techniques (Attacks)
AI Attack Staging (3 techniques) | Craft Adversarial Data, Reverse Proxy Model Access, Verify Attack Success
Command and Control (8 techniques) | Data Exfiltration via LLM Response, Data Exfiltration via Training Dataset, Impersonation, LLM Response Modification, Output Information Gathering, Reverse Shell, Supply Chain Compromise, Transfer from Local System
Exfiltration (7 techniques) | Exfiltration via AI Model, Exfiltration via AI UI, Exfiltration via LLM Response, Exfiltration via Training Dataset, Manipulate AI Output, Reverse Proxying, Verify Attack Success
Impact (8 techniques) | Data Destruction, Denial of Service, Exploit LLM Integrity, Inhibit AI Model Integrity, Inhibit Data Set Integrity, Inhibit Local System, Poison Training Dataset, System Shutdown

By systematically documenting techniques like Prompt Injection via Model Execution (under AI Model Access and Execution) and Poison Training Dataset (under Resource Development and Impact), the MITRE ATLAS Matrix provides security professionals with a comprehensive, threat-informed perspective necessary to build resilient AI defenses.
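That cross-tactic reuse is easy to surface programmatically. The sketch below inverts a small, illustrative subset of the matrix (taken from the lists above) to find techniques that appear under more than one tactic:

```python
from collections import defaultdict

# Illustrative subset of the matrix; technique placements follow
# the per-tactic lists in this article.
matrix = {
    "Initial Access": {"Phishing", "Valid Accounts"},
    "Persistence": {"RAG Poisoning", "Valid Accounts"},
    "Lateral Movement": {"Remote Replication", "Valid Accounts"},
    "Resource Development": {"Acquire Infrastructure", "Poison Training Dataset"},
    "Impact": {"System Shutdown", "Poison Training Dataset"},
}

# Invert the mapping: technique -> set of tactics it appears under.
tactics_for = defaultdict(set)
for tactic, techniques in matrix.items():
    for tech in techniques:
        tactics_for[tech].add(tactic)

# Keep only techniques filed under more than one tactic.
shared = {t: sorted(ts) for t, ts in tactics_for.items() if len(ts) > 1}
for tech in sorted(shared):
    print(f"{tech}: {shared[tech]}")
```

Run against the full matrix, a query like this tells a defender which detections pay off twice: a control that flags Valid Accounts abuse, for example, covers initial access, persistence, and lateral movement at once.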
