CSIPE


What Is Threat Modeling and How to Start



Introduction

Modern applications are exposed to increasingly sophisticated cyber threats. Threat modeling offers a proactive way to identify and mitigate potential security risks during the design phase: by thinking like an attacker, developers can anticipate vulnerabilities and implement defenses before they become exploitable.

This guide delves into the concept of threat modeling, its benefits, and how you can integrate it into your development lifecycle to enhance application security.

What Is Threat Modeling?

Threat modeling is a structured process that helps identify, evaluate, and mitigate potential threats to a system. It involves understanding the system’s architecture, pinpointing potential vulnerabilities, and prioritizing risks based on their likelihood and impact.

Key Objectives of Threat Modeling:

  1. Identify Assets: Recognize what needs protection (e.g., user data, API endpoints).
  2. Enumerate Threats: Predict potential threats and attack vectors.
  3. Mitigate Risks: Define strategies to address identified threats.
  4. Document Findings: Create a comprehensive threat model for ongoing reference.

Why Is Threat Modeling Important?

1. Proactive Security

Threat modeling allows developers to address security issues during the design phase, reducing the cost and effort of fixing vulnerabilities later.

2. Improved Collaboration

It fosters communication among developers, security teams, and stakeholders, ensuring a unified understanding of risks and priorities.

3. Compliance and Standards

Many regulatory frameworks, such as GDPR and PCI DSS, emphasize the importance of identifying and mitigating risks, which threat modeling supports.

4. Resilience Against Evolving Threats

By continuously updating threat models, teams can adapt to new attack vectors and maintain robust security postures.

How to Start with Threat Modeling

Step 1: Understand the System

Document System Architecture

Start by creating a high-level diagram of your application, highlighting components such as:

  • APIs and endpoints
  • Databases and storage systems
  • User interfaces
  • External integrations

Visualize data flow between components to identify points of interaction where threats might occur.

Step 2: Identify Assets

Determine which components hold value and require protection. Assets can include:

  • Sensitive data (e.g., PII, financial information)
  • Critical APIs
  • Authentication mechanisms

Step 3: Enumerate Threats

Use threat modeling frameworks like STRIDE or PASTA to identify threats.

STRIDE Framework:

  • Spoofing: Impersonation of users or systems.
  • Tampering: Unauthorized data modifications.
  • Repudiation: Denying performed actions.
  • Information Disclosure: Exposing sensitive information.
  • Denial of Service (DoS): Disrupting system availability.
  • Elevation of Privilege: Gaining unauthorized access.
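Applied mechanically, these six questions can be asked of every element in a system diagram ("STRIDE-per-element"). A minimal Python sketch of that checklist loop, with illustrative element names; the mapping to violated security properties follows the standard STRIDE definitions:

```python
# Map each STRIDE category to the security property it violates
STRIDE = {
    "Spoofing": "Authentication",
    "Tampering": "Integrity",
    "Repudiation": "Non-repudiation",
    "Information Disclosure": "Confidentiality",
    "Denial of Service": "Availability",
    "Elevation of Privilege": "Authorization",
}

def stride_checklist(elements):
    """Yield one (element, category, violated property) review item per pair."""
    for element in elements:
        for category, prop in STRIDE.items():
            yield element, category, prop

for element, category, prop in stride_checklist(["Login API", "Orders DB"]):
    print(f"{element}: could {category} occur here? (violates {prop})")
```

Two elements produce twelve review items; the value of the exercise is that no element is skipped and no category is forgotten.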

Example (STRIDE Applied to Login API):

  • Spoofing: Brute-force attack on login credentials.
  • Tampering: Altering session tokens.
  • Information Disclosure: Intercepting plaintext credentials.

Step 4: Prioritize Threats

Assess risks based on their likelihood and impact using frameworks like DREAD or FAIR.

Example (DREAD Risk Assessment):

  • Damage Potential: How severe is the impact?
  • Reproducibility: Can the threat be easily replicated?
  • Exploitability: How easy is it to exploit?
  • Affected Users: How many users are impacted?
  • Discoverability: How visible is the vulnerability?
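Since DREAD reduces to averaging five ratings, the scoring is easy to make reproducible in code. A minimal sketch; the 1–10 scale and the "critical" threshold are common conventions rather than part of any standard:

```python
def dread_score(damage, reproducibility, exploitability,
                affected_users, discoverability):
    """Average the five DREAD dimensions (each rated 1-10) into one risk score."""
    scores = [damage, reproducibility, exploitability,
              affected_users, discoverability]
    if not all(1 <= s <= 10 for s in scores):
        raise ValueError("each DREAD dimension must be rated between 1 and 10")
    return sum(scores) / len(scores)

# Scores in the 8-10 band are typically treated as critical
assert dread_score(8, 9, 8, 7, 9) == 8.2
```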

Step 5: Define Mitigations

For each identified threat, outline potential mitigation strategies. Examples include:

  • Encryption: For information disclosure threats.
  • Rate Limiting: To prevent brute-force attacks.
  • Input Validation: To mitigate injection attacks.

Step 6: Document and Maintain the Threat Model

Create a centralized document or use threat modeling tools to keep track of identified threats, mitigations, and updates.

Threat Modeling Frameworks and Tools

  1. STRIDE: Focuses on six threat categories for systematic analysis.
  2. PASTA (Process for Attack Simulation and Threat Analysis): Aligns business objectives with technical threats.
  3. VAST (Visual, Agile, and Simple Threat Modeling): Designed for scalability in large organizations.

Tools for Threat Modeling:

  1. Microsoft Threat Modeling Tool: Visualizes data flow diagrams and generates threat reports.
  2. OWASP Threat Dragon: An open-source tool for collaborative threat modeling.
  3. ThreatModeler: A cloud-based platform for enterprise threat modeling.

Challenges in Threat Modeling

1. Complexity of Systems

Solution: Break down complex systems into smaller components and model them individually.

2. Evolving Threat Landscape

Solution: Regularly revisit and update the threat model to address new vulnerabilities.

3. Limited Resources

Solution: Focus on critical assets and high-impact threats when resources are constrained.

4. Balancing Security and Usability

Solution: Collaborate with stakeholders to prioritize mitigations that align with business objectives.

Best Practices for Effective Threat Modeling

  1. Involve Cross-Functional Teams Collaborate with developers, security experts, and business stakeholders to ensure a comprehensive understanding of risks.

  2. Integrate Into Agile Processes Incorporate threat modeling into sprints to address security iteratively.

  3. Automate Where Possible Use tools to streamline repetitive tasks like data flow diagram generation and risk scoring.

  4. Educate Your Team Provide training on threat modeling methodologies to ensure consistency and effectiveness.

  5. Leverage Historical Data Review past incidents to identify recurring threats and improve mitigation strategies.

Step-by-Step Walkthrough: Threat Modeling a Sample REST API

Abstract descriptions of threat modeling only go so far. To make the process concrete, let’s walk through a complete example: threat modeling a simplified e-commerce REST API. This application allows users to register, log in, browse products, place orders, and make payments. The backend is a Node.js REST API; a PostgreSQL database stores user and order data; Redis handles sessions; and an external payment gateway processes transactions. We will apply the full six-step process to this real scenario.

1. Model the System with a Data Flow Diagram

Before identifying threats you need a map. A Data Flow Diagram (DFD) captures how data moves between components and where trust boundaries exist. The DFD3 notation (developed by Adam Shostack) uses four primitives: external entities, processes, data stores, and data flows, with trust boundaries drawn across the flows they separate.

   flowchart LR
    subgraph Internet["Internet (Untrusted Zone)"]
        User["User Browser"]
        Attacker["Attacker"]
    end
    subgraph DMZ["DMZ"]
        API["REST API\nNode.js"]
    end
    subgraph Internal["Internal Network (Trusted Zone)"]
        Auth["Auth Service"]
        DB[("PostgreSQL")]
        Cache[("Redis Cache")]
        Pay["Payment Gateway\nExternal"]
    end

    User -->|"HTTPS login, orders, browse"| API
    Attacker -.->|"Attack surface"| API
    API -->|"JWT validation request"| Auth
    API -->|"Read / Write"| DB
    API -->|"Session data"| Cache
    API -->|"Payment charge request"| Pay
    Auth -->|"User records"| DB

Two trust boundaries are visible: Internet to DMZ, and DMZ to the Internal network. Every data flow that crosses a trust boundary is a candidate for STRIDE analysis.
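The same DFD can also be kept as data alongside the code, which makes the boundary-crossing flows (the STRIDE candidates) mechanically enumerable. A sketch mirroring the diagram above; the tuple representation is illustrative, not a standard format:

```python
# Each flow: (source, source zone, destination, destination zone, data carried)
# Components and zones mirror the DFD above
FLOWS = [
    ("User Browser", "internet", "REST API", "dmz", "HTTPS: login, orders"),
    ("REST API", "dmz", "Auth Service", "internal", "JWT validation"),
    ("REST API", "dmz", "PostgreSQL", "internal", "read/write"),
    ("REST API", "dmz", "Redis", "internal", "session data"),
    ("REST API", "dmz", "Payment Gateway", "internal", "charge request"),
    ("Auth Service", "internal", "PostgreSQL", "internal", "user records"),
]

def boundary_crossings(flows):
    """Return flows whose endpoints sit in different trust zones."""
    return [f for f in flows if f[1] != f[3]]

for src, _, dst, _, data in boundary_crossings(FLOWS):
    print(f"Analyze with STRIDE: {src} -> {dst} ({data})")
```

Of the six flows, five cross a trust boundary and are therefore analysis candidates; only Auth Service → PostgreSQL stays inside one zone.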

2. Enumerate Assets and Their Sensitivity

| Asset | Sensitivity | Regulatory Relevance |
| --- | --- | --- |
| User PII (email, address) | High | GDPR Article 4 |
| Payment card data | Critical | PCI DSS |
| Session tokens / JWTs | High | Account takeover risk |
| Order history | Medium | GDPR, business sensitivity |
| Product catalog | Low | Publicly visible |

3. Apply STRIDE to Each Data Flow

Take the data flow User Browser → REST API (POST /auth/login) and systematically apply each STRIDE category:

| STRIDE Category | Threat | Root Cause |
| --- | --- | --- |
| Spoofing | Credential stuffing using a leaked password list | No MFA, no breach detection |
| Tampering | Modify the userId field in a request body to access another account | Missing authorization check |
| Repudiation | No audit log of login attempts | Insufficient logging |
| Information Disclosure | Error messages reveal whether an email is registered | Verbose error responses |
| Denial of Service | Flooding the login endpoint to lock legitimate users out | No rate limiting |
| Elevation of Privilege | JWT with alg: none bypasses signature verification | Missing algorithm pinning |

Repeat this exercise for every data flow in the DFD until the full threat surface is mapped.

4. Prioritize with DREAD Scoring

Score each threat from 1 to 10 across five dimensions. For the Credential Stuffing threat:

| Dimension | Score | Reasoning |
| --- | --- | --- |
| Damage Potential | 8 | Full account takeover |
| Reproducibility | 9 | Automated tools are freely available |
| Exploitability | 8 | Requires only a leaked credential list |
| Affected Users | 7 | All users without MFA enabled |
| Discoverability | 9 | Login endpoints are in every web application |
| Average | 8.2 | Critical: address immediately |

5. Define Specific Mitigations

For each threat, define an actionable control — not a vague recommendation but a concrete technical requirement:

  • Credential Stuffing → Enforce MFA for sensitive operations; add per-IP rate limiting (10 attempts per 15 min); integrate with HaveIBeenPwned API to reject compromised passwords at registration.
  • JWT alg:none → Pin the allowed algorithm list server-side; reject any token where alg is not HS256 or RS256.
  • Username Enumeration → Return an identical response body and HTTP status for both valid and invalid email addresses on the login endpoint.
  • Missing Authorization Check → Enforce resource ownership (compare order.userId to JWT sub) in every request handler that accesses user-owned resources.
  • Insufficient Logging → Log every authentication event (success and failure) to an append-only, centralized audit log with user ID, IP address, and timestamp.
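The HaveIBeenPwned check mentioned above uses a k-anonymity range API: only the first five hex characters of the password's SHA-1 hash ever leave the server, and the API responds with `SUFFIX:COUNT` lines for that prefix. A sketch with the HTTP call injected as a parameter so the matching logic stays testable offline; the endpoint shape follows the public Pwned Passwords API:

```python
import hashlib

def is_pwned(password: str, fetch_range) -> bool:
    """
    Check a password against a HaveIBeenPwned-style range API.
    `fetch_range(prefix)` must return the response body as text
    (lines of "SUFFIX:COUNT"); inject a real HTTP client in production.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    for line in fetch_range(prefix).splitlines():
        candidate, _, _count = line.partition(":")
        if candidate == suffix:
            return True
    return False
```

Rejecting known-compromised passwords at registration closes the credential-stuffing loop from the other direction: even a perfect rate limiter cannot help a user whose password is already in an attacker's list.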

6. Document, Track, and Maintain

Store the threat model as a versioned Markdown file at docs/threat-model.md in the repository. Link each identified threat to a GitHub or Jira issue tagged security. Add a threat model update requirement to your pull request template for any change that alters data flows, authentication logic, or authorization decisions. Revisit the threat model at least once per quarter and after every significant architectural change.


Deep Dive: STRIDE Framework

STRIDE is the most widely adopted threat modeling framework in software engineering. Originally developed at Microsoft by Loren Kohnfelder and Praerit Garg in 1999 and later popularized by Adam Shostack in Threat Modeling: Designing for Security, STRIDE works as a structured checklist that maps each threat category to a violated security property. Its low complexity barrier makes it accessible to development teams without a dedicated security background.

S — Spoofing (Violates: Authentication)

Spoofing threats involve adversaries misrepresenting their identity to gain unauthorized access. In web applications the most common spoofing vectors are credential theft, session hijacking, phishing, and CSRF (Cross-Site Request Forgery) attacks. The common thread is that the receiving party is deceived about who is actually communicating with it.

Attack scenario: An attacker captures a session cookie sent over an HTTP (non-TLS) connection using a passive network sniff on a coffee shop Wi-Fi network. They replay the cookie in subsequent requests, impersonating the victim for the rest of the session lifetime.

Mitigations: Enforce HTTPS on all endpoints using HSTS (Strict-Transport-Security: max-age=31536000; includeSubDomains); set the Secure and HttpOnly flags on all session cookies; enforce SameSite=Strict or SameSite=Lax to prevent CSRF; regenerate session tokens after privilege escalation events such as login and password change.
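Most frameworks set these cookie flags through session middleware configuration, but the resulting header is worth seeing spelled out. A hypothetical helper that assembles it; names and the Max-Age value are illustrative:

```python
def session_cookie_header(name: str, value: str, max_age: int = 3600) -> str:
    """
    Build a Set-Cookie header carrying the hardening flags discussed above:
    Secure (TLS only), HttpOnly (no JavaScript access), SameSite=Strict (CSRF).
    """
    return (
        f"{name}={value}; Max-Age={max_age}; Path=/; "
        "Secure; HttpOnly; SameSite=Strict"
    )

print(session_cookie_header("session_id", "abc123"))
# session_id=abc123; Max-Age=3600; Path=/; Secure; HttpOnly; SameSite=Strict
```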

T — Tampering (Violates: Integrity)

Tampering threats cover unauthorized modification of data, whether in transit or at rest. SQL injection is the canonical tampering threat: an attacker injects malicious SQL via a user-supplied input to modify or exfiltrate database records. Parameter pollution and mass assignment vulnerabilities are subtler variants — the attacker supplies additional parameters that the application processes without authorization.

Attack scenario: A user of an e-commerce platform sends a PATCH /orders/123 request with the field "status": "shipped" included in the body. Because the API applies all provided fields without an authorization check, the user advances their own order to a shipped state without the warehouse confirming fulfillment.

Mitigations: Use parameterized database queries or ORMs that prevent injection; implement explicit field allowlists at the serializer layer so only designated fields can be updated; validate that state transitions (pending → shipped) are performed by authorized roles.
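The state-transition check in the last mitigation can be made explicit as a transition table: which transitions exist at all, and which roles may perform each one. A sketch with illustrative states and role names:

```python
# Allowed order-state transitions and the roles permitted to make them
VALID_TRANSITIONS = {
    ("pending", "paid"): {"payment_service"},
    ("paid", "shipped"): {"warehouse"},
    ("pending", "cancelled"): {"customer", "admin"},
}

def can_transition(current: str, target: str, role: str) -> bool:
    """Allow a state change only if it exists in the table and the role is authorized."""
    return role in VALID_TRANSITIONS.get((current, target), set())

assert can_transition("paid", "shipped", "warehouse")
assert not can_transition("paid", "shipped", "customer")  # the PATCH attack above
```

Anything not listed is denied by default, so new states are safe until someone deliberately opens a path to them.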

R — Repudiation (Violates: Non-Repudiation)

Repudiation threats arise when a user or system component can plausibly deny performing an action because no reliable audit evidence exists. This is particularly critical for financial applications, compliance-sensitive workflows, and any system where actions have legal or contractual consequences.

Attack scenario: A system administrator with access to the application server deletes a batch of customer records and the application logs stored on the same server. Because they also have write access to the log directory, they delete the relevant log entries, leaving no trace.

Mitigations: Send all audit logs to a centralized, append-only logging service (a SIEM such as Splunk, Elastic, or AWS CloudWatch Logs with write-once policies) that the application server cannot modify after writing. Log all state-changing operations with: actor identity (user ID from JWT), action, affected resource, timestamp (UTC), and outcome.

I — Information Disclosure (Violates: Confidentiality)

Information disclosure threats expose sensitive data to parties who should not receive it. The spectrum is wide: verbose error messages leaking stack traces in production, debug endpoints left enabled, insecure direct object references (IDOR) that expose other users’ records, or serializers returning fields that shouldn’t leave the server.

Attack scenario: The API endpoint GET /api/users/42 returns the full user record including the password_hash, api_key, and internal role fields because the ORM mapper serializes every column by default.

Mitigations: Define explicit response schemas using allowlist-based serializers (e.g., class UserResponseDTO with only id, name, and email); disable stack traces and detailed error messages in production; apply column-level database encryption for sensitive fields; use Content-Security-Policy and X-Content-Type-Options headers to prevent browser-based information leaks.
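Following the UserResponseDTO idea, an allowlist serializer in Python might look like the sketch below. Field names are illustrative; the point is that the response shape is defined by the DTO, not by whatever columns the ORM row happens to carry:

```python
from dataclasses import dataclass

@dataclass
class UserResponse:
    """Only fields declared here can ever leave the server."""
    id: int
    name: str
    email: str

def serialize_user(row: dict) -> dict:
    """Project a full database row onto the allowlisted response shape."""
    return UserResponse(id=row["id"], name=row["name"], email=row["email"]).__dict__

row = {"id": 42, "name": "Ada", "email": "ada@example.com",
       "password_hash": "...", "api_key": "...", "role": "internal"}
assert serialize_user(row) == {"id": 42, "name": "Ada", "email": "ada@example.com"}
```

A new sensitive column added to the table next year is invisible to API clients until someone explicitly adds it to the DTO, which inverts the failure mode of serialize-everything mappers.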

D — Denial of Service (Violates: Availability)

DoS threats aim to make a service unavailable to legitimate users. Application-layer (Layer 7) DoS is particularly dangerous because it can be triggered with legitimate-looking traffic. Expensive database queries, unbounded search operations, file upload endpoints, and image processing pipelines are common targets.

Attack scenario: An attacker sends thousands of concurrent requests to GET /api/products?search=*&limit=10000, triggering full-table scans on the products database. CPU and I/O saturation causes timeouts for all legitimate users within minutes.

Mitigations: Implement rate limiting per user and per IP; enforce hard maximum pagination limits (reject requests where limit > 100); apply query complexity analysis for GraphQL APIs; use circuit breakers to prevent cascading failures; deploy a CDN and WAF for DDoS mitigation at the network edge.
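The hard pagination limit is a one-function fix. A sketch, with MAX_LIMIT as an assumed policy value; raising rather than silently clamping makes abuse attempts visible in error metrics:

```python
MAX_LIMIT = 100
DEFAULT_LIMIT = 20

def clamp_limit(raw):
    """Parse a client-supplied ?limit= value, rejecting anything over MAX_LIMIT."""
    if raw is None:
        return DEFAULT_LIMIT
    try:
        limit = int(raw)
    except ValueError:
        raise ValueError("limit must be an integer")
    if not 1 <= limit <= MAX_LIMIT:
        raise ValueError(f"limit must be between 1 and {MAX_LIMIT}")
    return limit
```

With this in place, the `limit=10000` request from the scenario above is rejected before any query is built, so the full-table scan never runs.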

E — Elevation of Privilege (Violates: Authorization)

Privilege escalation allows an attacker to perform actions beyond their authorized scope. Vertical escalation (gaining admin rights) often gets the most attention, but horizontal escalation (accessing a peer’s data) is more prevalent in web applications and equally dangerous.

Attack scenario: A standard user makes a PATCH /api/users/99 request updating the field "role": "admin" in the JSON body. Because the endpoint applies Object.assign(user, req.body) without an explicit field allowlist, the mass assignment succeeds and the user becomes an administrator.

Mitigations: Enforce RBAC or ABAC at the service layer, not just the routing layer; maintain explicit allowlists of fields that any given role is permitted to modify; verify JWT claims server-side on every protected request; log and alert on unexpected privilege changes.


Deep Dive: PASTA Framework

PASTA (Process for Attack Simulation and Threat Analysis) was developed by Tony UcedaVélez and Marco Morana and introduced in Risk Centric Threat Modeling (2015). Where STRIDE asks “what category of threat could affect this component?”, PASTA asks “given who our adversaries are and what they want, how would they realistically attack us, and what is the business impact?” This distinction makes PASTA compelling for organizations where security investments must be justified in financial or compliance terms.

PASTA is organized around seven sequential stages, each building on the last:

Stage 1 — Define Business Objectives: Identify what the organization is trying to protect and why, anchoring the analysis in business risk rather than technical detail alone. Define success metrics: acceptable uptime, regulatory compliance requirements (PCI DSS, HIPAA, GDPR), and financial loss thresholds.

Stage 2 — Define Technical Scope: Enumerate all in-scope technical components — services, APIs, databases, third-party integrations, and cloud infrastructure. This produces a comprehensive inventory that informs the later attack simulation.

Stage 3 — Application Decomposition: Decompose the application into components and data flows, similar to DFD creation in STRIDE, but explicitly mapped to business functions. The goal is to understand which technical components support which business objectives — a connection that later drives risk prioritization.

Stage 4 — Threat Analysis: Analyze the realistic threat landscape for this specific application. Who are the likely attacker personas — nation-state adversaries, financially motivated cybercriminals, insider threats, opportunistic script kiddies? What are their capabilities, motivations, and known tactics? Reference MITRE ATT&CK to map generic tactics to specific techniques relevant to the application’s technology stack.

Stage 5 — Vulnerability and Weakness Analysis: Feed in findings from SAST tools (static analysis), DAST tools (dynamic scanning), SCA (dependency vulnerability scanners), and penetration test reports. Map discovered vulnerabilities to the application components identified in Stage 3.

Stage 6 — Attack Modeling: Build attack trees (see the section on visualization below) that model how an attacker would realistically chain the vulnerabilities from Stage 5 to achieve the goals identified in Stage 4. This is where PASTA becomes distinct: it simulates plausible, end-to-end attack scenarios rather than cataloguing abstract threat categories.

Stage 7 — Risk and Impact Analysis: Quantify residual risk in business terms. What is the expected financial impact of each attack scenario succeeding? How does the cost of mitigation compare to the reduced expected loss? This output directly feeds into risk acceptance decisions and security budget justifications.
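PASTA does not mandate a particular formula, but Stage 7 quantification is often expressed as annualized loss expectancy (ALE = single loss expectancy × annual rate of occurrence). A sketch with illustrative numbers for the credential-stuffing scenario:

```python
def annualized_loss_expectancy(single_loss: float, annual_rate: float) -> float:
    """ALE = SLE x ARO: expected yearly loss from one attack scenario."""
    return single_loss * annual_rate

def mitigation_worthwhile(single_loss, rate_before, rate_after, annual_cost):
    """A control pays for itself when the reduction in ALE exceeds its cost."""
    saved = (annualized_loss_expectancy(single_loss, rate_before)
             - annualized_loss_expectancy(single_loss, rate_after))
    return saved > annual_cost

# Illustrative: $200k per breach, 0.5 breaches/yr before MFA, 0.05 after,
# MFA rollout costing $30k/yr -> expected savings of $90k/yr justify it
assert annualized_loss_expectancy(200_000, 0.5) == 100_000
assert mitigation_worthwhile(200_000, 0.5, 0.05, annual_cost=30_000)
```

This is exactly the shape of argument that survives a budget meeting: the control is framed as expected loss avoided, not as abstract risk reduction.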

When to Choose PASTA: PASTA is the right choice for regulated industries such as FinTech, healthcare, and critical infrastructure where risk decisions require board-level justification. Its outputs — risk-quantified attack scenarios mapped to business impact — resonate with CISOs, legal teams, and regulators far more than a STRIDE threat table. The trade-off is time: a complete PASTA analysis for a complex system requires days of effort from cross-functional experts.


Deep Dive: LINDDUN Framework

LINDDUN (Linkability, Identifiability, Non-repudiation, Detectability, Disclosure of Information, Unawareness, Non-compliance) is a privacy-specific threat modeling framework developed by researchers at KU Leuven, Belgium. It is the privacy counterpart to STRIDE: where STRIDE identifies security threats (unauthorized access, integrity violations, availability degradation), LINDDUN identifies privacy threats (surveillance, profiling, data leakage).

With GDPR now fully enforced across Europe and similar regulations emerging globally (CCPA in California, LGPD in Brazil, PIPL in China), LINDDUN has become an essential tool for any application that processes personal data.

| Category | Privacy Property Violated | Typical Threat |
| --- | --- | --- |
| Linkability | Unlinkability | Correlating user actions across sessions via browser fingerprinting |
| Identifiability | Anonymity / Pseudonymity | Re-identifying pseudonymized records by combining multiple quasi-identifiers |
| Non-repudiation | Deniability | Logs that irrefutably prove a user visited a sensitive medical page |
| Detectability | Undetectability | Database membership inference: the mere presence of a record reveals sensitive information |
| Disclosure of Information | Confidentiality | Third-party analytics scripts with access to full page URLs on a healthcare portal |
| Unawareness | Transparency / Control | Users unaware that their behavioral data is sold to advertising networks |
| Non-compliance | Compliance | Processing a special category of data (health, biometric) without explicit consent |

Applying LINDDUN to a User Analytics Feature: Imagine a product team wants to add detailed click-tracking to a telehealth platform to improve UX. A LINDDUN analysis reveals:

  • Linkability: Click sequences across appointment scheduling and symptom logging pages can be correlated to build a longitudinal health behavior profile, even if individual events appear innocuous.
  • Identifiability: Combining visit timestamps, symptom search terms, and browser characteristics generates a quasi-identifier capable of re-identifying users even if explicit PII is stripped.
  • Disclosure: The analytics SDK (running in the user’s browser) has access to the full URL including query parameters like ?condition=diabetes&doctor_id=42, which it transmits to a third-party server.
  • Unawareness: Users have no visible indication that their clinical interactions are tracked and analyzed. The consent notice in the privacy policy is buried in legalese.
  • Non-compliance: Processing health data for profiling purposes without an explicit legal basis violates GDPR Article 9 and likely requires a Data Protection Impact Assessment (DPIA) under Article 35.

This analysis would lead the team to either strip analytics from health-related flows entirely, use a privacy-preserving alternative (differential privacy or on-device aggregation), or ensure proper consent and data minimization controls are in place.
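The Disclosure finding in particular has a direct technical control: scrub sensitive query parameters before any URL leaves the first-party context. A sketch using the standard library, with an illustrative parameter denylist:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Illustrative denylist: parameters that must never reach a third-party endpoint
SENSITIVE_PARAMS = {"condition", "doctor_id", "patient_id", "symptom"}

def scrub_url(url: str) -> str:
    """Strip sensitive query parameters before a URL is sent to analytics."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query)
            if k not in SENSITIVE_PARAMS]
    return urlunsplit(parts._replace(query=urlencode(kept)))

assert scrub_url("https://telehealth.example/visit?condition=diabetes&page=2") \
    == "https://telehealth.example/visit?page=2"
```

A denylist is the minimal fix; an allowlist of known-safe parameters would be stronger for the same reason allowlist serializers beat field blocklists.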

Combining STRIDE and LINDDUN: The two frameworks are complementary. Run STRIDE for every component to identify security threats, and run LINDDUN for every data flow touching personal data to identify privacy threats. The intersection — threats that are both security failures and privacy violations — typically represents your highest-priority findings.


Visualizing Threats: Attack Trees and Data Flow Diagrams

Two visualization techniques underpin almost all threat modeling work: Data Flow Diagrams (DFDs) for system decomposition and Attack Trees for threat decomposition. Understanding both and knowing when to use each is fundamental to effective threat modeling.

Data Flow Diagram Elements

The DFD3 notation (maintained by Adam Shostack) standardizes five elements:

  • External Entity (rectangle): Any person or system outside the trust boundary of the application — a browser, a partner API, a third-party service.
  • Process (rounded rectangle or circle): Code or infrastructure that transforms or routes data — an API handler, a Lambda function, a message queue consumer.
  • Data Store (cylinder or parallel lines): Any component that persists data — a relational database, a Redis cache, an S3 bucket, a file system.
  • Data Flow (arrow): The movement of data between elements, annotated with the protocol and data type (e.g., “HTTPS: JWT + request body”).
  • Trust Boundary (dashed box or line): A boundary crossing which requires privilege verification. Trust boundaries exist between the internet and the DMZ, between microservices in different security zones, and between application code and an operating system.

The value of a DFD is not just documentation — it is the primary input to threat identification. Every trust boundary crossing, every external entity, every data store, and every process is a potential attack surface that should be systematically analyzed.

Attack Trees

Attack trees, introduced by Bruce Schneier, model how an attacker achieves a goal by decomposing it into sub-goals connected by AND/OR logic. An OR node means any one of the child paths can achieve the parent goal. An AND node means all child conditions must be met simultaneously. This structure makes it easy to identify the cheapest mitigation: cutting any branch of an OR node eliminates an attack path; cutting any single branch of an AND node defeats the entire attack.

Below is an attack tree modeling unauthorized access to payment data in our e-commerce API:

   graph TD
    A["Goal: Access Payment Data"] --> B["OR: Multiple paths"]
    B --> C["Path 1: Steal Admin Credentials"]
    B --> D["Path 2: Exploit API Vulnerability"]
    B --> E["Path 3: Compromise Database Directly"]

    C --> F["AND: Both required"]
    F --> G["Phishing Campaign Succeeds"]
    F --> H["Bypass MFA Challenge"]

    D --> I["OR: Any one sufficient"]
    I --> J["SQL Injection in Order Search"]
    I --> K["JWT alg:none Attack"]
    I --> L["IDOR on /payments/id"]

    E --> M["AND: Both required"]
    M --> N["Obtain Database Credentials"]
    M --> O["Database Port Accessible from Internet"]

Reading this tree, Path 3 requires both obtaining database credentials AND the database port being accessible. Implementing network segmentation (blocking public database port access) severs this branch regardless of whether credentials are ever compromised — a cheap, high-impact control. Path 1 requires both a successful phishing attack AND MFA bypass; enforcing MFA blocks this path. Path 2 has three OR branches, each of which must be closed independently through parameterized queries, algorithm pinning, and authorization checks.
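The AND/OR evaluation itself can be automated, which lets you check mechanically whether a proposed set of mitigations actually defeats the goal. A sketch encoding the tree above as nested tuples, with leaf names abbreviated:

```python
# Attack tree as nested (operator, children) tuples; leaves are conditions.
# Structure mirrors the payment-data tree above.
TREE = ("OR", [
    ("AND", ["phishing succeeds", "MFA bypassed"]),           # Path 1
    ("OR",  ["SQL injection", "JWT alg:none", "IDOR"]),       # Path 2
    ("AND", ["DB credentials obtained", "DB port public"]),   # Path 3
])

def reachable(node, blocked):
    """A leaf is reachable unless blocked; AND needs all children, OR needs any."""
    if isinstance(node, str):
        return node not in blocked
    op, children = node
    results = [reachable(child, blocked) for child in children]
    return all(results) if op == "AND" else any(results)

# Network segmentation alone severs Path 3 but leaves Paths 1 and 2 open
assert reachable(TREE, blocked={"DB port public"})
# Blocking MFA bypass, all three API bugs, and the DB port defeats the goal
assert not reachable(TREE, blocked={"MFA bypassed", "SQL injection",
                                    "JWT alg:none", "IDOR", "DB port public"})
```

Running candidate mitigation sets through `reachable` turns the "cheapest cut" discussion into a search problem rather than a whiteboard argument.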

Attack trees are a natural complement to STRIDE: STRIDE generates a broad list of threat categories; attack trees model the specific, realistic chains by which an attacker would realize those threats against your system.


Practical Code Examples: Implementing Security Controls

Threat modeling produces a prioritized list of threats. The measure of a useful threat model is the quality of security controls it generates. Below are concrete, production-ready code examples in Node.js and Python that implement the mitigations identified during the walkthrough — each traceable back to a specific STRIDE threat.

Rate Limiting on the Login Endpoint (Mitigates: DoS, Credential Stuffing)

   // Express.js with express-rate-limit
import rateLimit from 'express-rate-limit'

const loginLimiter = rateLimit({
	windowMs: 15 * 60 * 1000, // 15-minute window
	max: 10, // Max 10 failed attempts per window per IP
	standardHeaders: true,
	legacyHeaders: false,
	skipSuccessfulRequests: true, // Only count failed attempts toward the limit
	message: {
		error: 'Too many login attempts. Please try again in 15 minutes.'
	}
})

// Apply before the handler, not globally
app.post('/api/auth/login', loginLimiter, loginHandler)

Algorithm-Pinned JWT Verification (Mitigates: Elevation of Privilege via alg:none)

   import jwt from 'jsonwebtoken'

const JWT_SECRET = process.env.JWT_SECRET // Never hardcode secrets
const ALLOWED_ALGORITHMS = ['HS256'] // Explicit allowlist prevents alg confusion

export function verifyAccessToken(token) {
	try {
		const payload = jwt.verify(token, JWT_SECRET, {
			algorithms: ALLOWED_ALGORITHMS, // Rejects alg:none and RS256/HS256 confusion
			issuer: process.env.JWT_ISSUER,
			audience: process.env.JWT_AUDIENCE
		})
		return { valid: true, payload }
	} catch (err) {
		// Return a generic error — do not expose why verification failed
		return { valid: false, error: 'Invalid or expired token' }
	}
}

Parameterized Queries (Mitigates: SQL Injection / Tampering)

   import psycopg2
from psycopg2.extras import RealDictCursor

def get_user_orders(conn, user_id: int, status: str) -> list[dict]:
    """
    Retrieve orders for a user filtered by status.
    Uses parameterized query to prevent SQL injection.
    """
    query = """
        SELECT order_id, total, status, created_at
        FROM orders
        WHERE user_id = %s
          AND status = %s
        ORDER BY created_at DESC
        LIMIT 100
    """
    with conn.cursor(cursor_factory=RealDictCursor) as cur:
        cur.execute(query, (user_id, status))  # Parameters are escaped by the driver
        return cur.fetchall()

# NEVER do this — vulnerable to SQL injection:
# query = f"SELECT * FROM orders WHERE user_id = {user_id} AND status = '{status}'"

Ownership-Enforced Resource Access (Mitigates: IDOR / Horizontal Privilege Escalation)

   export async function getOrder(req, res) {
	const { orderId } = req.params
	const requestingUserId = req.auth.userId // Extracted from verified JWT — never from request body

	const order = await db.orders.findById(orderId)

	if (!order) {
		// Return 404 even when the order exists but belongs to another user
		// to avoid leaking existence information
		return res.status(404).json({ error: 'Order not found' })
	}

	// Explicit ownership check — enforced in the handler, not just the router
	const isOwner = order.userId === requestingUserId
	const isAdmin = req.auth.roles.includes('admin')

	if (!isOwner && !isAdmin) {
		return res.status(403).json({ error: 'Access denied' })
	}

	return res.json(sanitizeOrderResponse(order)) // Return only allowed fields
}

Allowlist-Based Field Updates (Mitigates: Mass Assignment / Elevation of Privilege)

   // Define exactly which fields a regular user can update
const USER_UPDATABLE_FIELDS = new Set(['displayName', 'shippingAddress', 'phoneNumber'])

export async function updateUserProfile(req, res) {
	const updates = {}

	// Build the update object using only allowlisted fields
	for (const [key, value] of Object.entries(req.body)) {
		if (USER_UPDATABLE_FIELDS.has(key)) {
			updates[key] = value
		}
	}

	if (Object.keys(updates).length === 0) {
		return res.status(400).json({ error: 'No valid fields provided for update' })
	}

	await db.users.update({ id: req.auth.userId }, updates)
	return res.json({ message: 'Profile updated' })
	// The 'role', 'email', 'passwordHash' fields cannot be updated through this endpoint
}

Structured Audit Logging (Mitigates: Repudiation)

import json
import logging
from datetime import datetime, timezone
from typing import Optional

# Dedicate a separate logger for audit events
# Configure this to write to an append-only, off-host destination
audit_log = logging.getLogger("audit")

def record_audit_event(
    actor_id: str,
    action: str,
    resource: str,
    outcome: str,  # "success" | "failure"
    metadata: Optional[dict] = None,
) -> None:
    """
    Write an immutable audit record. Always send to a centralized SIEM.
    Fields must be sufficient to reconstruct what happened, by whom, and when.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor_id": actor_id,
        "action": action,
        "resource": resource,
        "outcome": outcome,
        "metadata": metadata or {},
    }
    audit_log.info(json.dumps(entry))

# Example usage in an order cancellation handler
record_audit_event(
    actor_id="user_42",
    action="ORDER_CANCEL",
    resource="orders/1337",
    outcome="success",
    metadata={"reason": "customer_request", "refund_amount": 49.99},
)

Each of these controls exists because a specific threat was identified during the modeling session. This traceability — from threat to control to code — is what distinguishes security decisions backed by a threat model from ad-hoc security intuition.


Comparison: Threat Modeling Frameworks at a Glance

No single framework is optimal for every context. The right choice depends on your team’s security maturity, the nature of the system, compliance requirements, and the primary audience for the threat model’s outputs.

| Framework | Primary Focus | Threat Categories | Complexity | Learning Curve | Best For |
|---|---|---|---|---|---|
| STRIDE | Security threats | 6 (Spoof, Tamper, Repudiate, Disclose, DoS, EoP) | Low–Medium | Days | Developer teams, web/API security, Agile sprints |
| PASTA | Business-aligned risk | Attack simulation stages | High | Weeks | Enterprise, regulated industries, risk justification |
| LINDDUN | Privacy threats | 7 (Linkability, Identifiability, etc.) | Medium | Days–Weeks | GDPR/CCPA-regulated apps, personal data processing |
| VAST | Scalable process | Visual, pipeline-oriented | Medium | Weeks | Large enterprise DevOps pipelines |
| OCTAVE | Organizational risk | Operational / people focus | High | Weeks | Infrastructure, organizational resilience |
| MITRE ATT&CK | Adversary TTPs | 14 tactics, 200+ techniques | High | Weeks–Months | Blue team / red team, threat hunting, SOC analysts |
| Trike | Risk acceptance | Risk-based enumeration | High | Weeks | Risk-driven security programs |

Practical Guidance:

  • Start with STRIDE. For most development teams, STRIDE provides the best return on investment. It is well-documented, tooling support is mature, and its six categories are immediately actionable for developers.
  • Add LINDDUN for privacy. Any feature that collects, processes, or transmits personal data warrants a LINDDUN analysis alongside STRIDE. The two frameworks take roughly two hours combined for a well-scoped feature.
  • Graduate to PASTA for high-risk systems. When your application handles financial transactions, healthcare records, or critical infrastructure, the deeper business risk quantification that PASTA provides justifies its higher time investment.
  • Use MITRE ATT&CK for adversary-centric analysis. When you want to validate mitigations against the specific Tactics, Techniques, and Procedures (TTPs) used by known threat actors, ATT&CK provides an unmatched reference. It pairs particularly well with STRIDE: STRIDE identifies what type of threat exists; ATT&CK describes how real attackers execute that type of threat.
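When applying STRIDE in practice, it helps to know that not every category applies to every part of a DFD. The "STRIDE-per-element" refinement from the Microsoft SDL narrows the categories by element type; the lookup below encodes the published mapping (the function name is ours):

```python
# STRIDE-per-element: which categories are conventionally reviewed
# for each DFD element type, per the Microsoft SDL mapping.
STRIDE_PER_ELEMENT = {
    "external_entity": {"Spoofing", "Repudiation"},
    "process": {"Spoofing", "Tampering", "Repudiation",
                "Information Disclosure", "Denial of Service",
                "Elevation of Privilege"},
    "data_store": {"Tampering", "Repudiation",
                   "Information Disclosure", "Denial of Service"},
    "data_flow": {"Tampering", "Information Disclosure", "Denial of Service"},
}

def applicable_threats(element_type: str) -> set:
    """Return the STRIDE categories usually reviewed for a DFD element."""
    return STRIDE_PER_ELEMENT[element_type]

# Processes are the only element type exposed to all six categories.
assert len(applicable_threats("process")) == 6
```

Walking the DFD element by element with this table turns "brainstorm threats" into a bounded checklist exercise.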

Comparison: Threat Modeling Tools

Tooling choices significantly affect the efficiency and adoption of threat modeling in a team. Below is a practical comparison of the main options available as of 2024.

| Tool | Type | License | Integration Targets | Standout Feature |
|---|---|---|---|---|
| Microsoft Threat Modeling Tool | Desktop application | Free | Azure DevOps | Built-in STRIDE stencils; generates threat reports automatically from DFDs |
| OWASP Threat Dragon | Web + Desktop | Open Source (Apache 2.0) | GitHub, GitLab | Browser-based, minimal setup; excellent for open-source and Agile teams |
| IriusRisk | SaaS | Freemium / Enterprise | Jira, GitHub, Azure DevOps | Automatically creates security tickets and maps to compliance frameworks |
| ThreatModeler | SaaS | Paid | AWS, Azure, GCP, CI/CD | Cloud-native DFD import from IaC; enterprise workflow automation |
| OWASP pytm | Python library (code) | Open Source (MIT) | Any CI/CD pipeline | Threat-modeling-as-code; DFDs and threat reports generated from Python scripts |
| draw.io + TM Library | Diagramming | Free | Confluence, VS Code | Lowest setup friction; flexible for custom approaches |
| Cairis | Web application | Open Source (Apache 2.0) | Standalone | Combines security, privacy, and usability modeling in one tool |
| TaaC-AI | CLI tool | Open Source (MIT) | GitHub Actions | AI-assisted threat discovery from code or architecture descriptions |

Choosing Based on Team Maturity:

  • Beginner teams benefit most from OWASP Threat Dragon — the visual DFD editor with annotatable STRIDE components removes the blank-page problem and makes threat modeling accessible without prior training.
  • Teams using infrastructure-as-code (Terraform, CDK, Pulumi) should explore pytm: define your system model in Python, run python model.py, and receive a rendered DFD plus an auto-generated threat report. This integrates naturally into CI pipelines.
  • Enterprise teams running SAFe or LeSS frameworks will find IriusRisk’s Jira integration invaluable: threats become automatically tracked security stories with compliance traceability baked in.
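To illustrate the threat-modeling-as-code idea without requiring pytm itself, here is a stdlib-only sketch: a dataflow model declared in Python, emitting one review item per STRIDE category for every trust-boundary crossing. The names and the category set are illustrative, not pytm's actual API (pytm uses classes such as TM, Server, Datastore, Dataflow, and Boundary):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Dataflow:
    source: str
    sink: str
    label: str
    crosses_trust_boundary: bool

# Categories worth a review item for any flow that crosses a trust boundary.
BOUNDARY_THREATS = ("Spoofing", "Tampering",
                    "Information Disclosure", "Denial of Service")

def threat_report(flows):
    """One finding per STRIDE category per boundary-crossing flow."""
    return [
        f"{threat}: {flow.source} -> {flow.sink} ({flow.label})"
        for flow in flows
        if flow.crosses_trust_boundary
        for threat in BOUNDARY_THREATS
    ]

model = [
    Dataflow("Browser", "API Gateway", "login request", crosses_trust_boundary=True),
    Dataflow("API Gateway", "Auth Service", "credential check", crosses_trust_boundary=False),
]

for finding in threat_report(model):
    print(finding)
```

Because the model is code, it can be committed, diffed in pull requests, and run in CI, which is exactly the workflow the pytm bullet above describes.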

When and How Often to Do Threat Modeling

One of the most practical questions developers ask when starting their threat modeling practice is not how to do it, but when to do it. The answer depends on where you are in the software development lifecycle.

At Project Inception: Architectural Threat Modeling

Before writing production code, create a lightweight architectural threat model. The purpose here is not to enumerate every possible threat exhaustively but to identify secure-design decisions with long-term consequences: choice of authentication protocol, trust boundaries between services, data residency and encryption strategy, and external integration points. These decisions are expensive to reverse once the system is built. At this stage, a two-hour session with your team and a rough DFD sketched on a whiteboard is far more valuable than a comprehensive tool-generated report.

During Feature Development: Feature-Level Threat Modeling

Every significant new feature that changes data flows, authentication paths, authorization logic, or external integrations warrants a feature-level threat model. In Agile terms, add a threat modeling review to the Definition of Ready for user stories that touch sensitive functionality. A useful technique here is the abuser story: for every user story, write a corresponding negative story from an attacker’s perspective.

User story: “As a registered user, I want to reset my password via email so that I can regain access to my account.”

Abuser story: “As an attacker, I want to trigger password reset emails for arbitrary accounts so that I can enumerate valid email addresses and carry out account takeover via predictable tokens.”

The abuser story drives security acceptance criteria: use time-limited tokens, return identical responses for registered and unregistered emails, and invalidate each token after a single use.

Triggers for Re-Assessment

An existing threat model becomes stale whenever the system changes in substantive ways. Common re-assessment triggers include:

| Trigger | Recommended Action |
|---|---|
| Significant new feature (auth, payments, file upload) | Feature-level threat model session |
| New third-party integration | Full integration point analysis |
| Migration to microservices or cloud-native architecture | Architectural threat model refresh |
| New data type collected (PII, financial, health) | STRIDE + LINDDUN analysis |
| Security incident occurs | Post-incident threat model retrospective |
| New dependency with known CVEs | Supply chain threat model update |
| Quarterly review | Sweep for new threats based on MITRE ATT&CK |

Integrating Into Agile Processes

Threat modeling does not require a waterfall design phase to be effective — it adapts well to Agile. The key is right-sizing each session. A full architectural threat model might take a two-hour refinement session at the start of a new initiative. A feature-level analysis might be a 30-minute conversation during sprint planning for a security-sensitive story. The goal is continuity, not ceremony.

Practically, consider adding a security story template to your backlog tooling:

Security Story: As the security team, we want [threat-to-mitigate] to be addressed before the feature ships, so that [security property] is maintained.

Acceptance Criteria:

  • Control implemented and code-reviewed
  • Automated test verifies control blocks the attack vector
  • Threat model document updated to reflect new component

The Minimum Viable Threat Model

For teams just starting out, a minimum viable threat model for a feature consists of three things: a rough DFD (pen and paper is acceptable), a STRIDE table for each trust boundary crossing in the DFD, and a prioritized list of the top three threats with assigned owners and due dates. This takes under two hours and produces actionable output immediately — which is more valuable than a perfect process that never gets started.


Common Mistakes and Anti-Patterns in Threat Modeling

Even experienced teams fall into predictable failure modes. Recognizing these anti-patterns is the first step toward building a threat modeling practice that actually improves security rather than producing documentation theater.

Anti-Pattern 1: The One-Time Deliverable

What it looks like: Threat modeling is performed once during the design phase, the deliverable is filed in Confluence, and it is never updated as the system evolves.

Why it fails: The system changes — new features are added, dependencies are updated, infrastructure migrates. The threat model anchored to the original design describes a system that no longer exists. It provides false assurance while real attack surface accumulates.

The fix: Store the threat model in the repository alongside the code so updates are committed with architectural changes. Add a PR checklist item: “Does this change affect the threat model? If so, update docs/threat-model.md.” A threat model that lives in the repo decays at the same rate as the code — slowly and visibly.

Anti-Pattern 2: The Security Team Silo

What it looks like: The security team conducts threat modeling in isolation, produces a list of findings, and hands it to developers as a security requirements document.

Why it fails: Developers who were not involved lack the context to understand why controls are required, implement them correctly, or preserve them during refactoring. Security controls get dropped because they seem arbitrary. Misunderstandings in implementation negate the protection entirely.

The fix: Threat modeling sessions must include the developers who will implement the feature. Security specialists facilitate, provide expertise, and ensure the analysis is complete — but the session is collaborative. The shared understanding produced in the room is often more valuable than the written output.

Anti-Pattern 3: Scope Creep (Boiling the Ocean)

What it looks like: The team attempts to produce a comprehensive threat model for the entire application before starting any mitigations, iterating through hundreds of potential threats.

Why it fails: Exhaustive enumeration in complex systems produces hundreds of low-priority findings alongside critical ones, paralyzes prioritization, and consumes weeks of effort to produce a document so large that no one reads it.

The fix: Scope each session tightly. “What are we working on?” is the first question in the Threat Modeling Manifesto for a reason. A focused 90-minute session on one microservice or one user-facing feature produces more actionable output than a week-long attempt to model everything.

Anti-Pattern 4: Equal Priority for All Threats

What it looks like: Every identified threat is added to a flat backlog without severity scoring, and the team works through them in the order they were identified.

Why it fails: A team working through a 50-item flat list will fix trivial threats first simply because they appear early, while critical ones wait. Prioritization without risk scoring is effectively random.

The fix: Score every threat using a consistent framework — DREAD, CVSS, or even a simple High/Medium/Low rating based on a 3×3 likelihood and impact matrix. Make prioritization explicit and record the scoring in the threat model document so future teams understand why certain threats were deferred.
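A 3×3 matrix is easy to encode directly. In this sketch, the band boundaries (what counts as High versus Medium) are illustrative and should be tuned to your own risk appetite:

```python
# Simple 3x3 likelihood x impact scoring. Scores range 1..9.
LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_rating(likelihood: str, impact: str) -> str:
    score = LEVELS[likelihood] * LEVELS[impact]
    if score >= 6:
        return "High"
    if score >= 3:
        return "Medium"
    return "Low"

threats = [
    ("SQL injection on /orders", "medium", "high"),
    ("Verbose error pages leak stack traces", "high", "low"),
    ("Admin console exposed to internet", "high", "high"),
]

# Sort the backlog by raw score so critical items surface first.
ranked = sorted(threats, key=lambda t: -(LEVELS[t[1]] * LEVELS[t[2]]))
for name, likelihood, impact in ranked:
    print(f"{risk_rating(likelihood, impact):6s} {name}")
```

Recording the likelihood and impact inputs alongside each threat, not just the final rating, is what lets a future team understand why an item was deferred.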

Anti-Pattern 5: No Validation of Implemented Controls

What it looks like: Mitigations are documented in the threat model and linked to development tickets, but there is no process to verify that implementations correctly address the threats.

Why it fails: Security controls frequently fail subtly in implementation. Rate limiting has off-by-one errors. JWT validation has a misconfigured audience claim. Authorization checks exist in middleware but are bypassed by a direct database call in a background job.

The fix: For every mitigation, write at minimum one automated negative test that verifies the attack is blocked. A rate limit should have a test that sends 11 requests within the window and asserts a 429 response on the 11th. A JWT algorithm check should have a test that sends a token with "alg": "none" and asserts rejection. These tests live in the repository as machine-verifiable proof that the threat model is effective.
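To keep the example self-contained, the sketch below pairs a toy fixed-window rate limiter with exactly the negative test described: ten requests pass, and the eleventh is asserted to receive a 429. A real limiter would live in middleware, typically backed by Redis:

```python
import time

class FixedWindowRateLimiter:
    """Toy in-memory limiter; illustrates the negative-test pattern only."""

    def __init__(self, limit: int, window_seconds: float):
        self.limit = limit
        self.window = window_seconds
        self.window_start = time.monotonic()
        self.count = 0

    def handle(self) -> int:
        """Return an HTTP-style status: 200 if allowed, 429 if throttled."""
        now = time.monotonic()
        if now - self.window_start >= self.window:
            self.window_start, self.count = now, 0  # new window resets count
        self.count += 1
        return 200 if self.count <= self.limit else 429

# The negative test: the control must block the attack, not merely exist.
limiter = FixedWindowRateLimiter(limit=10, window_seconds=60)
responses = [limiter.handle() for _ in range(11)]
assert responses[:10] == [200] * 10
assert responses[10] == 429
```

The same pattern applies to the JWT case: a test that constructs a token with an unaccepted algorithm and asserts the verifier rejects it.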

Anti-Pattern 6: Ignoring the Software Supply Chain

What it looks like: The threat model covers first-party code components only. Third-party libraries, open-source dependencies, SDKs, and SaaS integrations are excluded from analysis.

Why it fails: The majority of code in a modern application comes from open-source dependencies. Supply chain attacks — SolarWinds, Log4Shell, XZ Utils, the npm event-stream incident — demonstrate that third-party components are a primary attack vector, not an afterthought. An SDK that processes user data on behalf of your application is as much a threat modeling subject as your own API handlers.

The fix: Include a dedicated supply chain section in your threat model. For each third-party library or SaaS integration that touches sensitive data flows, record: what data it accesses, under what conditions, and what your monitoring and incident response procedure is if that component is compromised. Integrate SCA (Software Composition Analysis) tooling such as Snyk, OWASP Dependency-Check, or GitHub Dependabot, and route critical CVE findings into your threat model backlog.
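One lightweight way to keep such records reviewable is to store them as structured data next to the threat model. The field names and the example dependency below are illustrative, not a standard schema:

```python
from dataclasses import dataclass

@dataclass
class ThirdPartyComponent:
    """One supply-chain record per dependency that touches sensitive data."""
    name: str
    data_accessed: list
    access_conditions: str
    compromise_response: str
    monitoring: str = "SCA scan in CI (Dependency-Check / Dependabot alerts)"

# Hypothetical dependency, used purely as an example record.
payment_sdk = ThirdPartyComponent(
    name="acme-payments-sdk",
    data_accessed=["card tokens", "billing address"],
    access_conditions="Checkout flow only, over TLS, from the payment service",
    compromise_response="Rotate API keys, pin last known-good version, open incident",
)

print(f"{payment_sdk.name} accesses: {', '.join(payment_sdk.data_accessed)}")
```

Keeping these records in the repository means a critical CVE in a listed component immediately points to what data is at risk and what the response procedure is.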

Anti-Pattern 7: Documenting the Present Instead of Analyzing the Future

What it looks like: The “threat model” describes existing security controls rather than identifying gaps. It reads: “The login endpoint is protected by rate limiting and MFA. The database uses encryption at rest.”

Why it fails: This is a security control inventory, not a threat model. It tells you what you have, not what you’re missing. It cannot reveal blind spots because it starts from controls and works backward to threats, rather than starting from threats and evaluating whether controls adequately address them.

The fix: Always begin from the attacker’s perspective, independent of existing controls. Identify threats first; then evaluate whether existing controls adequately address them; then identify residual risks. The question is not “what controls do we have?” but “given all possible attacks, what is our residual exposure?”


Conclusion

Threat modeling is a proactive and essential step in building secure applications. By identifying potential risks early and implementing effective mitigations, developers can reduce vulnerabilities and build resilient systems. With the right frameworks, tools, and collaboration, threat modeling becomes a valuable part of the development lifecycle.

The frameworks covered here — STRIDE for systematic security analysis, PASTA for business-aligned risk quantification, and LINDDUN for privacy threat modeling — are complementary, not competing. Mature teams combine them: STRIDE for every feature, LINDDUN for anything touching personal data, and PASTA for high-risk components requiring executive-level risk review. The code examples demonstrate that threat modeling is not documentation for its own sake but a direct driver of specific, justified security controls in production code.

Most importantly, threat modeling is a conversation as much as it is a document. The act of walking through “what can go wrong” with your team builds shared security intuition that surfaces in design decisions, code reviews, and incident response long after the session ends. The anti-patterns described above all share a common root cause: treating threat modeling as a one-time compliance exercise rather than an ongoing security practice embedded in the development process.

Start incorporating threat modeling into your projects today — not as a gate before shipping, but as a habit during design — and you will build systems that are harder to attack, easier to defend, and more deserving of your users’ trust.