How to Use Static Application Security Testing (SAST) Tools



Introduction

Static Application Security Testing (SAST) tools are critical for identifying vulnerabilities in source code during the early stages of development. By analyzing code without executing it, these tools help developers pinpoint security flaws, adhere to best practices, and reduce the cost of fixes.

This article provides a comprehensive guide on how to use SAST tools effectively, from integration into workflows to interpreting results and taking action.

What Are SAST Tools?

SAST tools are designed to analyze an application’s source code, bytecode, or binaries for vulnerabilities. Unlike dynamic testing tools that require a running application, SAST tools perform a static analysis to uncover issues such as:

  1. Injection Vulnerabilities:
  • SQL injection, command injection, etc.
  2. Hardcoded Secrets:
  • API keys, credentials, or sensitive data within the codebase.
  3. Insecure Coding Practices:
  • Improper error handling, lack of input validation, etc.
  4. Outdated Cryptography:
  • Usage of weak or deprecated encryption methods.
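To make these categories concrete, the short Python sketch below contains one deliberately insecure example of each class. Every value is made up for illustration; these are the kinds of patterns a SAST tool is designed to flag.

```python
import hashlib
import subprocess

# 1. Hardcoded secret (illustrative value): scanners flag string
#    literals that look like credentials committed to the codebase.
API_KEY = "sk-hardcoded-example-key"

# 2. Outdated cryptography: MD5 is deprecated for security purposes.
digest = hashlib.md5(b"password").hexdigest()

# 3. Injection risk: unvalidated input concatenated into a shell command.
def ping(host: str) -> None:
    subprocess.call("ping -c 1 " + host, shell=True)
```

Each of these lines compiles and runs without complaint, which is exactly why static analysis is needed to catch them before review.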

Benefits of Using SAST Tools

1. Early Detection of Vulnerabilities

SAST tools identify security flaws during development, allowing developers to fix issues before deployment.

2. Cost Efficiency

Fixing vulnerabilities early in the development lifecycle is significantly cheaper than addressing them in production.

3. Compliance

Many regulatory frameworks, such as GDPR and PCI DSS, require secure coding practices. SAST tools help ensure compliance by detecting non-conformities.

4. Improved Code Quality

By enforcing coding standards, SAST tools contribute to cleaner, more maintainable code.

How to Use SAST Tools

1. Choose the Right Tool

Select a SAST tool that aligns with your project’s requirements, programming languages, and workflow. Popular options include:

  • SonarQube: Supports multiple languages with robust integrations.
  • Checkmarx: Offers comprehensive scanning capabilities and detailed reports.
  • Bandit: Specialized for Python applications.
  • Veracode: A cloud-based SAST solution.

2. Set Up the Tool

Installation

  • Install the SAST tool locally or integrate it with your CI/CD pipeline.
  • Ensure compatibility with your development environment.

Configuration

  • Configure the tool to align with your coding standards and security requirements.
  • Specify file paths and exclude unnecessary directories (e.g., node_modules, vendor).

3. Run Initial Scans

Perform an initial scan of your codebase to establish a baseline. Analyze the results to identify existing vulnerabilities.

Example (Using Bandit for Python):

   bandit -r my_project/

4. Integrate into Development Workflows

Incorporate SAST tools into your daily workflows to ensure ongoing security:

  • IDE Integration: Use plugins to scan code directly in your development environment.
  • Pre-Commit Hooks: Automatically run scans before committing code.
  • CI/CD Integration: Add SAST scans to your CI/CD pipelines for automated security checks.

Example (GitHub Actions):

   jobs:
     sast:
       runs-on: ubuntu-latest
       steps:
         - uses: actions/checkout@v4
         - name: Run Bandit
           run: bandit -r my_project/

5. Review and Prioritize Results

SAST tools often generate extensive reports. Focus on high-severity vulnerabilities and issues affecting critical components.

Key Metrics to Consider:

  • Severity: How critical is the vulnerability?
  • Impact: What is the potential damage if exploited?
  • Ease of Fixing: How complex is the remediation?
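These metrics can also be applied programmatically: a small script can sort a tool's JSON report worst-first so triage starts at the top. The sketch below assumes Bandit's JSON output format (a results list with issue_severity and issue_confidence fields); adapt the field names for other tools.

```python
import json

# Severity and confidence ranks used for ordering findings. Field names
# follow Bandit's JSON report ("results", "issue_severity", "issue_confidence").
RANK = {"HIGH": 3, "MEDIUM": 2, "LOW": 1, "UNDEFINED": 0}

def prioritize(report_path):
    """Return findings sorted worst-first: severity, then confidence."""
    with open(report_path) as fh:
        results = json.load(fh).get("results", [])
    return sorted(
        results,
        key=lambda r: (RANK.get(r.get("issue_severity", "UNDEFINED"), 0),
                       RANK.get(r.get("issue_confidence", "UNDEFINED"), 0)),
        reverse=True,
    )
```

Generate the report with bandit -r src/ -f json -o bandit-results.json, then pass that path to prioritize() and work down the list from the top.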

6. Remediate Vulnerabilities

Address identified vulnerabilities using secure coding practices. Refer to tool-specific recommendations or industry guidelines like OWASP Top 10.

Example (Fixing Hardcoded Secrets):

Before:

   API_KEY = "hardcoded_secret_key"

After:

   import os
   API_KEY = os.getenv("API_KEY")

7. Re-Scan and Monitor

After remediating vulnerabilities, re-scan your codebase to ensure fixes are effective. Regularly monitor new scans to identify emerging issues.

Best Practices for Using SAST Tools

1. Customize Rules

Tailor the tool’s ruleset to match your project’s specific needs. Exclude false positives and focus on relevant vulnerabilities.

2. Combine with Other Tools

While SAST tools are powerful, they should be complemented with dynamic analysis (DAST) and runtime monitoring tools for comprehensive security coverage.

3. Educate Your Team

Train developers on how to use SAST tools effectively and interpret their results.

4. Automate Wherever Possible

Automate scans in CI/CD pipelines to ensure consistent security checks throughout the development lifecycle.

Common Challenges and Solutions

Challenge: False Positives

SAST tools may flag issues that are not actual vulnerabilities.

Solution:

  • Customize rulesets to filter out irrelevant alerts.
  • Conduct manual reviews to verify flagged issues.

Challenge: Integration Complexity

Integrating SAST tools into existing workflows can be challenging.

Solution:

  • Start with IDE plugins and gradually expand to CI/CD pipelines.
  • Use documentation and community support for setup assistance.

Challenge: Large Codebases

Scanning large codebases can be time-consuming.

Solution:

  • Break scans into smaller, modular components.
  • Run full scans during off-hours or specific stages of the pipeline.

Additional Use Cases for SAST Tools

  1. Secure Code Reviews:
  • Use SAST tools during code reviews to identify security issues early.
  2. Compliance Audits:
  • Demonstrate adherence to security standards by providing SAST reports.
  3. Onboarding New Developers:
  • Train new team members in secure coding practices using SAST results.

Setting up a SAST tool is the first concrete step toward a more secure codebase. The guides below walk through installation, configuration, and first-run for the four tools most commonly encountered in modern development teams. Each tool has a distinct sweet spot in terms of language support, performance, and ecosystem fit, and many teams layer two or more of them together to achieve broader coverage.

Semgrep

Semgrep is a fast, open-source static analysis engine that runs locally without sending code to external servers. It is language-agnostic and supports over 30 languages with a single, consistent rule format. Its rule syntax is readable YAML, which means developers — not just security engineers — can contribute and review rules.

Installation:

   # Install via pip
   pip install semgrep

   # Or via Homebrew on macOS/Linux
   brew install semgrep

   # Verify installation
   semgrep --version

Running built-in security rulesets:

   # Scan for OWASP Top 10 patterns
   semgrep --config "p/owasp-top-ten" ./src

   # Auto-detect language and apply recommended rules
   semgrep --config auto ./src

   # Output results as JSON for downstream tooling
   semgrep --config "p/security-audit" --json ./src > semgrep-results.json

Semgrep’s public registry at semgrep.dev/r contains thousands of community-maintained rules. Running --config auto is an excellent starting point — Semgrep detects which languages your project uses and applies the most relevant ruleset automatically.

Excluding noisy paths:

Create a .semgrepignore file in the project root (it follows .gitignore syntax):

   node_modules/
   vendor/
   dist/
   build/
   **/*.test.js
   **/*.spec.ts

Excluding generated, vendored, and test-only code is essential to keep the signal-to-noise ratio high from day one.

IDE integration: The Semgrep VS Code extension and JetBrains plugin surface findings inline as you type. Install the extension, configure it to point at your .semgrep/rules/ directory, and vulnerabilities appear as diagnostic warnings with one-click documentation.


SonarQube

SonarQube is a mature, widely deployed SAST platform that supports over 30 languages and provides a centralised security dashboard, trend tracking, and quality gate enforcement. The Community Edition is free and fully functional for single-instance use.

Quick start with Docker:

   docker run -d --name sonarqube \
     -p 9000:9000 \
     sonarqube:community

Give the server a minute or two to initialise, then open http://localhost:9000. The default credentials are admin / admin; change them immediately on first login.

Install SonarScanner CLI:

   # macOS/Linux via Homebrew
   brew install sonar-scanner

   # Alternatively, download the ZIP from:
   # https://docs.sonarsource.com/sonarqube/latest/analyzing-source-code/scanners/sonarscanner/

Configure your project by creating sonar-project.properties in the repository root:

   sonar.projectKey=my-app
   sonar.projectName=My Application
   sonar.sources=src
   sonar.exclusions=**/node_modules/**,**/vendor/**,**/*.test.*,**/migrations/**
   sonar.host.url=http://localhost:9000
   sonar.token=your_generated_token_here

Run the scan:

   sonar-scanner

The dashboard at http://localhost:9000 shows bugs, vulnerabilities, code smells, duplications, and coverage trends over time. The Quality Gate feature lets you define pass/fail thresholds — for example, zero new critical vulnerabilities and a minimum 80% test coverage — and fail CI pipelines automatically when any threshold is breached.


Bandit (Python)

Bandit is a purpose-built SAST tool for Python developed under the PyCQA umbrella. It parses Python ASTs and walks the tree looking for common issues: hardcoded secrets, command injection, use of insecure functions, and weak cryptography.

Installation and basic usage:

   pip install bandit

   # Recursive scan of a directory
   bandit -r src/

   # Filter to medium+ severity and medium+ confidence
   bandit -r src/ -ll -ii

   # Generate an HTML report
   bandit -r src/ -f html -o bandit-report.html

   # Skip specific test IDs to reduce noise
   bandit -r src/ --skip B101,B311

Understanding Bandit output:

   >> Issue: [B105:hardcoded_password_string] Possible hardcoded password: 'supersecret'
   Severity: Low   Confidence: Medium
   Location: app/config.py:14
   More info: https://bandit.readthedocs.io/en/latest/plugins/b105_hardcoded_password_string.html

Each finding includes a test ID (linking to docs), a severity (Low / Medium / High), and a confidence level. Begin by filtering to High severity + High confidence using -lll -iii to focus on unambiguous, serious findings before tackling lower-priority items.

Bandit also supports a .bandit configuration file for project-level settings:

   [bandit]
   exclude = /tests,/migrations
   skips = B101

ESLint Security Rules (JavaScript / TypeScript)

For JavaScript and TypeScript codebases, ESLint combined with security-focused plugins delivers SAST coverage with minimal friction — most teams already use ESLint for code quality enforcement.

Install the security plugins:

   npm install --save-dev eslint eslint-plugin-security eslint-plugin-no-secrets

Configure eslint.config.js (flat config):

   import security from 'eslint-plugin-security'
   import noSecrets from 'eslint-plugin-no-secrets'

   export default [
   	security.configs.recommended,
   	{
   		plugins: { 'no-secrets': noSecrets },
   		rules: {
   			'security/detect-eval-with-expression': 'error',
   			'security/detect-non-literal-regexp': 'warn',
   			'security/detect-object-injection': 'warn',
   			'security/detect-possible-timing-attacks': 'warn',
   			'security/detect-child-process': 'error',
   			'no-secrets/no-secrets': ['error', { tolerance: 4.2 }]
   		}
   	}
   ]

Run the security scan:

   npx eslint src/

The eslint-plugin-security ruleset detects patterns including eval() with dynamic input, child_process.exec() with user-supplied data, prototype pollution via unsafe object access, and ReDoS-vulnerable regular expressions. For Node.js back-end code, the @microsoft/eslint-plugin-sdl plugin adds checks aligned with Microsoft’s Security Development Lifecycle, such as detecting unsafe innerHTML assignments and postMessage without origin validation.


CI/CD Integration Deep Dive

Integrating SAST into CI/CD pipelines transforms security from a periodic gate into a continuous safeguard. Every pull request automatically receives a security evaluation, dangerous patterns never merge silently, and the team accumulates a live history of its security posture.

The diagram below shows where SAST slots into a typical GitOps pipeline:

   flowchart LR
    Dev([Developer]) -->|git push| PR[Pull Request]
    PR --> CI[CI Pipeline Triggered]
    CI --> Lint[Linting and Unit Tests]
    CI --> SAST[SAST Scan]
    SAST -->|findings| Gate{Quality Gate}
    Gate -->|pass| Review[Code Review]
    Gate -->|fail| Feedback[Feedback to Developer]
    Review -->|approved| Deploy[Deploy to Staging]
    Deploy --> DAST[DAST Scan]
    DAST --> Production[Production Release]

Pre-Commit Hooks for Instant Feedback

CI pipeline scans provide comprehensive coverage but introduce a feedback delay — a developer may only learn about a finding 10 to 20 minutes after pushing. Pre-commit hooks close that gap by running lightweight scans on the exact files being committed, in under five seconds, before the commit is even recorded in Git history.

The pre-commit framework (available at pre-commit.com) provides a language-agnostic way to manage and share hooks across a team.

Install pre-commit:

   pip install pre-commit

Create .pre-commit-config.yaml in the repository root:

   repos:
     - repo: https://github.com/PyCQA/bandit
       rev: 1.7.8
       hooks:
         - id: bandit
           args: ['-c', '.bandit', '-r']
           files: \.py$

     - repo: https://github.com/returntocorp/semgrep
       rev: v1.70.0
       hooks:
         - id: semgrep
           args: ['--config', '.semgrep/rules/', '--error']

     - repo: https://github.com/gitleaks/gitleaks
       rev: v8.18.0
       hooks:
         - id: gitleaks

Install the hooks:

   pre-commit install

From this point on, every git commit automatically runs the configured hooks. A developer who attempts to commit a file containing a hardcoded password will see an immediate, actionable error message identifying the exact line — and the commit will be rejected until the issue is resolved.

Pre-commit hooks are intentionally designed to run fast. They should focus on targeted, high-priority rules, not a comprehensive full-repo scan. The goal is to catch the most obvious issues instantly, with the understanding that the full CI pipeline scan provides thorough coverage on pull request. Think of pre-commit as a spell checker — not a full editorial review — that prevents avoidable mistakes from ever reaching the remote repository.

An important consideration is that pre-commit hooks run locally and can be bypassed with git commit --no-verify. This is acceptable. Pre-commit hooks are a developer convenience tool, not a security control. The CI pipeline quality gate is the enforceable control. The hooks’ value is in reducing the number of findings that ever reach CI, making the pipeline faster and keeping the developer in flow.

For teams using Node.js, the husky package provides a similar capability. Configure it in package.json alongside lint-staged to run ESLint security rules only on the files staged for commit:

   {
   	"lint-staged": {
   		"*.{js,ts,jsx,tsx}": [
   			"eslint --max-warnings=0 --rule 'security/detect-eval-with-expression: error'"
   		]
   	}
   }

Combining pre-commit for immediate local feedback, a full CI scan for comprehensive coverage, and quality gates for enforceable blocking creates a layered defence that catches the vast majority of introduced vulnerabilities before they reach the main branch — and does so with a developer experience that feels supportive rather than obstructive.

GitHub Actions

Semgrep:

   name: Semgrep SAST

   on:
     push:
       branches: [main, develop]
     pull_request:
       branches: [main]

   jobs:
     semgrep:
       name: Semgrep Scan
       runs-on: ubuntu-latest
       container:
         image: semgrep/semgrep
       steps:
         - uses: actions/checkout@v4
         - name: Run Semgrep
           run: semgrep --config "p/security-audit" --config "p/owasp-top-ten" --error .
           env:
             SEMGREP_APP_TOKEN: ${{ secrets.SEMGREP_APP_TOKEN }}

The --error flag causes the workflow step — and therefore the entire pipeline — to fail when any finding is present, enforcing a hard gate against merging vulnerable code.

Bandit with SARIF upload to GitHub Code Scanning:

   name: Bandit Python SAST

   on: [push, pull_request]

   jobs:
     bandit:
       runs-on: ubuntu-latest
       steps:
         - uses: actions/checkout@v4
         - uses: actions/setup-python@v5
           with:
             python-version: '3.12'
         - name: Install Bandit
           run: pip install bandit[sarif]
         - name: Run Bandit
           run: bandit -r src/ -f sarif -o bandit.sarif || true
         - name: Upload SARIF to GitHub Security
           uses: github/codeql-action/upload-sarif@v3
           with:
             sarif_file: bandit.sarif

Uploading SARIF to GitHub Code Scanning surfaces findings in the repository’s Security tab, where they can be triaged, dismissed with justifications, and tracked alongside Dependabot alerts.

SonarCloud:

   name: SonarCloud Analysis

   on:
     push:
       branches: [main]
     pull_request:
       types: [opened, synchronize, reopened]

   jobs:
     sonarcloud:
       runs-on: ubuntu-latest
       steps:
         - uses: actions/checkout@v4
           with:
             fetch-depth: 0 # Required for blame information and new code detection
         - name: SonarCloud Scan
           uses: SonarSource/sonarcloud-github-action@master
           env:
             GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
             SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}

A sonar-project.properties file at the repo root supplies the project key and source path configuration.

GitLab CI/CD

GitLab supports SAST natively. You can use the pre-built template or run Semgrep directly:

   semgrep-sast:
     image: semgrep/semgrep
     stage: test
     script:
       - semgrep --config "p/security-audit" --gitlab-sast --output gl-sast-report.json . || true
     artifacts:
       reports:
         sast: gl-sast-report.json
     rules:
       - if: $CI_PIPELINE_SOURCE == 'merge_request_event'
       - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH

GitLab natively parses gl-sast-report.json and renders findings directly inside the merge request review UI, giving reviewers full context without leaving GitLab.

Jenkins Pipeline

   pipeline {
       agent any
       stages {
           stage('SAST – Semgrep') {
               steps {
                   sh '''
                       pip install semgrep
                       semgrep --config "p/security-audit" \
                               --json --output semgrep-results.json \
                               . || true
                   '''
                   archiveArtifacts artifacts: 'semgrep-results.json'
               }
           }
           stage('SAST – Bandit') {
               steps {
                   sh '''
                       pip install bandit
                       bandit -r src/ -f json -o bandit-results.json || true
                   '''
                   archiveArtifacts artifacts: 'bandit-results.json'
               }
           }
       }
       post {
           always {
               publishHTML(target: [
                   reportDir: '.',
                   reportFiles: 'bandit-results.json',
                   reportName: 'Bandit SAST Report'
               ])
           }
       }
   }

A best practice in Jenkins is to use || true after scan commands so the pipeline does not fail before the results are archived. A downstream quality gate step — or parsing the JSON to check severity counts — then decides whether to mark the build as failed.
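That downstream gate can itself be a few lines of Python. The sketch below assumes Bandit's JSON report schema and uses illustrative thresholds; a pipeline stage could call it after the scan and decide pass/fail from the returned flag (for example, by wrapping it in a script that exits non-zero when the gate fails).

```python
import json

# Maximum allowed findings per severity; illustrative thresholds only.
THRESHOLDS = {"HIGH": 0, "MEDIUM": 5}

def quality_gate(report_path, thresholds=THRESHOLDS):
    """Return (counts, ok): severity counts and whether the gate passes.

    Field names assume Bandit's JSON report; adapt for other tools.
    """
    with open(report_path) as fh:
        results = json.load(fh).get("results", [])
    counts = {}
    for finding in results:
        sev = finding.get("issue_severity", "UNDEFINED")
        counts[sev] = counts.get(sev, 0) + 1
    ok = all(counts.get(sev, 0) <= limit for sev, limit in thresholds.items())
    return counts, ok
```

Keeping the thresholds in code (and under version control) makes the gate's policy reviewable in the same pull requests as everything else.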


Writing Custom SAST Rules

Built-in rulesets cover common, well-known vulnerabilities. But every codebase has internal APIs, proprietary patterns, and domain-specific risks that generic community rules will never detect. Writing custom rules lets you encode your team’s security contracts into automated, repeatable checks.

Custom Semgrep Rules

Semgrep rules are defined in YAML and are straightforward enough for any developer to read and review. A minimal rule consists of an id, a pattern, a message, a languages list, and a severity.

Detect a hardcoded JWT secret:

   rules:
     - id: hardcoded-jwt-secret
       patterns:
         - pattern: jwt.sign($PAYLOAD, "...")
       message: >
         Hardcoded JWT secret detected. Use an environment variable:
         jwt.sign($PAYLOAD, process.env.JWT_SECRET)
       languages: [javascript, typescript]
       severity: ERROR
       metadata:
         category: security
         cwe: CWE-798
         owasp: 'A02:2021 - Cryptographic Failures'

Detect eval() with HTTP request data:

   rules:
     - id: eval-with-user-input
       pattern-either:
         - pattern: eval($REQ.body.$FIELD)
         - pattern: eval($REQ.query.$FIELD)
         - pattern: eval($REQ.params.$FIELD)
       message: >
         eval() called with user-supplied input. This allows arbitrary code
         execution. Use JSON.parse() or a safe evaluation library instead.
       languages: [javascript, typescript]
       severity: ERROR
       metadata:
         cwe: CWE-95
         owasp: 'A03:2021 - Injection'

Rule with an automated fix suggestion:

Semgrep rules support a fix field that semgrep --autofix applies automatically:

   rules:
     - id: md5-insecure-hash
       pattern: hashlib.md5(...)
       fix: hashlib.sha256(...)
       message: MD5 is cryptographically broken. Use SHA-256 or stronger.
       languages: [python]
       severity: WARNING
       metadata:
         cwe: CWE-327
         owasp: 'A02:2021 - Cryptographic Failures'

Organise rules in a directory structure:

   .semgrep/
     rules/
       auth.yaml
       crypto.yaml
       injection.yaml
       secrets.yaml

Reference the directory in your scan command:

   semgrep --config .semgrep/rules/ ./src

Test your rules with inline annotations:

Semgrep supports inline test comments that verify rules match exactly what they should — and nothing more:

   // ruleid: hardcoded-jwt-secret
   token = jwt.sign(payload, "my_super_secret")

   // ok: hardcoded-jwt-secret
   token = jwt.sign(payload, process.env.JWT_SECRET)

Run the tests with:

   semgrep --test .semgrep/rules/

This makes rule behaviour as repeatable and code-reviewable as unit tests — a critical property as your custom rule library grows. When rules are tested, regressions from rule edits are caught before they ship.


SAST Tool Comparison

Choosing the right SAST tool for your project depends on your language stack, team size, budget, and integration requirements. The table below compares the most widely used open-source and freemium options:

| Tool | Primary Languages | License | CI/CD Integration | Custom Rules | SARIF Output | Best For |
| --- | --- | --- | --- | --- | --- | --- |
| Semgrep | 30+ (Python, JS, Java, Go, Ruby …) | OSS / Commercial platform | GitHub Actions, GitLab, Jenkins, CircleCI | Yes — YAML | Yes | Polyglot repos, fast custom rules |
| SonarQube | 30+ | Community free / Enterprise | GitHub, GitLab, Azure DevOps, Bitbucket, Jenkins | Quality profiles + Java API | Yes (via plugins) | Centralised dashboard, quality gates |
| Bandit | Python only | MIT | Any CI via CLI | Yes — plugin system | Yes (bandit-sarif-formatter) | Python microservices |
| ESLint + plugins | JavaScript, TypeScript | MIT | Any CI via CLI | Yes — custom rules | Via SARIF formatter | Front-end / Node.js teams |
| Checkmarx SAST | 25+ | Commercial | GitHub, GitLab, Jenkins, Azure DevOps | Yes — platform rules | Yes | Enterprises needing deep taint analysis |
| Snyk Code | 20+ | Free tier / Commercial | GitHub, GitLab, Bitbucket, CLI | No (cloud-managed rules) | Yes | Dev-friendly UX, SCA + SAST combined |
| Bearer CLI | JS/TS, Ruby, Java | Open Source | Any CI via CLI | Yes — YAML | Yes | Privacy / sensitive data flow analysis |
| Veracode | 20+ | Commercial SaaS | GitHub, Jenkins, Azure DevOps | Limited | Yes | Binary scanning, external audit evidence |

Key takeaways:

  • For polyglot monorepos, Semgrep offers the best combination of speed and customisability.
  • For Python-only services, Bandit is the simplest, lowest-overhead choice available.
  • For Java and enterprise stacks, SonarQube with the FindSecBugs plugin provides the deepest taint analysis coverage.
  • For Node.js front-end teams, ESLint security plugins add near-zero overhead to an existing toolchain.
  • For compliance-driven organisations, commercial tools like Checkmarx and Veracode provide auditor-friendly reports, SLA-backed support, and evidence packages.

When budget allows, running Semgrep (fast, custom) alongside SonarQube (comprehensive dashboard) covers both developer speed-of-feedback and management reporting in a single pipeline.


Common Mistakes and Anti-Patterns

Even teams that adopt SAST tools often fall into predictable traps that undermine the value of their investment. Recognising these patterns early prevents them from becoming expensive habits.

1. Ignoring Findings Until “Later”

The most damaging anti-pattern is accumulating SAST findings without addressing them. When the finding count grows unchecked, teams develop alert fatigue — they stop reading reports because they expect no actionable signal. Critical findings become invisible beneath hundreds of stale warnings.

Fix: Treat new findings the same as failing unit tests. Define a quality gate that blocks merges when new high-severity findings are introduced. Enforce a triage deadline — for example, all new findings must be reviewed within 48 hours. Never let the count trend upward without intention.

2. Blanket Suppression Without Justification

The opposite failure mode is suppressing every finding that slows a developer down without documenting why. Over months, a codebase fills with unexplained # nosec and // eslint-disable comments that hide real vulnerabilities and make audits impossible.

Fix: Require a justification comment alongside every suppression. Make the reasoning visible to code reviewers:

   # Shuffle order for UI display only; not used for any cryptographic purpose.
   value = random.random()  # nosec B311

For Semgrep:

   // Legacy compatibility shim; input is validated against a strict
   // allowlist before reaching this point.
   eval(sanitizedExpression) // nosemgrep: detect-eval

Suppression with a justification is a documented, reviewable decision. Suppression without one is a hidden liability.

3. Running SAST Only at Release Time

Treating SAST as a release gate — run manually once before shipping — means developers hear about findings at the worst possible moment. They are already context-switching, and the cost of understanding and fixing old code is high.

Fix: Shift SAST left. Install IDE plugins so developers see findings as they type. Add pre-commit hooks for cheap, fast checks on changed files only. Reserve the full pipeline scan for comprehensive coverage. The earlier the feedback, the lower the cognitive and calendar cost.

4. Never Tuning the Ruleset After the Initial Setup

Default rulesets are written for the broadest possible audience. Rules designed for generic use will produce findings irrelevant to your specific frameworks, internal libraries, and coding conventions. A 40% false positive rate is not unusual with an untuned ruleset.

Fix: After the initial scan, dedicate a sprint to tuning. Disable rules irrelevant to your stack. Write custom rules for internal APIs. The upfront investment pays compound dividends because every subsequent scan is more actionable.

5. No Ownership Model for Findings

When SAST findings are owned by “the security team,” developers disengage. Security becomes a bureaucratic gate that blocks releases rather than a property of code developers own and take pride in.

Fix: Assign findings to the team that owns the affected code — the same team who owns the tests and the CI pipeline. Treat a high-severity finding the same as a failing test: it blocks merge, the developer fixes it, and fixing it is celebrated.


Testing and Tuning Your SAST Configuration

A SAST configuration that is never revisited degrades over time. Rules become stale as new frameworks are adopted, false positive rates climb, and developers learn to bypass the tool rather than benefit from it. Regular tuning keeps the investment healthy.

Establish a False Positive Baseline

After your first full scan, categorise every finding:

  • True positive — a real vulnerability that must be remediated.
  • False positive — a flagged pattern that presents no actual risk in context.
  • Accepted risk — a known issue where the business has consciously accepted the risk.

Track these during the first sprint. A false positive rate above 30% is a strong signal to tune aggressively before the scan loses developer trust.
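Tracked as simple counts, the baseline can be checked against that 30% threshold automatically. The numbers below are purely illustrative:

```python
# Sketch: the first-sprint triage baseline. The three labels mirror the
# categories above; the counts are made up for illustration.
baseline = {"true_positive": 18, "false_positive": 11, "accepted_risk": 3}

def false_positive_rate(baseline):
    """Fraction of triaged findings labelled as false positives."""
    total = sum(baseline.values())
    return baseline["false_positive"] / total if total else 0.0

rate = false_positive_rate(baseline)
print(f"False positive rate: {rate:.1%}")
if rate > 0.30:  # the tuning threshold suggested above
    print("Tune the ruleset before developers lose trust in the scan")
```

Recomputing this number every sprint turns "the tool is noisy" from a vague complaint into a measurable trend.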

Use Exclude Patterns Thoughtfully

Most tools support ignore files (.semgrepignore, .banditignore, sonar.exclusions). Good candidates for exclusion include:

  • Test files (**/test/**, **/*.spec.*) — test code intentionally exercises edge cases that trigger SAST patterns.
  • Generated code (**/generated/**, **/migrations/**) — ORM-generated SQL and similar patterns are beyond developer control.
  • Vendored dependencies (node_modules/, vendor/) — scanning third-party code is the job of SCA tools, not SAST.

Progressive Quality Gate Configuration

Define quality gates that match your team’s current maturity. Jumping straight to strict gates on a codebase with thousands of existing findings alienates developers and breeds bypass workarounds:

| Maturity Level | Quality Gate Configuration |
| --- | --- |
| Starting out | Fail on CRITICAL severity findings in new code only |
| Established | Fail on HIGH and CRITICAL in new code; report MEDIUM |
| Advanced | Fail on MEDIUM and above; all suppressions require documented justification |

Move between levels deliberately, accompanied by a sprint to remediate the previously unreported tier before raising the bar.
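The maturity levels can be encoded directly as a blocking check. The sketch below is illustrative: the level names and severity strings are our own, not any tool's schema, and new_findings is assumed to be a list of severity labels for findings in new code only.

```python
# Blocking severity sets per maturity level, mirroring the table above.
GATES = {
    "starting":    {"CRITICAL"},
    "established": {"CRITICAL", "HIGH"},
    "advanced":    {"CRITICAL", "HIGH", "MEDIUM"},
}

def gate_passes(new_findings, level):
    """True when no new finding falls in the blocking set for this level."""
    return not any(sev in GATES[level] for sev in new_findings)
```

For example, a change introducing only MEDIUM and LOW findings passes at the "starting" level but fails at "advanced"; raising the level is then a one-line, reviewable change.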

Metrics to Track Over Time

Measure these per sprint or release cycle and share them with the team:

  • New findings introduced — are developers adding new vulnerabilities?
  • Mean time to remediate (MTTR) — how long does it take to close a finding?
  • False positive rate — is the ruleset improving or degrading?
  • Suppressed findings count — are suppressions growing uncontrolled?
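MTTR in particular is easy to compute from triage data. The sketch below uses an opened/closed record shape of our own invention (not any tool's export format) with illustrative dates:

```python
from datetime import date

# Illustrative triaged findings; "closed" is None while a finding is open.
findings = [
    {"id": "F-1", "opened": date(2024, 5, 1), "closed": date(2024, 5, 3)},
    {"id": "F-2", "opened": date(2024, 5, 2), "closed": date(2024, 5, 9)},
    {"id": "F-3", "opened": date(2024, 5, 4), "closed": None},  # still open
]

def mean_time_to_remediate(findings):
    """Average days from open to close, over closed findings only."""
    days = [(f["closed"] - f["opened"]).days
            for f in findings if f["closed"] is not None]
    return sum(days) / len(days) if days else None

print(mean_time_to_remediate(findings))  # (2 + 7) / 2 = 4.5 days
```

Excluding still-open findings keeps the metric honest; a separate "open findings age" metric can cover those.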

Surfacing these metrics in a team dashboard creates accountability without blame and gives technical leadership objective evidence of a security posture improving over time.

Re-Scan After Major Dependency Upgrades

Major framework or library upgrades can introduce new vulnerability patterns or make existing suppressions incorrect. Schedule a fresh, un-filtered scan after significant dependency changes, review the delta against your previous baseline, and update suppressions accordingly. Treat it like updating snapshot tests after an intentional UI change.


Understanding SAST Results and Prioritisation

A SAST tool is only useful if its output drives action. An unread report has the same value as no report at all. Learning to interpret, triage, and prioritise findings efficiently is therefore as important as setting up the tool in the first place.

Anatomy of a SAST Finding

Most SAST tools report findings with a common set of fields. Understanding each field helps you make a quick, well-informed triage decision:

  • Rule ID / Check name: A stable identifier linking to documentation explaining what the tool detected and why it matters. Always read the linked documentation before dismissing or suppressing a finding.
  • Severity: An assessment of how serious the vulnerability would be if exploited. Common scales are CRITICAL, HIGH, MEDIUM, LOW, and INFO. Severity reflects the potential impact of the vulnerability in isolation — it does not account for whether the code is reachable in production.
  • Confidence: How certain the tool is that the flagged code is actually vulnerable, rather than a pattern that merely resembles a vulnerability. High-confidence findings warrant immediate investigation; medium and low-confidence findings require more contextual judgement.
  • Location: The file path, line number, and — in better tools — the full call trace leading to the vulnerability. A complete trace (sometimes called a “flow”) shows you not just where the vulnerable pattern exists, but where user-controlled data flows in from and where it reaches a dangerous sink.
  • CWE / OWASP mapping: Most production-grade tools map findings to the Common Weakness Enumeration or the OWASP Top 10. These mappings help you understand the class of vulnerability, estimate regulatory relevance, and communicate risk to non-technical stakeholders using industry-standard language.
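To make these fields concrete, here is a sketch of pulling them out of a JSON report. The record shape loosely mirrors Semgrep's JSON output, but exact field names vary by tool and should be treated as illustrative:

```python
import json

# A simplified finding record; real schemas differ between scanners.
report = json.loads("""
{
  "results": [
    {
      "check_id": "python.lang.security.audit.dangerous-subprocess-use",
      "path": "app/tasks.py",
      "start": {"line": 42},
      "extra": {
        "severity": "HIGH",
        "metadata": {"confidence": "HIGH", "cwe": ["CWE-78"], "owasp": ["A03:2021"]}
      }
    }
  ]
}
""")

# One triage-ready line per finding: severity, confidence, rule, location, CWE.
for r in report["results"]:
    meta = r["extra"]["metadata"]
    print(f'{r["extra"]["severity"]:8} {meta["confidence"]:6} '
          f'{r["check_id"]} at {r["path"]}:{r["start"]["line"]} ({", ".join(meta["cwe"])})')
```

Collapsing each finding into one line like this makes a long report scannable before any deep triage begins.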

Triage Workflow

When a new SAST report lands, resist the urge to start reading line by line. Instead, apply a systematic triage process:

Step 1 — Filter by severity. Start with CRITICAL and HIGH findings only. Resolve or suppress every finding in those buckets before spending any time on MEDIUM or below. A single unaddressed critical vulnerability in production is more damaging than a hundred medium-severity warnings.

Step 2 — Check reachability. A finding in dead code, a private internal utility never exposed to the internet, or a code path gated behind authenticated admin roles carries a very different risk profile from a finding in a public API endpoint. Ask: can an unauthenticated attacker reach this code? If not, document the reason and downgrade the priority.

Step 3 — Reproduce the issue mentally. Walk through the call chain manually. If you can construct a realistic attack scenario — even in your head — the finding is likely a true positive. If the code pattern matches the rule but the flow is impossible given surrounding constraints, it is a false positive that should be suppressed with a justification comment.

Step 4 — Assign and track. Create a ticket in your project management system for every confirmed true positive, linked to the SAST finding. Untracked findings get forgotten. Tracked findings get fixed.
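Step 1 is mechanical enough to automate. A minimal sketch, assuming findings carry a severity field on the common five-level scale (adjust the field name and ordering to your scanner's schema):

```python
SEVERITY_ORDER = {"CRITICAL": 0, "HIGH": 1, "MEDIUM": 2, "LOW": 3, "INFO": 4}

def triage_queue(findings, cutoff="HIGH"):
    """Return findings at or above the cutoff severity, worst first."""
    limit = SEVERITY_ORDER[cutoff]
    kept = [f for f in findings if SEVERITY_ORDER[f["severity"]] <= limit]
    return sorted(kept, key=lambda f: SEVERITY_ORDER[f["severity"]])

findings = [
    {"id": "F-3", "severity": "MEDIUM"},
    {"id": "F-1", "severity": "CRITICAL"},
    {"id": "F-2", "severity": "HIGH"},
]
print([f["id"] for f in triage_queue(findings)])  # ['F-1', 'F-2']
```

Everything the filter drops is not ignored, only deferred until the top two buckets are empty.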

Communicating Risk to Stakeholders

Developers and security engineers understand CVSS scores and CWE identifiers, but product managers and executives think in terms of business outcomes. When escalating SAST findings upward, translate technical severity into business impact:

Instead of: “We have a B105 Bandit finding of medium confidence in the authentication module.”

Say: “A hardcoded credential was found in the authentication service. If this credential were extracted from a leaked build artifact, an attacker could bypass login for all user accounts. Remediation takes approximately two hours.”

This framing provides a clear risk narrative, a concrete impact, and an estimated cost to fix — everything a decision-maker needs to prioritise the work correctly. SAST findings that are communicated well get fixed faster because they compete effectively for engineering time.


Advanced SAST Techniques

Once basic SAST integration is running smoothly and the team has developed a rhythm of scanning, triaging, and fixing, there are several advanced techniques that significantly improve coverage and reduce false positive rates.

Differential Scanning

Running a full scan on every commit is thorough but slow on large codebases. Differential scanning — scanning only the files changed in a pull request — reduces scan time dramatically while still catching vulnerabilities introduced by the current change.

Semgrep supports differential scanning natively via its --baseline-commit flag, which reports only findings absent from a chosen baseline commit. GitHub code scanning with CodeQL similarly surfaces only newly introduced alerts on pull requests. SonarQube builds the concept of “new code” into its model — findings are reported only against code added or modified since a configurable baseline, making pull request feedback focused and noise-free.

Configuring differential scanning correctly requires a consistent baseline reference. In practice, the cleanest approach is to use the merge base commit — the point where the feature branch diverged from the main branch — as the baseline. Any findings present in the baseline are filtered from the PR report, so developers only see issues they introduced.
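Conceptually, baseline filtering is a set difference over stable finding identities. A minimal sketch (field names are hypothetical; real tools use a content-based fingerprint for exactly the reason noted in the comment):

```python
def new_findings(baseline, current):
    """Findings in the current scan that were absent from the baseline scan.

    Identity is a (rule_id, path, fingerprint) tuple. A content-based
    fingerprint survives line-number shifts from unrelated edits, which
    a raw line number would not.
    """
    seen = {(f["rule_id"], f["path"], f["fingerprint"]) for f in baseline}
    return [f for f in current
            if (f["rule_id"], f["path"], f["fingerprint"]) not in seen]

baseline = [{"rule_id": "sqli", "path": "db.py", "fingerprint": "ab12"}]
current = baseline + [{"rule_id": "xss", "path": "views.py", "fingerprint": "cd34"}]
print(new_findings(baseline, current))  # only the xss finding remains
```

This is the delta a developer sees on a pull request: their changes, nothing inherited.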

Taint Analysis

Basic pattern-matching SAST catches syntactic vulnerabilities — dangerous function calls, hardcoded strings, and obviously insecure patterns. Taint analysis goes deeper by tracking the flow of untrusted data from its entry point (an HTTP request parameter, a file read, a database value) through the application’s call graph to a dangerous sink (an SQL query, a shell command, a rendered HTML response).

Tools like Semgrep Pro, SonarQube, and Checkmarx perform inter-procedural taint analysis. A taint rule defines:

  • Sources: Where untrusted data enters the system. Common examples include request.query, request.body, os.environ, and file reads.
  • Sanitisers: Functions that clean or validate data and break the taint chain. Examples include parameterised queries, HTML escape functions, and schema validators.
  • Sinks: Dangerous functions where tainted data could cause harm. Examples include execute() (SQL), exec() (shell), innerHTML (XSS), and pickle.loads() (deserialisation).

A finding from taint analysis is significantly more reliable than a simple pattern match, because it confirms that unsanitised user data actually reaches a dangerous operation. The trade-off is that taint analysis is computationally more expensive and requires more configuration to define sources and sinks for your specific frameworks.
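The source-to-sink flow is easy to see in miniature. In the sketch below, string concatenation carries untrusted input into the execute() sink, while the parameterised variant acts as a sanitiser and breaks the chain — exactly the distinction a taint engine draws:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

user_input = "alice' OR '1'='1"  # source: untrusted data from a request

# Tainted flow a taint engine flags: source -> concatenation -> execute() sink.
vulnerable_query = "SELECT * FROM users WHERE name = '" + user_input + "'"
print(len(conn.execute(vulnerable_query).fetchall()))  # 1 — the OR clause matched every row

# Sanitised flow: the parameterised query breaks the taint chain.
safe_rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(len(safe_rows))  # 0 — the payload is treated as a literal string
```

A pattern matcher sees two calls to execute(); a taint engine sees that only the first one is reachable by attacker-controlled data.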

Secrets Detection as Part of Your SAST Pipeline

SAST tools increasingly include or integrate with secrets detection scanners. Unlike simple pattern matching for strings that look like API keys, modern secrets scanners use entropy analysis combined with pattern recognition to identify credentials with lower false positive rates.

Tools to consider alongside your main SAST scanner include:

  • Gitleaks: Fast Git-history secrets scanner. Run it across the full commit history once, then in pre-commit and CI modes going forward.
  • TruffleHog: Scans commit history and detects high-entropy strings. Supports verification against real API endpoints to confirm whether found credentials are still active.
  • GitHub Secret Scanning: Built into GitHub repositories, it scans pushes in real time and notifies the repository owner when known credential patterns (AWS keys, Stripe keys, etc.) are detected.

Running secrets detection separately from SAST is a sensible architecture because secrets scanning has different latency requirements — it needs to run in seconds at pre-commit time — while comprehensive SAST scans can afford to run over minutes in a CI pipeline.
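The entropy analysis these scanners rely on can be sketched in a few lines. A Shannon-entropy score alone is noisy on short strings — which is precisely why real tools pair it with pattern recognition:

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Bits of entropy per character of s; random keys score higher than words."""
    counts = Counter(s)
    return -sum((n / len(s)) * math.log2(n / len(s)) for n in counts.values())

# A readable identifier versus an AWS-style access key ID:
# the key scores higher, but on short strings the gap is modest,
# so entropy is a signal to combine with patterns, not a verdict.
print(shannon_entropy("AKIAIOSFODNN7EXAMPLE")
      > shannon_entropy("database_connection_string"))  # True
```

Scanners typically apply a per-charset threshold (hex, base64) and then confirm with known credential formats before reporting.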

Using SAST to Enforce Architecture Rules

Beyond vulnerability detection, SAST tools can enforce higher-level architectural constraints. Custom Semgrep rules can detect:

  • Forbidden import patterns: Ensuring internal services never import directly from each other’s database layers, enforcing clear domain boundaries.
  • Framework-specific anti-patterns: Detecting direct use of a deprecated internal API that has been replaced by a more secure abstraction.
  • Dependency injection violations: Flagging instantiation of concrete classes where an interface is expected, which can mask hard-to-detect security misconfigurations.

Encoding architectural decisions as SAST rules turns the scanner into a living specification of your codebase’s intended structure. When a new developer inadvertently violates an architectural boundary, they learn about it in seconds — not in a quarterly architecture review.
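In practice you would encode such a boundary as a Semgrep rule, but the underlying idea is simple enough to sketch with Python's standard ast module (the forbidden module names below are hypothetical):

```python
import ast

# Hypothetical internal database layers no other service may import from.
FORBIDDEN = {"orders.db", "billing.db"}

def forbidden_imports(source, forbidden=FORBIDDEN):
    """Return modules imported in `source` that violate the boundary rule."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            hits += [a.name for a in node.names if a.name in forbidden]
        elif isinstance(node, ast.ImportFrom) and node.module in forbidden:
            hits.append(node.module)
    return hits

code = "from orders.db import OrderRepository\nimport billing.api\n"
print(forbidden_imports(code))  # ['orders.db']
```

Run over every file in CI, a check like this turns a boundary that used to live in a wiki page into a failing build.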


Building a SAST-First Engineering Culture

The most technically sophisticated SAST configuration is worthless if developers perceive it as an obstacle rather than a tool that helps them write better code. Sustainable security improvement requires cultural investment alongside technical investment.

Security Champions Programme

A security champion is a developer on a product team who takes extra interest in security and acts as a liaison between the security engineering team and product development. Security champions are not full-time security engineers — they are primarily developers who happen to care deeply about secure code.

In the context of SAST, security champions play three key roles. First, they own the triage process for their team’s SAST findings, preventing the backlog from growing unchecked. Second, they write custom SAST rules that encode their team’s domain-specific security requirements — they know the codebase well enough to know which patterns are dangerous and which are benign. Third, they mentor teammates on how to read findings, understand why a pattern is risky, and write the fix correctly rather than just suppressing the warning.

Building a champion programme requires investment: dedicated time for champions to learn, a community of champions across teams who share rules and lessons learned, and recognition that security contributions — writing rules, fixing findings, reviewing others’ security decisions — are valued engineering work that counts toward career progression.

Making SAST Findings Visible

Invisible metrics do not drive behaviour. If SAST findings live in a separate tool that developers never open, adoption stalls. Strategies to improve visibility include:

  • Inline code review comments: Tools like Semgrep Cloud and SonarCloud post findings as pull request comments, so findings appear in the exact same place where code review happens. Developers do not need to context-switch to a separate security dashboard.
  • IDE notifications: SonarLint and the Semgrep IDE plugin surface findings as inline warnings in the code editor. A developer sees the finding the moment they write the vulnerable pattern — before the code is even saved.
  • Team-level dashboards: A lightweight dashboard showing each team’s open finding count, MTTR, and false positive rate creates friendly accountability. Teams that see their peers improving tend to invest more in their own posture.
  • Sprint retrospectives: Include a brief SAST review in each retrospective: which findings were fixed, which were suppressed, and whether the suppression justifications were sound. This keeps the practice present in the team’s regular rhythm.

Celebrating Fixes, Not Just Penalising Findings

Security culture suffers when SAST is used only to penalise — to block releases, to generate compliance failures, to assign blame. Sustainable improvement requires that fixing a security finding be treated as a valued contribution.

Explicitly acknowledge when a developer finds and fixes a SAST finding, especially a genuinely subtle taint vulnerability or a complex custom rule they wrote to protect a shared API. Recognising this work signals that the organisation views secure coding as a skill to develop, not an inconvenience to tolerate. Over time, this shapes a team that catches vulnerabilities not because a scanner forced them to, but because they understand why it matters.


SAST in the Secure Software Development Lifecycle

SAST does not operate in isolation. Its maximum value is realised when it is one consistent layer within a broader Secure SDLC that runs from planning through to production monitoring.

```mermaid
flowchart TD
    A[Requirements and Threat Modelling] --> B[Secure Design Review]
    B --> C[Development]
    C --> D["SAST in IDE<br/>real-time feedback"]
    D --> E["Pre-Commit Hook<br/>fast targeted scan"]
    E --> F["Pull Request CI<br/>full SAST scan and Quality Gate"]
    F -->|pass| G[Code Review]
    F -->|fail| C
    G --> H[Staging Deployment]
    H --> I["DAST and IAST<br/>dynamic scanning"]
    I --> J[Security Sign-Off]
    J --> K[Production Release]
    K --> L["Runtime Monitoring<br/>SIEM and WAF"]
    L -->|new threat intelligence| A
```

At each touch point, the nature of the security activity changes:

| SDLC Phase | Security Activity | Example Tooling |
| --- | --- | --- |
| Requirements | Threat modelling | STRIDE, Attack Trees |
| Design | Architecture review | Manual review, IaC scanning (Checkov, Trivy) |
| Development | SAST (IDE) | Semgrep IDE plugin, SonarLint |
| Pre-commit | SAST (fast rules) | pre-commit + Semgrep, Bandit, ESLint |
| Pull request | SAST (full scan, quality gate) | Semgrep CI, SonarCloud, Bandit |
| Staging | DAST | OWASP ZAP, Burp Suite |
| Production | Runtime monitoring | Datadog, Elastic SIEM, WAF |

SAST is most effective in the leftmost stages — IDE and pre-commit — because feedback in seconds is far more actionable than feedback after a 20-minute pipeline run. A developer who sees a finding while typing is dramatically more likely to fix it cleanly than one who receives an email hours after pushing.

The relationship between SAST and DAST is complementary, not competitive. SAST finds what the source code says. DAST finds what the running application actually does under realistic conditions. Injection vulnerabilities are well-suited to SAST detection; complex authentication bypass bugs caused by session state interactions are easier to discover dynamically. Running both provides the highest confidence.

Incorporating SAST results into your threat models creates a virtuous feedback loop: findings from scanning inform which attack surfaces to prioritise in the next threat model review, which in turn drives tighter rules for the following development cycle. Over time this shifts the team’s posture from reactive to proactive — catching entire classes of vulnerability before a single line of vulnerable code is ever committed.

It is worth noting that SAST adoption is not a binary event. Most successful teams start with a single tool scanning one language in one pipeline, achieve a stable false positive rate, and then expand incrementally. Adding too many tools simultaneously creates a flood of unaddressed findings and overwhelms the team. A focused rollout — one tool, one pipeline stage, measured by one key metric — gives teams the space to build good habits before scaling. Once the core loop of scan, triage, fix, suppress, and tune is second nature, adding additional tools, rules, and integrations compounds the investment rather than straining it.

The ultimate goal of embedding SAST throughout the SDLC is not a zero-finding report. It is a development culture where security is a natural, expected, and respected part of writing code — where developers think about data flows as they design functions, where architectural boundaries are encoded in automated rules, and where a failing security check is as unremarkable and correctable as a failing unit test. That cultural shift, enabled by the right tooling and the right processes, is what makes an application genuinely and durably secure.

Conclusion

Static Application Security Testing tools are invaluable for building secure, high-quality applications. By integrating SAST tools into your development workflow, you can identify vulnerabilities early, ensure compliance, and deliver reliable solutions.

Start leveraging the tools and practices outlined in this guide to enhance your application’s security posture and safeguard against evolving threats.