
🧪 SAST/DAST & PenTesting

Static and dynamic application security testing, penetration testing methodologies, red teaming, bug bounty programs, and security assessment tools.

Security testing validates that applications, systems, and networks are resilient against attacks. It combines automated scanning (SAST/DAST/IAST) with manual expertise (penetration testing) to identify vulnerabilities before adversaries do. A mature security testing program integrates into CI/CD pipelines, runs continuously, and includes both internal assessments and external engagements.


Key Concepts

Bug Bounty Programs

Crowdsourced security testing where external researchers find and report vulnerabilities for rewards. Platforms: HackerOne, Bugcrowd, Synack. Requires clear scope, rules of engagement, and responsible disclosure policies.

DAST (Dynamic Analysis)

Tests running applications by sending HTTP requests and analyzing responses. Discovers runtime vulnerabilities — authentication flaws, misconfigurations, injection attacks. Tools: Burp Suite, OWASP ZAP, Acunetix, Nuclei.
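At its core, a DAST scanner injects a canary payload into request parameters and inspects the response for unsafe reflection. A minimal sketch in Python using only the standard library; the target URL and parameter name would normally come from crawling and are placeholders here:

```python
import urllib.parse
import urllib.request

# Distinctive canary string; a real scanner rotates many payloads
PAYLOAD = "<xss-canary-7f3a>"

def is_reflected(body: str, payload: str = PAYLOAD) -> bool:
    """True when the canary appears unencoded in the response body."""
    return payload in body

def probe(url: str, param: str) -> bool:
    """Send the canary in a query parameter and test for reflection."""
    target = f"{url}?{urllib.parse.urlencode({param: PAYLOAD})}"
    with urllib.request.urlopen(target, timeout=10) as resp:
        return is_reflected(resp.read().decode("utf-8", errors="replace"))
```

Real scanners layer context-aware payloads, encoding checks, and DOM analysis on top of this basic reflect-and-detect loop.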

IAST (Interactive Analysis)

Combines SAST and DAST by instrumenting the application runtime. Provides real-time analysis with context about code paths and data flows. Tools: Contrast Security, Seeker.

Penetration Testing

Manual ethical hacking to discover complex vulnerabilities that automated tools miss. Follows methodologies: OWASP Testing Guide, PTES, OSSTMM. Includes black-box, gray-box, and white-box approaches.

Red Teaming

Simulates full adversary campaigns against an organization — not just technical exploits but also social engineering, physical access, and supply chain attacks. Tests people, processes, and technology holistically.

SAST (Static Analysis)

Analyzes source code, bytecode, or binaries without executing the application. Finds vulnerabilities early in the SDLC — SQL injection, XSS, buffer overflows, insecure crypto. Tools: SonarQube, Checkmarx, Semgrep, Fortify.
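The classic finding SAST reports is tainted input flowing into a query string (CWE-89). A self-contained sketch using sqlite3 shows the unsafe pattern a scanner flags and the parameterized fix it recommends:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # SAST finding: untrusted input concatenated into SQL (injection path)
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{username}'").fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver binds the value, no injection path
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# The classic payload returns every row through the unsafe path...
assert find_user_unsafe(conn, "' OR '1'='1") == [(1,)]
# ...but matches nothing when bound as a literal parameter
assert find_user_safe(conn, "' OR '1'='1") == []
```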

Security Testing Pipeline

📝 Code Commit (Developer Push)
↓
🔍 SAST Scan (SonarQube / Semgrep / Checkmarx)
↓
🏗️ Build & Deploy to Staging
↓
🌐 DAST Scan (Burp Suite / OWASP ZAP / Nuclei)
↓
🎯 Pen Test / Red Team Engagement
↓
📊 Report & Remediate → Production Deploy

Security Testing in CI/CD Pipeline

From code commit through SAST, DAST, pen testing, to production

SAST vs DAST Comparison

| Aspect | SAST | DAST |
| --- | --- | --- |
| When | During development (code review) | After deployment (running app) |
| Access | Source code / bytecode | HTTP interface (black-box) |
| Speed | Fast (minutes) | Slower (hours) |
| False Positives | Higher | Lower |
| Finds | Code-level bugs (injection, XSS) | Runtime issues (auth, config) |
| Languages | Language-specific | Language-agnostic |
| CI/CD Fit | Pre-build gate | Post-deploy gate |
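As a sketch of the "pre-build gate" idea, the snippet below parses a scanner report (the JSON shape is hypothetical, not any specific tool's format) and fails the pipeline when Critical or High findings are present:

```python
import json

# Severities that break the build; tune per policy
BLOCKING = {"critical", "high"}

def gate(report_json: str):
    """Return (passed, blocking_finding_titles) for a scan report."""
    findings = json.loads(report_json).get("findings", [])
    blockers = [f["title"] for f in findings
                if f["severity"].lower() in BLOCKING]
    return (len(blockers) == 0, blockers)

report = json.dumps({"findings": [
    {"title": "SQL Injection in /login", "severity": "Critical"},
    {"title": "Missing security headers", "severity": "Low"},
]})
passed, blockers = gate(report)  # passed == False -> exit nonzero in CI
```

In a real pipeline the script would `sys.exit(1)` when `passed` is false so the CI runner marks the stage failed.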

🔍 11 Types of Security Assessments — Choose the Right One

Choosing the right assessment depends on your objective — regulatory compliance, breach readiness, cloud migration, or vendor risk.

🛡️ Application Security Review

Secure SDLC validation, API security testing, code review, and supply chain risk analysis. Perform before major app launches handling regulated or customer data.

🏗️ Architecture Security Review

Threat modeling, zero trust design validation, AI system risk review, and resilience analysis. Use when designing new platforms or integrating AI into your environment.

☁️ Cloud Security Assessment

Review of IAM, zero trust posture, container security, logging, and multi-cloud configurations. Essential for hybrid or multi-cloud environments or rapid cloud scaling.

📋 Compliance & Gap Assessment

Evaluate controls against ISO 27001, NIST CSF, GDPR, PCI DSS. Use when preparing for audits, certifications, regulatory reviews, or client due diligence requests.

🏢 Enterprise Risk Assessment

Org-wide view of digital assets, cyber exposure, and board-level ROI justification. Use when you need board-level visibility or need to prioritize cyber investment across the organization.

🚨 Incident Response Readiness

Tabletop exercises, ransomware simulations, breach response testing, and playbook validation. Run annually, post-incident, or after organizational changes.

🎯 Penetration Test

Human-led exploitation of networks, systems, APIs, and applications to validate real-world impact. Use for new deployments, major releases, and compliance requirements.

🎣 Phishing & Human Risk Assessment

Controlled phishing simulations and social engineering testing for measurable human risk metrics. Validates security awareness training effectiveness.

🟥 Red Team Exercise

Goal-driven adversary simulation across people, process, cloud, and endpoint controls. Use when you want to test detection, response, and SOC maturity under realistic attack scenarios.

🔗 Third-Party & Supply Chain Risk

Evaluation of vendor security posture, software supply chain exposure, and concentration risk. Critical when you rely on SaaS platforms or globally distributed suppliers.

🔍 Vulnerability Assessment

Automated scanning of known CVEs, misconfigurations, and exposed services. Run monthly or continuously to guide remediation and reduce your attack surface.

💡 Interview Question

You're asked to design a security testing program for a mid-size organization. Which assessments would you prioritize and why?

A layered approach:

1) START WITH VULNERABILITY ASSESSMENTS — automated, low cost, continuous. Deploy Qualys/Tenable to scan all assets monthly. This is your baseline.

2) COMPLIANCE & GAP ASSESSMENT — if regulated (finance, healthcare), this is non-negotiable. Map controls to NIST CSF or ISO 27001. Identify gaps before auditors do.

3) PENETRATION TESTING — quarterly or after major releases. Start with external network + web app pentests. Validates that vulnerabilities found by scanners are actually exploitable.

4) APPLICATION SECURITY REVIEW — integrate SAST/DAST into CI/CD. Every code change gets scanned.

5) PHISHING SIMULATION — monthly campaigns targeting all employees. Measure click rate, report rate.

6) RED TEAM — annually for mature orgs. Tests detection and response, not just prevention.

7) THIRD-PARTY RISK — assess all critical vendors. SolarWinds taught us supply chain is a top attack vector.

8) CLOUD SECURITY — if cloud-first, quarterly CSPM reviews. Misconfiguration is consistently among the leading causes of cloud breaches.

9) IR READINESS — annual tabletop exercises. Test playbooks before you need them.

Prioritize by: regulatory requirements → attack surface exposure → organizational maturity → budget.

Interview Preparation

💡 Interview Question

What is the difference between SAST and DAST, and when would you use each?

SAST (Static Application Security Testing) analyzes source code without running the app — it finds vulnerabilities like SQL injection, XSS, and hardcoded secrets early in development. DAST (Dynamic Application Security Testing) tests a running application from the outside, discovering runtime issues like authentication bypass, misconfigurations, and session management flaws. Best practice: use SAST as a pre-commit/pre-build gate for fast feedback, and DAST as a post-deploy gate against staging environments. Combine both with IAST for the most comprehensive coverage. Neither replaces manual pen testing for business logic flaws.

💡 Interview Question

Walk me through a penetration testing engagement from start to finish.

1) Scoping & Rules of Engagement: Define targets, exclusions, timeline, and communication protocols.

2) Reconnaissance: OSINT, DNS enumeration, technology fingerprinting, social media research.

3) Scanning: Port scanning (Nmap), vulnerability scanning (Nessus), web crawling (Burp Suite).

4) Exploitation: Attempt to exploit discovered vulnerabilities — injection, auth bypass, privilege escalation. Use frameworks like Metasploit, plus manual testing.

5) Post-Exploitation: Assess impact — lateral movement, data access, persistence mechanisms.

6) Reporting: Document findings with severity (CVSS), evidence (screenshots, PoC), and remediation recommendations.

7) Remediation Verification: Re-test after fixes to confirm issues are resolved.
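The scanning phase (step 3) boils down to probing which ports accept connections. A minimal TCP connect scan, the same technique as Nmap's `-sT`, can be sketched with the standard library; only run it against hosts within your authorized scope:

```python
import socket

def scan_ports(host: str, ports: list, timeout: float = 0.5) -> list:
    """Return the subset of ports that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on a successful TCP handshake
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

Usage: `scan_ports("127.0.0.1", [22, 80, 443])`. Real scanners add SYN scanning, service fingerprinting, and rate limiting, but the open/closed decision is this handshake.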

💡 Interview Question

How would you set up a bug bounty program?

1) Define scope: which assets are in-scope (web apps, APIs, mobile) and out-of-scope (third-party, production databases).

2) Set rules of engagement: no DoS, no data destruction, no social engineering unless approved.

3) Choose a platform: HackerOne, Bugcrowd, or self-hosted.

4) Create a vulnerability disclosure policy (VDP).

5) Define reward tiers: Critical ($2K-$10K+), High ($500-$2K), Medium ($100-$500), Low ($50-$100).

6) Assign a triage team to validate, deduplicate, and prioritize reports.

7) Establish SLAs for response and remediation.

8) Start with a private program, then expand to public.
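The reward tiers in step 5 are usually keyed to severity. A sketch mapping CVSS v3 base scores to the example tiers above; real programs tune both thresholds and amounts:

```python
def reward_tier(cvss: float) -> str:
    """Map a CVSS v3 base score to an example bounty tier."""
    if cvss >= 9.0:
        return "Critical: $2K-$10K+"
    if cvss >= 7.0:
        return "High: $500-$2K"
    if cvss >= 4.0:
        return "Medium: $100-$500"
    return "Low: $50-$100"
```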

💡 Interview Question

Walk through conducting SAST scans using Veracode to identify vulnerabilities in source code.

SAST analyzes source code without executing it to find vulnerabilities early in the SDLC. The Veracode process:

1) PREPARATION
  • Compile the app — Veracode needs compiled artifacts (.war, .jar, .dll)
  • For interpreted languages (Python, JS, PHP), package source into a ZIP
  • Exclude test files and third-party libs (those go through SCA)
2) UPLOAD & SCAN
  • Upload via Veracode UI, CLI, or API
  • In CI/CD, use Pipeline Scan (~90 seconds for PR checks) or Policy Scan (full depth for release gates)
3) ANALYSIS
  • Veracode performs data flow analysis and taint tracking — traces untrusted input to dangerous operations
  • Maps findings to CWE IDs
4) REMEDIATION
  • Review findings with CWE ID, file/line number, data flow trace, and fix guidance
  • Triage: confirm, mitigate, or mark false positive
  • Track Veracode Level (VL1-VL5) for compliance
5) CI/CD
  • Pipeline Scan in PRs, Policy Scan on main, break build on Critical/High findings

💡 Interview Question

How do you conduct SCA scans using Veracode to identify vulnerabilities in open-source components?

SCA (Software Composition Analysis) identifies vulnerabilities in third-party and open-source libraries — the code you didn't write but ship anyway (~80% of modern apps). Process:

1) HOW IT WORKS

Analyzes manifest files (package.json, pom.xml, requirements.txt, go.sum), builds full dependency tree including transitive dependencies, cross-references against NVD and Veracode's proprietary DB.

2) INTEGRATION

Agent-Based Scan in CI/CD, Upload Scan alongside SAST, IDE Plugin for real-time alerts, SCM Integration for auto-scanning PRs.

3) WHAT IT FINDS

Known CVEs (e.g., Log4Shell), license risks (GPL vs MIT), outdated libraries, and vulnerable methods your code actually calls.

4) REMEDIATION
  • Auto-suggests safe version upgrades, generates SBOM in CycloneDX/SPDX format
  • KEY DISTINCTION: SAST finds bugs in YOUR code; SCA finds vulnerabilities in LIBRARIES you use
  • Both are essential
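The core SCA matching step can be sketched as follows. The advisory data is an illustrative stand-in for feeds like the NVD or OSV, and the exact-version match is a simplification; real tools evaluate version ranges and transitive dependencies:

```python
ADVISORIES = {
    # package -> (affected version, advisory id); illustrative data only
    "log4j-core": ("2.14.1", "CVE-2021-44228"),
}

def parse_requirements(text: str) -> dict:
    """Parse 'name==version' manifest lines into a {name: version} map."""
    deps = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "==" in line:
            name, _, version = line.partition("==")
            deps[name.strip()] = version.strip()
    return deps

def find_vulnerable(deps: dict) -> list:
    """Return advisory IDs for dependencies pinned at an affected version."""
    hits = []
    for name, version in deps.items():
        if name in ADVISORIES and ADVISORIES[name][0] == version:
            hits.append(ADVISORIES[name][1])
    return hits
```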
💡 Interview Question

Describe your experience using Burp Suite for manual testing, including authenticated scans and reducing false negatives.

Burp Suite is the industry-standard manual testing tool.

1) SETUP
  • Configure browser proxy to Burp (127.0.0.1:8080), install CA cert for HTTPS interception, set target scope
  • For authenticated scanning: record login sequence in session handling rules or use cookie/token injection
2) MANUAL WORKFLOW
  • Proxy Intercept to modify requests in real-time
  • Repeater to replay/modify requests for SQLi, XSS, SSRF, IDOR
  • Intruder for automated fuzzing with wordlists
  • Comparer to diff responses
3) REDUCING FALSE NEGATIVES
  • Automated scanners miss business logic flaws — Burp's manual tools catch these
  • Authenticated scans reach deeper functionality
  • Secondary scans with different user roles test horizontal/vertical privilege escalation
  • Burp Collaborator for out-of-band testing (blind SSRF, blind XSS)
  • Key extensions: ActiveScan++, Autorize, Logger++
  • Automated DAST catches ~60-70%; manual Burp testing catches the remaining business logic flaws that scanners fundamentally cannot detect
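The privilege-escalation check that extensions like Autorize automate reduces to replaying a request under a second session and diffing the responses. A sketch of that heuristic; the verdict still needs manual confirmation:

```python
def looks_like_idor(owner_status: int, owner_body: str,
                    other_status: int, other_body: str) -> bool:
    """Heuristic: a lower-privileged session receiving the same 200
    response as the resource owner suggests broken access control."""
    return (other_status == 200
            and other_status == owner_status
            and other_body == owner_body)
```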
💡 Interview Question

How do you analyze scan results, identify root causes, and collaborate with developers to implement effective remediations?

This is a core AppSec workflow.

1) TRIAGE & PRIORITIZE
  • Filter false positives by tracing data flow
  • Prioritize by CVSS + EPSS + business context (internet-facing? handles PII?)
  • Group findings by root cause — 50 XSS findings might stem from 1 missing output encoding library
2) ROOT CAUSE ANALYSIS

Trace back to WHY the vulnerability exists — missing input validation framework? No parameterized query pattern? Insecure defaults? Often 1 root cause produces dozens of findings.

3) DEVELOPER COLLABORATION
  • Present in developer-friendly terms — show the vulnerable code, attack scenario, and fix (not just 'CWE-89')
  • Use IDE integrations (Veracode Greenlight, SonarLint)
  • Conduct pair-programming 'fix-it' sessions
  • Create reusable secure coding patterns
  • Never throw findings over the wall
4) REMEDIATION
  • WAF virtual patches for immediate protection
  • Code-level — parameterized queries, output encoding, CSP headers
  • Architectural — centralized validation middleware
5) METRICS
  • Track MTTR by severity
  • SLAs — Critical: 24-48 hrs, High: 7 days, Medium: 30 days
  • Measure recurring vulnerability reduction over time
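The metrics in step 5 can be computed directly from finding records. A sketch assuming hypothetical record fields and the SLA windows listed above:

```python
from datetime import date
from statistics import mean

# SLA windows from the text (days to remediate, by severity)
SLA_DAYS = {"critical": 2, "high": 7, "medium": 30}

def mttr_by_severity(findings: list) -> dict:
    """Mean days from detection ('found') to fix ('fixed'), per severity."""
    buckets = {}
    for f in findings:
        buckets.setdefault(f["severity"], []).append((f["fixed"] - f["found"]).days)
    return {sev: mean(days) for sev, days in buckets.items()}

def sla_breaches(findings: list) -> list:
    """Findings whose fix time exceeded the SLA for their severity."""
    return [f for f in findings
            if (f["fixed"] - f["found"]).days > SLA_DAYS.get(f["severity"], 30)]
```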
💡 Interview Question

How do you review and approve false positives and mitigated-by-design requests for DAST, SAST, and SCA findings?

False positive triage and mitigated-by-design approvals are critical to maintaining scanner credibility and developer trust — if the security team marks everything as Must Fix without nuance, developers stop paying attention.

1) UNDERSTANDING THE CATEGORIES
  • False Positive — the scanner flagged something that is genuinely not a vulnerability
  • Example — SAST flags SQL injection but the code uses a parameterized query through an ORM, so injection is impossible
  • Mitigated by Design — the vulnerability technically exists in the code path but architectural controls make exploitation impossible
  • Example — SAST flags hardcoded credentials in a test file that is excluded from production builds, or DAST finds a reflected XSS but a strict Content Security Policy blocks script execution
2) SAST FALSE POSITIVE REVIEW
  • Examine the data flow the tool traced — follow the source (user input) to the sink (dangerous function)
  • Verify if input validation, encoding, or parameterization exists along the path that the scanner missed
  • Common SAST false positives — ORM-generated queries flagged as SQL injection, encoded output flagged as XSS, dead code paths, test files
3) DAST FALSE POSITIVE REVIEW
  • Reproduce the finding manually — send the same payload the scanner used and verify if the vulnerability actually triggers
  • Check if a WAF, CSP, or application-level control blocks the attack in practice
4) SCA FALSE POSITIVE REVIEW
  • Check if the vulnerable function in the library is actually called by the application (reachability analysis)
  • Verify if the vulnerability applies to the deployment context
5) APPROVAL WORKFLOW
  • Developer submits a mitigation request with evidence — code snippets showing the control, architecture diagrams, or test results proving non-exploitability
  • Security engineer reviews the evidence independently — never approve based on developer assertion alone
  • Document the decision with rationale, CWE ID, reviewer name, and expiration date
  • Set mitigations to expire and require re-review (e.g., every 12 months)
6) GOVERNANCE
  • Track false positive rates per scanner and per application
  • Periodically audit approved mitigations — sample 10% quarterly
  • Never approve mitigated-by-design for Critical severity findings without a second reviewer
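Two of the governance rules above, expiring mitigations and the 10% quarterly audit sample, are easy to encode. A sketch with illustrative field choices:

```python
from datetime import date, timedelta

# Per the workflow above: approvals lapse and require re-review yearly
REVIEW_INTERVAL = timedelta(days=365)

def needs_rereview(approved_on: date, today: date) -> bool:
    """True once an approved mitigation is due for re-review."""
    return today - approved_on >= REVIEW_INTERVAL

def sample_for_audit(mitigations: list, fraction: float = 0.10) -> int:
    """Quarterly audit size: 10% of approved mitigations, at least one."""
    return max(1, round(len(mitigations) * fraction)) if mitigations else 0
```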
💡 Interview Question

How do you review and approve SDLC security tasks such as MME and Secure-by-Design processes for DAST, SAST, and SCA?

In large enterprises — especially financial institutions — SDLC security tasks are formal governance checkpoints ensuring every application meets security standards before production release.

1) UNDERSTANDING MME AND SBD
  • MME (Mitigate by Mitigation, Mitigate by Environment) — these are Veracode mitigation categories where a finding is accepted because either a code-level mitigation exists that the scanner cannot detect (Mitigate by Mitigation) or network/infrastructure controls prevent exploitation (Mitigate by Environment — e.g., WAF rules, network segmentation, IP whitelisting)
  • SbD (Secure by Design) — a formal process where the application architecture is reviewed upfront to confirm security controls are baked into the design rather than bolted on after scanning
2) SDLC SECURITY GATE REVIEW PROCESS
  • At each SDLC phase, specific security tasks must be completed and approved
  • Design Phase — threat model review, security requirements sign-off, data classification
  • Development Phase — SAST scan completion, SCA scan with no unapproved Critical/High findings, secure code review
  • Testing Phase — DAST scan against staging, penetration testing for high-risk applications
  • Pre-Production — all findings remediated or formally mitigated with approved MME requests, policy scan passing at required Veracode Level
3) REVIEWING MME REQUESTS
  • Verify the mitigation type is appropriate — Mitigate by Mitigation requires code evidence (show the sanitization, encoding, or parameterization the scanner missed)
  • Mitigate by Environment requires infrastructure evidence (WAF rule screenshots, network diagram showing segmentation)
  • Reject if evidence is insufficient
  • Critical findings require a second AppSec reviewer plus manager approval
4) REVIEWING SBD SUBMISSIONS
  • Validate that the threat model covers all relevant attack vectors
  • Confirm security controls are mapped to specific threats
  • Review architecture diagrams for secure patterns — defense in depth, least privilege, secure defaults
5) GOVERNANCE
  • Financial regulators (OCC, FFIEC, MAS) require evidence of SDLC security controls
  • Maintain separation of duties — the developer cannot approve their own MME request
  • Track aging mitigations with expiration dates
6) COMMON REJECTION SCENARIOS

MME submitted without evidence, Mitigate by Environment claimed without infrastructure controls, SCA finding mitigated when a patch is available — reject and require the upgrade.

💡 Interview Question

What is the difference between EASM, Vulnerability Management, and Penetration Testing — and how do they work together?

These three are complementary disciplines, not competitors.

EASM (External Attack Surface Management) answers 'What can an attacker see about us?' — it continuously discovers all internet-facing assets, including shadow IT, forgotten staging environments, and dangling DNS records you didn't know existed.

Vulnerability Management answers 'What known weaknesses exist and how do we fix them?' — it scans known assets for CVEs, missing patches, and misconfigurations, then tracks remediation against SLAs.

Penetration Testing answers 'Can those weaknesses actually be exploited?' — a skilled red team validates exploitation in practice, confirming real-world impact.

HOW THEY FIT TOGETHER: EASM discovers unknowns → feeds those assets into the VM program for deep scanning → PenTesting validates whether remaining risks are truly exploitable. Without EASM, your VM program only scans what you know about — attackers find the rest first. Without VM, you know what's exposed but not how to fix it. Without PenTesting, you know what's vulnerable but not whether it's exploitable in your specific environment.

KEY DIFFERENCES:
  • Scope — EASM is external-only; VM covers internal + external; PenTesting is defined scope.
  • Frequency — EASM is continuous (24/7); VM runs on scan cycles; PenTesting is point-in-time (quarterly/annual).
  • Asset Discovery — EASM discovers unknown assets; VM assumes you have an inventory; PenTesting uses a predefined scope.
  • Exploitation — only PenTesting actively exploits.

Mature programs run all three in parallel.
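The EASM-to-VM handoff described above is essentially a set difference between continuously discovered external assets and the known inventory. A sketch; the hostnames are illustrative:

```python
def triage_assets(discovered: set, inventory: set) -> dict:
    """Classify externally discovered assets against the known inventory."""
    return {
        "unknown": discovered - inventory,  # shadow IT: feed into the VM program
        "managed": discovered & inventory,  # already scanned on the normal cycle
        "stale": inventory - discovered,    # inventoried but no longer exposed
    }
```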

Framework Mapping

| Framework | Relevant Controls |
| --- | --- |
| OWASP | Testing Guide v4, ASVS Verification Levels, Top 10 Testing, SAMM Security Testing |
| NIST | SP 800-53 CA (Assessment), SA (System Acquisition), RA (Risk Assessment) |
| MITRE | ATT&CK tactics mapping for penetration testing scenarios and red team playbooks |

Related Domains

  • 🛡️ Application Security — Secure SDLC & code review
  • ⚙️ DevSecOps — Security in CI/CD pipelines
  • 🔍 Vulnerability Management — Scanning & prioritization
