🛡️ Application Security
Securing applications throughout the software development lifecycle — from threat modeling and secure coding to SAST/DAST testing, WAFs, and runtime protection. The foundation of modern cybersecurity.
Application Security (AppSec) encompasses the measures taken to improve the security of applications by finding, fixing, and preventing security vulnerabilities. It spans the entire SDLC — from requirements and design through coding, testing, deployment, and maintenance. Modern AppSec combines automated tools (SAST, DAST, SCA, IAST) with manual processes (code review, penetration testing, threat modeling) to create defense-in-depth for software systems.
Key Concepts
DAST (Dynamic Analysis)
Tests running applications by simulating attacks. Discovers runtime vulnerabilities, misconfigurations, and authentication flaws from an attacker's perspective.
SAST (Static Analysis)
Analyzes source code, bytecode, or binaries without executing the application. Finds vulnerabilities like SQL injection, XSS, and buffer overflows early in development.
SCA (Software Composition Analysis)
Identifies vulnerabilities in open-source and third-party components. Maps dependencies to known CVEs and license risks.
Secure SDLC
Integrating security at every phase — requirements, design, implementation, testing, deployment, and operations. Shift-left security reduces cost and risk.
Threat Modeling
Systematic identification of threats using STRIDE, PASTA, or DREAD methodologies. Produces actionable mitigations before code is written.
WAF (Web Application Firewall)
Layer 7 defense that filters, monitors, and blocks HTTP/S traffic to and from web applications. Protects against OWASP Top 10 attacks.
Secure SDLC Architecture
Secure Software Development Lifecycle
Security is integrated at every phase — not bolted on at the end
SSDLC → CI/CD Pipeline Mapping
Each SSDLC phase maps to a CI/CD stage with specific security tools — shift-left means starting security at Pre-commit, not at Deploy
OWASP Top 10 (2021)
| Rank | Vulnerability | Severity | Description |
|---|---|---|---|
| A01 | Broken Access Control | Critical | Failures allowing users to act outside their intended permissions |
| A02 | Cryptographic Failures | Critical | Weak or missing encryption for data at rest and in transit |
| A03 | Injection | Critical | SQL, NoSQL, OS, LDAP injection via untrusted data |
| A04 | Insecure Design | High | Missing or ineffective security controls in design phase |
| A05 | Security Misconfiguration | High | Default configs, open cloud storage, verbose error messages |
| A06 | Vulnerable Components | High | Using libraries/frameworks with known vulnerabilities |
| A07 | Auth & ID Failures | High | Broken authentication, session management flaws |
| A08 | Software & Data Integrity | Medium | CI/CD pipeline integrity, unsigned updates, deserialization |
| A09 | Logging & Monitoring Failures | Medium | Insufficient logging, alerting, and incident detection |
| A10 | SSRF | Medium | Server-Side Request Forgery — fetching URLs without validation |
📱 OWASP Mobile Top 10 (2024)
Critical security risks for mobile applications — from credential storage to binary protections.
| ID | Vulnerability | Description | Key Mitigation |
|---|---|---|---|
| M1 | Improper Credential Usage | Hardcoded credentials, insecure storage, API keys in client code | Android Keystore, iOS Keychain, OAuth tokens |
| M2 | Inadequate Supply Chain | Malicious SDKs, compromised libraries, insecure build pipelines | SDK provenance, SCA scanning, signed builds |
| M3 | Insecure Auth/AuthZ | Weak biometrics, client-side auth decisions, session mismanagement | Server-side auth, secure biometric APIs, cert pinning |
| M4 | Insufficient I/O Validation | SQLi via local DBs, XSS in WebViews, path traversal | Parameterized queries, sanitize WebViews, validate paths |
| M5 | Insecure Communication | Missing TLS, weak ciphers, ignoring cert errors, HTTP traffic | TLS 1.2+, certificate pinning, ATS (iOS) |
| M6 | Inadequate Privacy Controls | Excessive data collection, PII in logs/backups, tracking without consent | Data minimization, privacy by design, consent mgmt |
| M7 | Insufficient Binary Protections | No obfuscation, missing anti-tampering, no root/jailbreak detection | ProGuard/R8, integrity checks, anti-debugging |
| M8 | Security Misconfiguration | Debug mode in prod, broad permissions, exported components | Secure defaults, minimize permissions, review manifests |
| M9 | Insecure Data Storage | Plaintext files, unencrypted DBs, clipboard/screenshot leakage | Encrypted storage, secure deletion, FLAG_SECURE |
| M10 | Insufficient Cryptography | Weak algorithms (DES, RC4), hardcoded keys, custom crypto | Platform crypto APIs, AES-256-GCM, proper key gen |
Remediation & Best Practices
Input Validation & Output Encoding
Validate all inputs server-side. Use parameterized queries and context-aware output encoding to prevent injection attacks.
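A minimal sketch of the parameterized-query pattern, using Python's built-in sqlite3 (the table and data are illustrative):

```python
import sqlite3

# Illustrative table and data; the point is the bound parameter.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

def find_user(conn, name):
    # UNSAFE: f"SELECT ... WHERE name = '{name}'" would let a payload
    # like ' OR '1'='1 match every row.
    # SAFE: the driver binds the value, so the payload is inert data.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (name,))
    return cur.fetchall()

print(find_user(conn, "alice"))        # [(1, 'alice')]
print(find_user(conn, "' OR '1'='1"))  # [] -- the payload matches nothing
```

The same bind-parameter discipline applies to any driver or ORM; string concatenation into queries is the root cause regardless of language.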
Strong Authentication & Session Management
Implement MFA, secure session tokens, password hashing (bcrypt/argon2), and account lockout policies.
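The password-hashing advice can be sketched with the standard library's PBKDF2 (bcrypt or Argon2 via their own packages remain the stronger production choices):

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # tune so one hash costs ~100ms on your hardware

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # a unique salt per password defeats rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # compare_digest is constant-time, preventing timing side channels
    return hmac.compare_digest(candidate, digest)
```

Store the salt and digest together; never store or log the plaintext password.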
Dependency Management
Use SCA tools to scan dependencies. Maintain SBOM, update regularly, and pin versions. Monitor for CVEs.
Security Headers & CSP
Set Content-Security-Policy, X-Frame-Options, HSTS, X-Content-Type-Options, and Referrer-Policy headers.
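A hedged sketch of enforcing these headers centrally, written here as a framework-agnostic WSGI middleware; the header values are sensible baselines, not mandates:

```python
# Baseline headers from the list above; tune CSP per application.
SECURITY_HEADERS = {
    "Content-Security-Policy": "default-src 'self'",
    "Strict-Transport-Security": "max-age=63072000; includeSubDomains",
    "X-Frame-Options": "DENY",
    "X-Content-Type-Options": "nosniff",
    "Referrer-Policy": "strict-origin-when-cross-origin",
}

def secure_headers_middleware(app):
    """Wrap any WSGI app so every response carries the baseline headers."""
    def wrapped(environ, start_response):
        def sr(status, headers, exc_info=None):
            present = {name.lower() for name, _ in headers}
            # Only add headers the app did not already set itself.
            extra = [(n, v) for n, v in SECURITY_HEADERS.items()
                     if n.lower() not in present]
            return start_response(status, list(headers) + extra, exc_info)
        return app(environ, sr)
    return wrapped
```

Centralizing headers in middleware (or at the reverse proxy) avoids per-endpoint drift.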
Interview Preparation
What is the difference between SAST and DAST?
SAST (Static Application Security Testing) analyzes source code without executing it — it's white-box testing done early in the SDLC. DAST (Dynamic Application Security Testing) tests the running application from the outside — it's black-box testing done later. SAST finds issues like SQL injection patterns in code; DAST finds runtime issues like authentication bypasses. Ideally, both are used together (shift-left + shift-right).
How would you implement a Secure SDLC in an organization?
Start with threat modeling during design, integrate SAST into CI/CD pipelines, conduct peer code reviews with security checklists, run DAST scans in staging, perform SCA for dependency vulnerabilities, use WAF/RASP in production, and establish an incident response process. Train developers on secure coding (OWASP Top 10). Measure with metrics: vulnerability density, mean time to remediate, and coverage.
Explain the OWASP Top 10 A01:2021 - Broken Access Control
Broken Access Control occurs when users can act outside their intended permissions. Examples include IDOR (accessing /api/user/123 when you're user 456), privilege escalation, CORS misconfigurations, and missing function-level access control. Mitigations: deny by default, enforce access control server-side, implement RBAC/ABAC, use indirect object references, and log access control failures.
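The deny-by-default mitigation for IDOR can be sketched as a server-side ownership check; the in-memory store stands in for a real database lookup:

```python
# Hedged sketch: deny-by-default ownership check on every access.
def get_document(current_user_id, doc_id, store):
    doc = store.get(doc_id)
    # Same error for "missing" and "not yours" -- no existence oracle
    # that lets an attacker enumerate valid IDs.
    if doc is None or doc["owner_id"] != current_user_id:
        raise LookupError("document not found")
    return doc

store = {123: {"owner_id": 1, "body": "quarterly report"}}
print(get_document(1, 123, store)["body"])  # quarterly report
# get_document(2, 123, store) raises LookupError: user 2 is not the owner
```

The key point is that the check runs server-side on every request, never trusting a client-supplied ID alone.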
How do SSDLC phases map to CI/CD pipeline stages?
Each SSDLC phase has a direct CI/CD counterpart:
- Requirements & Design → threat modeling docs stored in version control, security user stories in backlogs
- Coding → pre-commit hooks (secrets scanning, linting), IDE security plugins (Snyk, Semgrep)
- Build → SAST (SonarQube, Checkmarx) and SCA (Snyk, Dependabot) run as pipeline steps, SBOM generation
- Test → DAST (OWASP ZAP, Burp) in staging environments, IAST agents during integration tests, container image scanning (Trivy)
- Deploy → IaC scanning (Checkov, tfsec), image signing (Cosign), admission controllers (OPA/Kyverno), policy-as-code enforcement
- Operate → RASP, WAF (AWS WAF, Cloudflare), CSPM, runtime monitoring, continuous compliance scanning
The key principle: security gates should start non-blocking (alert only) and graduate to blocking as teams mature, avoiding developer friction while building a security culture.
How do you evaluate vulnerabilities across Java, .NET, Python, and other application codebases?
Each language/framework has unique vulnerability patterns — an AppSec engineer must know what to look for in each ecosystem.
Java
- Common issues — SQL injection via JDBC string concatenation (fix: PreparedStatement), XXE in XML parsers (fix: disable external entities in DocumentBuilderFactory), deserialization attacks via ObjectInputStream (fix: whitelist classes, use JSON instead), JNDI injection (Log4Shell-style — fix: upgrade, disable lookups)
- SAST tools: Veracode, Checkmarx, SpotBugs with FindSecBugs plugin
- Framework-specific: Spring Security misconfigurations, Struts OGNL injection
.NET
- Common issues — SQL injection via string concatenation in ADO.NET (fix: SqlParameter), XSS in Razor views without Html.Encode, insecure deserialization with BinaryFormatter (fix: use System.Text.Json), ViewState tampering (fix: enable MAC validation), path traversal in file operations
- SAST tools: Veracode, SonarQube with C# plugin, Roslyn analyzers, Security Code Scan
- Framework-specific: missing ASP.NET anti-forgery tokens, insecure authentication cookie settings
Python
- Common issues — SQL injection via f-strings/format in queries (fix: parameterized queries with SQLAlchemy or psycopg2), command injection via os.system/subprocess with shell=True (fix: use subprocess with shell=False and list args), SSTI in Jinja2/Flask (fix: autoescape=True), pickle deserialization RCE (fix: never unpickle untrusted data), SSRF in the requests library
- SAST tools: Bandit, Semgrep, Veracode
- Framework-specific: Django CSRF bypass, Flask debug mode in production
JavaScript/Node.js
- Common issues — prototype pollution, XSS via innerHTML/dangerouslySetInnerHTML (fix: textContent, DOMPurify), npm dependency attacks (typosquatting, supply chain), SSRF in axios/fetch, NoSQL injection in MongoDB queries
- SAST tools: ESLint security plugin, Semgrep, NodeJsScan
Across all languages
- Map all findings to CWE IDs for consistent tracking
- Prioritize by CVSS + reachability (is the vulnerable code path actually triggered?)
- Create language-specific secure coding guidelines and approved library lists
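One of the Python issues above — command injection via shell=True — illustrated concretely (the binary and argument are illustrative):

```python
import subprocess

# UNSAFE pattern:
#   subprocess.run(f"tool {user_input}", shell=True)
# lets user_input = "example.com; cat /etc/passwd" run a second command.

def run_tool(binary: str, user_arg: str) -> str:
    # SAFE: a list of args with shell=False (the default); the user value
    # is passed as a single argv element and is never parsed by a shell.
    result = subprocess.run([binary, user_arg], capture_output=True, text=True)
    return result.stdout

print(run_tool("echo", "example.com; cat /etc/passwd"))
# The metacharacters are printed literally; no second command runs.
```

Bandit and Semgrep both flag shell=True with dynamic input, so this pattern is easy to enforce in CI.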
How do you work with development teams to remediate security flaws in source code and enforce secure coding practices?
Effective remediation is a partnership between security and development — not a handoff.
Security champions
- Embed a security champion in each dev team — a developer who receives extra security training and acts as the first point of contact for security questions
- They review findings before escalation, mentor peers, and advocate for secure design patterns within their squad
Security-focused code review
- Conduct security-focused code reviews using checklists mapped to OWASP Top 10 and CWE Top 25
- Focus on high-risk areas — authentication flows, authorization checks, input handling, cryptography usage, and data serialization
- Use PR annotations from SAST/SCA tools so developers see findings inline during review
Secure coding standards
- Publish language-specific secure coding guidelines — approved libraries (e.g., ESAPI for Java, DOMPurify for JS), banned functions (e.g., strcpy, eval, pickle.loads), required patterns (parameterized queries, output encoding)
- Enforce via custom SAST rules and linter configs distributed as shared packages
Remediation workflow
- When a vulnerability is found — create a Jira ticket with CWE ID, severity, affected code location, attack scenario, and recommended fix with code example
- Schedule a fix-it pairing session for Critical/High findings
- Provide secure code snippets developers can copy-paste
- Set SLAs — Critical: 24-48 hrs, High: 7 days, Medium: 30 days, Low: next sprint
Training and culture
- Quarterly secure coding workshops covering real vulnerabilities found in the codebase (anonymized)
- Lunch-and-learn sessions on new attack vectors
- Gamified CTF events to build security awareness
- Integrate security training into onboarding for new developers
Measuring success
- Track the vulnerability recurrence rate — are the same CWE categories showing up repeatedly? — and confirm MTTR is trending down over time
- Monitor developer adoption of security tools (IDE plugins, pre-commit hooks)
- Goal: developers preventing vulnerabilities, not just fixing them
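The banned-function enforcement described above can be sketched as a tiny custom AST check; the banned list is illustrative:

```python
import ast

# Illustrative banned list; a real ruleset would come from the org's
# secure coding standard.
BANNED = {"eval", "exec", "os.system", "pickle.loads"}

def _call_name(func: ast.expr) -> str:
    """Resolve a call target to 'name' or 'module.attr' where possible."""
    if isinstance(func, ast.Name):
        return func.id
    if isinstance(func, ast.Attribute) and isinstance(func.value, ast.Name):
        return f"{func.value.id}.{func.attr}"
    return ""

def find_banned_calls(source: str) -> list[tuple[int, str]]:
    """Return (line_number, call_name) for every banned call in source."""
    return [(node.lineno, _call_name(node.func))
            for node in ast.walk(ast.parse(source))
            if isinstance(node, ast.Call) and _call_name(node.func) in BANNED]

# The snippet below is only parsed, never executed.
snippet = "import pickle\nobj = pickle.loads(blob)\nresult = eval(expr)\n"
print(sorted(find_banned_calls(snippet)))  # [(2, 'pickle.loads'), (3, 'eval')]
```

A check like this can run as a pre-commit hook or be distributed as a shared linter config, complementing commercial SAST.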
How do you provide guidance on OWASP Top 10 and SANS/CWE Top 25 vulnerabilities — how they arise, how they are exploited, and how to prevent them?
Understanding the full lifecycle of each vulnerability class — root cause, exploitation, and defense — is essential for any AppSec professional.
1. Injection Flaws (OWASP A03, CWE-89 and CWE-78): How they arise — user input concatenated directly into SQL queries, OS commands, or LDAP queries without sanitization. Exploitation — attacker submits crafted input like ' OR 1=1-- in login fields to bypass authentication or extract data. Prevention — parameterized queries and prepared statements (never string concatenation), stored procedures, input validation with allowlists, ORM frameworks.
2. Broken Access Control (OWASP A01, CWE-862 and CWE-639): How they arise — missing authorization checks on API endpoints, IDOR (Insecure Direct Object References) where user IDs are guessable, privilege escalation via role manipulation. Exploitation — change /api/user/123 to /api/user/456 to access another user's data, modify hidden form fields or JWT claims to elevate privileges. Prevention — deny by default, enforce server-side authorization on every request, use indirect references (UUIDs), implement RBAC/ABAC, log all access failures.
3. Cross-Site Scripting (OWASP A03, CWE-79): How they arise — user-supplied data rendered in HTML without encoding. Stored XSS persists in the database, Reflected XSS arrives via URL parameters, DOM XSS executes via client-side JavaScript. Exploitation — inject script tags to steal session tokens via document.cookie. Prevention — context-aware output encoding (HTML, JS, URL, CSS contexts), Content Security Policy headers, DOMPurify for rich text, HttpOnly cookies.
4. Cryptographic Failures (OWASP A02, CWE-327 and CWE-328): How they arise — weak algorithms (MD5, SHA1 for passwords), hardcoded keys, missing encryption at rest or in transit. Exploitation — rainbow table attacks on unsalted hashes, MITM on unencrypted channels. Prevention — bcrypt/Argon2 for passwords, AES-256-GCM for data at rest, TLS 1.2+ everywhere, proper key management (HSM/KMS), never roll your own crypto.
5. Security Misconfiguration (OWASP A05, CWE-16): How they arise — default credentials left unchanged, unnecessary services enabled, verbose error messages in production, missing security headers. Exploitation — access admin panels with admin/admin, read stack traces to map internal architecture. Prevention — hardening checklists per platform, automated configuration scanning (CIS Benchmarks), infrastructure-as-code with security baselines, remove unused features/frameworks.
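The output-encoding defense from item 3 can be shown with the standard library; real applications should lean on their template engine's auto-escaping plus CSP rather than hand-rolled rendering:

```python
import html

# Minimal sketch of HTML-context output encoding: untrusted text is
# escaped before being placed into markup, so it renders as text,
# not as executable script.
def render_comment(user_text: str) -> str:
    return f"<p>{html.escape(user_text, quote=True)}</p>"

print(render_comment("<script>alert(document.cookie)</script>"))
# <p>&lt;script&gt;alert(document.cookie)&lt;/script&gt;</p>
```

Note this is HTML-context encoding only; JavaScript, URL, and CSS contexts each need their own encoder, which is why context-aware template engines are preferred.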
- OWASP Top 10 groups vulnerability categories by risk (frequency × impact)
- SANS/CWE Top 25 lists specific weakness types by prevalence in real-world CVEs
- They overlap — e.g., OWASP A03 Injection maps to CWE-89 (SQLi), CWE-78 (OS Command Injection)
- Use OWASP for risk-based prioritization and developer training, use CWE for precise SAST rule mapping and vulnerability classification
How do you use scripting and coding in Java and Python for security engineering, vulnerability management, and compliance?
Security engineers who can code have a massive force multiplier — automation replaces repetitive manual work and scales security across the organization.
Python
- Python is the go-to language for security scripting because of its rich library ecosystem
- Common use cases — writing API integrations to pull scan results from Veracode, Qualys, or Nessus and push them into Jira or ServiceNow automatically
- Building custom parsers to normalize vulnerability data from multiple scanners into a unified format (CSV, JSON, or database)
- Automating compliance evidence collection — scripting checks for CIS Benchmarks, SOC 2 controls, or PCI-DSS requirements and generating audit-ready reports
- Key libraries — requests (API calls), pandas (data analysis and reporting), paramiko (SSH automation), boto3 (AWS security audits), python-nmap (network scanning), BeautifulSoup (web scraping for OSINT)
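A hedged sketch of the custom-parser use case above: normalize findings from two scanners into one schema. The input field names are invented stand-ins, not any vendor's real API response format:

```python
# Hypothetical raw payload shapes; real integrations would map each
# vendor's actual JSON fields here.
def normalize(scanner: str, raw: dict) -> dict:
    """Convert a scanner-specific finding into a unified record."""
    if scanner == "sast":
        return {"id": raw["issue_id"],
                "severity": raw["severity"].upper(),
                "cwe": f"CWE-{raw['cwe']}",
                "location": raw["file"]}
    if scanner == "sca":
        return {"id": raw["cve"],
                "severity": raw["risk"].upper(),
                "cwe": raw.get("cwe", "unknown"),
                "location": raw["component"]}
    raise ValueError(f"unknown scanner: {scanner}")

print(normalize("sast", {"issue_id": "42", "severity": "high",
                         "cwe": 89, "file": "app/db.py"}))
# {'id': '42', 'severity': 'HIGH', 'cwe': 'CWE-89', 'location': 'app/db.py'}
```

Once findings share a schema, deduplication, SLA tracking, and dashboarding become simple queries over one dataset.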
Java
- Java is used for building enterprise security tools, custom SAST rules, and integrations with Java-based platforms
- Common use cases — writing custom Veracode API wrappers to orchestrate policy scans across hundreds of applications in CI/CD
- Building custom static analysis rules using SpotBugs or Error Prone to detect organization-specific anti-patterns
- Creating security middleware and filters in Spring Boot applications — custom authentication filters, request validation, and audit logging
- Developing custom Burp Suite extensions in Java for automated testing of application-specific vulnerabilities
Vulnerability management automation
- Automate the full vulnerability lifecycle — scan scheduling, result ingestion, deduplication, severity enrichment (adding business context to CVSS scores), SLA tracking, and escalation notifications
- Build dashboards that aggregate data from SAST, SCA, DAST, and infrastructure scanners into a single pane of glass
- Script auto-ticketing — when a Critical finding is detected, automatically create a Jira ticket with CWE ID, affected component, remediation guidance, and assign to the right team
- Track metrics programmatically — MTTR, vulnerability density per application, recurrence rates, SLA compliance percentages
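The MTTR metric above can be computed in a few lines; the finding-record shape is invented for illustration:

```python
from datetime import date

# Hedged sketch: mean time to remediate, in days, over closed findings.
def mttr_days(findings: list[dict]):
    deltas = [(f["closed"] - f["opened"]).days
              for f in findings if f.get("closed")]
    return sum(deltas) / len(deltas) if deltas else None

findings = [
    {"opened": date(2024, 1, 1), "closed": date(2024, 1, 8)},   # 7 days
    {"opened": date(2024, 1, 1), "closed": date(2024, 1, 4)},   # 3 days
    {"opened": date(2024, 1, 10), "closed": None},              # still open
]
print(mttr_days(findings))  # 5.0
```

Segmenting this by team, severity, or CWE category turns a raw number into the trend data executives and auditors ask for.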
Compliance automation
- Script compliance checks — verify encryption settings, access controls, logging configurations, and patch levels against policy baselines
- Generate automated compliance reports for auditors — map vulnerabilities to specific control frameworks (NIST 800-53, ISO 27001, PCI-DSS, HIPAA)
- Build drift detection scripts that alert when configurations deviate from approved baselines
- Automate evidence collection for SOC 2 Type II audits — pull access reviews, change management logs, and security scan results into structured reports
Example automations
- Python script to query Veracode REST API, pull all High/Critical findings across the portfolio, calculate MTTR per team, and email a weekly executive summary
- Java utility to scan all Spring Boot applications for missing security annotations
- Python automation to check AWS S3 bucket policies, IAM configurations, and CloudTrail logging against CIS AWS Benchmark and flag non-compliant resources
How do you review and approve false positives and mitigated-by-design requests for DAST, SAST, and SCA findings?
False positive triage and mitigated-by-design approvals are critical to maintaining scanner credibility and developer trust — if the security team marks everything as Must Fix without nuance, developers stop paying attention.
- False Positive — the scanner flagged something that is genuinely not a vulnerability
- Example — SAST flags SQL injection but the code uses a parameterized query through an ORM, so injection is impossible
- Mitigated by Design — the vulnerability technically exists in the code path but architectural controls make exploitation impossible
- Example — SAST flags hardcoded credentials in a test file that is excluded from production builds, or DAST finds a reflected XSS but a strict Content Security Policy blocks script execution
For SAST findings:
- Examine the data flow the tool traced — follow the source (user input) to the sink (dangerous function)
- Verify if input validation, encoding, or parameterization exists along the path that the scanner missed
- Common SAST false positives — ORM-generated queries flagged as SQL injection, encoded output flagged as XSS, dead code paths, test files
For DAST findings:
- Reproduce the finding manually — send the same payload the scanner used and verify whether the vulnerability actually triggers
- Check if a WAF, CSP, or application-level control blocks the attack in practice
For SCA findings:
- Check if the vulnerable function in the library is actually called by the application (reachability analysis)
- Verify whether the vulnerability applies to the deployment context
Approval workflow:
- Developer submits a mitigation request with evidence — code snippets showing the control, architecture diagrams, or test results proving non-exploitability
- Security engineer reviews the evidence independently — never approve based on developer assertion alone
- Document the decision with rationale, CWE ID, reviewer name, and expiration date
- Set mitigations to expire and require re-review (e.g., every 12 months)
Ongoing governance:
- Track false positive rates per scanner and per application
- Periodically audit approved mitigations — sample 10% quarterly
- Never approve mitigated-by-design for Critical severity findings without a second reviewer
How do you review and approve SDLC security tasks such as MME and Secure-by-Design processes for DAST, SAST, and SCA?
In large enterprises — especially financial institutions — SDLC security tasks are formal governance checkpoints ensuring every application meets security standards before production release.
- MME (Mitigate by Mitigation, Mitigate by Environment) — these are Veracode mitigation categories where a finding is accepted because either a code-level mitigation exists that the scanner cannot detect (Mitigate by Mitigation) or network/infrastructure controls prevent exploitation (Mitigate by Environment — e.g., WAF rules, network segmentation, IP whitelisting)
- SbD (Secure by Design) — a formal process where the application architecture is reviewed upfront to confirm security controls are baked into the design rather than bolted on after scanning
- At each SDLC phase, specific security tasks must be completed and approved
- Design Phase — threat model review, security requirements sign-off, data classification
- Development Phase — SAST scan completion, SCA scan with no unapproved Critical/High findings, secure code review
- Testing Phase — DAST scan against staging, penetration testing for high-risk applications
- Pre-Production — all findings remediated or formally mitigated with approved MME requests, policy scan passing at required Veracode Level
Reviewing MME requests:
- Verify the mitigation type is appropriate — Mitigate by Mitigation requires code evidence (show the sanitization, encoding, or parameterization the scanner missed)
- Mitigate by Environment requires infrastructure evidence (WAF rule screenshots, network diagram showing segmentation)
- Reject if evidence is insufficient
- Critical findings require a second AppSec reviewer plus manager approval
Reviewing Secure-by-Design submissions:
- Validate that the threat model covers all relevant attack vectors
- Confirm security controls are mapped to specific threats
- Review architecture diagrams for secure patterns — defense in depth, least privilege, secure defaults
Governance:
- Financial regulators (OCC, FFIEC, MAS) require evidence of SDLC security controls
- Maintain separation of duties — the developer cannot approve their own MME request
- Track aging mitigations with expiration dates
Common rejection scenarios: an MME submitted without evidence, Mitigate by Environment claimed without actual infrastructure controls, or an SCA finding mitigated when a patch is available — reject and require the upgrade.
How do you maintain compliance with NIST, PCI-DSS, FFIEC, SOX, and CIS security frameworks?
In regulated industries — especially financial services — security engineers must ensure applications and infrastructure continuously meet multiple overlapping compliance frameworks.
1. NIST 800-53 and NIST CSF: NIST SP 800-53 provides a catalog of 1,000+ security and privacy controls organized into 20 families. NIST CSF organizes security into 5 functions — Identify, Protect, Detect, Respond, Recover. Map SAST/DAST/SCA scanning to SI-2 (Flaw Remediation), SA-11 (Developer Testing), RA-5 (Vulnerability Monitoring).
2. PCI-DSS:
- 12 requirements for cardholder data
- Requirement 6 — develop and maintain secure systems (6.2 risk ranking, 6.3 secure SDLC, 6.5 common vulnerabilities, 6.6 WAF or pen test)
- Requirement 11 — quarterly ASV scans
3. FFIEC: Financial institution IT security guidelines — risk assessments, secure coding, independent testing, vendor management.
4. SOX: Section 404 requires internal controls over financial reporting — access controls, segregation of duties, change management with approval workflows, audit trails.
5. CIS Benchmarks and Controls:
- Hardening benchmarks for OS, databases, cloud
- CIS Control 16 covers AppSec scanning, secure coding training, remediation SLAs
- Automate with CIS-CAT or AWS Config Rules
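A hedged sketch of an automated benchmark check: the config dict stands in for what a boto3 or AWS Config collector would return, and the checks are illustrative rather than verbatim CIS controls:

```python
# Hypothetical bucket-config shape; a real script would build this dict
# from boto3 API calls before evaluating it.
def check_bucket(cfg: dict) -> list[str]:
    """Return a list of policy failures for one storage bucket config."""
    failures = []
    if cfg.get("block_public_access") is not True:
        failures.append("public access not blocked")
    if not cfg.get("default_encryption"):
        failures.append("default encryption disabled")
    if not cfg.get("access_logging"):
        failures.append("server access logging disabled")
    return failures

print(check_bucket({"block_public_access": True,
                    "default_encryption": "aws:kms",
                    "access_logging": False}))
# ['server access logging disabled']
```

Running checks like this on a schedule and exporting the failures produces the audit-ready evidence the compliance calendar requires.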
- Map controls across frameworks to avoid duplicate work
- Use GRC platforms (Archer, ServiceNow GRC) for evidence tracking
- Maintain a compliance calendar — quarterly ASV scans, annual pen tests, SOX testing cycles
How do you work with security teams to deploy security tools as Infrastructure as Code (IaC)?
Deploying security tools as IaC ensures consistent, repeatable, auditable, and version-controlled infrastructure.
1. Why IaC for Security Tools: Manual deployment leads to configuration drift and inconsistent coverage. IaC is declarative, version-controlled, peer-reviewed, and automatically deployed.
2. Security tools commonly deployed as IaC:
- WAF — Terraform modules for AWS WAF, Azure Front Door
- SIEM — Splunk forwarders, Elastic Security agents via Terraform/Ansible
- EDR — CrowdStrike Falcon via Ansible playbooks
- CSPM — Prisma Cloud, AWS Security Hub via Terraform
- Secrets Management — HashiCorp Vault clusters via Terraform
3. Terraform patterns:
- Reusable modules per tool
- Workspaces or Terragrunt for identical stacks across environments
- Remote state backends with encryption
4. Policy as code:
- OPA with Rego policies, Checkov/tfsec to scan Terraform plans
- Block terraform apply on failures
5. Deployment workflow:
- Security team writes Terraform modules, submits PRs
- Pipeline — plan, policy validation, approval gate, apply
- Blue-green deployments for zero-gap coverage
6. Drift detection:
- Scheduled terraform plan to detect manual changes
- Auto-remediate drift
- Full Git audit trail for compliance
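At its core, drift detection reduces to comparing IaC-declared settings against live values; the keys and values below are illustrative:

```python
# Hedged sketch: report every setting whose live value differs from the
# value declared in code. Real pipelines get "actual" from cloud APIs
# and "desired" from parsed Terraform state.
def detect_drift(desired: dict, actual: dict) -> dict:
    return {key: {"desired": value, "actual": actual.get(key)}
            for key, value in desired.items()
            if actual.get(key) != value}

drift = detect_drift(
    {"waf_rule_count": 12, "logging": "enabled", "tls_policy": "TLSv1.2_2021"},
    {"waf_rule_count": 12, "logging": "disabled", "tls_policy": "TLSv1.2_2021"},
)
print(drift)  # {'logging': {'desired': 'enabled', 'actual': 'disabled'}}
```

A non-empty result can page the security team or trigger an automatic terraform apply, depending on the remediation policy.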
Framework Mapping
| Framework | Relevant Controls / Sections |
|---|---|
| OWASP | Top 10, ASVS, SAMM, Testing Guide, Secure Coding Practices |
| NIST | SP 800-53 SA-11 (Developer Testing), SI-10 (Input Validation), SA-15 (Dev Process) |
| MITRE | T1190 (Exploit Public-Facing App), T1059 (Command Execution), T1203 (Exploitation) |
| ISO | A.14.2 (Security in Dev), A.14.1 (Security Requirements), A.12.6 (Technical Vuln Mgmt) |