OWASP Top Ten Vulnerabilities Explained
The OWASP Top Ten is a consensus-based classification of the ten most critical web application security risk categories, maintained by the Open Worldwide Application Security Project (OWASP). Each edition is produced through aggregated data from hundreds of contributing organizations and represents the attack surface categories responsible for the largest share of exploited vulnerabilities in production web applications. The list serves as a foundational reference for application security programs, regulatory frameworks, and professional certification bodies across the cybersecurity sector.
- Definition and scope
- Core mechanics or structure
- Causal relationships or drivers
- Classification boundaries
- Tradeoffs and tensions
- Common misconceptions
- Checklist or steps (non-advisory)
- Reference table or matrix
- References
Definition and scope
The OWASP Top Ten is not a vulnerability scanner output or a vendor benchmark — it is a risk-ranked taxonomy of web application weakness categories, each of which may encompass dozens of discrete vulnerability types as enumerated in the Common Weakness Enumeration (CWE) maintained by MITRE. The 2021 edition, the most recently published version at the time of writing, incorporates data from 144,000 real-world applications submitted by security firms, bug bounty programs, and internal security teams (OWASP Top Ten 2021).
The scope is explicitly limited to web applications, though the risk categories have broad structural overlap with API and mobile application attack surfaces. The list does not cover network-layer vulnerabilities, operating system exploits, or physical security failures. OWASP publishes separate Top Ten lists for APIs (OWASP API Security Top Ten) and mobile applications (OWASP Mobile Top Ten), reflecting the divergent attack surfaces of those environments.
Regulatory adoption has made the OWASP Top Ten a de facto compliance reference. PCI DSS Requirement 6.3.1 explicitly references the OWASP Top Ten as a basis for addressing common vulnerabilities in bespoke software, while NIST SP 800-53 Rev. 5 Control SA-11 (Developer Testing and Evaluation) expects federal system developers to address known vulnerability classes consistent with recognized industry frameworks including OWASP.
Core mechanics or structure
Each entry in the OWASP Top Ten represents a risk category, not a single CVE or exploit. The 2021 edition includes 10 categories, three of which were newly introduced and three of which represent merged prior categories:
A01 — Broken Access Control moved from the fifth position to first: 94% of applications in the 2021 dataset were tested for some form of access control weakness, and the category recorded more occurrences in the contributed data than any other (OWASP Top Ten 2021). The category encompasses forced browsing, IDOR (Insecure Direct Object References), and privilege escalation.
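An IDOR arises when the server trusts a client-supplied object identifier without verifying that the requester may access that object. The sketch below, with hypothetical names and an in-memory store, shows the object-level authorization check that IDOR-vulnerable code omits:

```python
# Illustrative in-memory store; a real application would query a database.
DOCUMENTS = {
    101: {"owner": "alice", "body": "alice's tax records"},
    102: {"owner": "bob", "body": "bob's tax records"},
}

def fetch_document(requesting_user: str, doc_id: int) -> str:
    doc = DOCUMENTS.get(doc_id)
    if doc is None:
        raise KeyError("not found")
    # The authorization check that IDOR-vulnerable code skips:
    # the object ID alone is never proof of entitlement.
    if doc["owner"] != requesting_user:
        raise PermissionError("forbidden")
    return doc["body"]
```

The check must run server-side on every object reference; client-side enforcement in an SPA can be bypassed by editing the request directly.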
A02 — Cryptographic Failures replaced "Sensitive Data Exposure" to focus on root cause rather than symptom. Failures include use of deprecated algorithms (MD5, SHA-1), missing TLS enforcement, and weak key generation.
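The root-cause framing can be illustrated by contrasting a deprecated fast hash with a salted password-based key derivation function. This is a minimal sketch using Python's standard library; the iteration count is illustrative, not a tuning recommendation:

```python
import hashlib
import hmac
import os

weak = hashlib.md5(b"hunter2").hexdigest()  # deprecated: fast and unsalted

salt = os.urandom(16)  # per-password random salt
# PBKDF2-HMAC-SHA256; iteration count here is illustrative only.
stored = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 100_000)

def verify(password: bytes, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
    return hmac.compare_digest(candidate, stored)  # constant-time comparison
```

The salt defeats precomputed rainbow tables, the iteration count slows brute force, and the constant-time comparison avoids leaking match position through timing.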
A03 — Injection dropped from first to third position as modern frameworks have reduced raw SQL injection prevalence, though the category now includes cross-site scripting (XSS), LDAP injection, and OS command injection as CWE-mapped variants.
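The framework-driven decline in SQL injection stems largely from parameterized queries, which modern data-access layers use by default. A minimal sketch with Python's built-in `sqlite3` module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # classic injection payload

# Vulnerable pattern (string concatenation) would splice the payload
# into the SQL text and return every row:
#   conn.execute("SELECT role FROM users WHERE name = '" + user_input + "'")

# Parameterized query: the driver binds the payload as data, not SQL.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()
assert rows == []  # no user is literally named "alice' OR '1'='1"
```

The same bind-don't-concatenate principle applies to the category's other variants (LDAP filters, OS command arguments), though each uses its own escaping or parameter mechanism.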
A04 — Insecure Design was introduced in 2021 as a new category addressing architectural-level failures — threat model gaps, missing rate limiting by design, and the absence of secure-by-default patterns.
A05 — Security Misconfiguration covers misconfigured cloud permissions, default credentials, unnecessarily enabled features, and verbose error messages. It absorbed the former "XML External Entities (XXE)" category as a subcategory.
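Several A05 findings are mechanically checkable. The sketch below is a hypothetical configuration audit; the config keys and default-credential pairs are assumptions for illustration, not drawn from any particular product:

```python
# Hypothetical default-credential pairs; real audits use vendor-specific lists.
DEFAULT_CREDENTIALS = {("admin", "admin"), ("admin", "password"), ("root", "root")}

def audit_config(config: dict) -> list:
    """Return a list of A05-style findings for an application config."""
    findings = []
    if config.get("debug"):
        findings.append("debug mode enabled (verbose errors leak internals)")
    if (config.get("username"), config.get("password")) in DEFAULT_CREDENTIALS:
        findings.append("default credentials in use")
    if not config.get("tls", True):
        findings.append("TLS disabled")
    return findings
```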
A06 — Vulnerable and Outdated Components reflects the risk of third-party libraries and frameworks with known CVEs remaining in production. The Log4Shell vulnerability (CVE-2021-44228), rated 10.0 on the CVSS scale by NIST NVD, illustrated the magnitude of this category.
A07 — Identification and Authentication Failures renamed from "Broken Authentication," now encompasses credential stuffing, weak session token generation, and missing multi-factor authentication.
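Weak session token generation typically means deriving tokens from predictable inputs such as user IDs or timestamps. A minimal sketch of the high-entropy alternative using Python's `secrets` module:

```python
import secrets

# High-entropy session token: 32 random bytes (~256 bits) from the OS CSPRNG,
# URL-safe base64 encoded.
token = secrets.token_urlsafe(32)

# Weak pattern the category warns against (illustrative, do not use):
#   token = str(user_id) + str(int(time.time()))   # predictable, enumerable

def tokens_match(presented: str, stored: str) -> bool:
    # Constant-time comparison avoids leaking match length via timing.
    return secrets.compare_digest(presented, stored)
```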
A08 — Software and Data Integrity Failures is new in 2021 and covers insecure CI/CD pipelines, unsigned code updates, and deserialization vulnerabilities, consolidating the former "Insecure Deserialization" category.
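The deserialization component of A08 can be made concrete: Python's `pickle` will invoke whatever callable a payload's `__reduce__` specifies, so unpickling attacker-controlled bytes is code execution. A data-only format such as JSON does not have this property:

```python
import json
import pickle

# Unpickling calls __reduce__ on whatever the payload specifies, so a
# crafted byte string can run arbitrary code on load.
class Payload:
    def __reduce__(self):
        return (print, ("arbitrary code ran during unpickling",))

malicious = pickle.dumps(Payload())
# pickle.loads(malicious)  # would invoke print() — never unpickle untrusted data

# Safer for untrusted input: a format that deserializes to plain data only.
safe = json.loads('{"user": "alice", "role": "viewer"}')
```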
A09 — Security Logging and Monitoring Failures directly maps to detection and response capability gaps, a category that NIST SP 800-92 addresses through log management guidance.
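Detection gaps often come down to security events never being emitted in a machine-parseable form. A minimal sketch of a structured security event; the field names are illustrative assumptions, not prescribed by A09 or NIST SP 800-92:

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("security")

def log_auth_failure(username: str, source_ip: str) -> str:
    """Emit a structured (JSON-lines) record for a failed login attempt."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": "auth_failure",
        "username": username,   # log the identity, never the attempted password
        "source_ip": source_ip,
    }
    line = json.dumps(event)
    logger.warning(line)
    return line
```

Structured records let a SIEM aggregate failures per source IP, which is what turns logging into a credential-stuffing detection capability.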
A10 — Server-Side Request Forgery (SSRF) was added based on survey data despite low base-rate incidence, reflecting the high exploitability and potential blast radius in cloud-hosted architectures.
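The cloud blast radius comes from internal endpoints such as the 169.254.169.254 metadata service. A minimal address-class guard is sketched below; a production guard would also need to handle redirects and DNS rebinding, which this deliberately omits:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_forbidden_target(url: str) -> bool:
    """Reject URLs that resolve to private, loopback, or link-local addresses."""
    host = urlparse(url).hostname
    if host is None:
        return True
    try:
        addr = ipaddress.ip_address(host)          # IP literal in the URL
    except ValueError:
        addr = ipaddress.ip_address(socket.gethostbyname(host))  # resolve name
    # Link-local covers 169.254.0.0/16, including cloud metadata endpoints.
    return addr.is_private or addr.is_loopback or addr.is_link_local
```

Allowlisting known-good destinations is stronger than this denylist approach, but the check illustrates why SSRF severity is tied to what the server can reach rather than to how often the flaw occurs.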
Causal relationships or drivers
The distribution of risk categories in the OWASP Top Ten reflects structural causes rooted in development practice, toolchain adoption, and economic incentives — not random vulnerability emergence.
Broken Access Control's ascent to A01 status correlates with the shift to single-page application (SPA) architectures and microservice APIs, where authorization logic is implemented at the client or fragmented across 10 or more discrete services rather than enforced centrally. NIST SP 800-204 (Security Strategies for Microservices) identifies this authorization fragmentation as a primary risk driver.
The introduction of A04 (Insecure Design) reflects industry recognition — formally echoed in NIST's Secure Software Development Framework (SSDF, NIST SP 800-218) — that a large fraction of critical vulnerabilities originate in architectural decisions made before a single line of code is written, making post-development testing insufficient as a sole mitigation mechanism.
A06 (Vulnerable and Outdated Components) is driven by the composition model of modern software development. A 2023 analysis by the Cybersecurity and Infrastructure Security Agency (CISA) in its joint advisory on software bill of materials (SBOM) usage identified that typical enterprise applications contain 70–80% open-source code by line count, much of which receives no automated dependency monitoring.
A08 (Software and Data Integrity Failures) is causally linked to the rapid expansion of CI/CD pipeline adoption without corresponding security controls — a risk surface addressed in the application security in CI/CD pipelines reference context.
Classification boundaries
The OWASP Top Ten operates at the risk category level, which positions it above CVE-level vulnerability tracking and below full threat modeling frameworks. Understanding where the taxonomy's boundaries fall is essential for practitioners scoping security testing, compliance attestation, or vendor evaluation.
The Top Ten does not constitute a complete secure coding standard. OWASP's own Application Security Verification Standard (ASVS) — currently at version 4.0 — provides 286 discrete verification requirements organized across 14 security domains and is the appropriate reference for exhaustive control coverage.
The Top Ten also does not classify vulnerabilities by CVSS score, exploitability rating, or attack vector — those classifications belong to the NVD and CVE Program. A single Top Ten category (e.g., A03 Injection) may contain vulnerabilities with CVSS base scores ranging from the low 3s to 9.8.
Jurisdictionally, the Top Ten is not a U.S.-specific document. Its adoption is referenced in frameworks across the EU's ENISA Threat Landscape publications, the UK's National Cyber Security Centre (NCSC) Secure Development guidelines, and the Australian Signals Directorate (ASD) Application Hardening guidance.
Tradeoffs and tensions
The OWASP Top Ten's consensus methodology creates structural tensions that affect its operational utility.
Frequency vs. severity. The ranking methodology weights prevalence in the contributed testing data heavily. A category found across the overwhelming majority of tested applications, often at low severity (A01), ranks above a category with low incidence but frequently critical impact (A10). Practitioners using the list as a risk-prioritization tool must supplement it with their own threat models, as OWASP's own documentation acknowledges (OWASP Risk Rating Methodology).
Breadth vs. actionability. Categories like A04 (Insecure Design) are intentionally broad to capture architectural-level failures, but this breadth makes them difficult to operationalize as specific test cases. The OWASP Web Security Testing Guide (WSTG) partially addresses this gap with 91 individual test case templates, but the mapping between Top Ten categories and WSTG test procedures is non-exhaustive.
Update cadence vs. threat landscape velocity. The 2021 edition follows the 2017 edition by four years. In that interval, SSRF, supply chain attacks, and cloud misconfiguration emerged as dominant risk categories — only partially reflected in the current list. OWASP has acknowledged this cadence limitation in its project roadmap documentation.
Compliance proxy risk. Organizations that treat Top Ten coverage as a compliance checkbox without addressing the full ASVS control set may produce attestations that satisfy PCI DSS auditors while leaving A04-class architectural vulnerabilities entirely unaddressed. This tension is documented in NIST SSDF guidance and in CISA's Secure by Design initiative documentation.
Common misconceptions
Misconception: The OWASP Top Ten is a vulnerability scanner checklist.
Correction: Automated scanners can detect fewer than 30% of OWASP Top Ten vulnerability instances, primarily within A03 (Injection) and A05 (Misconfiguration). Categories like A04 (Insecure Design) and A01 (Broken Access Control) require manual testing, threat modeling, or code review for adequate coverage. OWASP WSTG explicitly states that business logic flaws require human judgment (WSTG-BUSL).
Misconception: Passing a Top Ten assessment means the application is secure.
Correction: OWASP itself states in the Top Ten project documentation that the list covers "the most critical risks" but does not represent the complete risk surface. ASVS Level 2 requires 135 verification requirements beyond what the Top Ten framework specifies.
Misconception: The list is updated annually.
Correction: The Top Ten has been published in 2003, 2004, 2007, 2010, 2013, 2017, and 2021 — an irregular cadence averaging approximately 3.5 years between editions. Organizations should not wait for a new edition to address emerging categories.
Misconception: A01 Broken Access Control is only about horizontal privilege escalation.
Correction: The category encompasses vertical privilege escalation, CORS misconfiguration, path traversal, missing function-level access control, and insecure direct object reference, spanning 34 distinct CWE mappings in the 2021 taxonomy (OWASP A01:2021).
Checklist or steps (non-advisory)
The following sequence reflects the standard operational workflow for assessing an application against the OWASP Top Ten framework, drawn from OWASP WSTG and NIST SP 800-115 (Technical Guide to Information Security Testing and Assessment):
- Scope definition — Identify the application type (web, API, SPA), technology stack, authentication model, and data classification tier.
- Threat model review — Map existing architectural documentation against A04 (Insecure Design) criteria; flag missing rate limiting, trust boundary definitions, and privilege separation models.
- Automated scanning — Run DAST tooling (e.g., tools evaluated under NIST SCAP validation) targeting A03 (Injection), A05 (Misconfiguration), and A02 (Cryptographic Failures).
- Manual access control testing — Execute WSTG-ATHZ test cases targeting A01; verify horizontal and vertical privilege boundaries, IDOR exposure, and CORS policy enforcement.
- Authentication and session testing — Apply WSTG-ATHN procedures covering A07; test session token entropy, timeout enforcement, and MFA bypass paths.
- Component inventory review — Generate or review an SBOM against NVD CVE data targeting A06; flag components with CVSS ≥ 7.0 and no vendor patch.
- CI/CD pipeline review — Assess integrity controls on build artifacts, dependency pinning, and signing procedures relevant to A08.
- Logging verification — Confirm that authentication events, access control failures, and input validation errors produce structured log output consistent with A09 and NIST SP 800-92 requirements.
- SSRF surface mapping — Enumerate server-side HTTP request paths and assess A10 exposure against internal metadata endpoints (e.g., cloud IMDSv1 endpoints).
- Finding classification — Map each finding to its OWASP category, CWE identifier, and CVSS score per NVD scoring methodology before reporting.
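The component inventory step above amounts to a filter over SBOM entries against a CVSS threshold. This is a minimal sketch with illustrative field names, not the SPDX or CycloneDX schema:

```python
def flag_components(sbom: list, threshold: float = 7.0) -> list:
    """Flag SBOM entries with an unpatched CVE at or above the CVSS threshold."""
    flagged = []
    for component in sbom:
        for cve in component.get("cves", []):
            if cve["cvss"] >= threshold and not cve.get("patched", False):
                flagged.append(component["name"])
                break  # one qualifying CVE is enough to flag the component
    return flagged
```

In practice the CVE list per component comes from correlating SBOM identifiers (e.g., purl or CPE) against NVD data rather than being inlined by hand.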
Reference table or matrix
| OWASP 2021 Category | Prior Position (2017) | CWE Mappings (count) | Primary Detection Method | Regulatory Reference |
|---|---|---|---|---|
| A01 Broken Access Control | A05 | 34 | Manual testing, code review | PCI DSS Req. 6.3.1; NIST SA-11 |
| A02 Cryptographic Failures | A03 (Sensitive Data Exposure) | 29 | Automated scan + manual | FIPS 140-3 (NIST); PCI DSS Req. 4 |
| A03 Injection | A01 | 33 | DAST, SAST, code review | NIST SP 800-53 SI-10 |
| A04 Insecure Design | New | 40 | Threat modeling, design review | NIST SSDF (SP 800-218) |
| A05 Security Misconfiguration | A06 (+ XXE absorbed) | 20 | Automated scan, config audit | CIS Benchmarks; FedRAMP |
| A06 Vulnerable Components | A09 | 3 | SCA tooling, SBOM review | EO 14028 (SBOM mandate) |
| A07 Auth Failures | A02 | 22 | Manual testing, session analysis | NIST SP 800-63B |
| A08 Integrity Failures | New (+ A08 2017 merged) | 10 | Pipeline audit, code review | NIST SSDF; CISA Supply Chain |
| A09 Logging Failures | A10 | 4 | Log review, audit trail check | NIST SP 800-92 |
| A10 SSRF | New | 1 | Manual testing, DAST | CISA Cloud Security Advisory |
CWE mapping counts are taken from the OWASP Top Ten 2021 project pages; regulatory references are drawn from the published framework documentation of the cited agencies and standards bodies.