OWASP Top 10 Explained for Business Leaders
A non-technical walk through the OWASP Top 10 — the ten classes of web application risk that account for the bulk of breaches we see in real engagements — and what each one actually costs your business.
On this page
- A01 — Broken Access Control
- A02 — Cryptographic Failures
- A03 — Injection
- A04 — Insecure Design
- A05 — Security Misconfiguration
- A06 — Vulnerable & Outdated Components
- A07 — Identification & Authentication Failures
- A08 — Software & Data Integrity Failures
- A09 — Logging & Monitoring Failures
- A10 — Server-Side Request Forgery (SSRF)
- What this means for your roadmap
The OWASP Top 10 is the closest thing the application security industry has to a shared vocabulary. Every penetration test report references it, every compliance auditor expects it, and every CISO is asked about it during board reviews. But the document itself is written for engineers — which leaves business leaders translating between "Broken Access Control" and "we lost customer data."
This post is the translation. We've delivered over 6,700 assessments since 2006, and most of the high-impact findings we report still map to the OWASP Top 10. Here is what each risk means in plain terms, and the kind of business outcome we've seen when it is exploited.
A01 — Broken Access Control
The application lets users see or change data they shouldn't. The classic example: changing the order ID in the URL and viewing someone else's invoice. We routinely find this in customer portals, admin panels, and APIs that "trust the frontend." When it goes wrong in BFSI, the consequences are regulator letters and CERT-In disclosures.
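For technically inclined readers, the fix is usually a server-side ownership check on every record lookup rather than hiding IDs in the frontend. The sketch below is illustrative Python with made-up data and names, not code from any engagement.

```python
# Hypothetical sketch: the fix for "change the order ID in the URL" is a
# server-side ownership check, not obscuring the ID. Data is illustrative only.
ORDERS = {
    1001: {"owner_id": "cust-42", "total": "₹12,500"},
    1002: {"owner_id": "cust-77", "total": "₹3,200"},
}

def get_invoice(order_id: int, requesting_user_id: str) -> dict:
    order = ORDERS.get(order_id)
    if order is None:
        raise LookupError("order not found")
    # The access-control decision: the record must belong to the caller.
    # Trusting the frontend to only show "your" IDs is what A01 exploits.
    if order["owner_id"] != requesting_user_id:
        raise PermissionError("caller does not own this order")
    return order

if __name__ == "__main__":
    print(get_invoice(1001, "cust-42"))      # allowed: caller owns the order
    try:
        get_invoice(1002, "cust-42")         # another customer's invoice
    except PermissionError as exc:
        print("blocked:", exc)
```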
A02 — Cryptographic Failures
Sensitive data is not encrypted, is encrypted incorrectly, or is transmitted over an insecure channel. Logs that capture passwords. Database fields that hold card numbers in plaintext. Backups stored without encryption. PCI DSS, RBI, and SEBI all explicitly call this out.
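The remediation pattern is boring on purpose: never store or log the secret itself. Store a salted hash for passwords and an encrypted value for card data. The sketch below shows the password case using only Python's standard library; production systems typically use a dedicated library such as bcrypt or Argon2, but the shape is the same.

```python
# Hypothetical sketch: store a salted password hash, never the password itself.
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest                      # persist these, never the plaintext

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)   # constant-time comparison

if __name__ == "__main__":
    salt, digest = hash_password("s3cret")
    print(verify_password("s3cret", salt, digest))   # True
    print(verify_password("wrong", salt, digest))    # False
```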
A03 — Injection
User input is interpreted as code or query syntax. SQL injection, command injection, LDAP injection. It still happens — usually because a junior developer concatenated strings instead of using parameterised queries. A single injection vulnerability typically lets attackers exfiltrate the entire database.
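The difference between the vulnerable pattern and the safe one is a single line. The illustrative Python below uses SQLite purely so the example runs anywhere; the same rule applies to every database driver.

```python
# Hypothetical sketch: string concatenation vs a parameterised query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_input = "alice' OR '1'='1"              # attacker-controlled value

# Vulnerable: the input becomes part of the SQL syntax itself.
vulnerable_sql = f"SELECT email FROM users WHERE name = '{user_input}'"
print(conn.execute(vulnerable_sql).fetchall())        # returns every row

# Safe: the driver treats the input strictly as data, never as syntax.
safe_sql = "SELECT email FROM users WHERE name = ?"
print(conn.execute(safe_sql, (user_input,)).fetchall())   # returns nothing
```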
A04 — Insecure Design
The architecture itself is risky, even if the code is clean. Examples: a password reset flow with no rate limit, a coupon system that allows unlimited reuse, a withdrawal flow that doesn't verify balances atomically. These are the most expensive class to fix because they require redesign, not patches.
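To make the password reset example concrete: the missing control is a per-account throttle, and it has to exist in the design rather than be bolted on later. The sketch below is illustrative Python with made-up names and thresholds.

```python
# Hypothetical sketch: a per-account throttle on password reset requests.
# Thresholds and function names are illustrative only.
import time
from collections import defaultdict

WINDOW_SECONDS = 3600
MAX_RESETS_PER_WINDOW = 3
_attempts: dict[str, list[float]] = defaultdict(list)

def request_password_reset(account_id: str) -> bool:
    now = time.time()
    recent = [t for t in _attempts[account_id] if now - t < WINDOW_SECONDS]
    if len(recent) >= MAX_RESETS_PER_WINDOW:
        return False                      # throttled: do not send another token
    recent.append(now)
    _attempts[account_id] = recent
    # the actual "send reset email" step would go here
    return True

if __name__ == "__main__":
    print([request_password_reset("acct-1") for _ in range(5)])
    # [True, True, True, False, False]
```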
A05 — Security Misconfiguration
Default credentials still in place. Verbose error messages that leak stack traces. CORS policies that allow any origin. S3 buckets configured for public read. Most of the publicly disclosed Indian breaches in the last three years started here.
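Taking the CORS example: the safe configuration is an explicit allowlist of origins, not reflecting whatever origin the browser sends. The sketch below is illustrative Python with placeholder domain names.

```python
# Hypothetical sketch: grant CORS only to an explicit allowlist of origins.
# Domain names are placeholders.
ALLOWED_ORIGINS = {
    "https://portal.example.com",
    "https://admin.example.com",
}

def cors_headers(request_origin: str) -> dict[str, str]:
    if request_origin in ALLOWED_ORIGINS:
        return {
            "Access-Control-Allow-Origin": request_origin,
            "Access-Control-Allow-Credentials": "true",
        }
    # The misconfiguration is reflecting request_origin unconditionally,
    # which lets any website's scripts call the API with the user's cookies.
    return {}

if __name__ == "__main__":
    print(cors_headers("https://portal.example.com"))
    print(cors_headers("https://attacker.example.net"))   # {} — no CORS grant
```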
A06 — Vulnerable & Outdated Components
You're running a library with a known CVE. The Log4Shell scramble in late 2021 showed what this looks like at scale: every organisation we worked with that month was paying down inventory debt they had ignored for years.
A07 — Identification & Authentication Failures
Login flows that don't enforce strong passwords. MFA that can be bypassed. Session tokens that don't expire. Account takeover through credential stuffing. This is the failure mode that ends up in newspaper headlines.
A08 — Software & Data Integrity Failures
Build pipelines that pull dependencies without integrity checks. Update servers that don't sign payloads. Deserialisation flaws. This is where supply-chain attacks live — the SolarWinds class.
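The basic control is refusing to install anything whose digest or signature does not match what the publisher distributed out of band. The sketch below shows the digest check in illustrative Python; real pipelines layer signature verification on top.

```python
# Hypothetical sketch: reject an update artifact whose SHA-256 digest does not
# match the value the publisher distributed separately. Filenames are made up.
import hashlib

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def install_if_verified(path: str, published_digest: str) -> None:
    if sha256_of(path) != published_digest:
        raise RuntimeError("integrity check failed, artifact rejected")
    print(f"{path}: digest matches, handing off to installer")

if __name__ == "__main__":
    # Simulate the flow so the sketch runs end to end.
    with open("update.bin", "wb") as fh:
        fh.write(b"original payload")
    published = sha256_of("update.bin")   # stands in for the vendor's published digest

    with open("update.bin", "wb") as fh:  # an attacker swaps the payload in transit
        fh.write(b"tampered payload")

    try:
        install_if_verified("update.bin", published)
    except RuntimeError as exc:
        print("blocked:", exc)            # the tampered payload is refused
```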
A09 — Logging & Monitoring Failures
The breach happened. Nobody noticed. By the time anyone looked, logs had rotated, attackers had cleaned up, and the timeline was impossible to reconstruct. We see this every time we're called in for incident response.
A10 — Server-Side Request Forgery (SSRF)
The application can be tricked into making requests on behalf of an attacker — typically into internal infrastructure that wasn't supposed to be reachable. Cloud environments are especially exposed because the instance metadata endpoint can hand out temporary IAM credentials to any request that reaches it.
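A common server-side defence is to resolve a user-supplied URL and refuse to fetch anything that lands on a private, loopback, or link-local address, which is where the cloud metadata endpoint lives. The sketch below is a simplified illustration in Python; a production control also has to handle redirects and DNS rebinding.

```python
# Hypothetical sketch: block outbound fetches to internal address space,
# including 169.254.169.254, the classic cloud metadata target.
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_outbound_url(url: str) -> bool:
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        if addr.is_private or addr.is_loopback or addr.is_link_local:
            return False                 # resolves into internal infrastructure
    return True

if __name__ == "__main__":
    print(is_safe_outbound_url("https://example.com/report.pdf"))            # True
    print(is_safe_outbound_url("http://169.254.169.254/latest/meta-data/"))  # False
```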
What this means for your roadmap
The Top 10 isn't a checklist. It's a lens. When you commission a penetration test, ask the vendor how they map findings against it — and ask whether they validate exploitability or just report scanner output. Manual testing closes the gap on the categories scanners cannot reach (A04, A07, A09 in particular).
If you'd like our methodology mapped to the Top 10 categories your industry cares about most, request a scoping call and we'll send the relevant sections of our reporting framework.
Written by
Security Brigade Editorial Team