Most security teams have had this conversation at some point:
“We already have a WAF in front of the app. Aren’t we covered?”
It’s a fair question. WAFs are widely deployed, they show up in audits, and they’re often treated as a checkbox that proves web risk is being handled.
The problem is that modern application risk doesn’t live where most people think it does. The vulnerabilities that cause real incidents today aren’t always loud injection payloads hitting public endpoints. They’re often quiet workflow failures, permission gaps, authenticated abuse paths, and API behaviors that don’t look malicious until it’s too late.
A WAF helps. It’s not useless. But treating it as a substitute for runtime security validation is where teams get burned.
That’s why DAST still matters – and why buying a better DAST matters even when you already have perimeter controls.
Table of Contents
- The False Comfort of “We Have a WAF”
- What a WAF Actually Does (And What It Doesn’t)
- Sensitive Data Exposure via get_config
- Why WAF Bypass Isn’t Rare – It’s Normal
- The Vulnerabilities WAFs Don’t Catch
- Why “We’ll Tune the WAF” Usually Fails
- Where DAST Fits Differently
- Procurement Traps: How Vendors Blur the Lines
- What to Demand in a Modern DAST Tool
- Where Bright Fits (Without Replacing Your WAF)
- Buyer FAQ: WAF vs DAST in 2026
- Conclusion: A WAF Is a Shield – DAST Is Proof
The False Comfort of “We Have a WAF”
WAFs are easy to over-trust because they sit in a comforting place in the architecture: right at the edge.
They’re visible. They’re marketable. They give you dashboards. They block some bad traffic. They make leadership feel like there’s a wall between attackers and the application.
But attackers don’t approach applications like compliance teams do.
They don’t care that you have a WAF. They care about whether they can:
- Access data they shouldn’t
- Abuse a workflow
- Escalate privileges
- Extract sensitive information through APIs
- Trigger unintended behavior inside the app
And most of that happens after the perimeter.
The modern question isn’t “Do we have a WAF?”
It’s: Do we know what is exploitable in the running application?
That’s a different category of assurance.
What a WAF Actually Does (And What It Doesn’t)
A Web Application Firewall is fundamentally a traffic control layer.
It inspects inbound requests and tries to block patterns that resemble known attacks: injection payloads, suspicious headers, malformed inputs, automated scanners, things like that.
That’s useful.
But it’s also limited in ways buyers don’t always internalize.
A WAF does not:
- Understand business logic
- Validate authorization rules
- Reason about user roles
- Test workflows end-to-end
- Confirm whether a vulnerability is actually exploitable
- Tell you what happens inside authenticated sessions
Most WAFs operate with conservative tuning because false blocks are expensive. Blocking a real customer’s checkout request is not a theoretical problem. It’s a revenue loss.
So in practice, WAFs tend to block the obvious stuff and allow everything else.
Which is exactly where real risk lives.
Sensitive Data Exposure via get_config
If get_count shows how MCP can leak data by executing unsafe queries, get_config shows how it can leak secrets by simply returning too much.
In Broken Crystals, get_config is an admin-only tool, but that does not make it safe. The implementation proxies /api/config, and unless include_sensitive is explicitly set to false, it returns the full configuration object. In other words, sensitive output is the default behavior.
The example response in the repo includes an S3 bucket URL, a PostgreSQL connection string, and a Google Maps API key. That is exactly the kind of data security teams try to keep out of logs, frontends, test fixtures, and support tooling. Exposing it through MCP means any agent or workflow with admin-level MCP access can retrieve it in one structured call.
This is a common failure mode in AI integrations. Teams assume the main risk is unauthorized public access. But over-privileged internal access is often the more realistic problem. If an agent is granted broad admin permissions for convenience, or if an authenticated MCP session is compromised, a configuration tool like this can leak credentials, infrastructure locations, service URLs, and third-party keys immediately.
The lesson is straightforward: admin-only is not a substitute for output minimization. Sensitive config should never be the default payload of an MCP tool. If a tool must exist at all, it should return a tightly redacted view designed for that specific use case.
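To make the output-minimization point concrete, here is a minimal sketch of the safe pattern: an allow-listed, redacted-by-default config view. All names (`FULL_CONFIG`, `SAFE_FIELDS`, the `get_config` signature) are illustrative assumptions, not Broken Crystals’ actual implementation — the point is that sensitive output should require explicit opt-in, not explicit opt-out.

```python
# Hypothetical sketch: redact by default, allow-list what a tool may return.
# Values are fake; field names are illustrative only.

FULL_CONFIG = {
    "s3_bucket_url": "https://example-bucket.s3.amazonaws.com",
    "db_connection": "postgresql://app:secret@db:5432/app",
    "maps_api_key": "AIza-example-key",
    "feature_flags": {"new_checkout": True},
}

# Only fields this specific use case actually needs; everything else is dropped.
SAFE_FIELDS = {"feature_flags"}

def get_config(include_sensitive: bool = False) -> dict:
    """Return a redacted view by default; full config requires explicit opt-in."""
    if include_sensitive:
        # In a real system this branch would sit behind a separate
        # authorization check; it is shown here only to contrast defaults.
        return dict(FULL_CONFIG)
    return {k: v for k, v in FULL_CONFIG.items() if k in SAFE_FIELDS}
```

The inversion is the whole fix: the vulnerable pattern returns everything unless told not to, while this sketch returns almost nothing unless told to.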
Why WAF Bypass Isn’t Rare – It’s Normal
“WAF bypass” sounds like a headline. Like something advanced attackers do.
In reality, bypassing WAF protections is often just the default outcome of how modern applications work.
Attackers don’t need to smash through the front door if the building has side entrances.
Common bypass realities include:
- Payload obfuscation and encoding
- API-first attack surfaces where WAF rules are weak
- Authenticated abuse where traffic looks legitimate
- Multi-step workflows that don’t trigger signature rules
- Logic flaws that contain no malicious strings at all
The truth is uncomfortable:
WAFs block patterns. Attackers exploit behavior.
Those are not the same thing.
The Vulnerabilities WAFs Don’t Catch
This is where most AppSec programs get surprised.
The biggest gaps are not theoretical. They show up in real breach reports constantly.
Broken Access Control Doesn’t Trigger a WAF
One of the most damaging classes of vulnerabilities today is access control failure.
For example:
- User A can access User B’s invoice
- A patient portal leaks another patient’s records
- An internal admin API is reachable with normal credentials
Nothing about those requests looks malicious.
The payload is clean. The endpoint is valid. The session is real.
The vulnerability is in authorization logic, not syntax.
A WAF cannot tell whether someone should be allowed to see that data. It only sees traffic, not intent.
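A toy sketch makes the gap visible. Both functions below serve the same endpoint shape with the same clean payload — a WAF sees identical traffic either way. The object-level ownership check is the control that lives in application code, not at the perimeter. Names and data are invented for illustration.

```python
# Illustrative IDOR sketch: the request looks identical in both cases;
# only server-side authorization logic distinguishes them.

INVOICES = {
    101: {"owner": "user_a", "amount": 250},
    102: {"owner": "user_b", "amount": 990},
}

def get_invoice_vulnerable(session_user: str, invoice_id: int) -> dict:
    # Valid session, valid endpoint, no malicious strings -- nothing for a
    # signature rule to match, yet any user can read any invoice.
    return INVOICES[invoice_id]

def get_invoice_fixed(session_user: str, invoice_id: int) -> dict:
    invoice = INVOICES[invoice_id]
    # The missing step: verify the caller owns the object they requested.
    if invoice["owner"] != session_user:
        raise PermissionError("not authorized for this invoice")
    return invoice
```

This is exactly the class of check a DAST tool exercises by requesting another user’s object from an authenticated session and observing the response.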
Business Logic Abuse Looks Like Normal Usage
Logic flaws don’t announce themselves.
Attackers abuse workflows like:
- Skipping payment steps
- Replaying discount codes
- Manipulating onboarding sequences
- Exploiting race conditions in multi-step actions
These are not “bad payloads.”
They are valid actions chained in unexpected ways.
No perimeter rule set can reliably detect that without breaking legitimate users.
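Take discount-code replay as a concrete case. The second redemption request is byte-for-byte as “clean” as the first; only stateful server-side logic can reject it. The sketch below is an assumed minimal design (in-memory state, a flat 10% discount) purely to show where the control belongs.

```python
# Illustrative single-use discount guard. The replay contains no malicious
# payload, so the defense is application state, not perimeter inspection.

redeemed: set[tuple[str, str]] = set()  # (user_id, code) pairs already used

def apply_discount(user_id: str, code: str, total: float) -> float:
    key = (user_id, code)
    if key in redeemed:
        # At the wire level this request is identical to the first one.
        raise ValueError("code already redeemed")
    redeemed.add(key)
    return round(total * 0.9, 2)  # illustrative flat 10% discount
```

A production version would persist redemption state atomically (to close the race-condition variant mentioned above), but the shape of the control is the same.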
Authenticated Attacks Walk Through Every Time
A lot of security tooling is strongest before login.
But most real attackers don’t stay anonymous.
They:
- compromise credentials
- create accounts
- abuse partner access
- exploit low-privilege footholds
Once traffic is authenticated, it blends in.
WAFs do not magically become behavioral security engines inside user sessions.
APIs and GraphQL Reduce WAF Effectiveness
Modern applications are API-driven.
That means:
- fewer predictable endpoints
- more dynamic request shapes
- more complexity hidden behind a single gateway
GraphQL, especially, is a procurement trap. Vendors will claim “GraphQL support” when they really mean “we don’t break it.”
WAFs struggle here because signatures don’t map cleanly to schema-driven behavior.
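One way to see the mismatch: a benign query and an abusive deeply nested one both arrive as a POST to the same `/graphql` endpoint with a JSON body, so there is no URL or signature to key a rule on. The sketch below is a deliberately crude query-depth counter — one server-side, schema-aware-adjacent control. Real deployments would use a GraphQL-aware library; brace counting is illustrative only.

```python
# Crude illustration of GraphQL query-depth limiting -- a control that
# request signatures cannot express, since both queries below hit the
# same endpoint with the same outward shape.

def max_depth(query: str) -> int:
    """Approximate nesting depth by counting selection-set braces."""
    depth = deepest = 0
    for ch in query:
        if ch == "{":
            depth += 1
            deepest = max(deepest, depth)
        elif ch == "}":
            depth -= 1
    return deepest

BENIGN = "{ user { name } }"
ABUSIVE = "{ user { friends { friends { friends { name } } } } }"
```

Enforcing `max_depth(query) <= limit` in the GraphQL layer is the kind of behavioral control that has to live inside the application, which is also where a DAST tool has to test.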
Why “We’ll Tune the WAF” Usually Fails
This is one of the most common organizational delusions.
Teams assume that if something slips through, they can just tune rules harder.
In practice:
- Tuning is endless
- Ownership is unclear
- Strict rules break real users
- Loose rules provide false confidence
Most WAF deployments end up in a middle zone:
- Not aggressive enough to stop real abuse
- Too fragile to lock down further
- Still treated as a security control
That’s not a strategy. That’s drift.
Where DAST Fits Differently
DAST is not a perimeter filter.
DAST is runtime validation.
It answers a different question:
If an attacker interacts with this application, what can they actually exploit?
DAST tests the application the way attackers do:
- through real endpoints
- with real sessions
- across workflows
- observing responses
- validating exploit paths
DAST finds what WAFs can’t:
- access control failures
- authentication weaknesses
- workflow abuse
- API exposure
- multi-step exploitability
This is why modern teams don’t replace WAFs with DAST.
They use DAST to prove what still exists behind the WAF.
Procurement Traps: How Vendors Blur the Lines
When buyers evaluate AppSec tools, vendors love vague overlap.
Watch for these traps:
“Our WAF Includes Scanning”
Most WAF scanning is shallow, unauthenticated, and signature-based.
That is not application security validation.
“Our DAST Replaces Pen Testing”
No. DAST reduces gaps. It doesn’t replace adversarial testing.
“We Support Modern Apps”
Ask what that means:
- SPAs?
- OAuth flows?
- GraphQL?
- WebSockets?
- Multi-step authenticated workflows?
Marketing language is cheap. Capability isn’t.
“We Have Low False Positives”
Ask how they prove exploitability.
Noise reduction only matters if findings are validated.
What to Demand in a Modern DAST Tool
If you’re buying in 2026, the baseline questions should include:
- Can it scan authenticated applications reliably?
- Does it handle APIs, not just websites?
- Can it validate exploitability, not just detect patterns?
- Does it retest fixes automatically?
- Can it run continuously in CI/CD without disruption?
- Does it support production-safe scanning modes?
DAST procurement is no longer about “do you scan for the OWASP Top 10?”
It’s about whether you can operationalize runtime security without drowning engineers.
Where Bright Fits (Without Replacing Your WAF)
Bright’s approach is aligned with how risk actually shows up today: at runtime.
Instead of producing long theoretical lists, Bright focuses on validating what is exploitable in real application behavior.
That matters especially in environments where:
- WAFs are already deployed
- Applications are API-heavy
- AI-generated code increases unpredictability
- Teams need proof, not noise
Bright isn’t a perimeter replacement.
It’s the layer that helps teams answer: What’s still real behind the edge controls?
Buyer FAQ: WAF vs DAST in 2026
Does a WAF replace DAST?
No. A WAF blocks some inbound patterns. DAST validates runtime exploitability.
If we have a WAF, what’s the point of scanning?
Because most serious vulnerabilities aren’t blocked at the edge. They live in authorization, workflows, APIs, and authenticated behavior.
Can a WAF stop prompt injection or AI logic abuse?
Not reliably. These are semantic and behavioral issues, not signature payloads.
What’s the biggest mistake teams make in procurement?
Assuming overlap means redundancy. WAF and DAST solve different problems.
What should leadership care about?
Evidence. Knowing which vulnerabilities are exploitable and whether fixes actually worked.
Conclusion: A WAF Is a Shield – DAST Is Proof
WAFs are useful. They reduce noise at the perimeter. They block obvious attacks. They belong in modern architecture.
But they do not tell you what is exploitable inside the application.
And that’s the gap attackers live in.
The vulnerabilities that matter most today are rarely loud. They are behavioral, authenticated, workflow-driven, and API-native. They don’t look like classic payloads. They look like normal usage – until they aren’t.
That’s why DAST still matters. Not as a checkbox. Not as a report generator. As runtime proof.
If your security strategy stops at the edge, you will always discover risk too late. The teams that win are the ones that validate continuously, prioritize what’s real, and treat runtime behavior as the source of truth.
A WAF is a shield. DAST is the reality check. And in 2026, you need both.