The short version
- SAST analyzes source code without running it.
- DAST tests a running application by sending requests.
- The two are complementary; mature programs run both in CI/CD.
- SCA (dependency scanning) and IAST (runtime instrumentation) round out the modern app-sec stack.
The longer explanation
SAST in depth
Static application security testing reads the source code (or compiled bytecode) and looks for patterns associated with security flaws. The scanner does not need the application to run.
What SAST finds well:
- Hardcoded credentials and secrets.
- SQL, NoSQL, and command injection patterns where user input reaches a query or shell.
- Insecure cryptography (weak hashes, weak ciphers, hardcoded keys).
- Path traversal patterns.
- Unsafe deserialization.
- Cross-site scripting patterns in server-rendered templates.
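The injection case in the list above is the classic SAST win, because the flaw is visible in the code itself. A minimal Python sketch (table and column names are illustrative) of the string-concatenation pattern a scanner flags, next to the parameterized form that passes:

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # FLAGGED by SAST: user input is concatenated into the SQL text.
    # A username like "x' OR '1'='1" rewrites the query and returns every row.
    query = "SELECT id, email FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver binds the value separately,
    # so the taint path from input to query structure is broken.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()
```

No runtime state is needed to spot the first function; the dangerous data flow is right there in the source, which is exactly the territory where SAST shines.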
What SAST struggles with:
- Runtime configuration (because it never sees runtime).
- Authentication flaws that depend on full system behavior.
- Business-logic flaws.
- False-positive noise in large codebases; good tuning takes material effort.
Popular enterprise SAST tools include Checkmarx, Fortify, Veracode, and Semgrep, a widely adopted open-source option.
DAST in depth
Dynamic application security testing runs against a live application — typically in staging — and sends crafted requests to probe behavior. It finds flaws that depend on runtime state.
What DAST finds well:
- Broken authentication and session management.
- Broken access control (horizontal and vertical privilege escalation).
- Server-side injection reachable through actual requests.
- Security misconfigurations (headers, TLS, cookie flags).
- Reflected and stored XSS.
- Information disclosure in error responses.
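The misconfiguration checks in the list above are among the simplest things a DAST scanner does: inspect each HTTP response for missing security headers and weak cookie flags. A small sketch of that check (the required-header set and flag names are illustrative; real scanners check far more):

```python
# Headers a scanner commonly expects on responses; illustrative subset.
REQUIRED_HEADERS = {
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
}

COOKIE_FLAGS = ("Secure", "HttpOnly")

def audit_response_headers(headers: dict[str, str]) -> list[str]:
    """Return DAST-style findings for one HTTP response's headers."""
    findings = []
    present = {name.title() for name in headers}
    for required in REQUIRED_HEADERS:
        if required.title() not in present:
            findings.append(f"missing header: {required}")
    cookie = headers.get("Set-Cookie", "")
    if cookie:
        for flag in COOKIE_FLAGS:
            if flag.lower() not in cookie.lower():
                findings.append(f"cookie missing flag: {flag}")
    return findings
```

The point is that this check only makes sense against a live response: the headers come from the deployed server and its configuration, which static analysis never sees.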
What DAST struggles with:
- Code-level flaws not reachable in the paths it crawls.
- Complex business logic.
- Authenticated flows without careful credential management.
- Long scan times on large applications.
Popular enterprise DAST tools include Burp Suite Enterprise, Invicti (formerly Netsparker), Acunetix, and OWASP ZAP (open source).
SCA and IAST fill gaps
SCA (software composition analysis) scans dependencies for known vulnerabilities. Most enterprise code pulls in hundreds or thousands of third-party packages; SCA makes sure the known-vulnerable versions do not ship. Snyk, Dependabot, GitHub Advanced Security, and Semgrep's supply-chain features are common.
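At its core, SCA is a lookup: compare each pinned dependency version against advisory data. A toy sketch of that comparison (the advisory table and version ranges here are made up; real tools consume feeds such as the OSV database and handle far messier version schemes):

```python
# Hypothetical advisory data: package -> list of (introduced, fixed) ranges.
# A version is vulnerable if introduced <= version < fixed.
ADVISORIES = {
    "requests": [("0.0.0", "2.31.0")],
    "pyyaml": [("0.0.0", "5.4.0")],
}

def parse(version: str) -> tuple[int, ...]:
    # Simplistic numeric-only parsing; real SCA handles pre-releases etc.
    return tuple(int(part) for part in version.split("."))

def vulnerable(pkg: str, version: str) -> bool:
    for introduced, fixed in ADVISORIES.get(pkg.lower(), []):
        if parse(introduced) <= parse(version) < parse(fixed):
            return True
    return False

def scan(lockfile: dict[str, str]) -> list[str]:
    """Return 'name==version' for every known-vulnerable pinned dependency."""
    return [f"{n}=={v}" for n, v in lockfile.items() if vulnerable(n, v)]
```

Because it is a pure data lookup against the lockfile, SCA is cheap enough to run on every push, which is why it sits earliest in the pipeline described below.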
IAST (interactive application security testing) instruments the running application with an agent that observes both the request flow and the internal code paths. It combines DAST-style runtime observation with SAST-style internal visibility, and so finds flaws that neither SAST nor DAST alone would catch; the trade-off is that installing and maintaining the agent is operational overhead.
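The agent idea can be sketched in a few lines: wrap dangerous sink functions so that, at runtime, the instrumentation notices when request-derived data reaches them. Everything here (the `Tainted` marker, the `sink` decorator, the report list) is illustrative of the concept, not a real agent:

```python
import functools

reports: list[str] = []

class Tainted(str):
    """String subclass marking values that arrived from a request."""

def sink(name: str):
    """Decorator the 'agent' applies to dangerous functions (queries, shells)."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            # Runtime observation: report when tainted data reaches the sink.
            if any(isinstance(a, Tainted) for a in args):
                reports.append(f"tainted input reached sink {name!r}")
            return fn(*args, **kwargs)
        return inner
    return wrap

@sink("sql.execute")
def execute(query: str) -> str:
    return f"ran: {query}"

# Simulate a request handler: input from the wire is marked tainted.
user_input = Tainted("1 OR 1=1")
execute(user_input)
```

The observation happens while real (or test) traffic exercises the application, which is why IAST sees both the external request and the internal code path it triggers.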
How they fit in CI/CD
A good modern application security pipeline:
- Commit. SCA runs on every push. Fast.
- Pull request. SAST runs on the diff or the full repo. Findings above a severity threshold block the merge.
- Merge to main. Full SAST and SCA re-run.
- Staging deploy. DAST runs against the staging environment on schedule (nightly or per deploy). Findings triaged into the backlog.
- Production deploy. Runtime protection (WAF, RASP) plus monitoring; active scanning against production is typically avoided.
The rule is: cheap, fast feedback close to the developer (SCA and SAST on the PR), and comprehensive, slower feedback on longer cadences (full DAST, IAST). Everything below a severity threshold files a ticket and does not block; above the threshold, the build fails.
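The gating rule above reduces to a few lines of pipeline logic. A sketch (the 0-4 severity scale and threshold value are illustrative, not any specific tool's schema):

```python
# Illustrative severity scale: 0=info, 1=low, 2=medium, 3=high, 4=critical.
BLOCK_AT = 3  # "high" and above fail the build

def gate(findings: list[dict]) -> tuple[bool, list[dict]]:
    """Return (build_passes, findings_to_ticket) per the threshold rule."""
    blocking = [f for f in findings if f["severity"] >= BLOCK_AT]
    ticketed = [f for f in findings if f["severity"] < BLOCK_AT]
    return (not blocking, ticketed)
```

In practice the threshold is tuned per repository: too low and developers start ignoring red builds, too high and real flaws ship while sitting in the backlog.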
Manual pen testing still matters
SAST and DAST catch a lot but not everything. Business-logic flaws, complex authorization bugs, and attack chains that require creative thinking are still human work. Mature programs run manual pen tests quarterly or per-major-release on top of the automated stack.
How Thoughtwave approaches this
Our cybersecurity practice integrates SAST, DAST, SCA, and IAST into client CI/CD pipelines as part of broader application security programs. We tune scanners to the client's language stack and frameworks, triage and prioritize findings, and run manual pen testing on top for flaws the automation misses.
For deeper context, see our Cybersecurity Solutions service and the penetration testing answer.