The short version
- Vulnerability assessments identify and prioritize known weaknesses without exploiting them.
- Output: findings ranked by severity, exploitability, and business risk.
- Run continuously via automated scanners, with periodic analyst-led review for depth.
- Differs from pen testing — assessments identify, pen tests exploit.
The longer explanation
What a vulnerability assessment covers
A complete assessment looks at several layers of the client's environment:
- Network. Exposed services, unpatched systems, misconfigured firewalls, weak protocols.
- Endpoint. Operating system patches, third-party software versions, local configuration, endpoint protection status.
- Identity. Weak passwords, dormant accounts, over-privileged access, MFA coverage gaps.
- Web and API. OWASP Top 10 categories, authentication flaws, broken access control, SSRF, deserialization.
- Cloud. Open storage buckets, over-permissioned IAM roles, unencrypted data, missing logging, exposed databases.
- Code and dependencies. Known CVEs in third-party libraries, static analysis findings, secrets in repositories.
- Configuration. CIS benchmarks, vendor hardening guides, deviations from known-good baselines.
The assessment's value comes from covering the layers the client's adversary actually targets, not just the ones easiest to scan. The sketch below shows one way a program might keep that coverage visible: tag every finding with the layer it came from.
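A minimal sketch, not a prescription: a Python data model for findings with an explicit layer tag, so coverage gaps show up in reporting. Every class and field name here is hypothetical rather than any particular scanner's schema.

```python
from dataclasses import dataclass
from enum import Enum

class Layer(Enum):
    """The assessment layers listed above."""
    NETWORK = "network"
    ENDPOINT = "endpoint"
    IDENTITY = "identity"
    WEB_API = "web_api"
    CLOUD = "cloud"
    CODE = "code"
    CONFIGURATION = "configuration"

@dataclass
class Finding:
    """One vulnerability finding, tagged with the layer it came from."""
    id: str
    layer: Layer
    title: str
    cvss: float            # base severity score, 0.0 to 10.0
    asset: str             # affected host, account, bucket, or repo
    internet_facing: bool  # reachability hint, used later in prioritization
```

Grouping findings by `layer` makes it obvious when an assessment has covered what is easy to scan (network, endpoint) and skipped what the adversary actually targets (identity, cloud).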
Automated scans vs analyst review
Automated scanners produce high volumes of findings fast. They also produce false positives, miss business-logic flaws, and flag findings that are technically true but operationally irrelevant. An analyst-reviewed assessment filters the noise, validates the real findings, and contextualizes them against the client's threat profile.
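To make that filtering step concrete, here is a hedged sketch of a pre-review triage pass, assuming findings arrive as plain dictionaries from a scanner export. The field names (`check_id`, `asset`, `cvss`) and the severity floor are illustrative assumptions, not a real scanner's output format.

```python
def triage(raw_findings, false_positive_ids, severity_floor=7.0):
    """Pre-filter raw scanner output before analyst review.

    Drops findings an analyst has already ruled false positive, drops
    findings below the severity floor, and deduplicates repeats of the
    same check on the same asset. Everything that survives still needs
    human validation; the goal is to spend analyst time on context,
    not on deduplication.
    """
    seen = set()
    queue = []
    for f in raw_findings:
        if f["id"] in false_positive_ids:
            continue  # previously validated as a false positive
        if f["cvss"] < severity_floor:
            continue  # below the review threshold; automation tracks it
        key = (f["check_id"], f["asset"])
        if key in seen:
            continue  # duplicate of a finding already in the queue
        seen.add(key)
        queue.append(f)
    return queue
```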
For a mature security program, the typical cadence is the following (sketched as configuration after the list):
- Continuous automated scanning on the entire attack surface.
- Weekly or monthly analyst review of new findings above a severity threshold.
- Quarterly comprehensive assessment with analyst-led deep dive and executive reporting.
- Targeted assessment after any material change — new application, new third-party, major infrastructure shift.
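That cadence is easiest to keep honest when it lives in configuration rather than on a calendar. A minimal sketch, assuming a hypothetical scheduler that reads a Python dictionary; every key and value here is illustrative, and none of the names refer to a real product.

```python
# Illustrative cadence definition; the entries mirror the list above and
# would map onto whatever scheduler or orchestration tooling the program
# actually runs.
SCAN_CADENCE = {
    "automated_scan": {
        "scope": "entire_attack_surface",
        "frequency": "continuous",
    },
    "analyst_review": {
        "scope": "new_findings",
        "frequency": "weekly",        # or monthly, per program maturity
        "min_cvss": 7.0,              # severity threshold for review
    },
    "comprehensive_assessment": {
        "scope": "full_environment",
        "frequency": "quarterly",
        "deliverable": "executive_report",
    },
    "targeted_assessment": {
        "scope": "changed_systems",
        "trigger": "material_change",  # new app, new third party, infra shift
    },
}
```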
Prioritization that actually helps
Raw CVSS scores are a starting point, not the whole answer. A CVSS 9.8 on an isolated dev server matters less than a CVSS 7.2 on an internet-facing system with access to PII. Good prioritization combines the factors below; a scoring sketch follows the list.
- Severity (CVSS or equivalent).
- Exploitability (is an exploit known to exist? is it being exploited in the wild?).
- Business risk (what data or systems does this vulnerability grant access to?).
- Reachability (is the vulnerable system actually exposed?).
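One way to combine those four factors is a simple composite score. The sketch below is an illustration, not a published formula: the multipliers are arbitrary assumptions chosen so the dev-server example above comes out right, and any real program would tune them.

```python
def risk_score(cvss, exploited_in_wild, exploit_public,
               reachable, business_impact):
    """Composite prioritization score; all weights are illustrative.

    cvss:              base severity, 0.0 to 10.0
    exploited_in_wild: active exploitation reported (e.g. the CVE appears
                       in CISA's Known Exploited Vulnerabilities catalog)
    exploit_public:    a working exploit is publicly available
    reachable:         the vulnerable system is actually exposed
    business_impact:   0.0 to 1.0, what the vulnerability grants access to
    """
    score = cvss
    if exploited_in_wild:
        score *= 1.5
    elif exploit_public:
        score *= 1.2
    score *= 1.0 if reachable else 0.3  # unreachable findings drop sharply
    score *= 0.5 + business_impact      # PII or crown-jewel access raises rank
    return round(score, 1)

# The internet-facing CVSS 7.2 with PII access outranks the isolated
# CVSS 9.8 dev box, matching the example above:
print(risk_score(7.2, True, True, True, 1.0))     # -> 16.2
print(risk_score(9.8, False, False, False, 0.2))  # -> 2.1
```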
The output is a ranked list a security engineering team can act on, not a 200-page PDF nobody reads.
Remediation and re-test
A vulnerability assessment is not complete when the report lands. The findings need to be remediated, and the remediation needs to be verified. Good programs track findings through a fix cycle (open, assigned, in-remediation, resolved) and re-scan to confirm. Critical findings get explicit re-validation; lower-severity findings often verify via the next automated scan cycle.
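That fix cycle is effectively a small state machine, and encoding it as one prevents findings from silently jumping to resolved. A minimal sketch, assuming the four states above; the transition rules and function names are illustrative.

```python
from enum import Enum

class Status(Enum):
    OPEN = "open"
    ASSIGNED = "assigned"
    IN_REMEDIATION = "in_remediation"
    RESOLVED = "resolved"

# Legal transitions in the fix cycle; a finding cannot jump straight
# from open to resolved without passing through remediation.
TRANSITIONS = {
    Status.OPEN: {Status.ASSIGNED},
    Status.ASSIGNED: {Status.IN_REMEDIATION},
    Status.IN_REMEDIATION: {Status.RESOLVED, Status.ASSIGNED},  # can bounce back
    Status.RESOLVED: set(),
}

def advance(current, target, is_critical, revalidated=False):
    """Move a finding one step through the fix cycle.

    Critical findings require explicit re-validation before they can be
    marked resolved; lower-severity findings are allowed to close and be
    confirmed by the next automated scan cycle.
    """
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition: {current.value} -> {target.value}")
    if target is Status.RESOLVED and is_critical and not revalidated:
        raise ValueError("critical finding needs re-validation before resolve")
    return target

# Example: a critical finding closes only with a confirming re-scan.
status = advance(Status.IN_REMEDIATION, Status.RESOLVED,
                 is_critical=True, revalidated=True)
```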
Compliance cross-references
Most frameworks require vulnerability assessment on a defined cadence:
- PCI DSS requires quarterly external and internal scans, plus annual penetration testing.
- HIPAA Security Rule requires periodic risk analysis; vulnerability assessment feeds it.
- SOC 2 expects vulnerability management as part of the Common Criteria security controls.
- ISO 27001 requires technical vulnerability management (Annex A.12.6 in the 2013 edition; control 8.8 in the 2022 edition).
Running assessments on the right cadence is the most common way compliance programs demonstrate sustained control operation.
How Thoughtwave approaches this
Our cybersecurity practice runs vulnerability assessment programs as standalone engagements and as part of broader managed SOC and GRC programs. We pair automated scanning with analyst review, prioritize by business risk, and track findings through remediation with re-validation.
For deeper context, see our Cybersecurity Solutions service and the managed SOC service.