
TL;DR: The average critical vulnerability takes 74 days to remediate. Attackers need 4 days to weaponize it. That 70-day gap is where breaches happen. The problem is not discovery -- modern pentesting finds plenty of vulnerabilities. The problem is what happens after the report is delivered: 45% of findings remain unresolved after 12 months, remediation costs escalate 10x to 100x the longer fixes are delayed, and nobody retests to confirm fixes actually work. Closing this remediation gap requires automated retesting, clear ownership assignment, and a fundamentally different approach to how pentest findings flow from report to resolution.
Security teams have spent decades getting better at finding vulnerabilities. Scanners are faster. Pentesters are more skilled. AI-powered platforms can discover attack paths that would have taken human testers weeks to identify. The industry has invested billions in detection capability, and it shows.
But finding vulnerabilities was never the hard part. Fixing them is.
The data paints a stark picture. According to research from the Ponemon Institute and Mandiant, the average time to remediate a critical vulnerability across enterprise environments is 74 days. For high-severity findings, that number stretches to 90 days or more. Meanwhile, Mandiant's threat intelligence data shows that the median time for threat actors to develop and deploy exploits for newly disclosed vulnerabilities has dropped to just 4 days -- a figure that has fallen steadily from 32 days in 2020.
That 70-day window between when attackers can weaponize a vulnerability and when it is actually fixed is where the vast majority of breaches occur. It is the most dangerous gap in enterprise security, and no amount of additional scanning or testing will close it. Only operational changes to how organizations handle, prioritize, and verify remediation will make a difference.
The Scale of the Remediation Problem
The numbers are not improving. A 2025 analysis by the Cyber Risk Alliance found that 45% of vulnerabilities identified through penetration testing remain unresolved 12 months after discovery. Not 12 days -- 12 months. Nearly half of confirmed, exploitable findings that a skilled tester demonstrated could be used to compromise systems are still present a full year later.
This is not a technology problem. It is an operational one. Organizations that commission penetration tests are investing in security. They are paying skilled testers to find weaknesses. They receive detailed reports documenting exactly what is broken and how to fix it. And then nearly half of those findings languish in backlogs, ticket queues, and spreadsheets until the next annual test reveals the same issues -- or worse, until an attacker exploits them.
The Edgescan 2025 Vulnerability Statistics Report found that the average enterprise maintains a backlog of over 100,000 open vulnerability findings at any given time. Even after triage and deduplication, the actionable remediation queue for a mid-size organization typically exceeds 2,000 findings. When everything is a priority, nothing is.
Why Findings Go Unresolved
Understanding why remediation fails requires examining the workflow that follows a penetration test. The typical sequence looks like this: a testing firm delivers a 150- to 200-page PDF report. The security team reviews it, triages findings by severity, and creates tickets in Jira, ServiceNow, or whatever system the organization uses. Those tickets are assigned to engineering teams who are already working on feature development, bug fixes, and operational tasks.
Several failure modes emerge from this workflow.
The Report-to-Action Problem
The report itself is often the first bottleneck. As we explored in our guide to pentest report quality, a 200-page PDF with 87 findings is overwhelming. Security teams spend days triaging. Engineering teams receive tickets with generic remediation guidance -- "upgrade the affected software" or "implement input validation" -- that does not translate into actionable development tasks. Without specific, context-aware fix instructions, each finding requires the engineer to research the vulnerability, understand the exploit, and design a remediation approach from scratch.
The cognitive overhead per finding is substantial. A senior engineer might spend 2 to 4 hours understanding and remediating a single complex vulnerability. Multiply that across 50 or 80 findings, and you have consumed months of a single engineer's capacity -- capacity that was already allocated to the product roadmap.
The Ownership Gap
Many findings fall into the gap between teams. A misconfigured cloud IAM policy might involve the infrastructure team, the DevOps team, and the application team that requested the overly permissive role. A cross-site scripting vulnerability in a shared component affects multiple application teams. When ownership is ambiguous, tickets bounce between teams, each assuming someone else will handle it.
Research from Veracode's State of Software Security report found that 70% of vulnerabilities that remain open after 90 days have been reassigned at least once. Each reassignment adds an average of 14 days to the remediation timeline as context is lost and the new assignee must re-familiarize themselves with the finding.
The Verification Void
Even when a fix is deployed, there is rarely a mechanism to confirm it actually works. The engineer closes the ticket, marks the vulnerability as resolved, and moves on. But did the fix actually eliminate the exploitable condition? Without retesting, nobody knows.
The assumption that deploying a patch or code change automatically resolves the vulnerability is dangerously wrong. Patches can be applied incorrectly. Code changes can introduce new paths to the same vulnerability. Configuration changes can be overwritten by automated deployment processes. Mandiant's incident response data shows that 23% of "remediated" vulnerabilities remain exploitable after the initial fix is deployed. Nearly one in four fixes does not work.
Without automated retesting, these false resolutions persist undetected until the next annual test -- or until an attacker finds them first.
The Cost of Delayed Remediation
The financial argument for faster remediation is compelling. The IBM/Ponemon Cost of a Data Breach Report has consistently shown that the cost of fixing a vulnerability escalates dramatically the longer it remains unaddressed.
A vulnerability caught and fixed during development costs an average of $500 to $1,000. The same vulnerability found in a staging or QA environment costs $5,000 to $10,000 to remediate, accounting for regression testing, deployment cycles, and quality assurance. Once in production, the cost jumps to $15,000 to $50,000, factoring in change management, emergency patching, and potential service disruption.
If that vulnerability is exploited before it is fixed, the costs become catastrophic. IBM's 2024 report places the average data breach cost at $4.88 million globally, with breaches in the United States averaging $9.36 million. The 74-day remediation window for critical vulnerabilities represents 74 days of exposure to losses that can exceed $10 million.
The math is unforgiving: every day a critical vulnerability remains unpatched, the expected cost of that vulnerability increases. The cost is not linear -- it follows an exponential curve as the probability of exploitation compounds over time. By day 30, the risk-adjusted cost of a critical unpatched vulnerability is estimated at 10x the cost of immediate remediation. By day 74, it has reached 50x to 100x.
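To make the compounding concrete, here is a minimal sketch of a risk-adjusted cost model. The daily exploitation probability, base remediation cost, and breach cost are illustrative assumptions chosen for the example, not a standard formula -- the point is the shape of the curve, not the exact multipliers.

```python
# Illustrative model (assumed parameters, not an industry formula):
# expected cost = remediation cost + P(exploited by day N) * breach cost,
# where exploitation probability compounds daily while the bug stays open.

def risk_adjusted_cost(base_cost: float, days_open: int,
                       daily_exploit_prob: float = 0.002,   # assumption
                       breach_cost: float = 4_880_000) -> float:
    p_exploited = 1 - (1 - daily_exploit_prob) ** days_open
    return base_cost + p_exploited * breach_cost

day_0 = risk_adjusted_cost(base_cost=25_000, days_open=0)
day_30 = risk_adjusted_cost(base_cost=25_000, days_open=30)
day_74 = risk_adjusted_cost(base_cost=25_000, days_open=74)
```

Even with conservative parameters, the expected cost at day 74 is a large multiple of the cost of immediate remediation, because the exposure term dwarfs the fix itself.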
"The most dangerous vulnerability in your environment is not the one you have not found yet. It is the one you found three months ago and still have not fixed."
The Retesting Problem
Traditional penetration testing treats remediation verification as an afterthought. The engagement model looks like this: the tester delivers the report, the client has 30 to 90 days to remediate, and then the tester comes back for a retest. This retest is typically scoped as a fraction of the original engagement -- 10% to 20% of the hours -- and focuses on verifying that the most critical findings have been addressed.
This model has three fundamental problems.
Scheduling delays. Coordinating the retest requires aligning the tester's availability, the client's remediation timeline, and the testing window. Weeks or months can pass between when a fix is deployed and when it is verified. During that window, the organization believes the vulnerability is resolved but has no confirmation.
Context loss. The tester who performed the original engagement may not be available for the retest. Even if they are, weeks have passed. The exploit proof-of-concept needs to be reconstructed, the testing environment needs to be re-established, and institutional knowledge about the specific configuration that made the vulnerability exploitable may have degraded.
Incomplete coverage. A retest that covers 20% of the original findings means 80% of remediated vulnerabilities are never verified. Organizations are flying blind on whether their fixes actually worked, relying on engineering judgment rather than empirical validation.
As discussed in our analysis of continuous pentesting versus annual assessments, the annual engagement model exacerbates this problem. If the retest happens once a year, the feedback loop between "fix deployed" and "fix verified" can stretch to months. Vulnerabilities that were incorrectly remediated sit in a false state of resolution, creating a dangerous illusion of security.
How Automated Retesting Closes the Loop
Automated retesting fundamentally changes the remediation verification model. Instead of scheduling a human tester to return weeks later, the original exploit proof-of-concept is stored and re-executed automatically after a fix is deployed. The results are immediate, objective, and comprehensive.
Here is how the workflow changes with automated retesting:
- Pentest identifies vulnerability. The platform documents the exact exploit chain: the target endpoint, the payload, the method, and the expected response that confirms exploitation.
- Finding is reported with remediation guidance. Context-specific fix instructions are provided, not generic advice.
- Engineering deploys fix. The ticket is updated with the fix details.
- Automated retest executes. The platform re-runs the original exploit against the fixed system. If the exploit succeeds, the finding remains open and engineering is notified immediately. If the exploit fails, the finding is marked as verified-resolved.
- Continuous regression monitoring. The platform periodically re-runs verified-resolved exploits to detect regressions -- cases where a previously fixed vulnerability has been reintroduced by a subsequent code change or configuration update.
This workflow eliminates the scheduling delays, context loss, and incomplete coverage that plague manual retesting. Every finding is verified. Every fix is validated. Regressions are detected within days, not months.
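The workflow above can be sketched as a small state machine. `run_exploit` and the `Finding` schema are hypothetical stand-ins for a real platform's exploit runner and data model -- assumptions for illustration, not an actual API.

```python
# Sketch of the retest loop: a finding only leaves "open" when its stored
# proof-of-concept fails against the fixed system. `run_exploit` is a
# placeholder for a real exploit runner.
from dataclasses import dataclass

@dataclass
class Finding:
    id: str
    exploit: dict          # stored PoC: endpoint, payload, method, expected response
    status: str = "open"   # open -> verified-resolved

def run_exploit(exploit: dict) -> bool:
    """Placeholder: re-execute the stored PoC; True means still exploitable."""
    return exploit.get("still_exploitable", False)

def retest(finding: Finding) -> Finding:
    if run_exploit(finding.exploit):
        finding.status = "open"          # fix failed: stays open, owner notified
    else:
        finding.status = "verified-resolved"
    return finding

def regression_sweep(verified: list[Finding]) -> list[Finding]:
    """Periodically re-run verified-resolved exploits to catch regressions."""
    return [f for f in verified if run_exploit(f.exploit)]
```

The key design choice is that resolution is proven by a failed exploit, not by a closed ticket: engineering judgment never overrides the empirical result.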
The Impact on Remediation Timelines
Organizations that implement automated retesting see dramatic improvements in remediation metrics. Internal data from organizations using continuous retesting programs shows:
- Mean time to remediation drops from 74 days to under 14 days for critical findings. The immediate feedback loop creates urgency and accountability that a 200-page PDF delivered once a year cannot match.
- Fix verification rate increases from 20% to 100%. Every finding is retested, not just the top 20%. The 23% of fixes that do not actually work are caught immediately rather than discovered at the next annual test.
- Remediation regression rate drops by 80%. Continuous regression monitoring catches reintroduced vulnerabilities within days. Without it, regressions can persist undetected for the entire interval between tests.
- Overall vulnerability backlog decreases by 60% within 6 months. The combination of faster remediation, verified fixes, and regression detection steadily reduces the outstanding finding count.
Building Accountability Through Visibility
Automated retesting also transforms the organizational dynamics around remediation. When remediation status is visible in real time -- not just in a quarterly report -- accountability changes.
Engineering managers can see exactly how many open findings their teams have, how long those findings have been open, and which fixes have failed verification. Security teams can report remediation metrics to leadership with confidence, because the data is empirically verified rather than self-reported. CISOs can track remediation SLAs and identify systemic bottlenecks -- is the database team consistently slower than the application team? Are cloud configuration findings taking longer than code-level fixes?
This visibility creates a feedback loop that drives continuous improvement. Teams that see their remediation metrics improving are motivated to maintain that trajectory. Teams that fall behind are identified early, enabling targeted support or resource allocation before the backlog becomes unmanageable.
Practical Steps to Close the Remediation Gap
Closing the remediation gap does not require replacing your entire security program. It requires targeted changes to the workflow between finding discovery and verified resolution.
Prioritize by Exploitability, Not Just Severity
CVSS scores are useful but insufficient for prioritization. A CVSS 9.8 vulnerability in an internal development system with no sensitive data is less urgent than a CVSS 7.2 vulnerability in a production system that handles financial transactions. Prioritization should account for exploitability (confirmed by pentesting), asset criticality, data sensitivity, and network exposure. As we discussed in our article on scanners versus pentesting, validated exploitability is the most reliable prioritization signal available.
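One way to operationalize this is a composite score. The weights and scales below are assumptions for the sketch -- any real scheme should be tuned to your environment -- but they show how validated exploitability and asset context can outweigh raw CVSS.

```python
# Illustrative prioritization score combining the signals named above.
# Weights are assumptions for this sketch, not an industry standard.

def priority_score(cvss: float, exploit_confirmed: bool,
                   asset_criticality: int,   # 1 (dev sandbox) .. 5 (prod, financial)
                   internet_exposed: bool) -> float:
    score = cvss                      # start from severity
    if exploit_confirmed:
        score *= 2.0                  # validated exploitability dominates
    score *= asset_criticality / 3    # scale by business impact
    if internet_exposed:
        score *= 1.5
    return round(score, 1)

# The CVSS 7.2 production finding outranks the CVSS 9.8 dev-system one:
prod = priority_score(7.2, exploit_confirmed=True, asset_criticality=5, internet_exposed=True)
dev = priority_score(9.8, exploit_confirmed=True, asset_criticality=1, internet_exposed=False)
```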
Assign Clear Ownership
Every finding must have a single owner -- not a team, not a distribution list, a named individual. That owner is responsible for remediation and accountable for the timeline. When findings are assigned to teams without individual ownership, they drift. Ticketing systems should enforce single-owner assignment and escalation rules that trigger when findings exceed their remediation SLA.
Set and Enforce Remediation SLAs
Define explicit timelines by severity: critical findings remediated within 7 days, high within 14, medium within 30, low within 90. These SLAs should be approved by leadership and tracked as KPIs alongside availability and incident metrics. Without defined timelines and executive backing, remediation will always lose the priority contest against feature development.
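Enforcing these timelines in a ticketing system reduces to a simple breach check, sketched below with the SLA values from this section (the escalation hook itself would live in your ticketing platform).

```python
# SLA breach check using the timelines above: critical 7, high 14,
# medium 30, low 90 days.
from datetime import date

SLA_DAYS = {"critical": 7, "high": 14, "medium": 30, "low": 90}

def sla_breached(severity: str, opened: date, today: date) -> bool:
    return (today - opened).days > SLA_DAYS[severity]

breaches = [s for s in SLA_DAYS if sla_breached(s, date(2025, 1, 1), date(2025, 1, 20))]
# 19 days elapsed: the critical (7-day) and high (14-day) SLAs are breached
```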
Implement Automated Retesting
Replace the manual retest model with continuous, automated verification. Every remediated finding should be retested within 24 hours of the fix deployment. Failed retests should automatically reopen the ticket and notify the owner. Verified fixes should be continuously monitored for regression.
Measure and Report Remediation Metrics
Track mean time to remediation (MTTR), fix verification rate, regression rate, and backlog age distribution. Report these metrics to leadership alongside other operational KPIs. What gets measured gets managed, and remediation metrics have historically been undermeasured and underreported.
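These metrics fall out of a findings export with a few aggregations. The field names below are hypothetical -- adapt them to whatever schema your ticketing or pentest platform actually emits.

```python
# Computing the remediation metrics listed above from a findings export.
# Field names are hypothetical; map them to your own system's schema.
from statistics import mean

findings = [
    {"days_to_fix": 10, "fix_verified": True, "regressed": False},
    {"days_to_fix": 25, "fix_verified": True, "regressed": True},
    {"days_to_fix": 60, "fix_verified": False, "regressed": False},
]

mttr = mean(f["days_to_fix"] for f in findings)                       # mean time to remediation
verification_rate = sum(f["fix_verified"] for f in findings) / len(findings)
regression_rate = sum(f["regressed"] for f in findings) / len(findings)
```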
The Business Case for Closing the Gap
The remediation gap is not just a security problem -- it is a business risk problem. Every unresolved finding represents quantifiable exposure that can be expressed in financial terms.
Consider the scenario: a penetration test discovers 15 critical findings. At a 74-day average remediation timeline, those findings represent 74 days of exposure to potential breaches costing an average of $4.88 million each. Reducing MTTR to 14 days reduces the exposure window by 81%, proportionally reducing the risk-adjusted expected loss.
For organizations carrying cyber insurance, demonstrable remediation speed is becoming a factor in premium calculations. Insurers are increasingly requesting evidence of not just testing but remediation follow-through. Organizations that can demonstrate sub-14-day MTTR for critical findings are positioned for better coverage terms and lower premiums.
The investment in closing the remediation gap -- automated retesting, process changes, accountability mechanisms -- is modest compared to the cost of a single breach. But the impact compounds over time. Faster remediation means fewer open vulnerabilities at any given moment. Fewer open vulnerabilities mean a smaller attack surface. A smaller attack surface means lower breach probability. And lower breach probability means reduced insurance costs, reduced regulatory risk, and reduced likelihood of the catastrophic event that every CISO loses sleep over.
Finding vulnerabilities is important. Fixing them is what actually reduces risk. The 74-day remediation gap is the most actionable improvement opportunity in most organizations' security programs, and closing it starts with changing how findings flow from report to resolution and how fixes are verified once deployed.
Frequently Asked Questions
How long does it take to remediate a critical vulnerability?
The industry average is 74 days for critical vulnerabilities. However, attackers typically need only 4 days to exploit a newly discovered vulnerability. This 70-day gap between discovery and remediation is where most breaches occur. Organizations with automated retesting programs reduce this to under 14 days.
Why do pentest findings go unresolved?
45% of discovered vulnerabilities remain unresolved after 12 months. Common reasons include: overwhelming report volumes without clear prioritization, lack of remediation ownership, no automated retesting to verify fixes, competing priorities from development teams, and the assumption that applying a patch automatically resolves the vulnerability.
What is automated retesting in penetration testing?
Automated retesting re-runs the original exploit proof-of-concept after a fix is applied to verify the vulnerability is actually resolved. Unlike manual retesting, which requires re-engaging the original tester and re-establishing context, automated retesting runs continuously with no coordination overhead, confirming fixes work and detecting regressions.
