Beyond 5%: How to Set Realistic Failure Rate Targets with the ABC Method & Risk-Based Benchmarking
Stop using a single failure rate target. Learn the ABC method to set dynamic resilience goals based on a departmental risk coefficient and NIS2 standards.

A single, company-wide failure rate target, like the industry-standard 5%, is often a superficial indicator that hides critical vulnerabilities in your security posture. True cyber resilience isn't about hitting a flat number across the board; it is about context. A 5% failure rate in a warehouse might signal a minor awareness gap, but that same 5% in your Finance department represents an impending operational crisis.
For CISOs and HR directors in regulated industries, this uniform approach is no longer just ineffective; it is a compliance trap. Under the NIS2 directive, auditors now demand evidence of risk-based controls, not just generic participation statistics. They want to see that you are managing human risk dynamically through continuous benchmarking, allocating defense resources where the potential impact is highest.
This guide moves beyond static benchmarking. We will introduce the ABC Method to help you set realistic, segmented resilience goals—transforming your strategy from a flat failure rate to a measurable, department-specific improvement rate.
The Problem with Flat Benchmarking
Flat benchmarking is fundamentally flawed because it assumes all employees pose an equal threat, applying a generic failure rate target to everyone regardless of risk. A uniform failure rate target completely ignores the "impact" variable of the risk equation. A click from a privileged user can trigger a ransomware event that halts operations, while a click from a low-access user might only result in a re-imaged laptop. Effective security metrics must weigh the potential blast radius of the error, not just the frequency.
While industry reports provide a necessary baseline for your sector's average Phish-prone Percentage, relying on these flat numbers for your internal failure rate target is dangerous. They tell you where you stand against peers, but they fail to account for your specific internal risk exposure.
Why a 5% Click Rate in Finance is a Catastrophe
Not all failure rates are created equal. When setting a failure rate target, you must distinguish between "High Privilege" and "Low Privilege" users. A 5% failure rate in a department with access to SWIFT transfers or sensitive PII is an operational crisis, whereas the same rate in a department with limited access is manageable.
Risk Scenario: The Tale of Two Clicks
- The Finance Director (High Risk): Has payment authority and admin access. At a 5% failure rate, 1 in 20 phishing emails could result in a massive BEC (Business Email Compromise) incident. Impact: P1 (Critical).
- The Marketing Intern (Low Risk): Has no payment authority and limited network access. If they maintain the same 5% rate, the consequence is likely an endpoint clean-up. Impact: P3 (Minor).
Treating these two scenarios with the same failure rate target is a failure of risk management strategy.
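The "Tale of Two Clicks" can be made concrete with a back-of-the-envelope expected-loss calculation. This is a minimal sketch; the incident cost figures below are purely illustrative assumptions, not industry data:

```python
# Illustrative only: same click rate, very different expected loss.
# Incident cost figures are assumptions for the sake of the example.

def expected_loss(click_rate: float, incident_cost: float) -> float:
    """Expected loss = probability of failure * cost of the resulting incident."""
    return click_rate * incident_cost

# Both users fail at the same 5% rate, but the blast radius differs:
finance_director = expected_loss(click_rate=0.05, incident_cost=4_000_000)  # BEC incident
marketing_intern = expected_loss(click_rate=0.05, incident_cost=2_000)      # endpoint re-image

print(f"Finance Director expected loss: {finance_director:,.0f}")  # 200,000
print(f"Marketing Intern expected loss: {marketing_intern:,.0f}")  # 100
```

Same frequency, a 2,000x difference in expected impact. That gap is what a flat target hides.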
Moving from "Absolute Rate" to "Improvement Rate"
For leadership, the most valuable KPI is often the delta—the change over time—rather than the absolute number. A high-risk department that moves from a 20% click rate to 10% has achieved a massive reduction in organizational risk, even if they haven't hit an arbitrary "5%" goal.
Instead of obsessing over a static number, focus on the Improvement Rate. This demonstrates the ROSI (Return on Security Investment) of your training and awareness programs to the Board. When you benchmark against your sector, look for trends in improvement rather than just a flat ranking. This shift allows you to prioritize resources where they are needed most, rather than chasing a vanity metric that doesn't correlate with actual security.
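The Improvement Rate is simply the relative reduction in click rate between two measurement periods. A minimal sketch:

```python
def improvement_rate(previous_rate: float, current_rate: float) -> float:
    """Relative reduction in click rate between two periods (e.g. quarters)."""
    if previous_rate == 0:
        return 0.0  # nothing to improve from
    return (previous_rate - current_rate) / previous_rate

# A high-risk department dropping from 20% to 10% shows a 50% improvement,
# even though it has not reached an arbitrary 5% absolute target.
print(improvement_rate(0.20, 0.10))  # 0.5
```

Reported quarter-over-quarter, this delta is the number that tells the Board whether risk is actually shrinking.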
The ABC Method: A Practical Model for Risk Segmentation
The ABC Method is a strategic framework that replaces generic security awareness programs with targeted defense layers. Instead of treating every employee as an equal threat, this model segments your workforce into three distinct categories—Critical Risk, High Exposure, and Baseline Risk—allowing you to allocate 80% of your defensive resources to the 20% of users who represent the greatest danger to your organization.
To comply with NIS2 and manage human risk effectively, you must move beyond job titles and look at impact. You cannot protect the reception desk with the same intensity as the server room. By categorizing users based on access privileges and external exposure, you transform your security program from a passive checklist into an active risk management engine.
1. Defining Group A (Critical Risk)
These are your "Very Attacked People" (VAPs) and high-privilege users. This group includes the C-Suite, IT Administrators, and the Finance department.
- Why they matter: They hold elevated privileges: access to critical infrastructure, sensitive data, or large funds.
- Risk Profile: Catastrophic. A single error here is a P1 incident.
2. Defining Group B (High Exposure)
These are your high-volume communicators who operate primarily through email. This group typically includes Sales, HR, Marketing, and Procurement.
- Why they matter: Their email addresses are often public, and their job requires opening attachments from strangers. They face the highest quantity of threats.
- Risk Profile: High Frequency. They are the most likely entry point for initial compromise.
3. Defining Group C (Baseline Risk)
This covers the remainder of your workforce. While their access is limited, they still represent a potential vulnerability if left completely untrained.
- Why they matter: They need to maintain general cyber hygiene to prevent low-level incidents.
- Risk Profile: Standard. The goal here is efficiency and automated maintenance.
Once you have identified these groups, you can structure an advanced benchmarking and segmentation plan that applies different awareness program intensities and failure rate targets to each tier.
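The three-tier segmentation above can be sketched as a simple classification rule. This is a deliberately simplified model; in practice the attributes would come from your IdP and HR data:

```python
from dataclasses import dataclass

@dataclass
class Employee:
    name: str
    has_privileged_access: bool  # payments, admin rights, critical data silos
    external_exposure: bool      # public inbox, handles unsolicited mail daily

def abc_group(e: Employee) -> str:
    """Assign an employee to the ABC tiers described above."""
    if e.has_privileged_access:
        return "A"  # Critical Risk: C-Suite, IT admins, Finance
    if e.external_exposure:
        return "B"  # High Exposure: Sales, HR, Marketing, Procurement
    return "C"      # Baseline Risk: remainder of the workforce

print(abc_group(Employee("CFO", True, True)))         # A
print(abc_group(Employee("Recruiter", False, True)))  # B
print(abc_group(Employee("Analyst", False, False)))   # C
```

Note that privilege trumps exposure: a highly exposed user with admin rights belongs in Group A, not Group B.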
Calculating the Risk Coefficient: The "Why" Behind the Group
The risk coefficient bridges the gap between a user's potential for damage and their likelihood of causing it. It moves beyond static job titles to a dynamic score that adapts the classic cybersecurity equation (Risk = Probability * Impact) to the human element:
Risk Coefficient = (Behavioral State) * (Access + Privilege)
In this model, Impact is defined by Access and Privilege (the potential blast radius), while Probability is determined by the Behavioral State. A high-privilege user (High Impact) who is well-rested represents a manageable risk; that same user under extreme stress (High Probability) becomes an imminent security liability.
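Expressed as code, the coefficient is a straightforward product. The scoring scales below (0-5 for access and privilege, a baseline of 1.0 for behavioral state) are illustrative assumptions; calibrate them to your own environment:

```python
def risk_coefficient(behavioral_state: float, access: int, privilege: int) -> float:
    """Risk Coefficient = (Behavioral State) * (Access + Privilege).

    behavioral_state: probability-like factor; 1.0 = baseline,
                      higher under stress or fatigue (assumed scale).
    access, privilege: impact scores, assumed 0-5 each.
    """
    return behavioral_state * (access + privilege)

# Same high-privilege user, different behavioral states:
rested   = risk_coefficient(behavioral_state=1.0, access=5, privilege=5)
stressed = risk_coefficient(behavioral_state=2.5, access=5, privilege=5)
print(rested)    # 10.0 -> manageable
print(stressed)  # 25.0 -> imminent liability
```

The impact side of the score is relatively static; the behavioral side is what your platform should re-evaluate continuously.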
Static Factors: Access, Privilege, and Role
While integrating with your Identity Provider (IdP) like Azure AD or Okta is essential for operational efficiency (automating user provisioning), it is not a substitute for risk profiling. Do not rely solely on the "Department" label; you must look beyond the job title to assess the actual reality of their exposure:
- Access: Does the user have entry to critical data silos, regardless of their official department?
- Privilege: Can they execute payments, change configurations, or grant access to others?
Dynamic Factors: The Burnout & Stress Multiplier
This is the modern, critical variable that most compliance checklists miss. Cybersecurity is a cognitive task; when cognitive load increases due to stress or fatigue, vigilance collapses.
A burned-out IT admin is your #1 threat. They are already in a high-risk group (Group A) due to their privileges. When you add the "Burnout Multiplier," their risk coefficient escalates exponentially. They are no longer just "IT Staff"; they are a tired employee with elevated administrative rights, prone to skipping verification steps to save time or making mistakes.
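A hypothetical sketch of how a burnout multiplier could escalate that admin past an intervention threshold; both the multiplier value and the threshold are assumptions you would tune from your own telemetry:

```python
BURNOUT_MULTIPLIER = 2.0       # assumed scaling factor for sustained stress signals
INTERVENTION_THRESHOLD = 15.0  # assumed cutoff for immediate review

def adjusted_risk(base_coefficient: float, burned_out: bool) -> float:
    """Apply the burnout multiplier to a user's static risk coefficient."""
    return base_coefficient * (BURNOUT_MULTIPLIER if burned_out else 1.0)

it_admin_base = 10.0  # already Group A from access + privilege alone
print(adjusted_risk(it_admin_base, burned_out=False))  # 10.0 -> monitor
print(adjusted_risk(it_admin_base, burned_out=True))   # 20.0 -> exceeds threshold, intervene
```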
Setting Your Targets: From Failure Rate to Dynamic Resilience Goals
To set realistic security goals, you must abandon the single company-wide metric in favor of segmented targets. Effective resilience means assigning a strict failure rate target (below 2%) to your critical staff (Group A), while prioritizing a steady Improvement Rate of over 30% for the general workforce (Groups B and C). This tiered approach aligns with NIS2 requirements, proving to auditors that you are managing risk intensity rather than just satisfying minimum compliance requirements.
A static failure rate target ignores the reality of human behavior. By shifting focus to "Dynamic Resilience," you acknowledge that while you cannot eliminate error entirely, you can drastically reduce the time it takes to detect and report it.
Targets for Group A: Focus on High Reporting, Near-Zero Failure
For your "Critical Risk" users (Finance, C-Suite, IT Admins), the margin for error is non-existent.
- Failure Rate Target: < 2%. They must be nearly impervious to standard attacks.
- Primary KPI: Report Rate > 85%. Your goal is to turn these high-access users into your strongest defensive sensors. If they see something suspicious, they must report it immediately.
Targets for Group B/C: Focus on Steady Improvement & Efficiency
For the majority of your organization, striving for a 0% failure rate target is resource-intensive and unrealistic.
- Failure Rate Target: < 10%. This is a safe, manageable baseline.
- Primary KPI: Improvement Rate > 30% (Quarter-over-Quarter). The objective here is behavioral change. Are they getting better? Are they recognizing threats faster than they did three months ago?
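The tiered targets above can be encoded as a simple compliance check per group. The thresholds mirror the ones stated in this guide; adjust them to your own risk appetite:

```python
# Segmented targets as described above (thresholds taken from this guide).
TARGETS = {
    "A": {"max_failure_rate": 0.02, "min_report_rate": 0.85},
    "B": {"max_failure_rate": 0.10, "min_improvement_rate": 0.30},
    "C": {"max_failure_rate": 0.10, "min_improvement_rate": 0.30},
}

def meets_targets(group: str, metrics: dict) -> bool:
    """Check a group's quarterly metrics against its tier-specific targets."""
    t = TARGETS[group]
    ok = metrics["failure_rate"] <= t["max_failure_rate"]
    if "min_report_rate" in t:
        ok = ok and metrics["report_rate"] >= t["min_report_rate"]
    if "min_improvement_rate" in t:
        ok = ok and metrics["improvement_rate"] >= t["min_improvement_rate"]
    return ok

print(meets_targets("A", {"failure_rate": 0.01, "report_rate": 0.90}))        # True
print(meets_targets("B", {"failure_rate": 0.08, "improvement_rate": 0.35}))   # True
```

The key point: Group A is judged on near-zero failure plus reporting, while Groups B and C are judged on trajectory.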
When you prepare to present these KPIs to the board, frame them as two distinct risk strategies: near-zero failure with high reporting for critical staff, and steady measurable improvement for everyone else.

Conclusion: From Excel Nightmares to Automated Governance
Attempting to manage dynamic risk segmentation manually is a guaranteed path to compliance failure. Risk is not static; an employee's wellbeing, project access, or privilege rights can change overnight, rendering manual benchmarking obsolete before you even save the spreadsheet. To truly secure your organization, you must treat human risk with the same automated rigor as endpoint security.
To scale the ABC Method effectively, you need a dedicated Human Risk Management Platform that automates this entire lifecycle. These tools ingest data directly from your IdP, calculate the risk coefficient in real-time, and instantly generate the specific dashboard your NIS2 auditor will demand. Moving from "Excel nightmares" to automated governance isn't just about operational efficiency; it is the only way to provide irrefutable evidence of due diligence when the next audit—or incident—occurs.
Frequently Asked Questions
What is a good failure rate target?
There is no single good target. A <2% target is critical for high-risk groups (like Finance), while a <10% target with a high improvement rate is a realistic goal for a general population.
What is the ABC Method?
It's a way to prioritize risk by segmenting users into Group A (Critical Risk), Group B (High Exposure), and Group C (Baseline Risk). This allows you to apply different awareness program intensities and failure rate targets to each.
How does segmented benchmarking support NIS2 compliance?
NIS2 Article 21 mandates "risk analysis and information system security policies." By using distinct benchmarking standards, you provide evidence of a risk-based approach, demonstrating to auditors that you are allocating resources dynamically rather than using a generic "tick-the-box" standard.
Is it legal to measure stress and burnout signals?
Yes, provided the data is anonymized and aggregated. The goal is to measure organizational risk signals (like metadata on working hours or rapid task switching), not to spy on individual content. Always collaborate with HR to frame this as a "wellness and security" initiative.
What should we do with persistently high-risk users?
They require immediate intervention. This isn't just about more training; it may involve restricting their admin privileges or reducing their access scope until their resilience metrics improve, effectively removing the risk until behavior changes.
What audit evidence does a Human Risk Management Platform produce?
Beyond raw logs, it generates time-stamped risk trends, individual awareness histories, and proof of intervention for high-risk users. This transforms scattered data into irrefutable evidence of due diligence for your NIS2 auditor.