Many organizations confuse “having a backup” with “being resilient.” It is a comfortable misconception, and one the European Union set out to dismantle with the Digital Operational Resilience Act, commonly known as DORA. Lawmakers in Brussels understood something that many IT departments still struggle with: operational continuity under stress is fundamentally different from disaster recovery documentation sitting in a drawer.

The distinction is not semantic. Disaster recovery focuses on restoring systems after a failure has occurred. Operational resilience is about continuing to deliver critical services while an incident is actively unfolding. These are related disciplines, but one is reactive and the other is proactive, and the difference determines whether an organization can survive a serious disruption or simply recover from one.

A real-world example that illustrates this gap is the Cloudflare-AGCOM confrontation from early 2026. When Italy’s communications regulator attempted to enforce blocking orders through Cloudflare’s infrastructure, the company faced a choice: comply with orders that would disrupt thousands of legitimate customers worldwide, or refuse and face regulatory penalties. The outcome was a €206,000 fine, but more importantly, the incident functioned as an unplanned stress test of digital infrastructure resilience. As I explored in my analysis of that episode, availability is not merely a technical metric; it is a governance outcome. When dependencies become hostile, unavailable, or legally contested, the organizations that survive are those designed to operate through that stress, not those with a well-written disaster recovery plan they never tested.

DORA exists because the financial sector learned this lesson the hard way. The regulation, as detailed by the European Insurance and Occupational Pensions Authority (EIOPA), applies to banks, insurance companies, investment firms, and a growing list of related entities across the European Union, and it fundamentally reframes what “digital resilience” means in a regulatory context.

The five pillars of DORA: what it actually demands from auditors and IT teams

DORA is structured around five interconnected pillars, each with specific obligations that organizations must demonstrate to compliance auditors.

ICT risk management

The first pillar requires organizations to establish a documented framework for managing ICT risk that includes governance structures, risk identification processes, and mitigation strategies. This is not optional paperwork. Regulators expect to see clear ownership of risk decisions, documented assessment methodologies, and evidence that risk appetite is defined at board level. As outlined in DORA compliance guidance, this framework must be integrated into the overall business strategy and reviewed regularly.
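
To make the “clear ownership” expectation tangible, here is a minimal sketch of what a single risk register entry might look like. The schema is entirely illustrative; DORA prescribes the framework, not the data model, and every field name here is my own assumption.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RiskRegisterEntry:
    """One illustrative ICT risk entry; field names are hypothetical."""
    risk_id: str
    description: str
    owner: str                 # a named role: regulators look for clear ownership
    likelihood: int            # e.g. a 1-5 scale from the documented methodology
    impact: int                # e.g. a 1-5 scale
    within_appetite: bool      # judged against the board-approved risk appetite
    mitigation: str
    next_review: date          # evidence the framework is actually reviewed

entry = RiskRegisterEntry(
    "ICT-042", "Single-region dependency for payment gateway",
    owner="Head of Infrastructure", likelihood=3, impact=5,
    within_appetite=False, mitigation="Multi-region failover by Q3",
    next_review=date(2026, 9, 1),
)
```

The point is not the code but what it forces you to write down: a named owner, a methodology-backed score, and a review date an auditor can check against reality.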

Incident management and reporting

The second pillar introduces mandatory incident reporting with rigid timelines that many organizations underestimate. The initial notification must reach the relevant authority within four hours of classifying an ICT-related incident as major, and no later than 24 hours after the entity becomes aware of it. An intermediate report is required within 72 hours, providing an updated assessment and details on the impact. A final report must follow within one month, containing a root cause analysis and remediation measures. Where an incident touches critical functions or spills across multiple entities, expectations tighten further, as detailed by EITT Academy’s DORA competencies guide. This creates a significant operational burden, particularly for organizations that lack mature incident classification processes and automated triage capabilities.
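
To make the cadence concrete, here is a minimal Python sketch that derives the reporting deadlines from the moment of classification. It is a simplification under stated assumptions: “one month” is approximated as 30 days, and the intermediate and final clocks are chained from the preceding report, as described above.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class IncidentDeadlines:
    initial_notification: datetime   # within 4 hours of classification as major
    intermediate_report: datetime    # within 72 hours of the initial notification
    final_report: datetime           # within one month of the intermediate report

def dora_deadlines(classified_at: datetime) -> IncidentDeadlines:
    """Derive reporting deadlines from the classification time (sketch)."""
    initial = classified_at + timedelta(hours=4)
    intermediate = initial + timedelta(hours=72)
    final = intermediate + timedelta(days=30)  # "one month" approximated as 30 days
    return IncidentDeadlines(initial, intermediate, final)

# An incident classified as major at 02:17 UTC leaves until 06:17 to notify
print(dora_deadlines(datetime(2026, 3, 14, 2, 17)))
```

Notice what the four-hour window implies: if classification itself takes a day, the clock problem is upstream of reporting, which is why mature triage matters so much.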

Digital operational resilience testing

The third pillar distinguishes between basic testing requirements that apply to all entities and advanced threat-led penetration testing (TLPT) reserved for systemically important institutions. Basic testing includes vulnerability assessments, scenario-based exercises, and internal audits. TLPT goes considerably further, requiring organizations to engage external testers who simulate real adversary behavior based on current threat intelligence. Several national authorities, including Luxembourg’s CSSF, have released specific expectations for how financial entities should design and execute these exercises.

This pillar connects directly to a topic I covered in my article on tabletop exercises. Those discussion-based rehearsals are not just good practice; they form a practical component of DORA’s testing expectations. A well-run tabletop that reveals gaps in your incident response procedures is exactly the kind of scenario-based testing the regulation envisions.

Third-party risk management

The fourth pillar addresses what many security leaders consider the most fragile part of modern IT architecture: dependency on external providers. Organizations must maintain a register of all ICT third-party relationships, classify providers by criticality, and most importantly, develop documented exit strategies for relationships with critical suppliers. The regulator wants to see that a bank can continue operating if its cloud provider, its core banking software vendor, or its payment processor becomes unavailable, whether through technical failure, legal dispute, or voluntary withdrawal. As analyzed by DLA Piper, this requirement forces organizations to confront vendor lock-in as a regulatory issue, not just a business inconvenience.
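
A minimal sketch of what such a register might capture, and of the check an auditor will effectively perform: which critical providers have no documented exit strategy. All names and fields here are hypothetical, not a prescribed format.

```python
from dataclasses import dataclass
from enum import Enum

class Criticality(Enum):
    CRITICAL = "critical"    # supports a critical or important function
    IMPORTANT = "important"
    STANDARD = "standard"

@dataclass
class ThirdPartyProvider:
    name: str
    service: str
    criticality: Criticality
    exit_strategy: str | None = None  # documented exit plan, expected for critical providers

def audit_register(register: list[ThirdPartyProvider]) -> list[str]:
    """Flag critical providers that lack a documented exit strategy."""
    return [p.name for p in register
            if p.criticality is Criticality.CRITICAL and not p.exit_strategy]

register = [
    ThirdPartyProvider("CloudCo", "IaaS hosting for core banking", Criticality.CRITICAL),
    ThirdPartyProvider("MailCo", "Marketing e-mail", Criticality.STANDARD),
]
print(audit_register(register))  # ['CloudCo'] -> critical, but no exit plan
```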

Information sharing

The fifth pillar encourages voluntary sharing of threat intelligence among peers. While not as prescriptive as the other four pillars, this reflects a recognition that adversaries often target multiple organizations in similar ways, and collective defense provides advantages that individual organizations cannot achieve alone.

Auditors examining DORA compliance will scrutinize each pillar with increasing rigor as the regulation matures. According to Ciferi’s analysis on ICT risk auditing, the days of submitting a static risk register and calling it a day are over.

RTO and RPO in the ransomware era: the numbers have changed

Traditional recovery targets no longer reflect the reality of modern cyber threats. The familiar figures, an RTO (Recovery Time Objective) of 4-8 hours and an RPO (Recovery Point Objective) of 24 hours, were designed for hardware failures, not for ransomware scenarios where attackers deliberately encrypt or destroy backup infrastructure and recovery must account for forensic verification that the malware has been completely removed.

When ransomware strikes a production environment, the recovery process is substantially more complex than pulling the last backup. You must first determine when the infection began, which means forensic analysis to identify the earliest compromised system. You must verify that any system you restore to is clean, which may require rebuilding from known-good media rather than restoring from potentially compromised backups. You must reconstruct identity infrastructure, particularly Active Directory or Entra ID, before any other systems can function. And you must conduct sufficient analysis to understand the attack path so you can be confident that remediation is complete.

For Tier-0 systems handling payments or operational technology, the realistic RTO range has shifted to 24-72 hours under ransomware conditions, not the 4-8 hours specified in legacy business continuity plans. As explained in SentinelOne’s RTO vs RPO analysis, some organizations are designing three-tier recovery architectures: hot sites for the most critical systems with near-real-time replication, warm sites for business-critical applications with measured failover procedures, and cold sites for supporting functions where longer outages are acceptable. This tiered approach aligns recovery investment with business impact.
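
As an illustration of the tiered approach, here is a sketch mapping tiers to site types and targets, with a check that a restore time measured in an exercise actually meets the target. The numbers are examples consistent with the ranges above, not prescriptions.

```python
# Illustrative mapping of recovery tiers to site types and targets
RECOVERY_TIERS = {
    "tier0": {"site": "hot",  "rto_hours": 24,  "rpo_minutes": 5},     # payments, OT
    "tier1": {"site": "warm", "rto_hours": 72,  "rpo_minutes": 60},    # business-critical
    "tier2": {"site": "cold", "rto_hours": 168, "rpo_minutes": 1440},  # supporting functions
}

def meets_rto(tier: str, measured_rto_hours: float) -> bool:
    """Compare a restore time measured in an exercise against the tier's target."""
    return measured_rto_hours <= RECOVERY_TIERS[tier]["rto_hours"]

print(meets_rto("tier0", 30.0))  # False: a 30-hour restore misses a 24-hour target
```

The `meets_rto` check is the part most organizations skip: targets exist on paper, but nobody compares them against a clock from a real exercise.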

The backup strategy itself requires rethinking. Immutable backups that cannot be modified or deleted by any user, including administrators, combined with air-gapped copies stored offline or in isolated network segments, represent the only standard truly resistant to modern ransomware. As noted in current ransomware recovery tactics for 2026, the cost of immutable storage has dropped sufficiently that this approach is feasible for mid-sized organizations. The key is treating backup as a security control, not a storage problem, and testing restoration regularly enough to have confidence that the backups actually work.
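
As one concrete implementation among several, AWS S3 Object Lock in COMPLIANCE mode makes objects undeletable for the retention period, even by the root account. A minimal sketch follows; the bucket name is hypothetical, and note that Object Lock can only be enabled when the bucket is created.

```python
import boto3  # assumes AWS S3; other object stores offer comparable immutability features

s3 = boto3.client("s3")

# Object Lock must be enabled at bucket creation time; it cannot be added later
s3.create_bucket(Bucket="backups-immutable-example", ObjectLockEnabledForBucket=True)

# COMPLIANCE mode: the retention period cannot be shortened or removed,
# not even by the root account -- which is what "immutable" should mean
s3.put_object_lock_configuration(
    Bucket="backups-immutable-example",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)
```

COMPLIANCE mode, as opposed to GOVERNANCE mode, is the setting that actually resists an attacker holding administrator credentials, which is precisely the ransomware scenario.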

The recovery runbook that nobody tests: anatomy of a credible plan

A cyber recovery plan that has never been tested is not a plan; it is a hypothesis. DORA knows this, and the regulation’s testing pillar exists precisely because paper exercises and documented procedures that look solid on review often fall apart under the pressure of a real incident. Here is what a minimum viable cyber recovery runbook should contain.

Inventory of critical systems with tier classification

You cannot recover what you have not prioritized. A complete inventory of systems, classified by business criticality, forms the foundation. Tier-0 encompasses systems where disruption causes immediate financial loss or regulatory breach, such as payment processing and core banking systems. Tier-1 covers business-critical applications that support primary revenue-generating functions. Tier-2 includes supporting infrastructure where outages are inconvenient but do not halt operations. Each tier should have documented RTO and RPO targets aligned with business leadership’s expectations, not the IT department’s preferences.
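
Here is a sketch of the kind of gap check this inventory enables: flagging systems with no tier, no owner, or no recovery targets before an auditor does. The structure is illustrative, not a mandated schema.

```python
def inventory_gaps(systems: list[dict]) -> list[str]:
    """Flag inventory entries missing the fields a credible runbook depends on."""
    required = ("tier", "owner", "rto_hours", "rpo_hours")
    return [s["name"] for s in systems
            if any(s.get(field) is None for field in required)]

systems = [
    {"name": "payments-core", "tier": 0, "owner": "Payments Ops",
     "rto_hours": 24, "rpo_hours": 0.5},
    {"name": "intranet-wiki", "tier": None, "owner": None,
     "rto_hours": None, "rpo_hours": None},
]
print(inventory_gaps(systems))  # ['intranet-wiki']
```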

Clean recovery point definition

When a ransomware attack has been spreading through your network for days before detection, the last backup may already be contaminated. Your runbook must define what happens in this scenario. Do you accept a data loss event and restore from a point in time before the infection? Do you rebuild systems from scratch using known-good media? The decision should be documented, communicated to business stakeholders in advance, and tested through scenario exercises. There is no right answer, but there is a wrong one: discovering during an incident that you have no defined clean recovery point.
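
The decision logic itself is simple to express; the hard part is the forensic input. A sketch, assuming forensics has produced an earliest-compromise timestamp:

```python
from datetime import datetime

def clean_recovery_point(backups: list[datetime],
                         earliest_compromise: datetime) -> datetime | None:
    """Most recent backup taken strictly before the earliest known compromise.
    None means every backup is suspect: rebuild from known-good media instead."""
    candidates = [b for b in backups if b < earliest_compromise]
    return max(candidates, default=None)

backups = [datetime(2026, 3, 10), datetime(2026, 3, 12), datetime(2026, 3, 14)]
print(clean_recovery_point(backups, earliest_compromise=datetime(2026, 3, 13)))
# 2026-03-12: the March 14 backup postdates the compromise and is untrusted
```

The `None` branch is the one to rehearse: it is the documented decision, agreed with business stakeholders in advance, to rebuild rather than restore.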

Identity recovery as the linchpin

Compromised identity infrastructure blocks everything else. If Active Directory or Entra ID is compromised, restoring other systems becomes irrelevant because the attacker can simply reclaim access. Your runbook should include specific procedures for recovering identity services, including offline administrative accounts reserved exclusively for disaster recovery, secure procedures for resetting all credentials across the environment, and verification steps to confirm that compromised accounts cannot be reactivated. Microsoft’s ransomware recovery readiness guide provides detailed recommendations for this critical capability. This is often the most time-consuming part of recovery, and organizations that neglect it in their planning discover the problem only when they are in the middle of an active incident.

Communication and regulatory obligations during the incident

The runbook must specify who is authorized to communicate with regulators, what template they use, and by what deadline. Under DORA, the four-hour initial notification window for major incidents is non-negotiable. Pre-drafting the notification template, identifying the person responsible, and ensuring they have access to the necessary information during an incident, when email servers may be compromised, are practical steps that many organizations overlook. The CISA ransomware guide recommends alternate communication channels, including out-of-band methods that do not depend on the organization’s primary infrastructure, and these must be identified and tested.
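
A skeleton of what a pre-drafted initial notification might hold. The field set here is illustrative and is not the official reporting template; the value lies in having the blanks identified, and the draft reachable out-of-band, before the incident.

```python
from datetime import datetime, timezone

# Hypothetical pre-drafted notification, stored where it remains reachable even
# if primary email infrastructure is down (printed copy, out-of-band vault)
INITIAL_NOTIFICATION = """\
To: {authority}
Subject: Initial notification of major ICT-related incident (DORA)

Entity: {entity_name} (LEI: {lei})
Incident detected: {detected_at}
Classified as major: {classified_at}
Affected critical functions: {functions}
Preliminary impact assessment: {impact}
Contact (out-of-band): {contact}
"""

print(INITIAL_NOTIFICATION.format(
    authority="[competent authority]",
    entity_name="Example Bank AG", lei="[LEI]",
    detected_at=datetime.now(timezone.utc).isoformat(),
    classified_at="[timestamp]",
    functions="[e.g. SEPA payments]",
    impact="[clients affected, downtime, geographic spread]",
    contact="[phone / out-of-band channel]",
))
```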

Post-incident review and runbook maintenance

The final step in any credible recovery plan is a structured post-incident review that examines what worked, what failed, and what needs to change. The runbook should be treated as a living document, updated after every significant incident and after every exercise. Lessons learned from tabletop exercises, which I detailed in my earlier article on cybersecurity tabletops, feed directly into this cycle of continuous improvement.

This practical approach connects to vulnerability management as well. Understanding which vulnerabilities created the initial attack path helps prioritize remediation and ensures that recovery discussions are informed by the same threat intelligence that drives your defensive strategy.

DORA beyond the financial sector: the domino effect on the entire ICT supply chain

DORA formally applies to financial entities, but its contractual clauses on ICT supplier management create ripple effects that extend far beyond the banking sector. When a regulated financial institution must demonstrate that it can exit a critical supplier relationship without operational collapse, that requirement flows down through the supply chain. A small SaaS provider serving a European bank must be capable of meeting resilience expectations that would have been unthinkable for a company of its size a few years ago. As noted by the Central Bank of Ireland, this creates a cascading effect across the entire technology ecosystem.

This produces an interesting dynamic. The regulation directly governs a bounded population, on the order of twenty thousand financial entities across the EU, but it indirectly imposes resilience requirements on every technology company that provides services to those entities. The effect is a gradual normalization of operational resilience standards across the broader technology market, and many organizations are discovering that their suppliers cannot meet expectations they were never asked to meet before.

Concentration risk compounds this challenge. Three major cloud providers control the vast majority of European financial infrastructure, and this creates systemic exposure that individual organizations cannot mitigate through their own resilience planning. When a provider experiences a region-wide outage, the incident becomes a regulatory matter for every financial entity that depends on that region. DORA’s third-party risk requirements were designed precisely to force organizations to confront these dependencies, but the deeper question is whether the market structure itself needs to change.

The regulation is pushing organizations toward hybrid architectures that include on-premises fallback capabilities, a trend that intersects with the broader cloud repatriation discussion I explored in a recent analysis. For many financial institutions, the question is no longer whether to maintain some infrastructure under direct control, but how to design that infrastructure so it can sustain critical operations while leveraging cloud services for everything else. This is not a return to 1990s computing; it is a deliberate architectural choice to preserve optionality.

The convergence of DORA, NIS2, GDPR, and the EU Data Act creates a regulatory environment where demonstrating control over data and infrastructure is becoming both a competitive differentiator and a legal requirement. Organizations that treat resilience as a compliance checkbox will find themselves outmaneuvered by those that treat it as an operational capability. The regulation provides the framework, but the implementation is an engineering challenge that demands real investment in testing, architecture, and ongoing maintenance.

The organizations that will thrive under DORA are not those with the thickest binders of policies. They are the ones that have stress-tested their recovery procedures, validated their assumptions about RTO and RPO against real-world ransomware behavior, and built architectures that can absorb damage without collapsing. Everything else is documentation.