<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en-US"><generator uri="https://jekyllrb.com/" version="4.2.1">Jekyll</generator><link href="https://andreafortuna.org/feed.xml" rel="self" type="application/atom+xml" /><link href="https://andreafortuna.org/" rel="alternate" type="text/html" hreflang="en-US" /><updated>2026-03-12T15:36:09+00:00</updated><id>https://andreafortuna.org/feed.xml</id><title type="html">Andrea Fortuna</title><subtitle>Cybersecurity expert, software developer, experienced digital forensic analyst, musician</subtitle><author><name>Andrea Fortuna</name><email>andrea@andreafortuna.org</email></author><entry><title type="html">Cloud repatriation: when moving workloads back on premises is a strategic choice, not a retreat</title><link href="https://andreafortuna.org/2026/03/09/cloud-repatriation.html" rel="alternate" type="text/html" title="Cloud repatriation: when moving workloads back on premises is a strategic choice, not a retreat" /><published>2026-03-09T00:00:00+00:00</published><updated>2026-03-09T00:00:00+00:00</updated><id>https://andreafortuna.org/2026/03/09/cloud-repatriation</id><content type="html" xml:base="https://andreafortuna.org/2026/03/09/cloud-repatriation.html"><![CDATA[<p><img src="/assets/2026/cloud-repatriation-joke-cover-cyber.png" alt="Cloud repatriation cover" /></p>

<p><em>“The cloud is just someone else’s computer.”</em> The old sysadmin joke has held up better than many forecasts from the last decade. After years of cloud-first mandates, digital transformation roadmaps, and hyperscaler marketing, more companies are taking a second look at where their workloads actually belong.</p>

<p>Not out of nostalgia, but because the industry has matured enough to recognize that “cloud-always” was never more rational than “on-premises-always.” In Europe, the reassessment has an added layer: data sovereignty, regulatory compliance, and a growing unease about relying on infrastructure that may not be fully shielded from foreign government access.</p>

<hr />

<h2 id="the-numbers-are-hard-to-ignore">The numbers are hard to ignore</h2>

<p><img src="/assets/2026/cloud-repatriation-numbers-header.svg" alt="Cloud repatriation numbers and trend lines" /></p>

<p>The scale of the shift is difficult to dismiss. According to a Barclays CIO Survey from Q4 2024, <strong>86% of CIOs planned to move some workloads from public cloud to private or on-premises environments</strong>, the highest figure the survey had recorded. An IDC study from the same year found that roughly <strong>80% of organizations expected to repatriate a share of compute and storage</strong> within the following twelve months.</p>

<p>Only 8% were planning a full exit. Nobody is shutting down their AWS accounts en masse; what’s happening is a selective repositioning, driven by a question that should probably have been asked much earlier: <em>which workloads genuinely benefit from cloud, and which ones are just paying a premium for someone else’s hardware?</em></p>

<hr />

<h2 id="case-studies-when-the-bill-finally-arrived">Case studies: when the bill finally arrived</h2>

<p><img src="/assets/2026/cloud-repatriation-case-studies-header.svg" alt="Cloud repatriation case studies and infrastructure economics" /></p>

<h3 id="37signals-2-million-saved-per-year-and-counting">37signals: $2 million saved per year, and counting</h3>

<p>If the cloud repatriation movement has a poster child, it’s David Heinemeier Hansson (DHH), CTO of 37signals, the company behind Basecamp and HEY. After finding that the company was spending more than <a href="https://world.hey.com/dhh/we-have-left-the-cloud-251760fb"><strong>$3.2 million per year on AWS</strong></a>, DHH argued for a radical shift. The company invested roughly <a href="https://world.hey.com/dhh/we-stand-to-save-7m-over-five-years-from-our-cloud-exit-53996caa">$600k in Dell servers</a> and cut its compute bill by <a href="https://world.hey.com/dhh/we-have-left-the-cloud-251760fb">about $1.5 million to $2 million a year</a>.</p>

<p>But that was the compute side. In later write-ups, 37signals documented the migration of <a href="https://dev.37signals.com/moving-mountains-of-data-off-s3/">billions of files off S3</a> and the operational model behind <a href="https://dev.37signals.com/pure-storage-monitoring/">10 petabytes of data in Pure Storage</a>. The specifics of later savings are harder to pin to a single primary source, but the trajectory is clear: less recurring cloud spend, more direct control.</p>

<p>As DHH put it: <em>“Cloud can be a good choice in certain circumstances, but the industry pulled a fast one convincing everyone it’s the only way.”</em></p>

<h3 id="dropbox-the-original-repatriator">Dropbox: the original repatriator</h3>

<p>Dropbox was doing cloud repatriation before anyone called it that. Between 2013 and 2016, the company migrated the vast majority of its data from AWS to proprietary colocation facilities via an internal project codenamed <a href="https://dropbox.tech/infrastructure/magic-pocket-infrastructure"><strong>“Magic Pocket”</strong></a>. According to later technical reporting, that move led to <a href="https://www.datacenterknowledge.com/cloud/dropbox-reduces-costs-by-nearly-75m-over-two-years-with-magic-pocket">nearly $75 million in savings over two years</a>, along with dramatically improved control over the storage stack.</p>

<h3 id="geico-300m-a-year-with-little-visible-return">GEICO: $300M a year with little visible return</h3>

<p>In 2013, GEICO began migrating more than 600 applications to the cloud. By 2021, it was reportedly spending <a href="https://siliconangle.com/2023/03/14/geico-wants-repatriate-workloads-public-cloud-growing-bill-300m-year/"><strong>over $300 million annually</strong></a> on cloud services, and internal stakeholders were struggling to justify the expense. Trade-press coverage of the case emphasized a now-familiar problem: for data-heavy estates, cloud storage costs can dominate the bill surprisingly quickly. GEICO subsequently began a selective repatriation of its most storage-intensive workloads.</p>

<h3 id="ahrefs-the-math-that-changed-everything">Ahrefs: the math that changed everything</h3>

<p>Ahrefs laid out the arithmetic in public: in a technical write-up, the company claimed its <a href="https://tech.ahrefs.com/how-ahrefs-saved-us-400m-in-cloud-costs-using-bare-metal-servers-4ec8421e0a7b">bare-metal approach had avoided roughly $400 million in cloud costs</a>. The pattern running through all these examples is the same: once workloads become large, steady, and storage-heavy, owning your hardware starts to look very different from renting it.</p>

<p><img src="/assets/2026/on-prem-cloud-info.png" alt="infographic" /></p>

<hr />

<h2 id="a-more-honest-accounting-of-cloud-pros-and-cons">A more honest accounting of cloud pros and cons</h2>

<p><img src="/assets/2026/cloud-repatriation-pros-cons-header.svg" alt="Cloud pros and cons decision balance" /></p>

<p>Cloud was not a mistake. But it is a tool, not a destiny, and like every tool it works well in some contexts and poorly in others.</p>

<h3 id="what-cloud-genuinely-does-well">What cloud genuinely does well</h3>

<p>For workloads with unpredictable traffic (consumer apps, seasonal e-commerce, early-stage startups), the ability to scale on demand without upfront CapEx is hard to beat. You pay for what you use, when you use it.</p>

<p>Cloud-native development (managed Kubernetes, CI/CD pipelines, Infrastructure-as-Code) can dramatically accelerate delivery cycles. Provisioning environments in minutes rather than weeks is a genuine productivity gain for development teams.</p>

<p>Offloading database administration, backups, patch cycles, and hardware lifecycle to a provider creates real savings for organizations without dedicated infrastructure teams. Not every company can run its own data center, and not every company should.</p>

<p>Multi-region failover, global CDN, and built-in disaster recovery that would cost millions to replicate on premises come standard with every major cloud provider.</p>

<p>Managed AI/ML platforms, petabyte-scale data warehouses, and globally distributed databases are extraordinarily hard to build in-house. For many organizations, cloud remains the only practical path to these capabilities.</p>

<hr />

<h3 id="where-cloud-consistently-underdelivers">Where cloud consistently underdelivers</h3>

<h3 id="cost-predictability">Cost predictability</h3>

<p>The Producer Price Index for cloud computing services rose by <strong>6.4% between September 2023 and May 2024</strong>. Cloud pricing is not magically trending toward zero. For stable, predictable workloads, the pay-as-you-go model is often more expensive than owned hardware over a 3–5 year horizon. Egress fees (what you pay to move data <em>out</em> of the cloud) are especially easy to underestimate during procurement.</p>
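<p>To make the horizon argument concrete, here is a toy break-even model. Every number in it is an illustrative placeholder, not a quote from any provider or from the case studies in this article.</p>

```python
# Toy break-even model. Every figure here is an illustrative placeholder,
# not a quote from any provider or from the case studies above.
def five_year_cloud_cost(monthly_spend, annual_increase=0.05):
    """Five years of pay-as-you-go with compounding price increases."""
    return sum(monthly_spend * 12 * (1 + annual_increase) ** year
               for year in range(5))

def five_year_owned_cost(hardware_capex, monthly_opex):
    """One-time CapEx plus colocation, power, and staffing OpEx."""
    return hardware_capex + monthly_opex * 12 * 5

cloud = five_year_cloud_cost(monthly_spend=50_000)      # ~$3.32M
owned = five_year_owned_cost(hardware_capex=600_000,
                             monthly_opex=15_000)       # $1.5M
print(f"cloud ${cloud:,.0f} vs owned ${owned:,.0f}")
```

<p>A real model would also need to account for egress fees, hardware refresh cycles, and staff time; the point is only that for steady workloads the comparison becomes tractable, and the answer is not automatically "cloud".</p>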

<h3 id="vendor-lock-in">Vendor lock-in</h3>

<p>Migrating <em>into</em> cloud is frictionless by design. Migrating <em>out</em> of cloud, or between providers, is significantly harder. Proprietary APIs, managed service dependencies, data format lock-in, and pricing structures tied to data transfer all create a gravitational pull that makes exit far more expensive than the initial architecture review ever anticipated.</p>

<h3 id="performance-for-high-throughput-low-latency-workloads">Performance for high-throughput, low-latency workloads</h3>

<p>For AI/ML training on large proprietary datasets, real-time industrial IoT processing, or high-frequency financial analytics, the shared nature of cloud infrastructure introduces variance and latency that dedicated hardware simply does not. For consistent, predictable workloads, on-premises infrastructure can offer better and more stable performance at a fraction of the long-term cost.</p>

<h3 id="security-posture-and-control">Security posture and control</h3>

<p>The shared responsibility model gets a lot of airtime, but in practice it is frequently misunderstood. According to Palo Alto Networks’ <a href="https://www.paloaltonetworks.com/state-of-cloud-native-security">State of Cloud Security Report 2025</a>, <strong>53% of organizations identify lax IAM practices as a top challenge and a leading vector for data exfiltration</strong>. The <a href="https://www.verizon.com/business/resources/reports/dbir/">Verizon 2025 DBIR</a> found that 30% of breaches now involve third-party components, double the previous year’s figure, a finding that maps directly to cloud supply-chain risk.</p>

<p>The most instructive case is probably Capital One’s 2019 breach. A misconfigured web application firewall on AWS allowed an attacker to exploit the cloud metadata service via SSRF and access <a href="https://www.washingtonpost.com/national-security/capital-one-data-breach-compromises-tens-of-millions-of-credit-card-applications-fbi-says/2019/07/29/72114cc2-b243-11e9-8f6c-7828e68cb15f_story.html">over 106 million customer records</a>, including Social Security numbers and bank account details. Amazon’s response was that the vulnerability lay in Capital One’s application layer, not in AWS itself. That distinction is the shared responsibility model in a nutshell: the provider secures the infrastructure, the customer secures everything running on top of it. In practice, the boundary is blurry enough that even large, well-funded security teams can get it wrong.</p>

<p>On-premises environments allow security teams to implement least-privilege at the hardware level, maintain audit trails end to end, and respond to incidents without waiting on a provider’s tooling or disclosure timelines. Repatriating organizations consistently report improved visibility as a secondary benefit. That said, on-prem security is not free: it requires dedicated staff, continuous patching, physical controls, and the discipline to maintain what you now fully own.</p>

<h3 id="ai-on-proprietary-data">AI on proprietary data</h3>

<p>Companies fine-tuning large language models or building domain-specific AI on confidential internal data have a strong incentive to keep that work off shared infrastructure. Even if the probability of data leakage is low, the residual risk is often incompatible with IP protection requirements in many sectors. This is becoming one of the fastest-growing reasons to invest in on-premises or private cloud infrastructure.</p>

<hr />

<h2 id="the-security-trade-off-no-free-lunch-either-way">The security trade-off: no free lunch either way</h2>

<p><img src="/assets/2026/cloud-repatriation-security-header.svg" alt="Cloud and on-prem security trade-offs" /></p>

<p>Repatriation is often framed as a security win, and in many respects it can be. But it would be dishonest to pretend that running your own infrastructure is inherently safer. The real picture is more nuanced.</p>

<p>Cloud providers do some things exceptionally well. The hyperscalers invest billions in physical security, DDoS mitigation, encryption at rest and in transit, and global threat intelligence. Most organizations cannot match that depth of investment on their own. Managed security services (SIEM, SOAR, threat detection) are mature, widely available, and improving rapidly.</p>

<p>The problem is what sits on top. Misconfigurations, overly permissive IAM roles, exposed storage buckets, unrotated secrets, forgotten API keys: these are not infrastructure failures, they are application- and configuration-layer mistakes that happen at the customer level. The <a href="https://cloudsecurityalliance.org/artifacts/top-threats-to-cloud-computing-pandemic-eleven">CSA Top Threats to Cloud Computing</a> report consistently ranks misconfiguration and inadequate identity management among the top risks. The IBM <a href="https://www.ibm.com/reports/data-breach">Cost of a Data Breach Report</a>, published annually with the Ponemon Institute, continues to show that breaches involving cloud environments tend to cost more and take longer to contain than those in purely on-premises estates.</p>

<p>On-premises security gives you full control, but demands the staff to exercise it. Patching cycles, firewall rule management, physical access controls, backup testing, log retention, and 24/7 monitoring all need people. For organizations with experienced security operations teams, that control translates into better posture. For organizations that repatriate workloads without scaling their SecOps capability accordingly, the outcome can be worse than the cloud setup they left behind.</p>

<p>Hybrid architectures create the widest attack surface. This is the part that rarely gets enough attention. An estate split across on-premises, private cloud, and one or more public providers multiplies the number of identity boundaries, network perimeters, and configuration standards that need to be maintained in parallel. Consistent policy enforcement, centralized logging, and unified incident response across all environments require serious tooling and discipline.</p>

<p>The honest conclusion: moving workloads on premises can improve your security posture, but only if you invest in the operational capability to manage it. Repatriation is not a security strategy by itself.</p>

<hr />

<h2 id="the-european-dimension-sovereignty-is-not-optional">The European dimension: sovereignty is not optional</h2>

<p><img src="/assets/2026/cloud-repatriation-europe-header.svg" alt="European sovereignty, regulation, and cloud governance" /></p>

<p>For European organizations, the calculus goes beyond cost. Cloud repatriation is increasingly a matter of legal and regulatory necessity.</p>

<h3 id="gdpr-the-baseline">GDPR: the baseline</h3>

<p>The <a href="https://eur-lex.europa.eu/eli/reg/2016/679/oj/eng">General Data Protection Regulation</a> established strict rules on personal data processing, cross-border transfers, and data subject rights. The <a href="https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A62018CJ0311"><em>Schrems II</em> ruling</a> invalidated the Privacy Shield framework and forced European regulators to look much more closely at US-based cloud providers. The <a href="https://eur-lex.europa.eu/eli/dec_impl/2023/1795/oj/eng">EU-US Data Privacy Framework</a>, adopted in 2023, provides a new legal basis for transfers, but it is still viewed by many practitioners as politically fragile. For that reason, keeping personal data within EU borders remains the easiest posture to defend.</p>

<h3 id="the-cloud-act-the-elephant-in-the-room">The CLOUD Act: the elephant in the room</h3>

<p>In 2025, a point that legal analysts had raised for years became harder to dismiss in public debate: US cloud providers cannot offer an absolute guarantee that European data will never be reachable through US legal mechanisms. The <a href="https://www.congress.gov/bill/115th-congress/senate-bill/2383/text">CLOUD Act</a> is central to that debate. In testimony before the French Senate, <strong>Microsoft France’s general manager stated under oath that he could not guarantee French citizens’ data was protected from US authority access</strong>. Google, Amazon, and Salesforce have made similar acknowledgments in other contexts. That tension between the CLOUD Act and European data-protection expectations is shaping infrastructure decisions in both the public and private sectors.</p>

<h3 id="nis2-cybersecurity-supply-chain-obligations">NIS2: cybersecurity supply-chain obligations</h3>

<p>The <a href="https://digital-strategy.ec.europa.eu/en/policies/nis2-directive">NIS2 Directive</a>, which entered into force in January 2023 and had to be transposed by member states by 17 October 2024, explicitly requires organizations in critical sectors to assess and manage cybersecurity risks introduced by their supply chains, including cloud service providers. In practice, this creates a formal obligation to evaluate concentration risk and, in some cases, to maintain control over critical infrastructure components. The <a href="https://www.enisa.europa.eu/publications/enisa-threat-landscape-2024">ENISA Threat Landscape</a>, updated annually, provides the European reference framework for these risk assessments and consistently highlights supply-chain attacks and cloud-infrastructure threats among the top concerns. Regulators increasingly expect documented evidence that cloud dependencies have been assessed and that alternatives exist.</p>

<h3 id="dora-exit-strategies-for-financial-services">DORA: exit strategies for financial services</h3>

<p>The <a href="https://eur-lex.europa.eu/eli/reg/2022/2554/oj/eng">Digital Operational Resilience Act</a> entered into force in 2023 and has applied since 17 January 2025. It requires financial institutions to demonstrate that they can continue operating through severe disruption involving major technology providers. That means maintaining <strong>documented exit strategies</strong> and preserving business continuity even when a critical ICT provider becomes unavailable. For banks, insurers, and investment firms, this has accelerated investment in hybrid architectures with a credible on-premises fallback layer.</p>

<h3 id="eu-data-act-the-newest-layer">EU Data Act: the newest layer</h3>

<p>The <a href="https://digital-strategy.ec.europa.eu/en/policies/data-act">EU Data Act</a> entered into force on 11 January 2024 and has applied since 12 September 2025. Among other things, it requires providers of data-processing services to reduce barriers to switching and to take legal, technical, and organizational measures against unlawful third-country access to non-personal data held in the EU. That does not eliminate lock-in overnight, but it does make provider switching and eventual exit easier than before.</p>

<p>Taken together, these regulations are reshaping the European compliance environment. Cloud-only architectures face increasing scrutiny, and the ability to demonstrate local control over data is becoming both a competitive and a legal differentiator.</p>

<hr />

<h2 id="cloud-also-not-cloud-only">Cloud-also, not cloud-only</h2>

<p><img src="/assets/2026/cloud-repatriation-conclusion-header.svg" alt="Right workload, right environment hybrid strategy" /></p>

<p>The real problem is that <strong>“cloud-first” quietly became “cloud-always”</strong>, and many organizations are now paying for that simplification in ways they never fully modeled up front.</p>

<p>The market itself has absorbed this lesson. The hybrid cloud market was valued at approximately $85 billion in 2022 and is projected to reach $262 billion by 2027. Almost no organization repatriating workloads today is abandoning cloud entirely. They are building architectures that put each workload where it belongs. Cloud for elastic, innovation-heavy, globally distributed services. On-premises or private cloud for steady-state, data-intensive, compliance-critical, and proprietary-AI workloads.</p>

<p>The better framework is simpler than it sounds: <strong>right workload, right environment</strong>. It takes more architectural discipline up front, but it produces better outcomes over time.</p>

<p>For European organizations, it goes beyond good engineering. Under GDPR, NIS2, DORA, and the EU Data Act, it may be the only defensible position left.</p>]]></content><author><name>Andrea Fortuna</name><email>andrea@andreafortuna.org</email></author><summary type="html"><![CDATA[]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://andreafortuna.org/assets/2026/cloud-repatriation-joke-cover-cyber.png" /><media:content medium="image" url="https://andreafortuna.org/assets/2026/cloud-repatriation-joke-cover-cyber.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">MalHunt gets a major overhaul: Volatility3, smarter YARA handling, and better error recovery</title><link href="https://andreafortuna.org/2026/03/04/malhunt-update.html" rel="alternate" type="text/html" title="MalHunt gets a major overhaul: Volatility3, smarter YARA handling, and better error recovery" /><published>2026-03-04T00:00:00+00:00</published><updated>2026-03-04T00:00:00+00:00</updated><id>https://andreafortuna.org/2026/03/04/malhunt-update</id><content type="html" xml:base="https://andreafortuna.org/2026/03/04/malhunt-update.html"><![CDATA[<p><img src="/assets/2026/malhunt-cover.png" alt="cover" /></p>

<p>If you have been following my open-source work, you probably know <a href="https://github.com/andreafortuna/malhunt">MalHunt</a>, the memory forensics tool I built to automate malware hunting on top of Volatility. Yesterday I pushed a significant batch of updates that, taken together, amount to a near-complete rewrite of the project. Here is what changed and why it matters.</p>

<h2 id="from-a-script-to-a-proper-python-package">From a script to a proper Python package</h2>

<p><img src="/assets/malhunt-architecture.svg" alt="Detail of the new package structure" /></p>

<p>The most visible change is structural. The original <code class="language-plaintext highlighter-rouge">malhunt.py</code> was a single 317-line script: practical, but not particularly maintainable or extensible. That file is gone. The codebase now lives under <code class="language-plaintext highlighter-rouge">src/malhunt/</code> as a properly organized Python package:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>src/malhunt/
├── core.py        # orchestration logic
├── volatility.py  # Volatility3 wrapper
├── scanner.py     # YARA, Malfind, and network scanners
├── artifacts.py   # artifact collection and ClamAV integration
├── models.py      # data models
├── utils.py       # utilities and YARA rule handling
└── __main__.py    # CLI entry point
</code></pre></div></div>

<p>This separation makes each component independently testable and easier to extend. Speaking of testing: the project now ships with a full test suite covering the core logic, the scanner layer, and the Volatility wrapper, something the old script lacked entirely.</p>

<p>The package is installable via both <code class="language-plaintext highlighter-rouge">pip</code> and <code class="language-plaintext highlighter-rouge">poetry</code>, and dependency management is now handled through a <code class="language-plaintext highlighter-rouge">pyproject.toml</code> with a locked <code class="language-plaintext highlighter-rouge">poetry.lock</code> file. No more environment guesswork.</p>

<h2 id="the-big-migration-volatility2--volatility3">The big migration: Volatility2 → Volatility3</h2>

<p><img src="/assets/malhunt-volatility-migration.svg" alt="Migration from Volatility 2 to 3" /></p>

<p>If you have been using the old version, the most important thing to know is that MalHunt now targets Volatility3 exclusively. The legacy v0.1 relied on Volatility2 and its <code class="language-plaintext highlighter-rouge">--profile=</code> flag; that approach is now gone.</p>

<p>Volatility3 works differently: it does automatic OS and version detection, it exposes plugins with updated names (<code class="language-plaintext highlighter-rouge">windows.vadyarascan</code>, <code class="language-plaintext highlighter-rouge">windows.malfind</code>, <code class="language-plaintext highlighter-rouge">windows.netscan</code>, and so on), and it handles symbol tables rather than profiles. The underlying subprocess management has been rebuilt accordingly, with proper timeout handling and a configurable retry strategy.</p>

<p>A migration guide is available in <a href="https://github.com/andreafortuna/malhunt/blob/master/docs/MIGRATION.md">docs/MIGRATION.md</a> for anyone upgrading from v0.1.</p>

<h2 id="smarter-yara-rule-handling">Smarter YARA rule handling</h2>

<p><img src="/assets/malhunt-yara-pipeline.svg" alt="YARA Rule validation pipeline" /></p>

<p>YARA rule management was one of the weakest points of the old tool. The new version addresses it on multiple levels.</p>

<p><strong>Downloading rules from Yara-Forge.</strong> Instead of cloning a git repository, MalHunt now fetches the full Yara-Forge rule bundle directly via HTTP, caches it under <code class="language-plaintext highlighter-rouge">~/.malhunt/</code>, and automatically refreshes it when the cache is more than a day old.</p>
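<p>The refresh policy described above can be sketched in a few lines. The helper names and the exact layout under <code class="language-plaintext highlighter-rouge">~/.malhunt/</code> are illustrative, not MalHunt's actual internals:</p>

```python
import time
import urllib.request
from pathlib import Path

CACHE_DIR = Path.home() / ".malhunt"
CACHE_MAX_AGE = 24 * 3600  # refresh the bundle after one day

def is_fresh(path: Path, max_age: float = CACHE_MAX_AGE) -> bool:
    """True if the cached file exists and is younger than max_age seconds."""
    return path.exists() and (time.time() - path.stat().st_mtime) < max_age

def get_rule_bundle(url: str, cache: Path) -> Path:
    """Fetch the YARA bundle over plain HTTP unless a fresh copy is cached."""
    if not is_fresh(cache):
        cache.parent.mkdir(parents=True, exist_ok=True)
        urllib.request.urlretrieve(url, cache)  # single download, no git clone
    return cache
```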

<p><strong>Text-based sanitization.</strong> Before using any YARA file, MalHunt strips out rule blocks that rely on imports or features not supported by the version of <code class="language-plaintext highlighter-rouge">yara-python</code> used by Volatility: <code class="language-plaintext highlighter-rouge">import "math"</code>, <code class="language-plaintext highlighter-rouge">import "cuckoo"</code>, <code class="language-plaintext highlighter-rouge">import "hash"</code>, <code class="language-plaintext highlighter-rouge">imphash</code> patterns, and <code class="language-plaintext highlighter-rouge">pe.number_of_signatures</code>. This alone prevents a large category of failures on the 3300+ rules included in the Yara-Forge bundle.</p>
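<p>In spirit, the sanitization pass looks like the following naive sketch (the real implementation is more careful, and the token list here is only the subset mentioned above):</p>

```python
import re

# Features known to break yara-python builds commonly bundled with Volatility.
_BAD_IMPORTS = ('import "math"', 'import "cuckoo"', 'import "hash"')
_BAD_TOKENS = ("math.", "cuckoo.", "hash.", "imphash", "pe.number_of_signatures")

def sanitize_yara_text(text: str) -> str:
    """Naive text-level pass: drop unsupported import statements, then drop
    whole rule blocks that reference the unsupported features."""
    text = "\n".join(line for line in text.splitlines()
                     if line.strip() not in _BAD_IMPORTS)
    # Zero-width split keeps each "rule ..." block intact.
    blocks = re.split(r"(?m)^(?=rule\s)", text)
    kept = [b for b in blocks if not any(tok in b for tok in _BAD_TOKENS)]
    return "".join(kept)
```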

<p><strong>Compile-and-prune validation.</strong> The sanitization pass handles known-bad patterns, but the YARA format is complex enough that a rule file can still fail to compile for other reasons. The new <code class="language-plaintext highlighter-rouge">validate_and_prune_yara_rules_file()</code> function takes a different approach: it actually compiles the file using <code class="language-plaintext highlighter-rouge">yara-python</code>, and when a compilation error occurs, it locates the offending rule block, removes it, and tries again. This loop repeats until the file compiles cleanly or a maximum iteration count is reached. The result is a YARA file that is guaranteed to work, even when the upstream source contains rules with edge-case syntax or undocumented dependencies.</p>
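<p>The loop itself is simple to sketch. Here the compile step is abstracted into a callable so the logic is visible on its own; in MalHunt it would wrap <code class="language-plaintext highlighter-rouge">yara.compile()</code>, and the assumption that the raised error carries the offending line number is made explicit:</p>

```python
def prune_until_compiles(text, compile_fn, max_iters=100):
    """Compile-and-prune loop: try to compile, and on failure drop the rule
    block containing the reported error line, then try again.

    compile_fn stands in for yara.compile(source=...); any exception it
    raises is assumed (for this sketch) to carry the offending 1-indexed
    line number in a `.line` attribute."""
    for _ in range(max_iters):
        try:
            compile_fn(text)
            return text
        except Exception as err:
            bad = getattr(err, "line", None)
            if bad is None:
                raise  # not a per-rule error we can recover from
            lines = text.splitlines()
            # Walk back to the enclosing "rule" header...
            start = next(i for i in range(min(bad, len(lines)) - 1, -1, -1)
                         if lines[i].lstrip().startswith("rule "))
            # ...then forward to its matching closing brace.
            depth, seen, end = 0, False, start
            for j in range(start, len(lines)):
                seen = seen or "{" in lines[j]
                depth += lines[j].count("{") - lines[j].count("}")
                if seen and depth == 0:
                    end = j
                    break
            text = "\n".join(lines[:start] + lines[end + 1:])
    raise RuntimeError("rule file still does not compile after pruning")
```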

<p><strong>Handling large scans without giving up.</strong> YARA scanning over a memory dump is slow. On large images it can easily take 15–20 minutes. The new <code class="language-plaintext highlighter-rouge">VolatilityConfig</code> now exposes a dedicated <code class="language-plaintext highlighter-rouge">yara_timeout</code> parameter (defaulting to 15 minutes) separate from the general command timeout. If a scan times out and the threshold is still below one hour, MalHunt doubles it and retries automatically. This prevents the tool from aborting unnecessarily on large forensic images, the kind you typically encounter in enterprise incident response.</p>
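<p>The retry strategy reduces to a small loop. The constants below mirror the defaults described above, and the runner is injectable purely so the sketch is self-contained; the actual <code class="language-plaintext highlighter-rouge">VolatilityConfig</code> plumbing differs:</p>

```python
import subprocess

DEFAULT_YARA_TIMEOUT = 15 * 60   # 15 minutes, separate from the general timeout
MAX_YARA_TIMEOUT = 3600          # one-hour ceiling before giving up

def run_with_growing_timeout(cmd, timeout=DEFAULT_YARA_TIMEOUT,
                             runner=subprocess.run):
    """Run cmd; on timeout, double the limit and retry up to the one-hour cap."""
    while True:
        try:
            return runner(cmd, timeout=timeout, capture_output=True, check=True)
        except subprocess.TimeoutExpired:
            if timeout >= MAX_YARA_TIMEOUT:
                raise  # already at the ceiling: surface the failure
            timeout = min(timeout * 2, MAX_YARA_TIMEOUT)
```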

<h2 id="better-error-messages-that-actually-help">Better error messages that actually help</h2>

<p><img src="/assets/malhunt-error-recovery.svg" alt="The new error recovery flow" /></p>

<p>One of the most frustrating aspects of working with Volatility is decoding its error output. MalHunt now puts effort into turning those errors into actionable feedback.</p>

<p><strong>Structured error objects.</strong> The <code class="language-plaintext highlighter-rouge">VolatilityError</code> exception now carries the plugin name, the return code, and the full stdout and stderr from the failed command. Both downstream code and log files can show exactly what went wrong and where, rather than just “Volatility command failed.”</p>

<p><strong>Symbol recovery.</strong> When Volatility fails because Windows symbol tables (PDB files) are missing, MalHunt now attempts to recover automatically. It parses the error output for download URLs, tries both <code class="language-plaintext highlighter-rouge">.pdb</code> and <code class="language-plaintext highlighter-rouge">.pd_</code> filename variants, and downloads the files into the correct directory structure. If the automatic recovery does not succeed, the tool generates a ready-to-run shell script containing all the download commands, so you can fix the problem in one step rather than hunting through Volatility documentation.</p>
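<p>The fallback-script generation can be sketched as a URL scrape over the error output. The regex and helper name here are illustrative, not MalHunt's real code:</p>

```python
import re

# Greedy match so symbol-server URLs that contain ".pdb" twice
# (e.g. .../ntkrnlmp.pdb/<GUID>/ntkrnlmp.pdb) are captured whole.
PDB_URL_RE = re.compile(r"https?://\S+\.pd[b_]", re.IGNORECASE)

def pdb_recovery_script(error_output: str) -> str:
    """Build a ready-to-run shell script that fetches every PDB URL found in
    Volatility's error output, trying both the .pdb and .pd_ variants."""
    cmds = ["#!/bin/sh", "set -e"]
    for url in sorted(set(PDB_URL_RE.findall(error_output))):
        base = url[:-1]  # strip the trailing 'b' or '_'
        cmds.append(f"curl -fLO {base}b")  # uncompressed .pdb
        cmds.append(f"curl -fLO {base}_")  # compressed .pd_ variant
    return "\n".join(cmds)
```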

<p><strong>YARA dependency detection.</strong> A separate check catches the situation where <code class="language-plaintext highlighter-rouge">yara-python</code> is not installed in the same Python environment that the <code class="language-plaintext highlighter-rouge">vol</code> binary uses. In that case MalHunt raises a specific error:</p>

<blockquote>
  <p>“YARA backend not available for Volatility. Install yara-python in the same Python environment used by ‘vol’ (or use yara-x), then retry.”</p>
</blockquote>

<p>That single sentence saves a lot of time compared to staring at a generic plugin-not-available stack trace.</p>

<h2 id="documentation">Documentation</h2>

<p>The project now ships with a proper <code class="language-plaintext highlighter-rouge">docs/</code> directory covering architecture decisions, installation instructions for various environments, a full usage reference with CLI examples, and a 400-line troubleshooting guide. Not the most glamorous part of a release, but probably the one most people will actually use.</p>

<h2 id="upgrading">Upgrading</h2>

<p>If you were using the old version, the upgrade path is straightforward:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>pip <span class="nb">install</span> <span class="nt">--upgrade</span> malhunt
</code></pre></div></div>

<p>Or from source:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>git clone https://github.com/andreafortuna/malhunt.git
<span class="nb">cd </span>malhunt
poetry <span class="nb">install</span>
</code></pre></div></div>

<p>Make sure you have Volatility3 (≥2.0.0) installed and accessible as <code class="language-plaintext highlighter-rouge">vol</code> in your PATH. ClamAV integration remains optional.</p>

<p>If you were running v0.1 with Volatility2, read the migration guide first: the command-line interface has changed and the profile-based approach no longer applies.</p>

<h2 id="what-is-next">What is next</h2>

<p>A few things are still on my list: Linux memory dump support has been tested but could use more coverage, the ClamAV integration needs updating to handle newer daemon configurations, and I want to add structured JSON output for easier integration with SIEM pipelines and case management tools. Pull requests and issues are welcome on <a href="https://github.com/andreafortuna/malhunt">GitHub</a>.</p>

<hr />

<p><em>MalHunt is released under the MIT license. It is intended for authorized forensic analysis and security research only.</em></p>]]></content><author><name>Andrea Fortuna</name><email>andrea@andreafortuna.org</email></author><summary type="html"><![CDATA[]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://andreafortuna.org/assets/2026/malhunt-cover.png" /><media:content medium="image" url="https://andreafortuna.org/assets/2026/malhunt-cover.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Ten problems every Volatility2 analyst will hit when migrating to Volatility3</title><link href="https://andreafortuna.org/2026/03/01/volatility3-top10-issues.html" rel="alternate" type="text/html" title="Ten problems every Volatility2 analyst will hit when migrating to Volatility3" /><published>2026-03-01T00:00:00+00:00</published><updated>2026-03-01T00:00:00+00:00</updated><id>https://andreafortuna.org/2026/03/01/volatility3-top10-issues</id><content type="html" xml:base="https://andreafortuna.org/2026/03/01/volatility3-top10-issues.html"><![CDATA[<p>After years of daily use in incident response and forensic investigations,
<strong>Volatility2</strong> becomes part of muscle memory. Commands are typed by reflex,
plugin behaviour is predictable, and the toolchain rarely surprises you.
Moving to <strong>Volatility3</strong> dismantles most of those assumptions at once. The
rewrite is architecturally justified and the result is genuinely superior, but
the migration path is littered with specific, repeatable problems that every
experienced analyst hits in roughly the same order. These are the ten that
caused the most friction, with the solutions that actually resolved them.</p>

<h2 id="the-symbol-table-labyrinth-windows-edition">The symbol table labyrinth: Windows edition</h2>

<p><img src="/assets/2026/volatility3-symbol-labyrinth.png" alt="The symbol table labyrinth" /></p>

<p>The first and most disorienting problem is the complete elimination of
<em>profiles</em>. In <strong>Volatility2</strong>, a profile tied the tool to a specific OS
version and analysis proceeded with a single <code class="language-plaintext highlighter-rouge">--profile=Win10x64_18362</code>
argument. <strong>Volatility3</strong> replaces this model with <em>symbol tables</em>, dynamically
resolved compressed JSON files matched against a PDB GUID embedded in the
memory image. In connected environments the framework contacts Microsoft’s
symbol server automatically, downloads the matching PDB, converts it, and
caches the result locally. The first run on a new kernel version is slow but
subsequent ones are instant. For air-gapped environments, the
<a href="https://blogs.jpcert.or.jp/en/2021/09/volatility3_offline.html">JPCERT/CC offline usage guide</a>
documents how to pre-populate the cache on a connected machine using
<code class="language-plaintext highlighter-rouge">pdbconv.py</code> and transfer the resulting files to the isolated workstation.</p>
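<p>Before carrying files across the air gap, it is worth confirming that the converted symbol table actually landed where <strong>Volatility3</strong> will look for it. A hedged sketch — the <code class="language-plaintext highlighter-rouge">windows/&lt;pdb_name&gt;/&lt;GUID&gt;.json.xz</code> layout is an assumption based on common installs, so verify it against your own symbol directory:</p>

```python
from pathlib import Path


def have_windows_symbols(symbol_dir, pdb_name, guid):
    """Return True if the offline symbol cache already holds a converted
    symbol table for the given PDB GUID.

    Layout assumption: <symbol_dir>/windows/<pdb_name>/<GUID>.json.xz
    (possibly an uncompressed .json); adjust for your installation.
    """
    base = Path(symbol_dir) / "windows" / pdb_name
    if not base.is_dir():
        return False
    return any(base.glob(f"{guid}*.json*"))
```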

<p>The second problem is subtler and harder to diagnose: the automagic PDB
scanner failing to locate the correct kernel base address, causing the familiar
<code class="language-plaintext highlighter-rouge">Unsatisfied requirement plugins.*.nt_symbols</code> error even with correctly placed
symbol files. Running <code class="language-plaintext highlighter-rouge">vol.py -f image.dmp -vvvv windows.info</code> reveals which
base addresses were attempted and whether the scanner exhausted its candidate
list. In several cases, specifying the kernel virtual offset manually through
the configuration system is the only path forward. If acquisition was performed
inside a virtualised environment, disabling hardware virtualisation in BIOS
before the next capture frequently resolves the issue at the source.</p>

<h2 id="linux-android-and-macos-building-symbols-from-scratch">Linux, Android, and macOS: building symbols from scratch</h2>

<p><img src="/assets/2026/volatility3-cross-platform.png" alt="Cross-platform symbol building" /></p>

<p>The third problem hits anyone working outside the Windows ecosystem.
<strong>Volatility3</strong> has no centralised symbol distribution for Linux, Android, or
macOS. Every kernel version and build configuration requires a custom-generated
symbol table produced with <a href="https://github.com/volatilityfoundation/dwarf2json">dwarf2json</a>,
a Go utility that processes DWARF debug data from a <code class="language-plaintext highlighter-rouge">vmlinux</code> binary and a
<code class="language-plaintext highlighter-rouge">System.map</code> file. The kernel must have been compiled with
<code class="language-plaintext highlighter-rouge">CONFIG_DEBUG_INFO=y</code>. Most distribution kernels do not enable this flag in
their production builds, but major distributions (Ubuntu, Debian, Fedora, RHEL)
ship debug symbols in separate packages (<code class="language-plaintext highlighter-rouge">linux-image-*-dbgsym</code> on
Debian/Ubuntu, <code class="language-plaintext highlighter-rouge">kernel-debuginfo</code> on RHEL/Fedora) that contain the unstripped
<code class="language-plaintext highlighter-rouge">vmlinux</code> needed by <code class="language-plaintext highlighter-rouge">dwarf2json</code>. A full recompile is only necessary when no
matching debug package exists for the target kernel version.</p>
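<p>The <code class="language-plaintext highlighter-rouge">dwarf2json</code> invocation itself is short; the time goes into obtaining a matching unstripped <code class="language-plaintext highlighter-rouge">vmlinux</code>. A thin wrapper, assuming the <code class="language-plaintext highlighter-rouge">linux</code> subcommand with <code class="language-plaintext highlighter-rouge">--elf</code> and <code class="language-plaintext highlighter-rouge">--system-map</code> flags as documented in the project README, with JSON written to stdout:</p>

```python
import shutil
import subprocess
from pathlib import Path


def build_linux_symbols(vmlinux, system_map, out_json,
                        dwarf2json="dwarf2json"):
    """Produce a Volatility3 Linux symbol table with dwarf2json.

    Returns the output path, or None if the binary is not on PATH.
    Command shape follows the dwarf2json README; JSON goes to stdout.
    """
    exe = shutil.which(dwarf2json)
    if exe is None:
        return None
    with open(out_json, "wb") as out:
        subprocess.run(
            [exe, "linux", "--elf", str(vmlinux),
             "--system-map", str(system_map)],
            stdout=out, check=True,
        )
    return Path(out_json)
```

<p>Drop the resulting JSON (optionally xz-compressed) into the <code class="language-plaintext highlighter-rouge">linux</code> subdirectory of your symbol path so the framework can find it.</p>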

<p>The fourth problem compounds the difficulty for Android emulator dumps. The
kernel must be compiled from source using the exact toolchain version embedded
in <code class="language-plaintext highlighter-rouge">/proc/version</code>, and the resulting <code class="language-plaintext highlighter-rouge">vmlinux</code> must produce a banner string
that matches the memory dump character for character. A single invisible
whitespace discrepancy causes <strong>Volatility3</strong> to reject the symbol file without
a clear explanation. Running <code class="language-plaintext highlighter-rouge">vol.py banners</code> on the dump before any other
command verifies the expected banner and prevents hours of misdiagnosis.</p>
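<p>The comparison really is byte for byte. A minimal sketch of the kind of scan <code class="language-plaintext highlighter-rouge">vol.py banners</code> performs — approximate only, since the real plugin is layer-aware rather than a flat-file grep:</p>

```python
import re
from pathlib import Path


def extract_linux_banners(dump_path):
    """Collect candidate Linux kernel banner strings from a raw dump.

    The banner embedded in the symbol table must match one of these byte
    for byte, whitespace included. Reading the whole file is fine for
    modest dumps; stream in chunks for large images.
    """
    data = Path(dump_path).read_bytes()
    return set(re.findall(rb"Linux version [^\x00\n]{1,300}", data))
```

<p>Diffing the output of this scan against the banner string inside the generated symbol JSON exposes the invisible-whitespace discrepancies that <strong>Volatility3</strong> rejects silently.</p>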

<p>The fifth problem is specific to macOS analysis. A Kernel Debug Kit (KDK)
matching the exact OS build number must be downloaded from Apple’s developer
portal. After running <code class="language-plaintext highlighter-rouge">dwarf2json</code> on the kernel DWARF bundle, the
<code class="language-plaintext highlighter-rouge">constant_data</code> field in the resulting JSON must be manually populated with a
base64-encoded Darwin banner string extracted from the target memory image.
Forgetting this step or encoding the wrong banner produces the same
“symbol table requirement not fulfilled” error seen on other platforms, but
with no automatic resolution path available.</p>

<h2 id="performance-regression-when-analysis-takes-days">Performance regression: when analysis takes days</h2>

<p><img src="/assets/2026/volatility3-performance.png" alt="Performance regression" /></p>

<p>The sixth problem is one of the most surprising: dramatic, sometimes
catastrophic <em>performance degradation</em> on specific plugins. <code class="language-plaintext highlighter-rouge">windows.filescan</code>
on a typical Windows 10 image takes over an hour in <strong>Volatility3</strong>, versus
under one minute in the previous framework. The <code class="language-plaintext highlighter-rouge">timeliner</code> plugin, which
aggregates artefacts from dozens of sources across the entire image, has been
observed running for over a hundred hours on large dumps without completing.
The root cause is often the automagic stacker, which attempts multiple layer
detection strategies sequentially before committing to a format. Each failed
attempt carries measurable overhead. Specifying the layer type explicitly with
<code class="language-plaintext highlighter-rouge">--layer-type WindowsIntel</code> bypasses the guesswork and can reduce startup time
from several minutes to a few seconds on the same image. QEMU memory dumps are
the worst-affected format: the layered structure triggers repeated stacker
retries that render most plugins impractical until the dump is converted to raw
using the built-in <code class="language-plaintext highlighter-rouge">layerwriter</code> command
(<code class="language-plaintext highlighter-rouge">vol.py -f dump.qemu -o output_dir layerwriter.LayerWriter</code>).</p>

<p>The seventh problem is a specific pathological case: <code class="language-plaintext highlighter-rouge">windows.memmap.Memmap</code>
entering an infinite page-mapping loop on certain system processes such as
<code class="language-plaintext highlighter-rouge">svchost.exe</code> and <code class="language-plaintext highlighter-rouge">sihost.exe</code>. As documented in
<a href="https://github.com/volatilityfoundation/volatility3/issues/1920">GitHub issue #1920</a>,
the plugin prints the table header and then consumes 100% CPU indefinitely,
producing no rows and only terminating after several days or a manual interrupt.
For affected PIDs, <code class="language-plaintext highlighter-rouge">windows.vadinfo.VadInfo</code> provides the virtual address
descriptor information required in most investigations and completes in a
reasonable time on the same processes that cause the hang.</p>

<h2 id="the-missing-toolkit-and-the-standalone-binary-problem">The missing toolkit and the standalone binary problem</h2>

<p><img src="/assets/2026/volatility3-missing-toolkit.png" alt="The missing toolkit" /></p>

<p>The eighth problem for practitioners migrating from <strong>Volatility2</strong> is the
absence of several plugins they relied on regularly: <code class="language-plaintext highlighter-rouge">notepad</code> and <code class="language-plaintext highlighter-rouge">clipboard</code>, among others, were never
ported to <strong>Volatility3</strong>. Some,
like <code class="language-plaintext highlighter-rouge">notepad</code>, were deliberately excluded because heap structure changes in
modern Windows make the plugin fundamentally unreliable regardless of which
framework hosts it. For text content buried in process memory, <code class="language-plaintext highlighter-rouge">windows.strings</code>
fed with an offset-tagged string file produced by <code class="language-plaintext highlighter-rouge">strings -o dump.mem &gt; strings.txt</code>
provides an imperfect but functional substitute. However, as tracked in the
long-running <a href="https://github.com/volatilityfoundation/volatility3/issues/876">issue #876 in the Volatility3 repository</a>,
the offset mapping logic still has edge cases where strings confirmed present
in dumped process memory go undetected, and the issue remains open after years
of active discussion.</p>
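<p>The offset tag is the important part: it is what lets <code class="language-plaintext highlighter-rouge">windows.strings</code> map each string back to a physical location. A small generator producing <code class="language-plaintext highlighter-rouge">(offset, string)</code> pairs from a raw dump, similar in spirit to <code class="language-plaintext highlighter-rouge">strings -o</code> output — check the exact file format the plugin expects against its documentation before relying on this:</p>

```python
import re
from pathlib import Path


def tagged_ascii_strings(dump_path, min_len=4):
    """Yield (byte_offset, text) for printable-ASCII runs in a raw dump,
    analogous to the offset-tagged output of `strings -o`.

    Reads the whole file for simplicity; stream for large images.
    """
    data = Path(dump_path).read_bytes()
    for m in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data):
        yield m.start(), m.group(0).decode("ascii")
```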

<p>The ninth problem removed the portable deployment model that was standard
practice in incident response: early versions of <strong>Volatility3</strong> shipped without
a standalone Windows executable. Analysts accustomed to dropping a single binary
onto a forensic workstation found themselves with no equivalent option, and the
missing binary became the most-commented issue in the entire repository.
Official pre-compiled executables are now distributed alongside each tagged
release and downloadable directly from the GitHub releases page without
requiring a local Python installation, restoring the workflow that practitioners
had built their field kits around.</p>

<h2 id="architectural-culture-shock-and-broken-dependencies">Architectural culture shock and broken dependencies</h2>

<p>The tenth problem is not a single error but a <em>systematic architectural
disruption</em> affecting both daily usage and plugin development. The <code class="language-plaintext highlighter-rouge">--profile=</code>
argument is gone. Plugin names are namespaced as <code class="language-plaintext highlighter-rouge">windows.*</code>, <code class="language-plaintext highlighter-rouge">linux.*</code>, and
<code class="language-plaintext highlighter-rouge">mac.*</code>. The <code class="language-plaintext highlighter-rouge">calculate()</code> and <code class="language-plaintext highlighter-rouge">render_text()</code> pattern that structured every
<strong>Volatility2</strong> plugin has been replaced by a <code class="language-plaintext highlighter-rouge">_generator()</code> method yielding
rows to a <code class="language-plaintext highlighter-rouge">TreeGrid</code> renderer. Class inheritance changed, dependency
declarations moved into a <code class="language-plaintext highlighter-rouge">requirements()</code> function, and short option flags
were removed entirely. The
<a href="https://volatility3.readthedocs.io/en/latest/vol2to3.html">official Volatility2 to Volatility3 migration guide</a>
documents these changes comprehensively, but no documentation fully prepares
an analyst for the operational cost of running familiar commands against a
framework that no longer recognises them.</p>

<p><img src="/assets/2026/volatility-3-info.png" alt="infographic" /></p>

<p>Woven through this architectural transition is a dependency problem that
cripples many first installations. <code class="language-plaintext highlighter-rouge">yara-python</code> and <code class="language-plaintext highlighter-rouge">pefile</code>
are listed as optional but practically mandatory for production use. Missing
<code class="language-plaintext highlighter-rouge">yara-python</code> silently disables <code class="language-plaintext highlighter-rouge">yarascan</code>, <code class="language-plaintext highlighter-rouge">vadyarascan</code>, and <code class="language-plaintext highlighter-rouge">mftscan</code>.
Missing <code class="language-plaintext highlighter-rouge">pefile</code> eliminates <code class="language-plaintext highlighter-rouge">verinfo</code>, <code class="language-plaintext highlighter-rouge">netscan</code>, <code class="language-plaintext highlighter-rouge">netstat</code>, and
<code class="language-plaintext highlighter-rouge">skeleton_key_check</code> at import time. Running
<code class="language-plaintext highlighter-rouge">pip install pefile "yara-python&gt;=3.8.0" capstone pycryptodome</code> immediately
after the base installation closes most of these gaps. On Windows, <code class="language-plaintext highlighter-rouge">libyara.dll</code>
must be in the system PATH and the Python architecture must match the YARA
binary architecture exactly, a constraint that silently breaks installations
where system Python is 64-bit and the installed YARA wheel is 32-bit.</p>
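<p>Auditing those optional-but-practically-mandatory dependencies is easy to automate. The <code class="language-plaintext highlighter-rouge">yara</code> and <code class="language-plaintext highlighter-rouge">pefile</code> mappings below come from the plugin lists above; the <code class="language-plaintext highlighter-rouge">capstone</code> and <code class="language-plaintext highlighter-rouge">Crypto</code> descriptions are best-guess placeholders (note that pycryptodome imports as <code class="language-plaintext highlighter-rouge">Crypto</code>):</p>

```python
import importlib.util

# Optional Volatility3 dependencies and the functionality lost without them.
OPTIONAL_DEPS = {
    "yara": "yarascan, vadyarascan, mftscan",
    "pefile": "verinfo, netscan, netstat, skeleton_key_check",
    "capstone": "disassembly output (placeholder description)",
    "Crypto": "pycryptodome-backed plugins (placeholder description)",
}


def missing_optional_deps():
    """Return {module: affected functionality} for every optional
    dependency not importable from the current interpreter."""
    return {mod: why for mod, why in OPTIONAL_DEPS.items()
            if importlib.util.find_spec(mod) is None}
```

<p>Running this from the same interpreter that launches <code class="language-plaintext highlighter-rouge">vol</code> surfaces the silent plugin losses before they bite mid-investigation.</p>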

<p>Despite all of this, the direction of travel is clear. The
<a href="https://volatilityfoundation.org/frequently-asked-questions/">Volatility Foundation</a>
continues to close the feature gap with each release, and the underlying
architecture of <strong>Volatility3</strong> is genuinely better suited to modern memory
analysis than its predecessor. The investment required to navigate these ten
problems is real but not prohibitive, and it pays back quickly once the
environment is correctly configured and the new mental model is internalised.</p>]]></content><author><name>Andrea Fortuna</name><email>andrea@andreafortuna.org</email></author><summary type="html"><![CDATA[After years of daily use in incident response and forensic investigations, Volatility2 becomes part of muscle memory. Commands are typed by reflex, plugin behaviour is predictable, and the toolchain rarely surprises you. Moving to Volatility3 dismantles most of those assumptions at once. The rewrite is architecturally justified and the result is genuinely superior, but the migration path is littered with specific, repeatable problems that every experienced analyst hits in roughly the same order. These are the ten that caused the most friction, with the solutions that actually resolved them.]]></summary></entry><entry><title type="html">Face ID vs. Android Face Unlock: A Security Comparison</title><link href="https://andreafortuna.org/2026/02/28/face-id-vs-android-face-unlock-security.html" rel="alternate" type="text/html" title="Face ID vs. Android Face Unlock: A Security Comparison" /><published>2026-02-28T00:00:00+00:00</published><updated>2026-02-28T00:00:00+00:00</updated><id>https://andreafortuna.org/2026/02/28/face-id-vs-android-face-unlock-security</id><content type="html" xml:base="https://andreafortuna.org/2026/02/28/face-id-vs-android-face-unlock-security.html"><![CDATA[<p><img src="/assets/2026/face-unlock-cover.png" alt="face unlock cover" /></p>

<h2 id="the-hardware-gap-that-defines-the-comparison">The hardware gap that defines the comparison</h2>

<p><strong>Apple</strong> built Face ID around dedicated hardware that most competitors have never replicated at scale. The TrueDepth camera system, introduced with the iPhone X in 2017 and refined across every subsequent generation, uses a dot projector, an infrared camera, and a flood illuminator to cast more than 30,000 invisible infrared points onto the user’s face. The <a href="https://support.apple.com/en-us/102381">TrueDepth system</a> then reads the distortion of those dots to generate a precise depth map, while a separate infrared snapshot captures the resulting pattern. This is not image recognition in the conventional sense; it is a geometric measurement of the physical structure of a human face, performed in three dimensions and entirely independent of ambient light conditions.</p>

<p><img src="/assets/2026/face-unlock.png" alt="face unlock infographic" /></p>

<p><strong>Android</strong> manufacturers have historically taken a different path. The vast majority of Android phones, including flagship models from <strong>Samsung</strong>, <strong>OnePlus</strong>, and many others, perform facial recognition using the front-facing RGB camera. This 2D approach converts a standard photograph into a mathematical representation of facial geometry, measuring relative distances between the eyes, nose, mouth, and jawline. The process is fast and works reliably in daylight, but it is fundamentally different in its security profile. A photograph, a detailed mask, or even a video played on a second screen can, under certain conditions, defeat a purely 2D recognition system. This asymmetry in hardware capability is the root of almost every meaningful security difference between the two ecosystems.</p>

<h2 id="how-face-id-maps-geometry-into-identity">How Face ID maps geometry into identity</h2>

<p><img src="/assets/2026/apple-secure-enclave-architecture.png" alt="Apple Secure Enclave Architecture" />
<em>Diagram of the Secure Enclave components. Source: <a href="https://support.apple.com/guide/security/sec59b0b31ff/web">Apple Platform Security</a>.</em></p>

<p>The <em>depth map</em> generated by <strong>Apple</strong>’s TrueDepth hardware is converted into a mathematical model, informally referred to as a <em>face vector</em>, which is a compact numerical representation of the three-dimensional structure of the face. This vector captures structural information that no flat image can contain. A neural network running entirely on the device compares each unlock attempt against the stored vector, producing a confidence score; if that score exceeds a defined threshold, access is granted. Critically, neither the raw depth map nor the resulting vector ever leaves the device. Both are stored and processed exclusively within the <a href="https://support.apple.com/en-us/102381">Secure Enclave</a>, a dedicated cryptographic coprocessor physically isolated from the main application processor and inaccessible even to the operating system itself.</p>

<p>The Secure Enclave is not merely a software boundary. It is a dedicated subsystem within the system-on-chip (SoC) with its own boot process, its own encrypted memory, and communication channels that the main application processor cannot read. Even if the application layer were fully compromised by malware or a privilege escalation exploit, the biometric data stored in the enclave would remain unreachable. <strong>Apple</strong> reports the probability of a random individual unlocking someone else’s device with Face ID at approximately one in one million, compared to one in fifty thousand for the older Touch ID fingerprint sensor. The system also adapts over time, gradually updating the stored vector to account for natural changes in appearance such as facial hair, eyeglasses, or the slower drift of aging. This continuous learning is performed without transmitting any data externally, maintaining the privacy model alongside the security one.</p>
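<p>Those headline rates — one in a million versus one in fifty thousand — compound over repeated attempts, though in practice both modalities fall back to the passcode after a handful of failures, keeping the attempt count small. A quick way to compare them, treating each random-face presentation as an independent trial:</p>

```python
def p_any_false_accept(per_attempt_rate, attempts):
    """Probability that at least one of `attempts` independent random
    presentations is falsely accepted."""
    return 1 - (1 - per_attempt_rate) ** attempts


face_id = p_any_false_accept(1 / 1_000_000, 5)   # Face ID headline rate
touch_id = p_any_false_accept(1 / 50_000, 5)     # Touch ID headline rate
# Over the same five attempts, Touch ID's cumulative false-accept odds
# are roughly 20x Face ID's.
print(f"Face ID: {face_id:.2e}, Touch ID: {touch_id:.2e}")
```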

<h2 id="androids-approach-algorithms-over-dedicated-optics">Android’s approach: algorithms over dedicated optics</h2>

<p><img src="/assets/2026/android-biometric-stack.png" alt="Android Biometric Stack Architecture" />
<em>BiometricPrompt architecture and the Trusted Execution Environment stack. Source: <a href="https://source.android.com/docs/security/features/biometric">Android Open Source Project</a>.</em></p>

<p><strong>Google</strong> has pursued a different strategy on its recent <strong>Pixel</strong> line, and the evolution of that strategy illustrates how far software can extend the limits of biometric security without specialized hardware. After abandoning the dedicated 3D sensors of the <strong>Pixel 4</strong>, subsequent 2D face unlock implementations on <strong>Android</strong> were widely classified as a weaker biometric modality, acceptable for device unlock but not for authorizing financial transactions or accessing sensitive application data. The <a href="https://source.android.com/docs/security/features/biometric/measure">Android biometric security framework</a> defines three security classes, from Class 1 (convenience only) to Class 3 (strong authentication), and for years face unlock sat below the threshold required for payment authorization or cryptographic key access.</p>

<p>With the <strong>Pixel 8</strong>, <strong>Google</strong> upgraded face unlock to <em>Class 3 biometric</em>, meaning it meets the same security bar as fingerprint authentication and can be used with the Android Keystore to protect cryptographic keys. The improvement came not from new sensors but from substantial advances in <em>liveness detection</em>, the algorithmic ability to distinguish a live human face from a static image or a three-dimensional replica. Modern liveness detection systems analyze micro-movements, skin texture variations, infrared reflectance patterns on devices equipped with appropriate illuminators, and temporal consistency across multiple frames captured during the brief unlock gesture. The result is a system that remains more theoretically vulnerable than <strong>Apple</strong>’s hardware-centric approach but is considerably harder to fool than earlier software-only implementations.</p>

<p>The storage model on <strong>Android</strong> is sandboxed but follows a different architecture. Biometric templates reside within the Trusted Execution Environment (TEE), implemented through <strong>ARM</strong> TrustZone technology, while cryptographic keys are safeguarded by a dedicated secure element like the <strong>Titan M2</strong> chip on <strong>Google Pixel</strong> devices. The TEE provides strong isolation from the main OS, comparable in concept to <strong>Apple</strong>’s Secure Enclave, though the depth of that comparison depends on the specific implementation. The key challenge for <strong>Android</strong> is not any single device but the ecosystem as a whole: across hundreds of manufacturers and thousands of models, the quality of TEE implementation varies in ways that are difficult for end users or even administrators to evaluate without detailed technical documentation.</p>

<h2 id="the-threat-model-spoofing-bypasses-and-real-world-attacks">The threat model: spoofing, bypasses, and real-world attacks</h2>

<p>The most operationally relevant question is not which system is theoretically stronger but which is more difficult to defeat under realistic adversarial conditions. <em>Presentation attacks</em> (using a physical or digital replica of the target’s face to deceive the sensor) represent the primary concern for any face-based biometric.</p>

<p>A 2D face unlock system is inherently vulnerable to attacks using high-resolution photographs, printed or displayed on a screen. Several older <strong>Android</strong> devices were publicly demonstrated to unlock using nothing more than an image retrieved from a social media profile, or even a video playing on another smartphone.</p>

<iframe width="100%" height="400" src="https://www.youtube.com/embed/BGgQ9woZQOg?si=qMRyTmqsvrncUvNn" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen=""></iframe>
<p><em>A popular demonstration by Unbox Therapy showing how the Samsung Galaxy S10’s 2D face unlock could be bypassed using a video.</em></p>

<p><strong>Apple</strong>’s Face ID sets a substantially higher bar: defeating it requires a custom-crafted three-dimensional model of the target’s face with accurate depth representation and credible infrared reflectance properties, a task that is neither trivial nor inexpensive. Researchers at the Vietnamese security firm Bkav have nonetheless demonstrated proof-of-concept bypasses using highly detailed 3D-printed masks combined with 2D infrared images for the eyes.</p>

<iframe width="100%" height="400" src="https://www.youtube-nocookie.com/embed/GRra4PoAiaY" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen=""></iframe>
<p><em>Security researchers from Bkav demonstrating a proof-of-concept bypass of Apple’s Face ID using a specially crafted 3D mask. (Video via Reuters)</em></p>

<p>Law enforcement interaction represents a separate and often underappreciated dimension of the threat model. In documented cases across multiple jurisdictions, authorities have compelled individuals to unlock devices using biometric authentication, on the grounds that biometrics constitute physical evidence rather than testimonial disclosure protected against self-incrimination. The legal framework around this coercion varies by country, but the practical implication is that both Face ID and Android face unlock can be used to access a device without requiring the owner to disclose a passphrase. From this angle, the security difference between the two systems becomes less decisive than the shared vulnerability intrinsic to any biometric access control mechanism: the authenticator is always present and visible.</p>

<p>A subtler attack surface involves the neural network inference layer itself. Adversarial perturbations, carefully crafted and nearly imperceptible modifications to input data, can sometimes cause classification models to produce incorrect outputs. Since <strong>Android</strong> face unlock relies more heavily on learned feature representations, it is theoretically more exposed to this category of attack. In practice, such techniques require significant expertise and physical proximity to the target device. <strong>Apple</strong>’s reliance on raw depth measurements, rather than purely learned visual features, offers a degree of inherent resistance to adversarial input manipulation, since the depth sensor produces structured physical data that is harder to perturb than a pixel array.</p>

<h2 id="what-this-means-in-practice-for-users-and-organizations">What this means in practice for users and organizations</h2>

<p>The security gap between the two systems is real, measurable, and architecturally significant. It does not, however, translate uniformly into elevated risk for every user in every context. For the vast majority of individuals using face unlock to avoid typing a PIN throughout the day, both systems provide adequate protection against casual access by strangers, opportunistic theft, or unsophisticated attackers. The adversary capable of defeating either system at scale remains confined, for now, to well-resourced state actors and highly specialized security researchers.</p>

<p>For organizations evaluating mobile device management policies or assessing risk for regulated industries, the distinction carries considerably more weight. <strong>Apple</strong>’s uniform hardware implementation across all Face ID devices means that the security properties of the biometric are predictable and consistent across an entire fleet. An IT administrator deploying iPhones in a financial services or healthcare environment can rely on the same depth-sensing architecture regardless of which iPhone model employees carry. <strong>Android</strong>’s heterogeneous ecosystem makes equivalent assurance difficult to provide. A <strong>Samsung Galaxy S25</strong> and a budget device from a lesser-known manufacturer may both advertise face unlock, but the underlying implementation, from sensor quality to TEE integrity to software patch level, can differ dramatically.</p>

<p>The <em>principle of least privilege</em> suggests that sensitive operations, whether authorizing a wire transfer or accessing an encrypted credential vault, should require the strongest available authentication factor. On <strong>Apple</strong> hardware, Face ID satisfies this requirement natively and consistently. On <strong>Android</strong>, the answer depends on the specific device, the Android version, and the application’s own implementation of the BiometricPrompt API. Recent flagships from <strong>Google</strong> and <strong>Samsung</strong> running fully patched software come close to closing the gap in practical terms. Older or lower-tier devices running outdated Android versions do not. Any organizational security policy that treats Android face unlock as equivalent to Face ID without accounting for this variance is operating on an assumption that the hardware and software stack does not always support.</p>

<p>The face, whether read in three dimensions by a dedicated sensor or interpreted by a neural network trained on millions of images, remains among the most convenient biometric factors available on a consumer device. The technology behind it is not monolithic, and treating it as such is a security mistake. Understanding that projecting 30,000 infrared points onto a face and recognizing its photograph with a standard camera are architecturally different operations, with different attack surfaces and different failure modes, is the foundation of any informed decision about mobile authentication, whether you are choosing a phone for personal use or writing a biometric policy for an enterprise.</p>]]></content><author><name>Andrea Fortuna</name><email>andrea@andreafortuna.org</email></author><summary type="html"><![CDATA[A detailed security comparison between Apple Face ID and Android Face Unlock, examining hardware architecture, threat models, and real-world implications.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://andreafortuna.org/assets/2026/face-unlock-cover.png" /><media:content medium="image" url="https://andreafortuna.org/assets/2026/face-unlock-cover.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Audit-Proofing your NIS2 training plan: a strategic guide</title><link href="https://andreafortuna.org/2026/02/27/nis2-training-plan-jekyll.html" rel="alternate" type="text/html" title="Audit-Proofing your NIS2 training plan: a strategic guide" /><published>2026-02-27T00:00:00+00:00</published><updated>2026-02-27T00:00:00+00:00</updated><id>https://andreafortuna.org/2026/02/27/nis2-training-plan-jekyll</id><content type="html" xml:base="https://andreafortuna.org/2026/02/27/nis2-training-plan-jekyll.html"><![CDATA[<h2 id="why-training-is-no-longer-optional">Why training is no longer optional</h2>

<p>The <a href="https://enobyte.com/en/legal/nis2/20/index.html">NIS2 Directive (EU) 2022/2555</a> has fundamentally redefined what it means for a European organization to take cybersecurity seriously. Among its most significant shifts is the elevation of training from a recommended best practice to a binding legal obligation. Article 20 explicitly requires that management bodies of essential and important entities follow cybersecurity training, and encourages organizations to offer similar, regular training to their employees (a requirement further solidified by Article 21). This is not a formality. It is the normative foundation upon which the entire human layer of security now rests.</p>

<p><img src="/assets/2026/nis2-training.png" alt="nis2-training" /></p>

<p>What makes this requirement particularly demanding is not its breadth, but its depth. <em>Compliance is no longer satisfied by assigning an annual e-learning module and checking a box.</em> Regulators, national supervisory authorities, and auditors now expect organizations to demonstrate that training is meaningful, measurable, and continuously updated. The standard has shifted from “did you train your staff?” to “can you prove the training worked, who received it, when, and how it aligned with your current threat landscape?”</p>

<h2 id="the-regulatory-architecture-articles-20-and-21-as-a-combined-mandate">The regulatory architecture: articles 20 and 21 as a combined mandate</h2>

<p>Understanding how to structure a compliant training plan requires reading <a href="https://nis2resources.eu/directive-2022-2555-nis2/article-20/">Article 20</a> and <a href="https://enobyte.com/en/legal/nis2/21/index.html">Article 21</a> together, not in isolation. Article 20 addresses governance and management accountability: board members and senior executives must personally undergo training to acquire sufficient knowledge to identify risks and assess cybersecurity risk-management practices. The personal liability dimension is crucial. Under NIS2, management can be held individually responsible for infringements, which transforms training from an HR matter into a boardroom imperative.</p>

<p>Article 21 then specifies the technical and organizational measures that entities must implement, listing “basic computer hygiene practices and cybersecurity training” as an explicit requirement within an all-hazards risk management framework. <em>Training must be anchored to the organization’s broader risk posture</em>, covering incident handling, business continuity, multi-factor authentication, backup procedures, and supply chain awareness. The two articles together make it clear that no training plan can be considered compliant if it operates as a standalone activity disconnected from risk assessment processes.</p>

<p>The financial stakes reinforce this reading. Essential entities face maximum fines of at least 10 million euros or 2% of global annual turnover (whichever is higher) for non-compliance, while important entities face maximum fines of at least 7 million euros or 1.4% of turnover (whichever is higher). These figures, combined with the possibility of personal liability for executives, make a defensible training program one of the most consequential investments an organization can make.</p>
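<p>In code form, the "whichever is higher" rule is a one-line maximum (figures as stated above; classifying an entity as essential or important is outside the scope of this sketch):</p>

```python
def nis2_max_fine(entity_type: str, global_turnover_eur: float) -> float:
    """Maximum NIS2 administrative fine: a fixed floor or a percentage
    of global annual turnover, whichever is higher."""
    if entity_type == "essential":
        return max(10_000_000, 0.02 * global_turnover_eur)   # 10M EUR or 2%
    if entity_type == "important":
        return max(7_000_000, 0.014 * global_turnover_eur)   # 7M EUR or 1.4%
    raise ValueError("entity_type must be 'essential' or 'important'")

# An essential entity with 2 billion EUR turnover: 2% wins over the floor
print(nis2_max_fine("essential", 2_000_000_000))  # 40000000.0
```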

<h2 id="designing-a-training-plan-that-holds-up-under-scrutiny">Designing a training plan that holds up under scrutiny</h2>

<p>A compliant training plan is built on four pillars: scope, content, cadence, and evidence. Each one must be intentionally designed rather than assembled by default.</p>

<p>Scope determines who receives what. Not all employees carry the same risk exposure, and a well-structured plan acknowledges this through segmentation. The general workforce needs foundational cyber hygiene: recognizing phishing, using strong passwords and password managers, understanding how to report incidents, applying secure configurations, and practicing safe remote work habits. Mid-level management adds a layer covering incident response protocols, business continuity fundamentals, and an overview of NIS2 obligations relevant to their function. Senior management and board members require training specifically tailored to their legal obligations, risk oversight responsibilities, and the personal liability framework that Article 20 introduces. A training plan that fails to differentiate between these audiences is unlikely to survive a rigorous audit.</p>

<p><img src="/assets/2026/nis2-info.png" alt="nis2 infographic" /></p>

<p>Content must map directly to identified risks. One of the most common gaps <a href="https://kymatio.com/blog/nis2-audit-evidence-checklist">auditors identify</a> is a mismatch between the threats documented in the organization’s risk register and the topics covered in its training modules. If a company has identified social engineering as a primary attack vector in its threat assessment, and its training program contains no phishing simulation or social engineering awareness module, that gap is an evidentiary liability. Content should be derived from the risk assessment, reviewed at least annually, and updated whenever the threat landscape shifts materially.</p>
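<p>The gap check auditors perform can be reduced to a set difference between the risk register and the training catalogue. A minimal sketch, with hypothetical topic names:</p>

```python
# Hypothetical topic names: threats documented in the risk register
# versus topics covered by the current training modules.
risk_register = {"phishing", "ransomware", "social engineering", "supply chain"}
training_topics = {"phishing", "ransomware", "password hygiene"}

# Any identified risk with no matching training content is an evidentiary gap.
uncovered = sorted(risk_register - training_topics)
print(uncovered)  # ['social engineering', 'supply chain']
```

<p>Running this comparison after every risk-assessment cycle, rather than once a year, is what keeps the curriculum anchored to the current threat landscape.</p>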

<p>Cadence addresses the persistent misconception that annual training is sufficient. NIS2’s requirement for “regular” training implies a frequency calibrated to the pace of threat evolution and to the organization’s operational reality. Practical interpretations from national authorities and compliance frameworks suggest quarterly awareness touchpoints at minimum, supplemented by role-specific deep dives and just-in-time training triggered by incidents or newly discovered vulnerabilities. Phishing simulations, tabletop exercises, and live scenario walkthroughs are not optional embellishments; they are the mechanisms through which training transitions from passive consumption to active competency.</p>

<h2 id="building-the-evidence-layer-what-auditors-actually-want-to-see">Building the evidence layer: what auditors actually want to see</h2>

<p><em>The shift from policy-based assurance to evidence-based proof</em> is perhaps the most operationally disruptive change NIS2 introduces. An auditor asking for proof of training compliance will not be satisfied by a list of employee names marked “completed.” What they require is granular, timestamped, exportable documentation that answers four specific questions: who was trained, what content they covered, when the training took place, and what their assessed performance was.</p>
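<p>Those four questions map naturally onto a structured record. A minimal sketch of what each evidence entry should capture (field names and the 80% pass threshold are illustrative, not prescribed by the directive):</p>

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class TrainingRecord:
    """One auditable evidence entry: who, what, when, and how they scored."""
    employee_id: str
    module: str
    completed_on: date
    score_pct: int

    def passed(self, threshold: int = 80) -> bool:
        # Failures should feed the remedial-activity records mentioned
        # below, not silently disappear from the evidence trail.
        return self.score_pct >= threshold

rec = TrainingRecord("E-1042", "Phishing awareness v3", date(2026, 1, 15), 92)
print(rec.passed())  # True
```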

<p>This means organizations need to invest in platforms or processes capable of producing this level of detail. Training records should include module-specific completion logs, assessment scores, remedial activity records for those who failed initial assessments, and separate documentation for management-level training that reflects their distinct obligations under Article 20. The <a href="https://www.enisa.europa.eu/publications/nis2-technical-implementation-guidance">ENISA technical implementation guidance</a> reinforces that evidence must demonstrate not just activity, but effectiveness. Dashboards showing improvement trends in phishing simulation click rates, reductions in policy violation incidents, or increases in self-reported suspicious activity are the kind of data that demonstrate a living program rather than a dormant one.</p>

<p>Governance documentation must accompany training records to provide context. Version histories of training content showing how modules evolved in response to updated risk assessments, board meeting minutes confirming that management completed their mandated training, and formal approval signatures on the training plan itself are all components of a defensible evidence package. Without this layer, even an operationally excellent training program may fail to produce the compliance narrative an audit demands.</p>

<h2 id="making-it-defensible-the-risk-linkage-principle">Making it defensible: the risk-linkage principle</h2>

<p>A training plan is only as defensible as the logical chain connecting it to the organization’s formal risk management framework. <em>Risk-linkage is the principle that transforms a training calendar into a compliance control.</em> It means that every training topic can be traced back to a specific identified risk, that every update to the training curriculum is triggered by a documented change in the risk landscape, and that training outcomes feed back into the organization’s periodic risk reviews as measurable evidence of risk reduction.</p>

<p>In practice, this requires integrating the training program into the same governance cycle as the risk assessment. When a new vulnerability is identified, the training team receives a signal to assess whether existing content addresses it. When a sector-level threat intelligence report is published, relevant modules are reviewed for currency. When an incident occurs, post-incident analysis informs the next training iteration. This recursive loop is what <a href="https://advisera.com/articles/nis2-training-awareness/">compliance frameworks increasingly describe</a> as the difference between a static program and a resilient one.</p>

<p>Organizations that build their training plans on this architecture (scope differentiated by role and risk, content anchored to threat intelligence, cadence calibrated to regulatory expectations, and evidence structured for auditability) are not simply meeting the minimum requirements of NIS2. They are building the kind of institutional security culture that the directive was designed to foster: one where cybersecurity awareness is not a compliance exercise, but an organizational capability embedded in daily operations and accountable at every level of the hierarchy.</p>]]></content><author><name>Andrea Fortuna</name><email>andrea@andreafortuna.org</email></author><summary type="html"><![CDATA[Why training is no longer optional]]></summary></entry><entry><title type="html">Privileged access management: risks and best practices for zero trust implementations</title><link href="https://andreafortuna.org/2026/02/26/privileged-access-management-zero-trust.html" rel="alternate" type="text/html" title="Privileged access management: risks and best practices for zero trust implementations" /><published>2026-02-26T00:00:00+00:00</published><updated>2026-02-26T00:00:00+00:00</updated><id>https://andreafortuna.org/2026/02/26/privileged-access-management-zero-trust</id><content type="html" xml:base="https://andreafortuna.org/2026/02/26/privileged-access-management-zero-trust.html"><![CDATA[<h2 id="the-strategic-weight-of-privileged-accounts">The strategic weight of privileged accounts</h2>

<p>In any enterprise environment, privileged accounts represent the highest-value target for attackers. These are not just administrator credentials; they encompass service accounts, DevOps pipelines, cloud management interfaces, and any identity with elevated permissions over critical systems. When one of these accounts is compromised, the consequences extend far beyond a single machine or dataset. Attackers can move laterally, escalate privileges, and reach the deepest layers of an organization’s infrastructure, often without triggering immediate alerts.</p>

<p><img src="/assets/2026/zero-trust.png" alt="zero-trust-pam" /></p>

<p>The numbers behind this threat are stark. According to research cited by <a href="https://www.keepersecurity.com/blog/2025/05/30/seven-risks-of-not-having-privileged-access-management/">Keeper Security</a>, the global average cost of a data breach in 2024 reached <strong>$4.88 million</strong>, a figure that reflects not only the technical remediation but also legal fees, regulatory fines, and lasting reputational damage. The 2024 <strong>AT&amp;T</strong> breach, which exposed data from more than 65 million former customer accounts, stands as a recent and instructive example of what happens when privileged access is left poorly managed on third-party cloud environments.</p>

<p>These incidents rarely originate from sophisticated zero-day exploits. In the majority of cases, the initial vector is a stolen or misused credential. Attackers rely on phishing, credential stuffing, or exploiting improperly decommissioned accounts to gain a foothold. Once inside, the presence of excessive or poorly monitored privileges allows them to escalate quickly and operate undetected for extended periods. The longer an attacker maintains access, the more costly the breach becomes, both in terms of direct financial impact and long-term erosion of customer trust.</p>

<p><em>Privileged Access Management</em> (PAM) addresses this problem by establishing structured controls over who can access sensitive systems, under what conditions, and for how long. But PAM alone is no longer sufficient. The modern threat landscape demands that it be combined with the principles of Zero Trust, a security model built on the premise that no user, device, or network segment should ever be trusted by default.</p>

<h2 id="why-traditional-pam-models-fall-short">Why traditional PAM models fall short</h2>

<p>Legacy PAM implementations were designed for a different era: perimeter-based networks where internal users were generally considered trustworthy, and where administrative access was granted on a semi-permanent basis to a small number of system administrators. That model has not survived contact with cloud adoption, remote work, and the explosion of non-human identities in modern IT environments.</p>

<p>One of the most persistent weaknesses in traditional PAM is the concept of <em>standing privileges</em>: accounts that retain elevated permissions continuously, regardless of whether those permissions are actively needed. This approach dramatically widens the attack surface. A compromised account with standing admin rights is immediately dangerous, while a credential with no active privileges at the moment of breach offers an attacker far less leverage.</p>

<p>The problem becomes even more acute in hybrid and multi-cloud environments, where privileged accounts often span multiple platforms with different security models and management interfaces. An administrator who holds standing privileges across AWS, Azure, and on-premises Active Directory presents a single point of failure that, if compromised, grants an attacker access to the organization’s entire technology stack. Without centralized visibility and policy enforcement, security teams are forced to manage these risks in silos, inevitably leaving gaps.</p>

<p>Shadow IT compounds the problem further. Many organizations simply do not have a complete inventory of their privileged accounts. Shared credentials, dormant service accounts, and unmonitored automation pipelines create blind spots that security teams cannot defend. As <a href="https://www.splashtop.com/blog/pam-challenges">Splashtop’s analysis of PAM challenges</a> highlights, the lack of automated discovery and continuous classification of privileged accounts is one of the most common and dangerous gaps organizations face today.</p>

<h2 id="zero-trust-as-the-architectural-backbone">Zero Trust as the architectural backbone</h2>

<p>The <a href="https://cloudsecurityalliance.org/blog/2025/01/29/zero-trust-approach-to-privileged-access-management">Cloud Security Alliance</a> defines Zero Trust PAM around three foundational assumptions: breaches will occur, trust must be continuously verified, and authentication must be adaptive and context-aware. These principles transform PAM from a static gatekeeping function into a dynamic, risk-responsive framework.</p>

<p>At the operational level, this translates into several concrete practices. <em>Just-in-Time</em> (JIT) access replaces standing privileges with time-bound elevations that are provisioned only when a specific task requires them and revoked automatically once the task is complete. A DevOps engineer, for instance, might be granted temporary root access for a deployment window of thirty minutes, after which the privilege is automatically removed. This model, endorsed by the <strong>CISA</strong> Zero Trust Maturity Model, shrinks the window of opportunity for attackers to exploit compromised credentials.</p>
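<p>The JIT model is easy to sketch: a grant that carries its own expiry and is checked on every use, rather than assumed valid. A simplified illustration, not any particular vendor's API:</p>

```python
import datetime as dt

class JitGrant:
    """Time-bound privilege elevation: active only inside its window."""
    def __init__(self, user: str, role: str, minutes: int):
        self.user, self.role = user, role
        self.granted_at = dt.datetime.now(dt.timezone.utc)
        self.expires_at = self.granted_at + dt.timedelta(minutes=minutes)

    def is_active(self, now=None) -> bool:
        now = now or dt.datetime.now(dt.timezone.utc)
        return self.granted_at <= now < self.expires_at

# Thirty minutes of root for a deployment window, revoked by construction.
grant = JitGrant("devops-eng", "root", minutes=30)
print(grant.is_active())                  # True while the window is open
print(grant.is_active(grant.expires_at))  # False once it lapses
```

<p>In production the PAM platform enforces this server-side; the point of the sketch is that expiry is a property of the grant itself, not a cleanup task someone has to remember.</p>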

<p>Multi-Factor Authentication, particularly phishing-resistant implementations such as FIDO2 hardware tokens and passkeys, adds another layer of defense. Combined with behavior-based anomaly detection, which can flag an administrator logging in from an unrecognized geolocation or accessing systems outside of business hours, adaptive authentication ensures that the verification process is not a one-time event at login but a continuous assessment throughout each session.</p>

<p><em>Micro-segmentation</em> reinforces these access controls at the network level. By dividing the infrastructure into isolated security zones, each with its own access policies, organizations can contain the impact of a compromised privileged account. Even if an attacker gains elevated access to one segment, micro-segmentation prevents unrestricted lateral movement to other parts of the network. When combined with JIT provisioning and continuous authentication, micro-segmentation creates a layered defense architecture where each layer independently limits the attacker’s reach.</p>

<p>An often overlooked component of Zero Trust PAM is the principle of <em>least privilege by design</em>. Rather than granting broad access and later attempting to restrict it, organizations should define the absolute minimum set of permissions required for each role and function from the outset. This inversion of the default, from “allow unless denied” to “deny unless explicitly allowed”, fundamentally changes the security posture of the organization and reduces the blast radius of any single compromised identity.</p>
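<p>The inversion from "allow unless denied" to "deny unless explicitly allowed" can be illustrated with an allowlist lookup, where the absence of a rule means denial (roles and resources are hypothetical):</p>

```python
# Deny-by-default: access is granted only when the exact
# (role, action, resource) triple appears in the allowlist.
ALLOWLIST = {
    ("backup-operator", "read", "db-prod"),
    ("backup-operator", "write", "backup-store"),
}

def is_allowed(role: str, action: str, resource: str) -> bool:
    return (role, action, resource) in ALLOWLIST  # anything else is denied

print(is_allowed("backup-operator", "read", "db-prod"))   # True
print(is_allowed("backup-operator", "write", "db-prod"))  # False: no rule
```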

<h2 id="integrating-pam-with-the-broader-security-ecosystem">Integrating PAM with the broader security ecosystem</h2>

<p>PAM does not operate effectively in isolation. Its value multiplies when integrated with Identity Governance and Administration (IGA) systems, Access Management platforms, and Security Information and Event Management (SIEM) tools. This integration creates a unified audit trail across all user activities, privileged and otherwise, enabling security teams to correlate events, detect lateral movement, and respond to incidents with the context they actually need.</p>

<p>The treatment of <strong>non-human identities</strong> is a particularly critical and often underestimated dimension of this ecosystem. Service accounts, API keys, machine-to-machine tokens, and automated pipelines frequently carry elevated permissions and are rarely subject to the same scrutiny as human users. In many organizations, non-human identities outnumber human users by a factor of ten or more, yet they receive a fraction of the security attention. An attacker who compromises a service account can blend into legitimate traffic patterns, moving through cloud environments and on-premises networks with minimal detection risk.</p>

<p>Applying Zero Trust principles to these identities requires a dedicated strategy. Credentials for service accounts and API integrations should be rotated automatically on a short lifecycle, ideally through a secrets management platform that eliminates the need for hard-coded credentials in source code or configuration files. Each non-human identity should be scoped to the minimum set of permissions required for its function, and its activity should be monitored continuously for deviations from established baselines. Organizations that treat non-human identities as second-class citizens in their PAM strategy are leaving one of their largest attack surfaces effectively unguarded.</p>
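<p>A short-lifecycle rotation policy for a non-human identity might look like the following sketch, with the secret generated rather than hard-coded (the 24-hour TTL is an illustrative choice, not a prescribed value):</p>

```python
import datetime as dt
import secrets

class ServiceCredential:
    """A rotating secret for a non-human identity, never hard-coded."""
    def __init__(self, identity: str, ttl_hours: int = 24):
        self.identity = identity
        self.ttl = dt.timedelta(hours=ttl_hours)
        self.rotate()

    def rotate(self) -> None:
        self.token = secrets.token_urlsafe(32)
        self.rotated_at = dt.datetime.now(dt.timezone.utc)

    def needs_rotation(self, now=None) -> bool:
        now = now or dt.datetime.now(dt.timezone.utc)
        return now - self.rotated_at >= self.ttl

cred = ServiceCredential("ci-pipeline")
previous = cred.token
cred.rotate()
print(cred.token != previous)  # True: every rotation issues a fresh secret
```

<p>In practice a secrets management platform performs the rotation and the consuming pipeline fetches the current value at runtime, so no copy of the credential ever lands in source code or configuration files.</p>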

<p>Insider threats require their own layer of attention. The danger does not come only from malicious actors; negligence and misconfiguration account for a significant share of privilege-related incidents. <a href="https://duo.com/learn/privileged-access-management-risks">Duo Security’s research on PAM risks</a> emphasizes that even well-designed PAM strategies can introduce new vulnerabilities if they are inconsistently monitored or poorly maintained, underscoring the need for continuous oversight rather than periodic audits.</p>

<h2 id="building-a-resilient-pam-implementation">Building a resilient PAM implementation</h2>

<p>Translating Zero Trust principles into a functioning PAM implementation requires a structured approach. The starting point is always a comprehensive audit of the existing identity landscape: every privileged account, human or automated, must be discovered, classified, and assessed against the principle of least privilege. Accounts that retain more access than their function requires should be immediately remediated.</p>

<p>From there, organizations should prioritize the implementation of Role-Based Access Control (RBAC) combined with JIT provisioning workflows. Credential vaulting, where privileged passwords and keys are stored in an encrypted, centrally managed repository rather than shared informally or stored in configuration files, eliminates one of the most common vectors for credential theft. Session recording provides forensic value after incidents and serves as a behavioral deterrent during normal operations.</p>

<p>A robust PAM implementation must also account for the full lifecycle of privileged accounts. This includes automated provisioning and deprovisioning tied to HR and organizational events, so that when an employee changes roles or leaves the organization, their privileged access is adjusted or revoked immediately. Too often, accounts persist long after the business justification for their privileges has expired, creating a growing inventory of dormant credentials that attackers can exploit.</p>
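<p>Tying deprovisioning to HR events can be sketched as an event handler that revokes or rebuilds entitlements in one step (users, roles, and grants here are hypothetical):</p>

```python
# Offboarding revokes everything at once; a role change rebuilds
# entitlements from the new role only, dropping stale privileges.
entitlements = {"alice": {"prod-admin", "vpn"}, "bob": {"db-admin"}}
ROLE_GRANTS = {"dba": {"db-admin"}, "sre": {"prod-admin", "vpn"}}

def on_hr_event(user, event, new_role=None):
    if event == "offboarded":
        entitlements.pop(user, None)
    elif event == "role_change" and new_role:
        entitlements[user] = set(ROLE_GRANTS.get(new_role, set()))

on_hr_event("bob", "offboarded")
on_hr_event("alice", "role_change", new_role="dba")
print(entitlements)  # {'alice': {'db-admin'}}
```

<p>The design choice that matters is replacement rather than accumulation: a role change never merges old grants into new ones, which is exactly how dormant privileges build up.</p>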

<p>Equally important is the investment in <em>security culture and training</em>. Technical controls are only as effective as the people who interact with them. Privileged users should receive targeted training on recognizing phishing attempts, handling credentials securely, and understanding the rationale behind access restrictions. An organization that deploys sophisticated PAM tooling but neglects to educate its administrators risks undermining its own defenses through human error.</p>

<p>Compliance considerations add urgency to this work. Frameworks such as <strong>NIST SP 800-207</strong>, <strong>ISO 27001</strong>, and regulatory standards like GDPR, NIS2, and PCI DSS all require demonstrable controls over privileged access. Automated audit logging, which PAM platforms can generate natively and forward to SIEM systems, directly supports compliance reporting and reduces the manual burden on security and legal teams.</p>

<h2 id="toward-a-continuous-security-discipline">Toward a continuous security discipline</h2>

<p>The convergence of PAM and Zero Trust is not a one-time project with a defined endpoint. It is an ongoing operational discipline that must evolve in response to new threats, new technologies, and changes in organizational structure. As cloud-native architectures, containerized workloads, and AI-driven automation continue to reshape enterprise IT, the definition of what constitutes a “privileged account” will keep expanding, and so must the controls that govern it.</p>

<p>Organizations that treat PAM and Zero Trust as a living practice, continuously auditing their identity landscape, adapting policies to emerging risks, and investing in both technology and people, will find themselves significantly better positioned against attackers and in front of regulators. Those that treat it as a checkbox exercise will inevitably discover, often at the worst possible moment, that static defenses cannot withstand a dynamic threat environment.</p>]]></content><author><name>Andrea Fortuna</name><email>andrea@andreafortuna.org</email></author><summary type="html"><![CDATA[Privileged accounts are among the most targeted assets in any organization. Understanding how PAM and Zero Trust intersect is essential to building a resilient security posture.]]></summary></entry><entry><title type="html">CERT-EU’s cyber threat intelligence framework: a common language for European digital defence</title><link href="https://andreafortuna.org/2026/02/23/cert-eu-cti-framework.html" rel="alternate" type="text/html" title="CERT-EU’s cyber threat intelligence framework: a common language for European digital defence" /><published>2026-02-23T00:00:00+00:00</published><updated>2026-02-23T00:00:00+00:00</updated><id>https://andreafortuna.org/2026/02/23/cert-eu-cti-framework</id><content type="html" xml:base="https://andreafortuna.org/2026/02/23/cert-eu-cti-framework.html"><![CDATA[<p>On February 13, 2026, <strong>CERT-EU</strong> (the Computer Emergency Response Team for the EU Institutions, Bodies and Agencies) released its <a href="https://www.cert.europa.eu/publications/threat-intelligence/cyber-threat-intelligence-framework/">Cyber Threat Intelligence Framework</a>, a document that formalizes how the organization classifies, assesses, and prioritizes cyber threats relevant to European Union entities. 
Published under TLP:CLEAR and openly shared with the broader cybersecurity community, the framework is not merely a technical reference: it represents a deliberate effort to establish a <em>shared methodological language</em> that bridges the gap between raw technical analysis and operational decision-making.</p>

<p><img src="/assets/2026/certeu_cti_infographic.png" alt="infographic" /></p>

<p>The timing is significant. As geopolitical tensions continue to shape the digital threat landscape, and as regulations such as NIS2 and DORA push EU organizations toward more structured approaches to cyber risk management, the need for a coherent, institution-wide intelligence model has never been more pressing. The <strong>CERT-EU</strong> framework responds to this need by introducing structured concepts, consistent scoring, and clearly defined threat taxonomies, all calibrated to the specific context of Union entities.</p>

<h2 id="formalizing-the-intelligence-process">Formalizing the intelligence process</h2>

<p>At its core, the Cyber Threat Intelligence Framework defines the analytical and operational standards that <strong>CERT-EU</strong> uses across its publications, from individual Cyber Briefs to the annual <a href="https://cert.europa.eu/publications/threat-intelligence/">Threat Landscape Report</a>. The fundamental challenge it addresses is deceptively simple: intelligence is only useful if the people producing it and the people acting on it share the same understanding of what terms mean, how severity is measured, and which threats deserve immediate attention versus those requiring only monitoring.</p>

<p>By codifying definitions, scoring criteria, and classification hierarchies, the framework transforms what might otherwise be a subjective, analyst-dependent process into a <em>repeatable and consistent methodology</em>. Primary Operational Contacts (POCs) and Local Cybersecurity Officers (LCOs) at Union entities can now receive CERT-EU alerts knowing that the underlying assessments have been produced according to a transparent, documented standard, rather than relying on implicit institutional knowledge.</p>

<p>The framework is also conceived as a key enabler of what <strong>CERT-EU</strong> calls its <em>Full-Spectrum Adversary Approach</em>, an internal model for threat-informed defence that supports holistic modelling of threats across both strategic and technical dimensions. By making this approach explicit and reproducible, the framework strengthens situational awareness and ensures that observations translate into structured data capable of driving faster, more coherent operational responses.</p>

<h2 id="the-mai-concept-and-the-ecosystem-model">The MAI concept and the ecosystem model</h2>

<p>One of the framework’s most significant conceptual contributions is the introduction of the <em>Malicious Activity of Interest</em> (MAI) as the central analytical unit. Rather than focusing exclusively on confirmed incidents, an MAI encompasses a broader range of adversary behaviours: confirmed compromises, but also suspicious intrusion attempts, adversarial infrastructure development, and targeted reconnaissance. This expanded scope is deliberate, acknowledging that in modern threat environments, the early stages of an attack cycle carry intelligence value that should not be discarded before a formal incident has been confirmed.</p>

<p>Equally important is the framework’s <em>ecosystem model</em>. <strong>CERT-EU</strong> does not limit its analytical lens to direct attacks on EU institutions. Instead, it considers the broader environment in which Union entities operate: the countries in which they are active, the sectors they belong to, the software and services they rely on, and the supply chains that underpin their operations. This perspective reflects a crucial insight: a threat does not need to directly target an institution to be operationally relevant. A compromised supplier, a widely exploited vulnerability in commercial software used across EU bodies, or a campaign targeting a sector adjacent to Union entities can all carry systemic implications.</p>

<p>The ecosystem model translates into a more nuanced approach to threat relevance. When <strong>CERT-EU</strong> analysts assess an MAI, they consider not only whether Union entities are directly targeted, but also how many elements of the ecosystem are affected, and how those effects might cascade. A threat actor whose activity spans multiple ecosystem components will be rated more severely than one whose activity is isolated to a single, peripheral element, even if neither has yet caused a confirmed incident at a Union entity.</p>

<h2 id="threat-levels-actor-levels-and-scoring">Threat levels, actor levels, and scoring</h2>

<p>The framework introduces two structured scales designed to support consistent prioritization. The <em>threat level</em> scale assesses the criticality and proximity of malicious cyber activity in relation to Union entities: a “High” rating indicates an immediate threat requiring urgent verification and action, “Medium” signals a close threat warranting careful monitoring, and “Low” describes distant or indirect threats with no immediately identified link to Union entities. These levels are applied particularly in the Threat Alerts that <strong>CERT-EU</strong> provides to its constituents, guiding the urgency and scope of recommended mitigations.</p>

<p>Alongside this, a <em>threat actor level</em> scale classifies adversaries based on their observed behaviour during a defined period of interest. A “Critical” actor is one that has caused at least one significant incident directly affecting Union entities; a “High” actor has been responsible for a qualifying MAI that did not reach the threshold of a significant incident; “Medium” and “Low” actors are distinguished by the breadth of ecosystem elements their activity has touched. This granularity allows decision-makers to contextualize alerts within a broader picture of adversary behaviour over time, rather than reacting to isolated events without context.</p>
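<p>The actor-level rules read like a decision procedure, which a sketch makes explicit (the ecosystem-breadth threshold separating Medium from Low is an illustrative assumption; the framework defines the precise criteria):</p>

```python
def actor_level(caused_significant_incident: bool,
                has_qualifying_mai: bool,
                ecosystem_elements_touched: int) -> str:
    """Sketch of the CERT-EU threat actor scale. The breadth threshold
    between Medium and Low is assumed for illustration."""
    if caused_significant_incident:
        return "Critical"   # at least one significant incident at a Union entity
    if has_qualifying_mai:
        return "High"       # qualifying MAI below the significant-incident bar
    return "Medium" if ecosystem_elements_touched > 1 else "Low"

print(actor_level(False, True, 0))   # High
print(actor_level(False, False, 3))  # Medium
```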

<p>Complementing these scales, the framework defines a scoring mechanism for both adversaries and mitigations. The threat score is driven by five components: occurrences, targeting, severity, time period, and a decay factor that progressively reduces the weight of older activity. The mitigation scoring draws on a formula that incorporates the coverage of adversary techniques by available controls, the number of initial access vectors addressed, and alignment with the Essential Eight baseline practices, providing a quantitative basis for defensive planning and resource allocation that goes well beyond intuition-based prioritization.</p>
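<p>The framework names the five components of the threat score without publishing the formula itself; the following is only an illustrative composition showing how a decay factor progressively discounts older activity (the weights, the half-life, and the multiplicative form are all assumptions):</p>

```python
def decay(days_since: int, half_life_days: int = 180) -> float:
    """Exponential decay: activity loses half its weight every half-life."""
    return 0.5 ** (days_since / half_life_days)

def threat_score(events) -> float:
    """Illustrative composition of targeting, severity, and decay; the
    real CERT-EU formula is defined by the framework, not reproduced here."""
    return sum(e["targeting"] * e["severity"] * decay(e["days_since"])
               for e in events)

events = [
    {"targeting": 1.0, "severity": 3, "days_since": 0},    # fresh, direct
    {"targeting": 0.5, "severity": 3, "days_since": 360},  # old, indirect
]
print(threat_score(events))  # 3.375: the old event contributes only 0.375
```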

<h2 id="standards-and-a-european-approach">Standards and a European approach</h2>

<p>A defining characteristic of the <strong>CERT-EU</strong> framework is its deliberate integration of established international standards rather than the development of new parallel ones. For the classification of adversary tactics, techniques, and procedures (TTPs), the framework adopts the <a href="https://attack.mitre.org/">MITRE ATT&amp;CK</a> knowledge base, a widely used, behaviour-based taxonomy that links observable adversary actions to known techniques, making threat-hunting and prioritized mitigation systematic and repeatable for analysts and defenders alike.</p>

<p>For the assessment of source reliability and information credibility, the framework employs the <em>Admiralty Code</em>, a NATO-standard system that evaluates these two dimensions independently. Source reliability is rated from A (completely reliable) to F (unreliable or untested), while information credibility runs from 1 (confirmed by multiple sources) to 6 (cannot be judged). Crucially, <strong>CERT-EU</strong> only uses intelligence that meets a specific threshold (A1, A2, B1, or B2 combinations), ensuring that CTI products are grounded in information from sources with a demonstrated track record and with sufficient corroboration or plausibility.</p>
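<p>In code, the threshold described above reduces to a simple allow-list check on the combined source/credibility rating. The report records below are invented for illustration; only the accepted set (A1, A2, B1, B2) comes from the framework:</p>

```python
# Admiralty Code threshold described in the text: only intelligence rated
# A1, A2, B1, or B2 (reliable source, corroborated or plausible information)
# is retained for CTI production.
ACCEPTED = {"A1", "A2", "B1", "B2"}

def meets_threshold(rating: str) -> bool:
    """True if a combined rating such as 'B2' passes the CERT-EU bar."""
    return rating.upper() in ACCEPTED

# Hypothetical incoming reports, each tagged with an Admiralty rating.
reports = [
    {"id": "rpt-1", "rating": "A1"},
    {"id": "rpt-2", "rating": "C3"},  # source or credibility too weak
    {"id": "rpt-3", "rating": "B2"},
]
usable = [r["id"] for r in reports if meets_threshold(r["rating"])]
# usable -> ['rpt-1', 'rpt-3']
```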

<p>On the question of attribution, the framework adopts a strictly technical stance. <strong>CERT-EU</strong> does not attribute activity to states or organizations, focusing instead on identifying threat actors through observable technical indicators such as TTPs, infrastructure overlaps, malware artefacts, and targeting patterns. When attribution to a known threat actor proves impossible, the framework designates an Unattributed Threat Actor (UTA) with a numeric suffix (for example, UTA-53), which can later be merged with a known actor or another UTA as additional evidence emerges. This approach, consistent with the best practices promoted by the <a href="https://www.first.org/">FIRST</a> community for CTI reporting, ensures that attribution claims remain defensible, evidence-based, and revisable as the analytical picture develops.</p>

<h2 id="a-living-document-for-a-changing-landscape">A living document for a changing landscape</h2>

<p><strong>CERT-EU</strong> has explicitly designed the framework as a dynamic document rather than a static reference. The threat environment changes constantly: new geopolitical pressures emerge, technologies evolve, and regulatory frameworks are updated. The Cyber Threat Intelligence Framework is intended to evolve in step with these shifts, and the organization has published it under TLP:CLEAR precisely to invite feedback from peers and cybersecurity professionals across the broader community. This openness to external input is itself a statement of intent: effective threat intelligence is a collective endeavour, not a closed institutional exercise.</p>

<p>The implications of the framework extend well beyond the walls of <strong>EU</strong> institutions. National administrations, public bodies, and private organizations that work in cooperation with Union entities, including those already engaged in information-sharing initiatives coordinated through <a href="https://www.enisa.europa.eu/">ENISA</a>, now have a shared reference point for aligning their own intelligence processes. Not by replacing their existing frameworks, but by adopting compatible terminology, confidence scales, and scoring approaches that enable genuine interoperability. In a landscape where cyber threats routinely cross organizational and national boundaries, this kind of <em>methodological alignment</em> is a prerequisite for effective collective defence and for the shared situational awareness that complex, interconnected environments increasingly demand.</p>]]></content><author><name>Andrea Fortuna</name><email>andrea@andreafortuna.org</email></author><summary type="html"><![CDATA[An in-depth analysis of the CERT-EU Cyber Threat Intelligence Framework, published on February 13, 2026 to standardize how malicious cyber activity targeting EU institutions is classified, assessed, and prioritized.]]></summary></entry><entry><title type="html">The end of security as we knew it: what Claude Code Security really means</title><link href="https://andreafortuna.org/2026/02/22/claude-code-security.html" rel="alternate" type="text/html" title="The end of security as we knew it: what Claude Code Security really means" /><published>2026-02-22T00:00:00+00:00</published><updated>2026-02-22T00:00:00+00:00</updated><id>https://andreafortuna.org/2026/02/22/claude-code-security</id><content type="html" xml:base="https://andreafortuna.org/2026/02/22/claude-code-security.html"><![CDATA[<h2 id="the-announcement-that-shook-the-market">The announcement that shook the market</h2>

<p>On February 19, 2026, <strong>Anthropic</strong> unveiled <a href="https://www.anthropic.com/news/claude-code-security">Claude Code Security</a>,
a new capability integrated into its Claude Code platform, and the cybersecurity industry felt the
tremor almost immediately. <strong>CrowdStrike</strong> saw its stock drop nearly 8% in the hours following the
announcement, while <strong>Cloudflare</strong> shed just over 8%. These are not modest corrections; they signal
a market recalibration, a repricing of assumptions that had underpinned the sector for years.
Whether or not AI-native security tools will actually replace traditional vendors remains an open
question, but the market voted swiftly, and it voted with conviction.</p>

<p><img src="/assets/2026/security-shift.png" alt="Claude Code Security" /></p>

<p>The drop was not driven by panic or speculation alone. Investors and analysts grasped the structural
implication behind the launch: a reasoning-based security scanner, built directly into a developer
workflow tool used by thousands of engineering teams worldwide, could compress the need for dedicated
third-party security products in ways that have no real historical precedent. For the first time,
frontier-level security analysis was being packaged not as a standalone enterprise product requiring
dedicated procurement, integration, and training cycles, but as a feature embedded in the environment
where code is actually written.</p>

<h2 id="how-claude-code-security-actually-works">How Claude Code Security actually works</h2>

<p>What distinguishes <a href="https://thehackernews.com/2026/02/anthropic-launches-claude-code-security.html">Claude Code Security</a>
from conventional scanning tools is its departure from <em>rule-based pattern recognition</em>. Traditional
static analysis tools work by matching code against a catalogue of known vulnerability signatures.
They are fast, consistent, and by definition blind to anything outside their rulebook. Claude Code
Security takes a fundamentally different approach: it reads a codebase the way a senior security
researcher would, understanding how individual components interact, tracing data flows across the
application, and flagging vulnerabilities that emerge from context rather than from a known fingerprint.</p>

<p>Each finding goes through what <strong>Anthropic</strong> calls a <em>multi-stage verification process</em>, designed to
filter out false positives before results ever reach a human analyst. Identified vulnerabilities are
assigned severity ratings to help teams prioritize, and each issue also carries a confidence score, an
honest acknowledgment that even reasoning-based systems can be uncertain. Critically, nothing in the
system applies changes automatically. Patches are suggested, reviewed on a dedicated dashboard, and
approved by developers, keeping a human firmly in the loop. This human-in-the-loop design is not a
limitation but a deliberate principle: the tool is built to augment decision-making, not replace it.</p>
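<p>The workflow just described, a severity rating plus a confidence score on each finding and no automatic patching, can be sketched in a few lines. Every name, field, and value below is hypothetical, intended only to illustrate the human-in-the-loop gate, not Anthropic's actual implementation:</p>

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """Hypothetical shape of a scanner finding as described in the text:
    a severity rating, a confidence score, and a suggested (never
    auto-applied) patch."""
    title: str
    severity: str          # e.g. "low" | "medium" | "high" | "critical"
    confidence: float      # 0.0 - 1.0, the system's own uncertainty estimate
    suggested_patch: str
    approved: bool = False # flipped only by a human reviewer

def apply_patch(finding: Finding) -> str:
    """Patches take effect only after explicit human approval."""
    if not finding.approved:
        return "pending human review"
    return f"applying: {finding.suggested_patch}"

f = Finding("SQLi in /search", "high", 0.92, "parameterize query")
before = apply_patch(f)   # nothing happens automatically
f.approved = True         # a developer signs off on the dashboard
after = apply_patch(f)
```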

<p>This is not theoretical capability. Using <strong>Claude</strong> Opus 4.6, <strong>Anthropic</strong>’s internal security
team found over 500 vulnerabilities in production open-source codebases, bugs that had gone undetected
for decades despite years of expert review. The tool is currently available in limited research preview
for Enterprise and Team customers, with an expedited, complimentary access path for teams managing
open-source repositories.</p>

<h2 id="the-structural-obsolescence-of-the-old-model">The structural obsolescence of the old model</h2>

<p>The market reaction made headlines, but the deeper story is structural. For years, enterprise security
has operated on a model essentially designed for a slower world: periodic scans scheduled weeks apart,
vulnerability backlogs stretching into hundreds of unresolved items, and exhausting cross-functional
negotiations over which risks deserved immediate attention and which could be deferred. The irony is
that these practices never truly managed risk; they managed the illusion of managing risk. Security
teams spent enormous energy producing reports that documented exposure without meaningfully reducing it.</p>

<p>A <a href="https://www.weforum.org/stories/2025/11/cybersecurity-ai-professionals-workers/">World Economic Forum study from 2025</a>
found that 88% of security teams reported significant time savings through AI-assisted operations.
<a href="https://www.aimakers.co/blog/ai-penetration-testing/">AI-powered penetration testing</a> now runs
approximately 80 times faster than traditional manual assessments, with an 80% reduction in
remediation time for identified vulnerabilities. These are not incremental improvements; they represent
a fundamental shift in what security teams can realistically accomplish per unit of time, and the gap
between organizations that have embraced this shift and those that have not is widening every quarter.</p>

<p>The old model made sense when threat actors also operated slowly, when the window between vulnerability
disclosure and active exploitation was measured in months rather than hours. That window has collapsed.
Attackers now leverage the same AI tools that defenders are only beginning to adopt, automating
reconnaissance, payload generation, and lateral movement at a speed that periodic scanning simply
cannot match. <em>The asymmetry has become untenable</em>, and the organizations still relying on legacy
processes are not managing their exposure; they are simply choosing not to look.</p>

<h2 id="a-competitive-landscape-in-motion">A competitive landscape in motion</h2>

<p><strong>Anthropic</strong> is not operating in isolation. The AI-powered security space is becoming crowded with
credible players, each approaching the problem from a different angle. <strong>OpenAI</strong> began beta testing
a tool called Aardvark in late 2025, an autonomous security researcher built on GPT-5, signaling that
the two leading AI labs are now in direct competition in the security domain. Meanwhile, the broader
market for <a href="https://www.stackhawk.com/blog/claude-code-security/">AI-native security tools</a> is
expanding rapidly, with 97% of organizations reportedly considering AI adoption in penetration testing
workflows.</p>

<p>Traditional vendors are not standing still. <strong>CrowdStrike</strong> has accelerated investment in its
Charlotte AI assistant, integrating generative capabilities into its Falcon platform. <strong>Palo Alto
Networks</strong> has expanded its Cortex XSIAM suite with AI-driven threat detection, while <strong>Fortinet</strong>
has deepened its use of machine learning across its Security Fabric. Yet these efforts are largely
retrofits, AI capabilities layered on top of architectures designed for a pre-AI era, rather than
ground-up rethinks of the security model itself.</p>

<p>This convergence of capability and demand is reshaping how security is purchased, staffed, and
integrated into development pipelines. The traditional model of security as an external validation
gate, a checkpoint imposed on engineering at the end of a release cycle, is giving way to something
more continuous and embedded. Claude Code Security, deployed directly inside a developer’s coding
environment, represents a logical endpoint of this shift: <em>security that travels with the code from
the moment it is written, rather than arriving after the fact to report on the damage.</em></p>

<h3 id="what-it-does-not-cover-runtime-behavior">What it does not cover: runtime behavior</h3>

<p>It is worth noting, however, that the current version does not test runtime behavior. It cannot send
requests through an API stack, validate how authentication middleware chains under real conditions, or
confirm whether a finding is exploitable in a live environment. Those classes of vulnerabilities, the
ones most likely to appear in actual incident reports, only manifest at runtime. A mature AppSec
program will therefore need to instrument both code-level reasoning and runtime validation in parallel,
not treat one as a substitute for the other.</p>

<h2 id="a-reset-not-an-extinction">A reset, not an extinction</h2>

<p>None of this means that the expertise, judgment, and creativity of skilled security professionals
become obsolete. What it means is that <em>the definition of what skilled looks like</em> is shifting. The
analyst who spent most of their working day triaging alert queues and writing remediation tickets
faces a radically different role profile than the colleague who knows how to configure, interrogate,
and challenge an AI-assisted security system effectively. These are not lesser skills, and the
transition may lead to work that is both more demanding and more genuinely impactful.</p>


<p>What is ending is the institutional inertia that allowed security to remain expensive, slow, and
operationally noisy without accountability. The cybersecurity sector is entering a
<a href="https://www.totalassure.com/blog/ai-cybersecurity-stats-2025">transformation phase</a>: faster
identification of real vulnerabilities, reduced operational noise, tighter integration with
development workflows, and transparent confidence scoring are not aspirations for a distant future.
They are capabilities available, in preview, today. The tools now emerging do not diminish domain
expertise; they amplify it, provided that expertise is applied to the right problems. The challenge
is identifying which problems those are before the competitive landscape makes that choice for you.</p>]]></content><author><name>Andrea Fortuna</name><email>andrea@andreafortuna.org</email></author><summary type="html"><![CDATA[When Anthropic launched Claude Code Security in February 2026, cybersecurity stocks dropped sharply. But the real disruption is not in the market — it is in the model itself.]]></summary></entry><entry><title type="html">ClickFix: the new frontier of social engineering between DNS and Google Ads</title><link href="https://andreafortuna.org/2026/02/20/clickfix-dns-google-ads-social-engineering.html" rel="alternate" type="text/html" title="ClickFix: the new frontier of social engineering between DNS and Google Ads" /><published>2026-02-20T00:00:00+00:00</published><updated>2026-02-20T00:00:00+00:00</updated><id>https://andreafortuna.org/2026/02/20/clickfix-dns-google-ads-social-engineering</id><content type="html" xml:base="https://andreafortuna.org/2026/02/20/clickfix-dns-google-ads-social-engineering.html"><![CDATA[<p>Over the past few months, a social engineering technique known as ClickFix has rapidly evolved from a relatively contained threat into one of the most sophisticated and versatile attack vectors on the current threat landscape. Originally documented as a method for tricking users into executing malicious commands disguised as routine software fixes or CAPTCHA verifications, the technique has now incorporated two alarming innovations: the abuse of DNS infrastructure as a covert payload delivery channel and the exploitation of Google-sponsored advertising to redirect unsuspecting users to weaponized content hosted on fully legitimate platforms.</p>

<h2 id="how-clickfix-works-anatomy-of-deception">How ClickFix works: anatomy of deception</h2>

<p>ClickFix attacks follow a deceptively simple but psychologically effective pattern. The attacker lures a target to a malicious or compromised page, which presents a fake error message or a fraudulent CAPTCHA verification prompt. The victim is then instructed to open the Windows Run dialog (Win+R), paste a command that has already been silently loaded onto their clipboard by the page, and press Enter. That command, typically a <strong>PowerShell</strong> invocation, initiates a chain of downloads that ultimately installs an infostealer or a remote access trojan on the system.</p>

<p><img src="/assets/2026/microsoft-dns-query.webp" alt="Courtesy of Bleepingcomputer" /></p>

<p>What makes this technique especially insidious is its <em>exploitation of the human element rather than any technical vulnerability</em>: no zero-day exploit is required. The attacker simply convinces the user to become an unwitting accomplice in their own compromise. <a href="https://www.proofpoint.com/us/blog/threat-insight/security-brief-clickfix-social-engineering-technique-floods-threat-landscape">Proofpoint</a> documented an early large-scale campaign in which the technique impacted at least 300 organizations globally, with fake reCAPTCHA messages used to obscure the true nature of the pasted command from the Windows Run dialog. Since then, researchers have reported a surge of over 500% in observed attacks throughout 2025, with threat actors continuously refining both the social lures and the technical delivery mechanisms.</p>

<blockquote class="twitter-tweet"><p lang="en" dir="ltr">Microsoft Defender researchers observed attackers using yet another evasion approach to the ClickFix technique: Asking targets to run a command that executes a custom DNS lookup and parses the `Name:` response to receive the next-stage payload for execution. <a href="https://t.co/NFbv1DJsXn">pic.twitter.com/NFbv1DJsXn</a></p>&mdash; Microsoft Threat Intelligence (@MsftSecIntel) <a href="https://twitter.com/MsftSecIntel/status/2022456612120629742?ref_src=twsrc%5Etfw">February 13, 2026</a></blockquote>
<script async="" src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>

<p>The range of payloads delivered through ClickFix has grown steadily. Security teams have documented the distribution of <strong>Lumma Stealer</strong>, <strong>StealC</strong>, the <strong>Amatera</strong> infostealer, and various remote access trojans, all delivered through the same core mechanism of clipboard injection followed by manual command execution. This architectural simplicity is precisely what makes the technique so durable: it does not depend on a specific vulnerability, browser version, or operating system configuration, but rather on the consistent reliability of human compliance.</p>

<h2 id="the-dns-variant-nslookup-as-a-weapon">The DNS variant: nslookup as a weapon</h2>

<p>The most technically significant evolution of ClickFix was disclosed by <strong>Microsoft</strong> Threat Intelligence in February 2026: a variant that abuses the <code class="language-plaintext highlighter-rouge">nslookup</code> command to retrieve malicious payloads through DNS responses rather than conventional HTTP requests. Victims are instructed to execute a command through the Windows Run dialog that performs a DNS lookup against a hard-coded external DNS server under the attacker’s control. The malicious <strong>PowerShell</strong> script is embedded directly within the <code class="language-plaintext highlighter-rouge">Name:</code> field of the DNS response, parsed from the standard <code class="language-plaintext highlighter-rouge">nslookup</code> output, and immediately executed on the victim’s machine.</p>

<p><img src="/assets/2026/clickfix.png" alt="infographic" /></p>

<p>As <a href="https://www.bleepingcomputer.com/news/security/new-clickfix-attack-abuses-nslookup-to-retrieve-powershell-payload-via-dns/">BleepingComputer</a> noted in its reporting, this represents the first known use of DNS as a delivery channel in ClickFix campaigns. The approach provides a critical evasion advantage: DNS traffic is ubiquitous on enterprise networks and rarely subject to the same level of inspection as HTTP or HTTPS traffic. By blending malicious activity into standard DNS queries, attackers can bypass web-based detection mechanisms, proxies, and content filtering solutions that would otherwise intercept a traditional download request. The final payload delivered in the observed campaign is <strong>ModeloRAT</strong>, a remote access trojan that establishes persistent control over the compromised system by creating a <code class="language-plaintext highlighter-rouge">MonitoringService.lnk</code> shortcut in the Windows startup folder to survive reboots.</p>

<p>The <strong>Microsoft</strong> research team described this approach as using DNS as a <em>lightweight staging and signaling channel</em>, which not only reduces the attacker’s dependency on traditional web infrastructure but also introduces an additional validation layer before the second-stage payload is executed. The initial command runs through <code class="language-plaintext highlighter-rouge">cmd.exe</code> and directs the DNS lookup toward an attacker-controlled resolver rather than the system’s default one, a detail that further complicates detection for security tools that rely on monitoring communications with known malicious IP addresses. This architectural choice makes campaigns built on this technique significantly more resilient to takedowns and harder to attribute through conventional network forensics.</p>

<h2 id="google-ads-and-claude-artifacts-trust-as-a-weapon">Google Ads and Claude artifacts: trust as a weapon</h2>

<p>While the DNS variant targets <strong>Windows</strong> environments with technical precision, a parallel campaign documented in February 2026 exploits entirely different vectors to attack macOS users: <strong>Google</strong>-sponsored search results and public artifacts generated by <strong>Anthropic</strong>’s <strong>Claude</strong> large language model. In this campaign, threat actors create publicly accessible Claude artifacts, pieces of content published directly on the <code class="language-plaintext highlighter-rouge">claude.ai</code> domain, containing ClickFix instructions disguised as legitimate technical guides. These guides prompt users to open a terminal and execute shell commands, replicating the same psychological manipulation used in the Windows variant but adapted to macOS shell syntax.</p>

<p>The attack’s reach is dramatically amplified through targeted <strong>Google Ads</strong> that promote these malicious artifacts in search results. As security researchers observed, the ads display the real, recognized <code class="language-plaintext highlighter-rouge">claude.ai</code> domain rather than a spoofed or typosquatted address. Clicking the ad leads to a genuine <strong>Claude</strong> page, not a phishing replica. This combination of a trusted platform and sponsored visibility in search results for specific technical queries creates a <em>near-perfect trust signal</em> for technically inclined users who might otherwise recognize more conventional phishing attempts. A single malicious Claude artifact accumulated over 15,000 views before being identified, according to <a href="https://www.rescana.com/post/claude-llm-artifacts-exploited-to-distribute-mac-infostealer-malware-via-clickfix-attack-chain-targe">Rescana</a> researchers, while a second variant distributed through <strong>Evernote</strong> links reached more than 10,000 additional users. The malware delivered to macOS victims includes <strong>Atomic Stealer</strong> and <strong>MacSync Stealer</strong>, tools designed to extract credentials, session tokens, browser data, and cryptocurrency wallet contents.</p>

<p>The use of <strong>Google Ads</strong> in this campaign is particularly noteworthy from a security architecture perspective. Ads can be granularly configured to target users by geographic region, device type, and even the specific email domains of an organization, meaning attackers can tailor the delivery to reach high-value targets with surgical precision while staying within the policy boundaries of the advertising platform long enough to reach a significant victim pool.</p>

<h2 id="a-cross-platform-threat-in-rapid-evolution">A cross-platform threat in rapid evolution</h2>

<p>ClickFix’s adaptability is what distinguishes it from more static attack frameworks. The technique has demonstrated a consistent ability to repurpose whatever legitimate infrastructure is currently trusted by target demographics. A variation of the Claude-based campaign stages ClickFix instructions on <strong>Evernote</strong> links distributed through sponsored results, demonstrating the attackers’ willingness to rotate infrastructure across multiple trusted platforms to maintain campaign longevity. The <a href="https://www.cisecurity.org/insights/blog/clickfix-an-adaptive-social-engineering-technique">Center for Internet Security</a> has formally characterized ClickFix as an <em>adaptive social engineering technique</em>, precisely because of this capacity to integrate seamlessly into the evolving digital trust landscape rather than relying on fixed infrastructure.</p>

<p><strong>Kaspersky</strong> data indicates that campaigns deploying <strong>RenEngine Loader</strong> through ClickFix have affected users across <strong>Russia</strong>, <strong>Brazil</strong>, <strong>Turkey</strong>, <strong>Spain</strong>, <strong>Germany</strong>, <strong>Mexico</strong>, <strong>Algeria</strong>, <strong>Egypt</strong>, <strong>Italy</strong>, and <strong>France</strong> since March 2025, confirming that this is a globally distributed threat with no effective geographic boundary. Further observed variants include a <strong>ClearFake</strong> campaign that uses fake CAPTCHA lures on compromised <strong>WordPress</strong> sites to deploy <strong>Lumma Stealer</strong>, and an email phishing variant that embeds ClickFix instructions within malicious SVG files contained in password-protected ZIP archives, each iteration testing the limits of what detection systems can reliably flag as malicious behavior.</p>

<h2 id="detecting-and-mitigating-clickfix-attacks">Detecting and mitigating ClickFix attacks</h2>

<p>The fundamental challenge in defending against ClickFix lies in its intentional abuse of legitimate user actions and trusted infrastructure. Traditional endpoint detection products may not flag the execution of <code class="language-plaintext highlighter-rouge">nslookup</code> or <code class="language-plaintext highlighter-rouge">PowerShell</code> as intrinsically malicious, since both are standard system utilities with countless legitimate uses. Security teams should prioritize behavioral monitoring rules that detect anomalous execution patterns, such as <code class="language-plaintext highlighter-rouge">nslookup</code> invocations that query non-standard external DNS servers, or <strong>PowerShell</strong> processes spawned directly from the Windows Run dialog rather than from scheduled tasks or software installers.</p>
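<p>A behavioral rule of this kind can be sketched as a simple predicate over process-creation events. The event field names and the resolver allow-list below are illustrative assumptions; a production rule would live in your EDR or SIEM query language rather than in Python:</p>

```python
# Sketch: flag (1) nslookup invocations pointed at an explicit resolver
# outside an internal allow-list, and (2) PowerShell launched by
# explorer.exe, as happens when a command is pasted into the Run dialog.
# Event field names and the allow-list are illustrative assumptions.
APPROVED_RESOLVERS = {"10.0.0.53", "10.0.1.53"}  # internal DNS servers

def is_suspicious(event: dict) -> bool:
    image = event.get("image", "").lower()
    parent = event.get("parent_image", "").lower()
    args = event.get("command_line", "").lower().split()

    if image.endswith("nslookup.exe"):
        # nslookup accepts an explicit server as a trailing argument:
        # flag any numeric target outside the internal allow-list.
        targets = [a for a in args[1:] if a and a[0].isdigit()]
        return any(t not in APPROVED_RESOLVERS for t in targets)

    if image.endswith("powershell.exe") and parent.endswith("explorer.exe"):
        # Run-dialog launches are parented by explorer.exe, unlike
        # scheduled tasks or software installers.
        return True
    return False

events = [
    {"image": r"C:\Windows\System32\nslookup.exe",
     "parent_image": r"C:\Windows\System32\cmd.exe",
     "command_line": "nslookup example.com 185.199.1.1"},   # external resolver
    {"image": r"C:\Windows\System32\nslookup.exe",
     "parent_image": r"C:\Windows\System32\cmd.exe",
     "command_line": "nslookup intranet.local 10.0.0.53"},  # approved resolver
]
alerts = [e for e in events if is_suspicious(e)]
```

<p>The point of the sketch is the shape of the logic, not the specific fields: the signal is not the tool itself but the combination of tool, parent process, and destination.</p>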

<p>From a user awareness perspective, organizations should incorporate ClickFix-specific scenarios into their security training programs, emphasizing the categorical principle that no legitimate service will ever ask a user to manually open a Run dialog, paste commands from a CAPTCHA page, or execute shell instructions copied from a website. Network-level controls, including <a href="https://www.infoblox.com/blog/security/the-dns-threat-landscape-december-2025-a-three-month-lookback/">DNS filtering solutions</a> and policies that restrict outbound DNS queries to unauthorized external resolvers, can significantly reduce the attack surface for the DNS-based variant. For macOS environments, endpoint policies that restrict execution of unsigned shell scripts initiated from browser sessions provide a meaningful additional layer of defense against campaigns that weaponize trusted AI platforms as delivery surfaces.</p>

<p>The broader lesson ClickFix offers is one that extends beyond any single technique or payload family: <em>the security perimeter is increasingly defined by user behavior rather than network topology</em>. As threat actors continue to exploit institutional trust in platforms like <strong>Google</strong>, <strong>Claude</strong>, and <strong>Evernote</strong>, security architectures that focus exclusively on technical controls without addressing the human decision layer will find themselves consistently one step behind.</p>

<h2 id="iocs">IoCs</h2>

<p>IoCs can be found in a curated STIX 2.1 format at this <a href="/assets/2026/clickfix.stix21.json">link</a>.</p>]]></content><author><name>Andrea Fortuna</name><email>andrea@andreafortuna.org</email></author><summary type="html"><![CDATA[A new generation of ClickFix attacks abuses DNS lookups and Google-sponsored ads to deliver malware, bypassing traditional defenses and exploiting user trust in legitimate platforms.]]></summary></entry><entry><title type="html">Italy’s cyber perimeter under fire: two institutional breaches in fifteen days</title><link href="https://andreafortuna.org/2026/02/19/italy-cyber-perimeter-institutional-breaches.html" rel="alternate" type="text/html" title="Italy’s cyber perimeter under fire: two institutional breaches in fifteen days" /><published>2026-02-19T00:00:00+00:00</published><updated>2026-02-19T00:00:00+00:00</updated><id>https://andreafortuna.org/2026/02/19/italy-cyber-perimeter-institutional-breaches</id><content type="html" xml:base="https://andreafortuna.org/2026/02/19/italy-cyber-perimeter-institutional-breaches.html"><![CDATA[<h2 id="when-the-digital-blackout-hit-the-lecture-hall">When the digital blackout hit the lecture hall</h2>

<p><img src="/assets/2026/sapienza.jpeg" alt="Sapienza University of Rome" /></p>

<p>Between the night of February 1 and February 2, 2026, <strong>Sapienza University of Rome</strong> experienced something far more serious than a routine IT outage. What struck its campus was a full digital blackout that simultaneously knocked out portals, internal networks, and administrative services, forcing one of Europe’s largest universities into an improvised return to analogue operations. Students found themselves queuing at physical info points, exams continued but grade recording stalled, and administrative deadlines were pushed back as the institution scrambled to contain the damage.</p>

<p>At the heart of the crisis was <strong>Infostud</strong>, the university’s integrated platform for exam bookings, academic records, payments, and certifications. When Infostud went dark, it triggered a cascade of failures across every service that depended on it, including internal email systems and real-time administrative workflows. The suspension of digital operations was not merely inconvenient: it represented a complete loss of control over processes that, by design, are supposed to be resilient and independently verifiable.</p>

<p>According to <a href="https://it.blastingnews.com/tecnologia/2026/02/ransomware-blocca-la-sapienza-emergenza-digitale-per-tre-giorni-003982399.html">reporting by Blasting News</a>, the attack was carried out using the ransomware <strong>BabLock</strong>, also known as <em>Rorschach</em>, a sophisticated piece of malware first identified in 2023. The malware, attributed to a group operating under the signature <em>Femwar02</em>, is built from code fragments derived from <strong>Babuk</strong>, <strong>LockBit v2.0</strong>, and <strong>DarkSide</strong>, making it particularly fast at encrypting files and difficult to contain once deployed. The attackers are believed to have exploited technical vulnerabilities or a compromised administrator account to gain initial access to the university’s network.</p>

<h2 id="bablock-bitcoin-and-the-logic-of-extortion">BabLock, bitcoin, and the logic of extortion</h2>

<p>What transformed a technical incident into a political case was the mechanism of extortion that followed the breach. Consistent with the <strong>BabLock</strong> playbook, the attackers left instructions for administrators and initiated a negotiation over decryption keys, reportedly demanding <a href="https://www.edoardolimone.com/2026/02/04/uniroma-1-sapienza-data-breach/">up to one million dollars in bitcoin</a> with a 72-hour ultimatum and the explicit threat of publishing exfiltrated data if the ransom went unpaid. The countdown timer, a standard feature of modern ransomware campaigns, is designed to maximize pressure and minimize the institution’s room for deliberation.</p>

<p>The reputational damage extended beyond the technical disruption. On dark web circuits accessible via <strong>Tor</strong>, offers appeared claiming to sell academic documents attributed to <strong>Sapienza</strong>, complete with packages mimicking the university’s official identity. Whether these listings were based on actual exfiltrated data or were opportunistic fraud riding the media wave, the effect was identical: the credibility of a public institution was put up for sale. When someone attempts to monetize your identity, the attack is no longer about data alone. It becomes an assault on trust and authority, the two foundations that allow a state institution to function.</p>

<p>The restoration process, when it arrived, involved forensic cleanup, backup validation, and staged service reactivation. But even as <strong>Infostud</strong> came back online and the university moved toward normal operations, a critical question remained unanswered: what data was actually exfiltrated, in what volume, and how might it be used in the future? <em>Operational damage ends when you restart your systems; strategic damage ends only when you know precisely what left your digital house.</em></p>

<h2 id="the-viminale-breach-5000-digos-agents-exposed">The Viminale breach: 5,000 Digos agents exposed</h2>

<p>The second incident is of an entirely different magnitude. A group of hackers linked to <strong>Chinese</strong> intelligence managed to penetrate the network of the <strong>Viminale</strong>, <strong>Italy</strong>’s Ministry of the Interior, and extract files containing the identities, roles, and operational locations of approximately 5,000 agents belonging to <strong>Digos</strong> (Divisione Investigazioni Generali e Operazioni Speciali). As <a href="https://it.euronews.com/2026/02/18/hacker-cinesi-rubano-dati-di-cinquemila-agenti-digos-in-attacco-informatico-alla-rete-del-">Euronews reported</a> citing <strong>La Repubblica</strong>, the division targeted is responsible for counterterrorism surveillance, monitoring of foreign communities, and tracking of dissidents from <strong>Beijing</strong> who have sought refuge in <strong>Italy</strong>.</p>

<p><img src="/assets/2026/digos.jpeg" alt="digos" /></p>

<p>The intrusion is believed to have taken place between 2024 and 2025, and was described by investigators as “surgical”: not an attack aimed at disruption or sabotage, but a <em>targeted exfiltration of high-value operational intelligence</em>. This distinction carries real weight. A noisy attack, one that crashes systems or wipes data, is visible, measurable, and declarable. A silent exfiltration often surfaces only months later, and by then the questions multiply: how long did the adversary have access, how many times did they return, and in what ways is the extracted information already being exploited?</p>

<p>The data extracted goes far beyond a staff directory. It provides a map of investigative priorities, revealing which officers are assigned to the most sensitive operations. For <strong>Beijing</strong>, having that kind of visibility into <strong>Italy</strong>’s internal security apparatus is worth considerably more than any conventional act of sabotage. If those files include information on officers involved in tracking Chinese dissidents living in <strong>Italy</strong>, the implications extend to real people whose safety depends on their cases remaining confidential.</p>

<h2 id="the-chinese-shadow-and-the-political-trap">The Chinese shadow and the political trap</h2>

<p>The <strong>Viminale</strong> breach does not occur in a political vacuum. In 2024, Interior Minister <strong>Matteo Piantedosi</strong> traveled to <strong>Beijing</strong> and met with his counterpart <strong>Wang Xiaohong</strong> to establish a three-year cooperation plan covering drug trafficking, cybercrime, human trafficking, and organized crime. In a development described as historically significant, <strong>China</strong> responded, for the first time, to a formal rogatory request from the <strong>Prato</strong> prosecutor’s office led by <strong>Luca Tescaroli</strong>, which was investigating the exploitation of workers within <strong>Prato</strong>’s textile district and its surrounding criminal networks.</p>

<p>The juxtaposition is uncomfortable. While <strong>Italy</strong> was attempting to build an operational bridge with <strong>Beijing</strong> to combat transnational crime, actors linked to <strong>Chinese</strong> intelligence were inside the <strong>Viminale</strong>’s systems, mapping precisely the people responsible for those investigations. The two timelines overlap in a way that is difficult to dismiss as coincidence. Every diplomatic opening, if not backed by genuine technical security, can become a strategic asset for an adversary with different objectives. Following the discovery of the intrusion, Italian public security authorities reportedly severed all direct operational collaboration with <strong>Chinese</strong> counterparts, a decision that reflects how seriously the damage was assessed at the highest levels.</p>

<p>This is not the first time <strong>Italy</strong> has underestimated the <em>information risk embedded in cooperative arrangements</em>. During the <strong>COVID</strong> pandemic, Russian military medical teams worked inside Italian hospitals, gaining access to structures and information flows at a moment of acute institutional vulnerability. The lesson that should have been drawn then, that every access point is a potential collection vector regardless of the diplomatic context, was apparently not absorbed deeply enough into the country’s security culture.</p>

<h2 id="rules-for-others-the-compliance-paradox">Rules for others: the compliance paradox</h2>

<p>Both incidents land at a specific and consequential moment in <strong>Italy</strong>’s regulatory calendar. <a href="https://nis2certification.eu/italy/">Legislative Decree 138/2024</a>, which transposed the <strong>NIS2 Directive</strong> into Italian law, entered into force on October 16, 2024. The measure imposes structured obligations on essential and important entities across energy, transport, healthcare, finance, public administration, and digital infrastructure, covering risk management, incident reporting, business continuity, supply chain security, and executive accountability. By April 2025, <strong>ACN</strong> (Agenzia per la Cybersicurezza Nazionale) was required to publish minimum security measures; full compliance with advanced requirements is expected by October 2026.</p>

<p>For private organizations, NIS2 means audits, mandatory role designations, investment in security infrastructure, incident notification within 24 to 72 hours, and direct liability for senior leadership. The compliance burden is real, and for many smaller entities in critical sectors it represents a significant operational cost. What makes the picture so troubling is the contrast: the same state that demands this discipline from its private sector failed, within the same fifteen-day window, to protect its own Ministry of the Interior and one of the country’s flagship universities.</p>

<p><em>NIS2 is not a compliance logo to display at conferences. <a href="https://andreafortuna.org/2025/12/23/the-imperative-of-periodic-security-reviews-under-nis-2-compliance">It is a commitment</a>.</em> It demands that incidents no longer be treated as confidential embarrassments, that security be demonstrated rather than declared, and that governance failures carry consequences. When the public administration itself provides the clearest examples of what non-compliance looks like in practice, the regulatory architecture risks becoming an asymmetric burden: demanding from the outside what it cannot enforce from within. The credibility of <strong>ACN</strong>, the agency charged with supervising Italy’s national cyber perimeter, rests in part on its capacity to hold public institutions to the same standards it imposes on the private sector. Two incidents in fifteen days suggest that the distance between the written perimeter and the real one remains dangerously wide.</p>]]></content><author><name>Andrea Fortuna</name><email>andrea@andreafortuna.org</email></author><summary type="html"><![CDATA[When the digital blackout hit the lecture hall]]></summary></entry></feed>