The AIX Blind Spot – Getting Visibility Where EDR Can’t Run

by the Nextron Threat Research Team, Mar 30, 2026

AIX is still running critical workloads in finance, manufacturing, and other industries that value stability over frequent platform churn. The uncomfortable part is that many security programs treat these systems as “special cases” – meaning they often end up outside the default endpoint coverage model.

That gap isn’t academic. Attackers don’t care whether a host is “non-standard”. If it stores credentials, runs business logic, or sits on a path to more interesting systems, it’s part of the attack surface.

THOR runs natively on IBM AIX and is actively built and tested on AIX 7.2 and 7.3 in our IBM Cloud environments. The point is simple: extend the same detection intelligence used across the rest of the estate to platforms where classic endpoint tooling is often missing, incomplete, or operationally hard to maintain.

THOR running on IBM AIX 7.3 TL3 on POWER9

This post is not about replacing EDR. It’s about closing coverage gaps: scanning AIX for suspicious artifacts and signs of compromise – on demand, with forensic-grade visibility into data at rest.

Why AIX becomes a blind spot

Most security stacks evolve around the “standard endpoint” model: Windows workstations, mainstream Linux servers, and cloud workloads that can run the same agent stack everywhere. AIX rarely fits that default path.

In many organizations AIX systems are:

  • long-lived and business-critical (meaning changes are rare and risk-averse)
  • owned by specialized admin teams with separate tooling and processes
  • treated as infrastructure “exceptions” during monitoring rollouts

The result is predictable: strong visibility on the mainstream fleet, weaker visibility on the systems that are hardest to replace – and therefore often most valuable.

A second factor is detection domain. EDR is strongest on live telemetry: process starts, behavior chains, and network activity. AIX coverage in that model is frequently limited. At the same time, a large class of post-compromise evidence lives in data at rest: scripts, persistence artifacts, modified configs, suspicious tooling, and staged payloads.

That is exactly where a file- and artifact-focused scanner fits: broad platform reach, high-sensitivity rules, and a workflow designed for compromise assessments, baselining, and periodic verification.

What to look for on AIX (and why AV/EDR often won’t surface it)

When people say “we have EDR everywhere”, they usually mean “everywhere that looks like a standard endpoint”. AIX often isn’t part of that sentence.

Coverage map – AV vs EDR vs THOR (operational reach vs threat sophistication)

Even when some level of endpoint tooling exists, it typically focuses on live telemetry. That’s useful, but it leaves an important class of post-compromise evidence underexplored: artifacts that sit quietly on disk.

A practical way to think about AIX coverage is to focus on what tends to persist after initial access:

  • Obfuscated or suspicious scripts
    • shell/perl/python scripts with unusual encoding, heavy string hiding, or “download and execute” patterns
    • installers or helper scripts dropped into admin-friendly locations
  • Persistence artifacts
    • cron entries and auxiliary scripts referenced by cron
    • boot/runtime scripts and profile changes that survive reboots
    • additions or modifications around SSH access (keys, configs, forced commands)
  • Dual-use tooling and staging
    • tunneling/proxy components, remote administration utilities, credential handling helpers
    • archives and payload staging in temporary or rarely monitored paths
  • Configuration drift with security impact
    • subtle changes that don’t crash systems, but change access, logging, or trust boundaries

None of this requires AIX to be “malware-heavy”. In real incidents, the most valuable signals are often the boring ones: small artifacts that indicate interactive access, persistence, or lateral movement preparation.
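As a rough illustration of where such evidence tends to sit, a minimal triage helper might walk a few of the locations above. This is a sketch, not a THOR-equivalent check: the paths are typical AIX examples, not an exhaustive list, and should be adapted to the host in question.

```shell
# Spot-check common persistence locations on an AIX host (or on an image
# mounted under an alternate root). Illustrative paths only -- this is
# quick triage, not a full scan.
check_persistence() {
    ROOT="${1:-}"    # optional alternate root, e.g. a mounted image

    echo "== cron jobs =="
    ls -l "$ROOT/var/spool/cron/crontabs" 2>/dev/null || true

    echo "== inittab entries (survive reboots) =="
    grep -v '^[:#]' "$ROOT/etc/inittab" 2>/dev/null || true

    echo "== rc scripts changed in the last 30 days =="
    find "$ROOT/etc/rc.d" -type f -mtime -30 2>/dev/null || true

    echo "== SSH authorized_keys files =="
    find "$ROOT/home" -name authorized_keys 2>/dev/null || true

    return 0
}

check_persistence "$@"
```

Output like this does not prove compromise by itself; it gives an analyst a short list of artifacts worth comparing against a known-good baseline.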

This is where the difference in detection philosophy matters:

  • Classic AV is optimized to be conservative. It tends to focus on known malware and is forced to avoid noisy “maybe suspicious” reporting.
  • EDR is optimized for runtime behavior and correlation on endpoints where an agent can run with deep visibility.

THOR sits in a different lane: it is designed to scan data at rest and report high-sensitivity indicators that deserve analyst review. That is exactly the kind of output that helps close blind spots on platforms that fall outside the default endpoint model.

Operational models: agentless rollout vs managed orchestration

AIX fleets are rarely managed by “clicking around on each host”. The operational question is how to run scans consistently, collect output, and keep friction low.

There are two common models:

1. Agentless Rollout

Agentless rollout (THOR runs natively on AIX)

In an agentless model, THOR still runs locally on each AIX host. The difference is orchestration: distribution, execution, collection, and cleanup are driven by whatever tooling already exists in the AIX operations stack.

The workflow is usually the same shape:

  • distribute the THOR package to target hosts
  • unpack and run the scan locally (working directory set so signatures and subfolders are available)
  • pull the report back to a central location
  • remove temporary files

Common ways to implement this in AIX environments include:

  • dsh for distributed command execution, combined with dcp for distributed file copy (often used exactly for fleet-wide “push, run, pull” workflows)
  • NIM-managed environments where remote operations tooling is already in place and used for repeated administrative tasks
  • SSH-based automation (simple loops, parallel execution tooling), typically paired with scp/sftp/rsync to move the THOR package and collect reports
  • Ansible (agentless over SSH) where it is already part of the standard automation stack

This section intentionally stays at the “operational pattern” level rather than giving step-by-step commands, because the concrete implementation depends on site conventions (paths, privilege model, staging locations, cleanup rules).

These approaches are common in AIX operations and are a natural fit for on-demand scanning workflows like THOR.
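As a purely illustrative sketch of the "push, run, pull" shape over plain SSH: host names, package name, staging path, report file pattern, and the THOR invocation below are all assumptions to be replaced with site conventions.

```shell
# Illustrative agentless rollout loop over SSH. Package name, staging
# path, report glob, and the THOR binary/arguments are assumptions.
PKG="thor-aix.tgz"        # hypothetical package name
STAGE="/tmp/thor-scan"    # hypothetical staging directory

run_fleet_scan() {
    SSH="${SSH:-ssh}"     # override (e.g. SSH="echo ssh") for a dry run
    SCP="${SCP:-scp}"
    for h in "$@"; do
        # 1. distribute the THOR package
        $SCP "$PKG" "$h:$STAGE.tgz"
        # 2. unpack and run the scan locally on the host
        #    (gunzip | tar, since AIX tar has no z option)
        $SSH "$h" "mkdir -p $STAGE && cd $STAGE && gunzip -c $STAGE.tgz | tar xf - && ./thor"
        # 3. pull the report back to a central location
        mkdir -p "reports/$h"
        $SCP "$h:$STAGE/*.txt" "reports/$h/"
        # 4. remove temporary files
        $SSH "$h" "rm -rf $STAGE $STAGE.tgz"
    done
}

# Example: run_fleet_scan aix01 aix02 aix03
```

The same four steps map directly onto dsh/dcp or Ansible; only the transport changes.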

2. Managed Orchestration

Managed orchestration via ASGARD (THOR still runs natively on AIX)

For environments that want centralized control and remote collection, ASGARD adds orchestration and response capabilities around the same scan workflow:

  • remote console access
  • remote file system browsing
  • file collection
  • triggering THOR scans and collecting results centrally

The scanner remains the same. The difference is operational: how scans are scheduled, how output is collected, and how much bespoke scripting is required.

Why this is also an assurance/compliance problem

AIX coverage usually doesn’t fail because people think AIX is “safe”. It fails because responsibility gets fragmented: the security program is built around standard endpoints, and everything else becomes an exception.

That exception mindset collides with how audits and assurance expectations work. Different frameworks use different wording, but they converge on the same core ideas:

  • you need an inventory of in-scope systems (including non-standard platforms)
  • you need continuous or periodic verification that controls are effective
  • you need post-incident validation: “are we clean now?” cannot stop at the Windows/Linux fleet

AIX is often exactly the kind of system that auditors assume is covered because it is critical – while operational reality is that it sits outside the default monitoring stack.


A practical, defensible way to close that gap is to treat AIX like any other high-value asset class:

  • baseline it (what is normal on these hosts?)
  • re-verify after changes (patch windows, application updates, configuration changes)
  • include it in compromise assessments (especially when lateral movement is in scope)

On-demand scanning fits these requirements well, because it does not require kernel-level telemetry or “always-on” instrumentation to be useful. It provides a repeatable verification step that can be executed when it matters.
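One way to make re-verification concrete is to diff findings between a baseline report and a fresh one, so analysts only review what changed. A minimal sketch, assuming plain-text reports with `Warning:`/`Alert:` line prefixes (the actual report format may differ):

```shell
# Compare a baseline scan report against a fresh one and print only the
# Warning/Alert lines that are new. The severity prefixes are assumptions;
# adapt the grep pattern to the actual report format.
baseline_diff() {
    old="$1"; new="$2"
    grep -E '^(Warning|Alert):' "$old" | sort > /tmp/old.$$
    grep -E '^(Warning|Alert):' "$new" | sort > /tmp/new.$$
    comm -13 /tmp/old.$$ /tmp/new.$$    # lines present only in the new report
    rm -f /tmp/old.$$ /tmp/new.$$
}

# Example: baseline_diff reports/aix01/baseline.txt reports/aix01/latest.txt
```

A non-empty diff after a patch window is exactly the kind of repeatable, auditable signal that assurance processes ask for.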

AIX is only one example: extending coverage to other blind spots

AIX is a good example because it is both business-critical and operationally different from the default endpoint model. But the underlying problem is broader: modern security stacks have excellent coverage on mainstream endpoints, and predictable gaps everywhere else.

A practical way to think about closing those gaps is to choose the right deployment model per environment. Four patterns cover most “non-standard” estates:

  1. Native scanning where possible – if a system can run THOR natively, the simplest path is to scan it directly with the same detection intelligence used elsewhere. AIX is in this bucket.
  2. Legacy Windows coverage – not every environment is on current Windows versions. THOR Legacy keeps older Windows estates in scope (for example Windows 7 and Windows Server 2008), where modern endpoint tooling is often constrained or no longer supported.
  3. Collector-based acquisition + central analysis – some systems cannot run a scanner at all (appliances, restricted environments, niche platforms). In those cases, THOR Thunderstorm is deployed on-prem inside the environment. Lightweight collectors gather relevant artifacts (often via script-based acquisition) and feed them into the on-prem Thunderstorm service, where deep detection logic can be applied centrally.
  4. Remote filesystem scanning – in some environments, the most practical approach is to mount a remote filesystem (SSH/NFS and similar) and scan the data at rest. This is a proven model for systems like VMware ESXi, where direct endpoint tooling may be limited.

The common theme across all four patterns is the same: extend high-sensitivity detection intelligence to assets that are outside the reach of default endpoint tooling.
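The fourth pattern can be sketched as a mount-then-scan helper. Everything specific here is an assumption: the mount source, the mount options, the THOR binary location, and the `--fsonly`/`-p` style flags used to restrict the scan to the mounted filesystem.

```shell
# Sketch of the remote-filesystem pattern (e.g. an ESXi datastore over
# NFS). Mount source, options, and scanner flags are assumptions.
scan_remote_fs() {
    src="$1"; mnt="$2"
    MOUNT="${MOUNT:-mount}"; UMOUNT="${UMOUNT:-umount}"
    THOR="${THOR:-./thor}"                  # hypothetical binary location
    mkdir -p "$mnt" || return 1
    $MOUNT -o ro "$src" "$mnt" || return 1  # read-only: preserve evidence
    $THOR --fsonly -p "$mnt"                # assumed flags: scan only this path
    $UMOUNT "$mnt"
}

# Example: scan_remote_fs esxi01:/vmfs /mnt/target
```

Mounting read-only matters twice over: it avoids disturbing a potentially compromised system, and it keeps the scan forensically defensible.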

Conclusion

EDR is strong on standard endpoints. The remaining risk sits in the blind spots: platforms and environments where agents don’t run, telemetry is limited, or verification is simply not part of the routine.

AIX is one of the most common examples of that gap. THOR closes it by running natively on AIX (actively built and tested on AIX 7.2 and 7.3) and by providing on-demand, data-at-rest visibility into suspicious artifacts and signs of compromise.

Treat AIX like any other critical asset: baseline it, re-verify after changes, and include it in compromise assessments.

About the author:


Nextron Threat Research Team

The Nextron Threat Research Team builds the detection logic behind THOR, Aurora, Thunderstorm and the rest of the Nextron toolchain. The group analyses intrusions, reverse-engineers malware, tracks supply-chain incidents, and turns all of that into signatures, heuristics and rules used across our products. The team maintains YARA and Sigma content at scale, develops internal tooling and pipelines, and ships thousands of high-quality detections every year that help customers spot real attacker activity instead of noise.
