Antivirus engines and EDRs have their place – no doubt. But what happens when malware simply slips through their nets? What if the malicious file was never executed? What if the incident happened months ago? That’s where THOR comes in. Our compromise assessment scanner has a unique superpower: it operates where others stay blind – in the calm, post-incident stillness of a system.
Why Detection Coverage Needs a Rethink
Most detection solutions focus on runtime behavior. They monitor what’s currently happening: processes, memory, network connections. And they’re good at that. But they also come with limitations:
- AVs are built to be precise and non-intrusive – they need clear indicators before they block anything.
- EDRs are real-time watchers – they need something to happen to react.
- Neither is built to find old traces. Neither helps you answer: “Was this system compromised weeks ago?”
That’s the gap we’re filling.
THOR: Built for What Others Ignore
THOR doesn’t run in real-time. It scans on demand – and that’s its strength. Here’s what makes it special:
- It works on cold systems – backups, forensic images, offline endpoints.
- It detects threats in their static form – scripts, tools, payloads lying dormant.
- It’s sensitive – by design. We don’t quarantine or block; we alert. So we can afford to be aggressive without causing damage.
This approach enables THOR to find threats that AVs and EDRs often miss. And we have data to back it up.
What the Numbers Say – and What They Mean
Here’s an excerpt from our detection statistics over the past 21 days. Each row represents a detection rule from THOR that matched one or more files uploaded to VirusTotal.
We recorded:
- The average antivirus detection rate (i.e. the average number of AV engines that flagged those samples – not a percentage)
- The number of samples that triggered each rule
- A manual classification of the threat type
| Rule Name | Avg. AV Detections per Sample | Match Count | Category |
|---|---|---|---|
| SUSP_BAT_Aux_Jan20_1 | 0.00 | 1065 | Malicious Script |
To be clear: a value of 0.00 means that, on average, zero antivirus engines flagged the matched samples as malicious. A value of 0.50 means that, on average, half an AV engine did – for instance, one engine flagged every second sample, or two engines flagged one sample out of four. Whatever the distribution, the value reflects the actual number of detections per sample, not a percentage.
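To make the arithmetic concrete, here is a minimal Python sketch with made-up per-sample detection counts (not taken from our dataset) that computes the same statistic:

```python
# Hypothetical detection counts for the files matched by a single rule:
# each entry is the number of AV engines that flagged that particular sample.
detections_per_sample = [0, 1, 0, 1]  # one engine flagged every second sample

# Average number of engines per sample - a plain count, not a percentage.
avg_detections = sum(detections_per_sample) / len(detections_per_sample)
print(f"Average AV detections per sample: {avg_detections:.2f}")  # -> 0.50
```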
Most of the entries in this list show extremely low detection rates, and every sample was manually verified by us. They're not false positives, not questionable – they're real dual-use tools and malicious or attacker-controlled files that simply didn't show up on anyone else's radar.
And that’s the point: THOR consistently detects what others don’t – even when “others” means 70 commercial AV engines.
The table above shows an excerpt of the raw data used in our analysis. The first column lists the rule name, the second shows the average number of AV engines that flagged the matched samples, and the third shows how many samples matched. The fourth column provides a human-readable threat category. The full dataset contains the top 500 best-performing rules – all manually verified, all highlighting real blind spots in conventional AV coverage.
Why VirusTotal?
Of course, it’s important to acknowledge that the detection results on VirusTotal reflect static file analysis – what the engines see when they scan files at rest. This is not the full story of how antivirus products behave in real-world deployments. Many modern AV solutions include behavior-based detection features – components that resemble lightweight EDRs. These modules often trigger when a file is executed: for example, when a suspicious subprocess is spawned, or when the executed file drops additional payloads to disk that match known indicators.
In other words: just because an AV engine didn’t flag the file on VirusTotal doesn’t mean it would also miss it at runtime.
That said, our comparison still holds weight – and here’s why: our own use cases are all about files at rest. We scan forensic disk images, post-incident backups, extracted file collections, and log repositories. The question we’re answering isn’t “who reacts quickest at runtime?” – it’s “who still sees the attacker’s traces when everything is quiet?” In that context, the VirusTotal comparison is legitimate. It gives us a clean, static view – and one that’s aligned with the blind spots we aim to cover.
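For readers who want to reproduce this kind of file-at-rest comparison themselves, the sketch below shows one way to pull per-sample detection counts from the VirusTotal API using the official vt-py client. The API key, the sample hashes, and the rule-to-sample mapping are placeholders; this is an illustrative sketch, not our production pipeline.

```python
import vt  # official VirusTotal client library: pip install vt-py

API_KEY = "YOUR_VT_API_KEY"               # placeholder
SAMPLE_HASHES = [                          # placeholder: hashes matched by one rule
    "<sha256 of sample 1>",
    "<sha256 of sample 2>",
]

client = vt.Client(API_KEY)
try:
    counts = []
    for sha256 in SAMPLE_HASHES:
        file_obj = client.get_object(f"/files/{sha256}")
        # last_analysis_stats holds per-verdict engine counts from the latest scan
        counts.append(file_obj.last_analysis_stats["malicious"])
finally:
    client.close()

print(f"Average AV detections per sample: {sum(counts) / len(counts):.2f}")
```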
Offline Detection Matters
Another key distinction: THOR operates entirely offline. It does not rely on cloud-based lookups, online reputation systems, or external telemetry to make detections. The rule logic, the intelligence, the indicators – they’re all embedded.
This stands in contrast to many traditional AV products, which show significantly lower detection rates in offline scenarios, as shown in public AV-Comparatives tests. In those evaluations, some AV engines lose 20–40% of their detection capability when disconnected from their cloud backend.

AV-Comparatives comparison (Source: https://www.av-comparatives.org/tests/malware-protection-test-march-2025/)
VirusTotal scans also run in a "file-at-rest" context without access to these online components, which partly explains the often poor showing of AV engines in our comparison. But where they fall short offline, THOR still performs – because it was built for exactly that use case: forensic analysis and threat hunting in environments without a live internet connection, without signatures being streamed in the background, and without reliance on reputation lookups.
Not Everything Happens in Real Time
Some of the most important detection work doesn’t happen live. It happens after the incident. When the system is already down. When the attacker’s gone. When you’re looking at disk images, logs, and backups.
And this is exactly where THOR thrives:
- Forensic Images: Analysts load up a compromised disk image and scan it with THOR to identify malicious artifacts – dropped tools, remnants of scripts, or persistence mechanisms.
- Backups: Before restoring a system after a ransomware attack, MSSPs use THOR to ensure that the backup isn’t quietly carrying a backdoor or another loader. We scan backups before they go back into production.
- Post-Incident Sweeps: Organizations run THOR across an environment after containment to discover how far the attack spread – even on machines that no longer show obvious signs of compromise.
In all these cases, real-time tools are either absent or useless. There’s no process to watch, no network activity to flag. Just files, logs, and traces.
THOR is designed for that moment.
It doesn’t need execution. It doesn’t need the system to be live. It just needs access to the data – and it gets to work.
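For the backup and image scenarios above, the workflow is usually just a loop over mounted, read-only copies. The sketch below is a hypothetical wrapper: the binary name, the `--lab` and `-p` flags, and the mount-point layout are illustrative assumptions – consult the THOR manual for the exact options available with your version and license.

```python
import subprocess
from pathlib import Path

# Illustrative layout: each backup or forensic image is mounted read-only under /mnt/backups.
MOUNT_ROOT = Path("/mnt/backups")

# Binary name and flags are assumptions for this sketch, not authoritative THOR usage.
THOR_BIN = "./thor-linux-64"

for mount in sorted(MOUNT_ROOT.iterdir()):
    if not mount.is_dir():
        continue
    # One scan per mounted backup; THOR writes its own reports per run.
    subprocess.run([THOR_BIN, "--lab", "-p", str(mount)], check=True)
```

Whether the resulting reports are reviewed manually or forwarded to a SIEM is up to the workflow; the point is that nothing has to be booted or executed from the backup itself.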
Use Cases That Show Its Value
- Forensics: THOR sees what happened – even months ago.
- MSSPs: Use it to cover what your EDR and AV don’t.
- Backups: Scan them before restoring – without booting anything up.
- Baseline assessments: Run THOR across endpoints to detect signs of prior compromise.
Who Gets It (and Who Doesn’t)
Our biggest fans? CERTs, forensic labs, large enterprise IR teams.
Why? Because they *know* what’s missing in the standard stack – and they’ve seen THOR catch things that slipped past AV and EDR.
Where we struggle?
Trying to sell THOR as "just another security tool" to small businesses that want simplicity over nuance. Fair enough. THOR is not for everyone. But if you manage detection across dozens of environments or do post-incident analysis – THOR is your sensor for the grey zone.
TL;DR
- AVs and EDRs see the present. THOR sees the past.
- THOR detects threats at rest, not just in action.
- It’s the forensic flashlight – not the alarm bell.
- And if you’ve got blind spots, THOR helps you map them.
Full Table of Reviewed Samples
👉 Download the top 500 Valhalla rules with low AV coverage (PDF)