Demystifying SIGMA Log Sources

One of the main goals of the Sigma project, and of Sigma rules specifically, has always been to close the gap that existed in the detection rules space. As maintainers of the Sigma rule repository, we're always striving to reduce that gap and to make robust, actionable detections freely accessible and available to everyone.

Today we're introducing a new contribution to the Sigma project called log-source guides. The idea behind them is to provide specific guidance on configuring a system's audit policies so that the system actually creates the logs the rules need. An adequate audit policy is a crucial dependency that is often overlooked when deploying Sigma rules.


SIGMA Log-Source

Before we delve deeper, let's take a step back and talk a bit about how the logsource attribute is used in Sigma rules.

Every Sigma rule requires a section called logsource that indicates, as the name suggests, the log source on which the detection will fire. A typical example looks like this:

logsource:
    product: windows
    category: process_creation

The "product" field indicates that this rule targets the "Windows" product, and the specific category "process_creation" indicates that the rule uses "Process Creation" events. You can read the full explanation of every field in the specification.

To someone who isn't familiar with Sigma or logging, a couple of questions will arise:

  • Does the "process_creation" category use "Sysmon/EventID 1" events, or perhaps "Microsoft-Windows-Security-Auditing/EventID 4688"?
  • And once we know that: how would we collect these logs?
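To make the ambiguity concrete, a single abstract log source can be backed by several concrete event sources. A minimal Python sketch of that mapping (the entries simply restate the two Windows process-creation sources named above; the structure is illustrative, not part of any Sigma tooling):

```python
# Illustrative mapping from an abstract Sigma log source to the concrete
# Windows event sources that can feed it (examples taken from this post).
PROCESS_CREATION_SOURCES = {
    ("windows", "process_creation"): [
        {"provider": "Microsoft-Windows-Sysmon",
         "channel": "Microsoft-Windows-Sysmon/Operational",
         "event_id": 1},
        {"provider": "Microsoft-Windows-Security-Auditing",
         "channel": "Security",
         "event_id": 4688},
    ],
}

def concrete_sources(product: str, category: str):
    """Return the concrete event sources behind an abstract log source."""
    return PROCESS_CREATION_SOURCES.get((product, category), [])

ids = [s["event_id"] for s in concrete_sources("windows", "process_creation")]
print(ids)  # [1, 4688]
```

Answering exactly this kind of question, per log source and in one place, is what the guides are for.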

This is where the log-source guides enter the picture.

Log-Source Guides

Starting today, if you navigate to the main Sigma rule repository, you'll see a new folder called "rules-documentation". This is the location of the aforementioned log-source guides and of the future documentation projects teased here.

The log-source guides follow a simple structure that mirrors the available log sources. In the example of "process_creation" for the Windows product:


Now that we've established the location of these guides, let's talk about their structure and the information they provide. Every log-source guide provides the following information:

Event Source(s)

This section describes the source(s) used by the log source, as a quick way for the reader to know exactly which channel or ETW provider is required to receive the events.

Logging Setup

This section provides a step-by-step guide on how to enable the logging and which events to expect once it is enabled. Let's take the "Credential Validation" audit policy as an example.

  • The "Subcategory GUID" is the GUID for this specific audit policy, which can be used with the auditpol command to enable the log (as we'll see in a bit).
  • "Provider" indicates the exact ETW provider responsible for emitting these events.
  • "Channel" is the Event Log channel to which the generated events are emitted.
  • "Event Volume" indicates the amount of logs to expect when enabling that audit category or EventID.
  • "API Mapping" is a direct link to jsecurity101's TelemetrySource project.
  • "EventIDs" are the events generated by enabling the policy or log.

Next comes a section on how to enable the log in question, in this example either via Group Policy or by using Auditpol.

Note: This section will obviously differ depending on the log source. Enabling Sysmon logs is different from enabling Security logs.

Full Event(s) List

This section will not always be present; it is meant to be a collection of all events generated by the event sources in question. It serves as a quick reference, with every event linking to the MSDN documentation where possible.

Event Fields

The last section contains the specific event fields used by every event. While this section is complete for certain log sources such as "process_creation", it's still a work in progress for log sources such as Security and will be populated over time.

The idea behind this is to provide the fields that the event generates as a reference, since Sigma rules aim to stay as close to the original logs as possible and leave field mapping to the backend.
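Field mapping in a backend can be as simple as a rename pass over a rule's detection fields. A small Python sketch of the concept (the mapping values are hypothetical backend field names, not taken from any real backend config):

```python
# Hypothetical mapping from original Windows event field names (as used in
# Sigma rules) to a backend's own field names. The right-hand side is
# invented for illustration only.
FIELD_MAP = {
    "Image": "process.executable",
    "CommandLine": "process.command_line",
}

def map_fields(detection: dict) -> dict:
    """Rewrite Sigma field names to backend field names before query generation;
    unknown fields pass through unchanged."""
    return {FIELD_MAP.get(k, k): v for k, v in detection.items()}

print(map_fields({"Image": r"C:\Windows\System32\whoami.exe"}))
```

Keeping the rules on the original field names and doing this translation once per backend is exactly why the guides document the fields as emitted by the event source.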

Linking Log-Source Guides and Rules

To make these log-source guides easily accessible, every Sigma rule will now link to its respective log-source documentation via a unique ID added to the "definition" section. Here is an example:

As part of this initial release, documentation is available for the following log sources:

  • product: windows / service: security
  • product: windows / category: process_creation
  • product: windows / category: ps_module
  • product: windows / category: ps_script

In the coming weeks and months we’ll keep adding more documentation to cover every available log-source.


Sigma Log-Source Checker

As part of this release we're also providing a new script called "sigma-logsource-checker". The idea behind this script is to let users know which logs to enable based on the Sigma rules they're using.

It takes a folder of Sigma rules as input, parses all the log sources and Event IDs they use, and suggests the audit policies and logging configurations that should be enabled.
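The actual checker lives in the SigmaHQ repository; purely to illustrate the idea, a naive line-based collector might look like this (a sketch under simplifying assumptions, not the tool's real logic):

```python
import re
from pathlib import Path

def collect_logsources(rule_dir: str):
    """Naively collect (product, category/service) pairs and EventIDs from
    Sigma rule files in a folder. A regex sketch, not a real YAML parse."""
    logsources, event_ids = set(), set()
    for rule in Path(rule_dir).glob("**/*.yml"):
        text = rule.read_text(encoding="utf-8")
        product = re.search(r"^\s*product:\s*(\S+)", text, re.M)
        category = re.search(r"^\s*(?:category|service):\s*(\S+)", text, re.M)
        if product and category:
            logsources.add((product.group(1), category.group(1)))
        # Simple scalar EventID values only; lists are ignored in this sketch.
        event_ids.update(int(m) for m in re.findall(r"^\s*EventID:\s*(\d+)", text, re.M))
    return logsources, event_ids
```

From sets like these, the checker can then look up which audit policies or providers produce the required events and recommend enabling them.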

As an optional feature, it can also parse the XML output of gpresult

gpresult /x [results.xml]

and then suggest configuration changes based on the current policy:
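The gpresult XML schema is fairly involved; as a hedged illustration of the parsing step only, the element names below are simplified placeholders, not the exact RSoP schema:

```python
import xml.etree.ElementTree as ET

# Simplified stand-in for a gpresult /x export; the real RSoP XML uses
# namespaced elements, so treat these tag names as illustrative only.
SAMPLE = """<Rsop>
  <AuditSetting>
    <SubcategoryName>Audit Process Creation</SubcategoryName>
    <SettingValue>0</SettingValue>
  </AuditSetting>
  <AuditSetting>
    <SubcategoryName>Audit Credential Validation</SubcategoryName>
    <SettingValue>3</SettingValue>
  </AuditSetting>
</Rsop>"""

def disabled_subcategories(xml_text: str):
    """Return audit subcategories whose setting value is 0 (= No Auditing)."""
    root = ET.fromstring(xml_text)
    return [
        s.findtext("SubcategoryName")
        for s in root.iter("AuditSetting")
        if s.findtext("SettingValue") == "0"
    ]

print(disabled_subcategories(SAMPLE))  # ['Audit Process Creation']
```

A subcategory that comes back disabled but is required by the rules in use is exactly the kind of gap the checker flags.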

Note: This version only checks the Security audit policy and PowerShell log configuration. We'll keep improving it as we go along.

You can have a look at this today by visiting the SigmaHQ main repository.

Private Sigma Rule Feed in Valhalla and Partnership with SOC Prime

We are proud to announce the integration of our private Sigma rule set in Valhalla. This rule set is used in our scanner THOR and endpoint agent Aurora. 

The rule set currently contains more than 250 quality-tested and generic rules written by Nextron’s detection engineering team. 

Valhalla Front Page Now Shows Sigma Rule Information

The Valhalla front page now shows Sigma rule information. The grey bars show the number of new Sigma rules created per day.

Two new tables on the front page list new Sigma rules and the rule categories. The first table contains new rules with rule title, description, creation date, a reference link and an info page.

The second table on the front page shows the types of log sources the rules have been written for.

This can help you decide if the contents of the feed align with the log data your organisation collects.

Feed Characteristics

The feed can be requested as a ZIP archive containing all rules in separate files, or as one big JSON file.

The rules included in the feed share the following features:

  • Each rule went through several stages of internal quality testing
  • Each rule is tagged with the current MITRE ATT&CK® techniques
  • Most rules use more or less generic detection logic, focusing on methods rather than tools

The feed is offered in a form that facilitates filtering the rules by level, type, or keywords.

Future versions of the feed will include usage and false positive statistics based on anonymised data collected through Nextron Systems’ MSP partners. 

Web Access and API

The feed can be retrieved from the web page using the respective form on the Valhalla front page. Using the "demo" key, you can get the rules maintained in the public Sigma repository in the streamlined form in which we offer all our rules.

The Python module “valhallaAPI” has been updated to support the new Sigma rule feed. 

Partnership with SOC Prime

We are also excited to announce that we have entered into a partnership with SOC Prime, a renowned threat intelligence and cybersecurity content platform.

As part of this collaboration, Nextron's detection rules will be made available in SOC Prime's threat detection rule marketplace, providing SOC Prime's customers with access to a wider variety of rules for identifying potential security threats. Nextron will be the first B2B partner to participate in this program, with its feed accessible to SOC Prime's customers after a subscription update.

We believe that this partnership will provide significant value to both Nextron and SOC Prime’s customers by enhancing their ability to detect and respond to cyber threats.


We would like to inform our customers that the subscription to our threat detection rule feed is priced at 15,000 USD per year.

However, we are currently offering the feed at a reduced price of 10,000 USD per year. This offer is only valid for new customers who subscribe within the promotional period, which ends at the end of May 2023.

The subscription fee provides access to our constantly updated and comprehensive threat detection rules, which have been carefully curated by our team of cybersecurity experts. We believe that this is a valuable investment for organizations looking to enhance their security posture and mitigate the risks of cyber threats. Don’t miss out on this limited time offer and subscribe today!

Use the “Get Started” button in the upper right corner of this web page to contact our sales team for details.

Sigma Rule Feed in Valhalla

Nextron Systems has always supported the Sigma project, investing hundreds of work hours into creating and maintaining the community rules shared in the public Sigma rule repository. Apart from the community support, we’ve created a set of internal detection rules for our products, THOR and Aurora, that we kept confidential for various reasons and didn’t share publicly.

Today we are glad to announce that we’ve started feeding these rules into the Valhalla service.

Similar to the YARA feed, we've integrated all types of Sigma rules: publicly shared and private ones.

Using the “demo” API key, you can retrieve all public rules in a structured form from Valhalla.

The private Sigma rule feed contains 190 Sigma rules at the date of this blog post and is expected to grow by 600 rules every year. The following table from the front page of the Valhalla web service shows the different categories and the number of rules per category.

The Sigma rules can be retrieved in plain text or JSON format.

The JSON format allows users to filter or select based on certain values without parsing the rules, e.g., “only select rules that have been modified in the last 7 days”. 
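The "last 7 days" filter from the example above might look like this in Python (the `date` field name and format are assumptions about the JSON feed's schema, shown here with an inlined stand-in for the feed):

```python
import json
from datetime import datetime, timedelta

# A tiny stand-in for the JSON feed; the field names are assumptions.
FEED = json.loads("""[
  {"title": "Rule A", "date": "2023-05-01"},
  {"title": "Rule B", "date": "2023-04-01"}
]""")

def recently_modified(rules, days=7, now=None):
    """Select rules whose date falls within the last `days` days."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=days)
    return [r for r in rules if datetime.strptime(r["date"], "%Y-%m-%d") >= cutoff]

sel = recently_modified(FEED, days=7, now=datetime(2023, 5, 5))
print([r["title"] for r in sel])  # ['Rule A']
```

The same approach works for filtering by level, category, or any other field carried in the JSON objects.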

Getting started

We offer the Sigma feed subscription independently of the YARA rule subscription at a much lower price. If you’re interested, please get in touch with your sales representative for pricing information or fill out this form.


Aurora – Sigma-Based EDR Agent – Preview

The following recorded video session includes information about our new Sigma-based EDR agent called “Aurora” and the free “Aurora Lite”. It’s a preview of the agent with information on its features, limits, advantages and a live demo.

The release is scheduled for December 2021. Follow us on Twitter or subscribe to the newsletter to get updates about the development of Aurora.

The slides with the pre-release information shared in the talk can be downloaded here.

Sigma Scanning with THOR

Our compromise assessment scanner THOR is able to apply Sigma rules during the local Eventlog analysis. This can help any customer that has no central SIEM system or performs a live forensic analysis on a system group that does not report to central monitoring. 

By running THOR on these systems with the Sigma feature activated, THOR becomes a kind of distributed, portable SIEM.

Since the Sigma scan module isn't active by default, we thought it a good idea to explain how to activate and use it in the best possible way.

Open Source Rule Set

By default, THOR uses the open-source Sigma rule set with more than 500 rules provided by the Sigma project on their GitHub page.

Since our head of research is also one of the project maintainers, it was reasonable to combine the detection capabilities of Sigma with THOR’s scanning functionality on the endpoint. 

We comply with Sigma’s DRL (Detection Rule License) by including the rule authors in the event data produced by these rules.  

Custom Sigma Rules

You can easily add your own Sigma rules by placing them in the "./custom-signatures/sigma" subfolder.

THOR’s Sigma Config

The THOR default configuration for Sigma can be found in the Sigma repository. 

This configuration shows you which Windows Eventlogs and Linux/Unix log files get analyzed by THOR's Sigma module.


Sigma Scanning

To activate the Sigma module, simply use the "--sigma" flag (or "sigma: True" in a YML config file).

You can start a THOR scan that analyzes the local Eventlog and activates the Sigma feature with:

thor64.exe -a Eventlog --sigma

To run a Sigma scan on a single Eventlog, e.g. Sysmon's log, use the "-n" flag:

thor64.exe -a Eventlog --sigma -n "Microsoft-Windows-Sysmon/Operational"

To include the Sigma feature in a standard THOR scan and check only the last 3 days of the Windows Eventlogs to reduce the scan duration, use:

thor64.exe --sigma --lookback 3

Sigma Matches

Once a Sigma rule matches a log entry, you'll see it listed as one of the REASONs that led to the classification of an event.

The following example shows the detection of a China Chopper (Caidao) ASP web shell. That web shell has been detected by multiple Sigma rules. 

  1. Webshell Detection With Command Line Keywords 
  2. Shells Spawned by Web Servers
  3. Whoami Execution

Getting Started

Since this feature isn’t available in THOR Lite, please contact us via the “Get Started” button in the upper right corner and get a free trial voucher. Most customers that use THOR with Sigma choose one of our THOR license packs, especially the SOC Toolkit Pack, which was geared to the needs of today’s SOC teams. 

The Best Possible Monitoring with Sigma Rules

Some of you may already have heard of Sigma, a generic approach for signatures used in SIEM systems. Its main purpose is to provide a structured form in which researchers or analysts can describe their once developed detection methods and make them shareable with others.
Today I would like to describe another use case that helps us build the best possible signatures for any application.
In typical security monitoring projects, people integrate different log sources and threats/use cases they want to monitor in these logs. The selection of the log sources is determined by importance, availability and operational requirements. The central question is:

“What do I want to detect?”

There are several white papers and guides for the most common log sources like the Windows Eventlog. What you have to do is read the documents, extract the interesting detection methods, and define searches, panels, and dashboards in your SIEM system. One of the main purposes of Sigma is to relieve you of this time-consuming process. These public descriptions of detection methods are already part of Sigma or will be integrated by one of the contributors.
But today I’d like to point out a different use case that is less obvious but very useful.

The Best Possible Monitoring

Security analysts try to define useful searches and statistical analyses on the log data from all different log sources. They typically follow the public guides and create many different “failed logon” rules for all kinds of operating systems and applications.
But what happens when we start integrating log sources and types that are not covered by public guidelines?
There are different approaches to this problem:

  • Cover failed authentication attempts
  • Monitor any failure event (with a base-lining)
  • Use statistic deviations (many new events of a certain type)
  • Look through the log data and select interesting messages
  • Talk to the application owners / administrators (often rather disappointing)
  • Talk with the application developers (in case of inhouse development)

All of these methods are valid and can help you set up useful searches on the available data. However, to create the best possible signatures, we have to consider the following:

  • The most interesting and relevant fault conditions do not happen under normal conditions and therefore do not appear in everyday log data (the log data we have to select interesting events from)
  • Developers know the rarest or most relevant error conditions and the corresponding error IDs and messages
  • All possible error messages of that application appear in its source code

Imagine that we can extract the most interesting messages that the application is able to generate and define rules for these conditions. Imagine that this process is an easy task that any developer is able to perform.
Well, we have at our fingertips all the tools that are needed.

An Example: vsftpd

Let's make this approach clearer. Under optimal conditions we ask the developers to extract the most relevant or abnormal error messages that the application produces under conditions that indicate manipulation or an attack. Alternatively, we can do it ourselves and cross-check the results against the available log data to avoid false positives.
I checked the source code of the FTP server ‘vsftpd’ for error messages that indicate rare error conditions and fatal failures.

The resulting Sigma rule looks like this:

title: Suspicious VSFTPD error messages
description: Detects suspicious VSFTPD error messages that indicate a fatal or suspicious error that could be caused by exploiting attempts
author: Florian Roth
date: 2017/07/05
logsource:
    product: linux
    service: vsftpd
detection:
    keywords:
        - 'Connection refused: too many sessions for this address.'
        - 'Connection refused: tcp_wrappers denial.'
        - 'Bad HTTP verb.'
        - 'port and pasv both active'
        - 'pasv and port both active'
        - 'Transfer done (but failed to open directory).'
        - 'Could not set file modification time.'
        - 'bug: pid active in ptrace_sandbox_free'
        - 'PTRACE_SETOPTIONS failure'
        - 'weird status:'
        - "couldn't handle sandbox event"
        - 'syscall * out of bounds'
        - 'syscall not permitted:'
        - 'syscall validate failed:'
        - 'Input line too long.'
        - 'poor buffer accounting in str_netfd_alloc'
        - 'vsf_sysutil_read_loop'
    condition: keywords
falsepositives:
    - Unknown
level: medium

After some testing, we can now share the rule with the world in order to build a large signature database for all the different types of applications and services.
We could also try to convince developers to provide Sigma rules with their applications to help organisations establish the best possible monitoring of their software. I was thinking about an error message extractor that would parse a software project and extract error message strings that contain certain keywords like “failure”, “fatal”, “permission denied”, “refused” and others to facilitate this process for the rule writer.
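Such an extractor could start as something very small; as a hedged sketch (the keyword list mirrors the ones mentioned above, and the naive string-literal regex is an assumption, not a parser):

```python
import re
from pathlib import Path

KEYWORDS = ("failure", "fatal", "permission denied", "refused")

def extract_error_strings(source_dir: str, exts=(".c", ".h")):
    """Pull string literals containing alarm keywords out of a source tree.
    Naive sketch: double-quoted literals without escapes, no comment handling."""
    hits = set()
    for path in Path(source_dir).rglob("*"):
        if path.suffix not in exts:
            continue
        for literal in re.findall(r'"([^"\\]{4,})"', path.read_text(errors="ignore")):
            if any(k in literal.lower() for k in KEYWORDS):
                hits.add(literal)
    return sorted(hits)
```

The candidate strings would still need a manual review pass before being turned into rule keywords, but the tedious collection step is automated.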
If you want to support the project, create a pull request or send me your rules as gists or via pastebin to @cyb3ops. You could also become a contributor by providing a new backend for the rule converter “Sigmac” or become a member of our Slack team for threat hunters (references required).
