Sending logs to Azure Sentinel with NXLog
In this post we examine the Azure Monitor HTTP Data Collector API, which enables clients such as the NXLog Enterprise Edition agent to send events to a Log Analytics workspace, making them directly accessible to Azure Sentinel queries.
We will present two examples of sending logs to Azure Sentinel: in the first one, we send Windows DNS Server logs and in the second one, Linux kernel audit logs. Both of these log sources are of interest from a security perspective.
Proactive monitoring of DNS activity can help network administrators quickly detect and respond to attempted security breaches in DNS implementations that might otherwise lead to data theft, denial-of-service, or other service disruptions related to malicious activity.
In comparison, Linux Audit has a much wider scope and could arguably be called the most comprehensive tool for monitoring and reporting security events on Linux distributions.
About NXLog Enterprise Edition
If you aren’t familiar with the NXLog Enterprise Edition, it is a full-featured log processing agent with a small footprint. It can read and write all standard log formats and integrates with over 70 third-party products. It offers many additional features not found in the free Community Edition. To evaluate the configurations presented in this post, download the appropriate trial edition for your platform. For more information on supported platforms and how to install an agent, see the NXLog Deployment chapter of the NXLog EE User Guide.
Collecting DNS Server logs via Windows Event Tracing
Event Tracing for Windows (ETW) provides not only efficient logging of both kernel and user-mode applications, but also access to the Debug and Analytical channels, which are not available through the standard Windows Event Log (although the Event Log does carry some DNS Server events).
Authentication
The pivotal part of sending secure HTTPS requests to Azure is the authentication process. Azure validates the values of two custom HTTP headers, Authorization and x-ms-date, along with the length of the data payload, to determine whether the request is authentic. The value assigned to the Authorization header is dynamically generated using a cryptographic hash. For details, see the Azure Monitor Authorization section in the Microsoft documentation.

To allow easy integration with the NXLog HTTP(s) (om_http) module, which sends events to REST API endpoints, NXLog provides a Perl script that regenerates the single-use authorization string for each new batch of events to be sent.
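To illustrate what that script computes, here is a Python sketch of the SharedKey signature scheme described in the Microsoft documentation: an HMAC-SHA256 over a canonical string-to-sign, keyed with the base64-decoded workspace key. The build_signature helper and the dummy key are illustrative assumptions, not part of NXLog or the bundled Perl script.

```python
import base64
import hashlib
import hmac

def build_signature(workspace_id, shared_key, content_length, date_rfc1123,
                    method="POST", content_type="application/json",
                    resource="/api/logs"):
    """Return the Authorization header value for the Data Collector API."""
    # Canonical string-to-sign, as documented for the HTTP Data Collector API
    string_to_sign = (f"{method}\n{content_length}\n{content_type}\n"
                      f"x-ms-date:{date_rfc1123}\n{resource}")
    key = base64.b64decode(shared_key)  # the shared key is base64-encoded
    digest = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    return f"SharedKey {workspace_id}:{base64.b64encode(digest).decode('utf-8')}"

# Dummy workspace ID and key for illustration only
auth = build_signature("18fb21ab-d8d4-4448-bdf6-3748c9c03135",
                       base64.b64encode(b"0" * 32).decode("utf-8"),
                       64746, "Thu, 01 Oct 2020 03:06:15 GMT")
```

Because the x-ms-date value and the payload length both enter the hash, the resulting authorization string is valid for a single request only, which is why it must be regenerated for every batch.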
Capturing ETW events - The input side
NXLog can natively collect ETW logs without needing to capture the trace into an .etl file. Configuring an NXLog agent to capture Windows DNS Server events using the Event Tracing for Windows (im_etw) input module is fairly straightforward, as illustrated here:
<Input DNS_Logs>
    Module      im_etw
    Provider    Microsoft-Windows-DNSServer
    Exec        to_json();
</Input>
Note: The default location for the NXLog configuration file on Windows is C:\Program Files\nxlog\conf\nxlog.conf. This file is used to configure as many inputs, outputs, and routes as needed for a host. For more information on configuring NXLog in general, see the Configuration Overview in the NXLog User Guide.
Note that the first (opening) line of the Input block defines the name of this instance as DNS_Logs. The output module for sending events to Azure uses this name to create the Azure Sentinel table that will collect these events.

The Exec statement in the DNS_Logs input instance invokes the to_json() procedure, which converts the Windows events to JSON records, as required by Azure's HTTP Data Collector API.
Sending ETW events - The output side
The output module is the part that connects directly to Azure. The first step in configuring the output instance is retrieving the Workspace ID and either the Primary key or the Secondary key (also referred to as the shared key). These keys can be found by navigating in the Azure portal to Log Analytics workspace > Settings > Agents management. The same set of keys can be viewed under either the Windows servers or Linux servers tab.

The next step is to add this information to the nxlog.conf file as constants, making them accessible to the output instance. The SUBDOMAIN, RESOURCE, and APIVER constants are used to construct the complete URL. The value for SIZELIMIT can be tuned to your needs. It represents the maximum size in bytes of the data payload for each batch of events; 65000 is the upper limit. Higher values mean better network efficiency; lower values mean events are received sooner because they do not wait for a large buffer to fill before being sent.
define WORKSPACE 18fb21ab-d8d4-4448-bdf6-3748c9c03135
define SHAREDKEY VfIQqBoz6fxmnI/E4PKVPza2clH/YAdJ20RnCDwzHCqCMnobYdM1/dD1+KJ6cI6AkR4xPJlTIWI/jfwPU6QHmw==
define SUBDOMAIN ods.opinsights.azure.com
define RESOURCE api/logs
define APIVER api-version=2016-04-01
define SIZELIMIT 65000
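The SIZELIMIT-driven batching that the output instance performs can be sketched in Python. This is an illustrative model only (the sample events and the batch_events helper are hypothetical); NXLog implements the same accumulate-and-flush logic inside the om_http Exec block shown in the next section.

```python
import json

SIZELIMIT = 65000  # maximum payload size in bytes, as defined above

def batch_events(events, limit=SIZELIMIT):
    """Yield JSON array payloads, each no larger than `limit` bytes."""
    batch = []
    size = 2  # account for the enclosing '[' and ']'
    for event in events:
        record = json.dumps(event)
        # Flush the current batch if adding this record would exceed the limit
        if batch and size + len(record) + 2 > limit:
            yield "[" + ",\n".join(batch) + "]"
            batch, size = [], 2
        batch.append(record)
        size += len(record) + 2  # +2 for the ',\n' delimiter
    if batch:
        yield "[" + ",\n".join(batch) + "]"

payloads = list(batch_events({"EventID": i, "Msg": "x" * 100} for i in range(2000)))
```

Each yielded payload is a well-formed JSON array that fits under the size limit, which is exactly the shape the Data Collector API expects in the request body.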
When looking at the entire output instance that uses the HTTP(s) (om_http) module, you can see how batches of events are buffered and then flushed:
<Extension plxm>
    Module      xm_perl
    PerlCode    %INSTALLDIR%\modules\extension\perl\sentinelauth.pl
</Extension>

<Output AzureHTTP>
    Module              om_http
    URL                 https://%WORKSPACE%.%SUBDOMAIN%/%RESOURCE%?%APIVER%
    ContentType         application/json
    HTTPSAllowUntrusted TRUE
    HTTPSCAFile         %INSTALLDIR%\cert\ca-certificates.crt
    <Exec>
        create_stat('ec', 'COUNT');
        create_stat('bc', 'COUNT');
        create_var('batch');
        create_var('nextbatch');
        add_stat('ec', 1);
        #---BEGIN--- the enrichment of this event with any new fields:
        # The following can be used for debugging batch mode if needed:
        # $BatchNumber = get_stat('bc');
        # $EventNumber = get_stat('ec');
        # to_json();
        #---END--- the enrichment of this event
        if (size(get_var('batch')) + size($raw_event) + 3) > %SIZELIMIT%
        # Flush this batch of events
        {
            set_var('nextbatch', $raw_event);
            $raw_event = '[' + get_var('batch') + ']';
            add_stat('bc', 1);
            set_var('batch', get_var('nextbatch'));
            $Workspace = "%WORKSPACE%";
            $SharedKey = "%SHAREDKEY%";
            $ContentLength = string(size($raw_event));
            $dts = strftime(now(), 'YYYY-MM-DDThh:mm:ssUTC');
            $dts_no_tz = replace($dts, 'Z', '');
            $parsedate_utc_false = parsedate($dts_no_tz, FALSE);
            $x_ms_date = strftime($parsedate_utc_false, '%a, %d %b %Y %T GMT');
            plxm->call("genauth");
            add_http_header('Authorization', $authorization);
            add_http_header('Log-Type', $SourceModuleName);
            add_http_header('x-ms-date', $x_ms_date);
        }
        else
        {
            $delimiter = get_stat('ec') == 1 ? '' : ",\n";
            set_var('batch', get_var('batch') + $delimiter + $raw_event);
            drop();
        }
    </Exec>
</Output>
The values for the three HTTP headers Authorization, Log-Type, and x-ms-date are set using the add_http_header procedure, as shown above. Log-Type is dynamically set to $SourceModuleName, the name of the input instance we chose at the beginning. Since all REST API events are categorized by Azure Monitor as Custom Logs, Azure appends _CL to the value of Log-Type to prevent naming conflicts with other Azure tables; thus the name we originally chose, DNS_Logs, appears in Azure Sentinel as DNS_Logs_CL.

By leveraging $SourceModuleName to define Log-Type, we have created a completely generic output instance that can be used with any other log source.
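As an aside, the RFC 1123 timestamp that the Exec block derives for x-ms-date with strftime() and parsedate() can be modeled in a few lines of Python. This is illustrative only; the x_ms_date helper is an assumption for this sketch, not part of NXLog.

```python
from datetime import datetime, timezone

def x_ms_date(now=None):
    """Return a UTC timestamp like 'Thu, 01 Oct 2020 03:06:15 GMT'."""
    now = now or datetime.now(timezone.utc)
    return now.strftime("%a, %d %b %Y %H:%M:%S GMT")

# Fixed instant so the output matches the log sample in the Troubleshooting section
stamp = x_ms_date(datetime(2020, 10, 1, 3, 6, 15, tzinfo=timezone.utc))
```

Azure rejects requests whose x-ms-date deviates too far from the actual time, so the timestamp must be generated fresh for every batch, just like the authorization string.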
Configuration checklist
To prepare for testing, let's run through the steps needed to ensure success:
- Download/view the entire nxlog.conf configuration file and append its contents to your current C:\Program Files\nxlog\conf\nxlog.conf NXLog configuration file.
- Ensure that you have changed the values of WORKSPACE and SHAREDKEY to match those of your Log Analytics workspace.
- Download the sentinelauth.pl Perl script. Copy it to the location defined by the PerlCode directive in the xm_perl extension instance above and rename it to sentinelauth.pl.
- Read about the Windows requirements for Perl in the Perl (xm_perl) section of the NXLog Reference Manual.
- Once the Perl requirements for Windows have been met, restart the nxlog service via Windows Services.
To test DNS Server logging of audit events, we added an A record for R04LRC13.example.com and reloaded the example.com zone. This logs an event with EventID 515 (Record Create) and another with EventID 561 (Zone Reload).
Now it's time to log into the Azure Log Analytics workspace that was defined in the AzureHTTP output instance and open Logs. After expanding Custom Logs, the DNS_Logs_CL table should be visible. With a simple query, the newly ingested events can be viewed.

Expanding the first event’s details shows the complete set of fields and their values:


For testing purposes, you may want to add a temporary output instance to validate the integrity of your configuration. This lets you compare the events and their fields with what Azure Sentinel is ingesting. By adding a new output instance named TempFile as an additional destination in the route, you can view the events, in JSON format, that will be stored in the file defined by the File directive.
<Output TempFile>
    Module  om_file
    File    'C:\Program Files\nxlog\data\dnsetw.json'
</Output>

<Route DnsRoute1>
    Path    DNS_Logs => AzureHTTP, TempFile
</Route>
{
"SourceName": "Microsoft-Windows-DNSServer",
"ProviderGuid": "{EB79061A-A566-4698-9119-3ED2807060E7}",
"EventID": 515,
"Version": 0,
"ChannelID": 17,
"OpcodeValue": 0,
"TaskValue": 5,
"Keywords": "4611686018428436480",
"EventTime": "2020-10-06T10:59:00.795199-05:00",
"ExecutionProcessID": 1728,
"ExecutionThreadID": 5012,
"EventType": "INFO",
"SeverityValue": 2,
"Severity": "INFO",
"Hostname": "WIN-FFMCPAJ76HP",
"Domain": "WIN-FFMCPAJ76HP",
"AccountName": "Administrator",
"UserID": "S-1-5-21-1830054504-3820897498-340727717-500",
"AccountType": "User",
"Flags": "EXTENDED_INFO|IS_64_BIT_HEADER|PROCESSOR_INDEX (577)",
"Type": "1",
"NAME": "R04LRC13.example.com",
"TTL": "604800",
"BufferSize": "4",
"RDATA": "0xC0A8015D",
"Zone": "example.com",
"ZoneScope": "Default",
"VirtualizationID": ".",
"EventReceivedTime": "2020-10-06T10:59:03.295804-05:00",
"SourceModuleName": "DNS_Logs",
"SourceModuleType": "im_etw",
"DNS_LogType": "Audit"
}
Troubleshooting
If you are unable to see any events arriving in your Azure Sentinel table, try these troubleshooting steps:
- Look at the NXLog internal log file for clues; on Windows it is located at C:\Program Files\nxlog\data\nxlog.log. Success should look like this:
2020-09-30 22:06:15 INFO [om_http|DNS_Logs] Successfully connected to 18fb21ab-d8d4-4448-bdf6-3748c9c03135.ods.opinsights.azure.com(40.79.154.87):443 (using URL: https://18fb21ab-d8d4-4448-bdf6-3748c9c03135.ods.opinsights.azure.com)
2020-09-30 22:06:15 INFO [om_http|DNS_Logs] Generated from Shared Key and hashed signing string based on:; ContentLength: 64746; x-ms-date: Thu, 01 Oct 2020 03:06:15 GMT; Authorization: SharedKey 18fb21ab-d8d4-4448-bdf6-3748c2c03135:2I2iSNqGZeJZh8QdTPl7Ate2xRLvJbEL6dpa6UL4WKo=
2020-09-30 22:08:19 INFO [om_http|DNS_Logs] Reconnect...
- The following error message in C:\Program Files\nxlog\data\nxlog.log usually indicates one or more of these three conditions:
  - The first line of the Perl script doesn't contain use lib 'c:\Program Files\nxlog\data';
  - The wrong version of Strawberry Perl is installed (only 5.28.0.1 will work)
  - A conflicting copy of perl528.dll is present in C:\Program Files\nxlog\ and needs to be deleted

  Can't locate lib.pm in @INC (you may need to install the lib module) (@INC contains:) at C:\Program Files\nxlog\modules\extension\perl\sentinelauth.pl line 1.
  BEGIN failed--compilation aborted at C:\Program Files\nxlog\modules\extension\perl\sentinelauth.pl line 1.
  2020-07-30 10:25:39 ERROR [xm_perl|plxm] the perl interpreter failed to parse C:\Program Files\nxlog\modules\extension\perl\sentinelauth.pl
- Make sure the input instance is correctly configured and that events are actually being captured by adding an additional output instance that logs them to a local temporary file, as demonstrated above.
Including DNS Server analytical logs captured with ETW
If analytical event logging is enabled, you can capture and view DNS Server analytical events with EventIDs ranging from 256 to 286. Technically, no further changes are needed to log and view both audit and analytical events in Azure Sentinel. However, there is one enhancement you might want to implement:
Enrich the schema with a new attribute, DNS_LogType. If you frequently need to differentiate between audit and analytical DNS Server events, querying for a range of EventID values on a regular basis is not only tedious and makes queries less readable, but can also be slower on large data sets. The enhancement is as simple as replacing the original Exec to_json(); with an Exec block that sets the new $DNS_LogType field to either Audit or Analytical, depending on the value of $EventID, before calling to_json(), which will then enrich the schema with this new field.
<Input DNS_Logs>
    Module      im_etw
    Provider    Microsoft-Windows-DNSServer
    <Exec>
        if $EventID >= 256 and $EventID <= 286 $DNS_LogType = 'Analytical';
        if $EventID >= 512 and $EventID <= 596 $DNS_LogType = 'Audit';
        to_json();
    </Exec>
</Input>

Download/view the entire nxlog.conf configuration file.
Collecting Linux Audit logs
In this section we examine Linux Audit logs and how they can be sent to Azure Sentinel. Since the prerequisites of data format (JSON), transport (HTTPS REST API with some special headers), and authentication (single-use cryptographic hash) are the same for sending Linux log sources to Azure Sentinel, we are now free to focus on the log source itself and the minor differences between a Windows deployment and a Linux deployment.
The Linux Audit system provides fine-grained logging of security-related events. These logs can provide a wealth of security information: changes to DNS zone files, system shutdowns, attempts to access unauthorized files, and other suspicious activity. The NXLog Enterprise Edition includes the im_linuxaudit module for directly accessing the kernel component of the Audit system. With this module, NXLog can be configured to build audit rules and collect logs without requiring auditd or any other user-space software.
Capturing Linux Audit events - The input side
Let’s take a look at the configuration file to see how the input module is configured and how the rules are defined.
<Extension _resolver>
    Module  xm_resolver
</Extension>

<Input LinuxAudit>
    Module          im_linuxaudit
    FlowControl     FALSE
    LoadRule        %INSTALLDIR%/etc/im_linuxaudit.rules
    ResolveValues   TRUE
    Exec            to_json();
</Input>
Note: The default location for the NXLog configuration file on Linux is /opt/nxlog/etc/nxlog.conf.
Instead of defining a small set of audit rules within a Rules block directly in the LinuxAudit input instance, we use the LoadRule directive to load a more comprehensive collection of rules from an audit rule file, which is based on the ruleset maintained by the Best Practice Auditd Configuration project.

The xm_resolver module is needed for the ResolveValues directive in the audit input instance, where it is used to resolve some of the numeric values to more human-readable string values.
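For illustration, files loaded with LoadRule use the familiar auditctl-style rule syntax. A few hypothetical entries (the watched paths are examples, not taken from the referenced ruleset) might look like this:

```
## Watch DNS zone files for writes and attribute changes
-w /etc/bind/zones/ -p wa -k zone_files

## Watch changes to the user database
-w /etc/passwd -p wa -k passwd_changes

## Log 64-bit open() calls that failed with "permission denied"
-a always,exit -F arch=b64 -S open -F exit=-EACCES -k access_denied
```

The -k keys tag matching events, which makes them easy to filter later, whether in NXLog or in an Azure Sentinel query.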
Sending Linux Audit events - The output side
It should be noted that there are some configuration differences between Linux
and Windows as the NXLog directory structure is slightly different, thus the
PerlCode
path is as follows:
<Extension plxm>
    Module      xm_perl
    PerlCode    %INSTALLDIR%/lib/nxlog/modules/extension/perl/sentinelauth.pl
</Extension>
Also, the first line of Perl scripts on Linux needs to point to the location of the perl binary.
#!/usr/bin/perl
use strict;
use warnings;
use Log::Nxlog;
use MIME::Base64;
Since the Linux configuration files exhibit only minor differences when compared to their Windows counterparts displayed in the ETW section, we won’t display them here. Instead, you can download them using these links:
Download/view the entire sentinelauth.pl Perl script and the nxlog.conf configuration file.
Once these changes have been implemented and the NXLog service has been restarted, events should be sent to the LinuxAudit_CL Azure Sentinel table, based on the name given to the input instance, LinuxAudit. The following JSON event was triggered and captured according to the very last line in the im_linuxaudit.rules file.
{
"type": "PATH",
"time": "2020-10-06T16:58:58.518000+00:00",
"seq": 72170,
"item": 1,
"name": "/etc/bind/zones/db.example.com",
"inode": 527881,
"dev": "fc:02",
"mode": "file,644",
"ouid": "root",
"ogid": "bind",
"rdev": "00:00",
"nametype": "CREATE",
"cap_fp": "0",
"cap_fi": "0",
"cap_fe": "0",
"cap_fver": "0",
"cap_frootid": "0",
"EventReceivedTime": "2020-10-06T16:58:58.530798+00:00",
"SourceModuleName": "LinuxAudit",
"SourceModuleType": "im_linuxaudit"
}
Upon successful receipt in the Log Analytics workspace by Azure Monitor, events are further processed and finally ingested by Azure Sentinel where they can be viewed via user-defined queries.

Expanding the following event to reveal its columns and their values allows it to be verified against the JSON-formatted event above that was sent via the REST API.


Summary
Given the configuration samples and use cases presented here, you should now possess the basic information needed to benefit from these additional security monitoring opportunities in your own enterprise. To recap, the main advantages are:
- Event Tracing for Windows (ETW) offers better performance because it doesn't need to capture the trace into an .etl file, and it provides access to the Debug and Analytical channels.
- The native NXLog Linux Audit input module works out of the box without the need to install auditd, and when coupled with the NXLog Resolver extension module it can resolve IP addresses as well as group/user IDs to their respective names, making Linux audit logs more intelligible to security analysts.
- A general-purpose output configuration enables Azure Sentinel to ingest events from multiple, diverse log sources simultaneously, from any host in your enterprise having outbound access to Azure.