69. Google Chronicle
Google Chronicle is a cloud-based service from Google designed to collect and process log data. The ingested data can be searched and filtered based on specific criteria, such as assets, domains, or IP addresses. This service can help alert organizations when any of their systems are compromised.
Chronicle is the foundation for an extended SIEM system called CYDERES CNAP. For more details, visit the Fishtech Group website.
Registered users can access the official Chronicle documentation through Chronicle's user interface. To view the documentation, visit the https://<customername>.backstory.chronicle.security website and navigate to the Documentation section.
Note: Since the documentation is only available to paid subscribers, you will need to substitute the <customername> part of the URL with your organization's actual Chronicle customer name.
NXLog provides various ways to send logs to Chronicle.
69.1. Forwarding log data to Chronicle
Chronicle can accept both structured (UDM-formatted) and unstructured messages. For Chronicle to accept events as structured data, they need special formatting prior to forwarding. Unstructured events are parsed and processed by Chronicle on reception.
In Chronicle, unstructured events are typically assigned to a specific log source, for instance WINDOWS_DNS or LINUX_OS. Structured, UDM-formatted events are associated with a user or system action such as USER_CREATION or PROCESS_LAUNCH. The two types of logs therefore serve different purposes in different situations, and neither can be preferred over the other in every case. For more information regarding data formats, consult the Google Chronicle documentation.
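To illustrate the difference, below are two abridged payload sketches based on the full examples later in this chapter. The field values are placeholders, and a real UDM event requires additional fields described in the Chronicle documentation. An unstructured entry is tagged with a log source:
{
    "log_type": "LINUX_OS",
    "entries": [
        {
            "log_text": "Jan 16 04:55:02 host1 sshd[1234]: Accepted password for admin"
        }
    ]
}
A UDM-formatted event, in contrast, describes an action:
{
    "events": [
        {
            "metadata": {
                "event_timestamp": "2021-01-16T04:55:02Z",
                "event_type": "PROCESS_LAUNCH",
                "vendor_name": "Example",
                "product_name": "Example"
            },
            "principal": {
                "hostname": "host1"
            }
        }
    ]
}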
NXLog can be configured to collect and forward log data to Google Chronicle using the following methods:
- Forwarding directly to the Chronicle Partner Ingestion API
- Forwarding via the Chronicle Forwarder software
- Forwarding logs using a central NXLog agent (an enhanced replacement for Chronicle Forwarder)
Sending logs via the Ingestion API is a direct forwarding method independent of any intermediary software like Chronicle Forwarder. This method is more flexible and allows Chronicle to immediately parse events as they are received, provided they are formatted according to the Unified Data Model (UDM). The only downside of this method could be the additional overhead of the JSON payload, which in most cases is negligible.

Sending logs via Chronicle Forwarder offers an easier initial configuration and built-in passive network capabilities; however, it has some significant disadvantages. First, it requires intermediary software to be installed on the network, which can result in additional licensing costs and resource usage. It is also inflexible: the Linux version can only run in a Docker environment, which might further complicate the setup. Chronicle Forwarder is also limited to unstructured log data, so it cannot forward UDM-formatted logs. Although Chronicle Forwarder is certainly an option, it will likely complicate the logging environment while providing little in return, and Google Chronicle would also require additional configuration to enable fine-grained processing of the incoming data.

Fortunately, a central NXLog agent can replace Chronicle Forwarder for sending logs to Google Chronicle without sacrificing any functionality. This approach provides flexibility by eliminating the need to install Chronicle Forwarder on Microsoft Windows or in Docker. Additionally, NXLog can be configured to process both unstructured and UDM-formatted data. It can also provide additional functionality, such as passive network monitoring using the im_pcap module.
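For instance, a minimal sketch of such a passive network monitoring input is shown below. It assumes the NXLog Enterprise Edition im_pcap module, a capture interface named eth0, and DNS as the protocol of interest; consult the im_pcap documentation for the exact directives and supported protocols.
<Input from_network>
    Module  im_pcap
    # Network interface to capture traffic from (assumed interface name)
    Dev     eth0
    # Parse captured DNS packets into structured fields
    <Protocol>
        Type    dns
    </Protocol>
</Input>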

69.1.1. Forwarding logs to the Ingestion API
The Chronicle Partner Ingestion API is the universal, preferred method of delivering logs to Chronicle. This RESTful API accepts incoming data in the form of JSON payloads and uses API keys for authentication.
The Ingestion API provides endpoints for the following operations:
69.1.1.1. Listing log sources
The Ingestion API provides the logtypes
endpoint for retrieving a list of
over 400 unstructured log sources. You can call this endpoint with the actual
API key as shown below which returns a list of JSON objects, each comprised
of a logType
and description
field.
$ curl --header "Content-Type: application/json" --request GET https://malachiteingestion-pa.googleapis.com/v1/logtypes?key=<YOURAPIKEY>
Note: Each logType in this list can only be sent as unstructured data to Chronicle.
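Each log source in the returned list is a JSON object. The excerpt below is illustrative only; the exact response envelope, available log types, and description texts may differ in your Chronicle instance.
[
    {
        "logType": "BIND_DNS",
        "description": "BIND DNS Server"
    },
    {
        "logType": "WINDOWS_DNS",
        "description": "Windows DNS Server"
    }
]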
69.1.1.2. Sending unstructured data to Chronicle
As its name suggests, the unstructuredlogentries endpoint accepts unstructured log data. However, before forwarding, the log data must be wrapped in a JSON payload as displayed in the sample below.
{
    "log_type": "LOG_SOURCE",
    "entries": [
        {
            "log_text": "Log message"
        }
    ]
}
Note: The value for log_type must be one of the log source types returned by the logtypes endpoint described above.
The sample consists of the following fields:
- log_type specifies the log source.
- entries specifies an array of JSON objects, each comprised of a single key-value pair: {"log_text":"<an entire event as a string>"}.
- log_text specifies the entire event as a string value.
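For a quick manual test before configuring NXLog, a payload of this shape can be posted with curl. The command below is a sketch based on the endpoint and key format used throughout this chapter; replace <YOURAPIKEY> with your API key and substitute a supported log source and an actual log message.
$ curl --header "Content-Type: application/json" --request POST \
       --data '{"log_type": "LOG_SOURCE", "entries": [{"log_text": "Log message"}]}' \
       "https://malachiteingestion-pa.googleapis.com/v1/unstructuredlogentries?key=<YOURAPIKEY>"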
This example demonstrates how to configure NXLog to forward unstructured log data. In this example, the log_type is specified as BIND_DNS.
This sample DNS message was collected from a BIND DNS server.
16-Jan-2021 04:55:02.187 client 10.120.20.20#4238: query: yandex.com IN A + (100.90.80.102)
This NXLog configuration specifies Chronicle's base URL, endpoint, and API key by defining three constants at the beginning of the file. For the API_KEY constant, replace <YOURAPIKEY> with the actual API key you use for authentication.
Once the constants are defined, they are used to construct the value needed for the URL directive in the to_chronicle instance of the om_http output module. The Exec block is where the JSON payload is constructed.
# Defining constants to compose the domain name with the API key
define BASE_URL https://malachiteingestion-pa.googleapis.com/v1/
define ENDPOINT unstructuredlogentries
define API_KEY key=<YOURAPIKEY>

<Output to_chronicle>
    Module               om_http
    URL                  %BASE_URL%%ENDPOINT%?%API_KEY%
    HTTPSAllowUntrusted  TRUE
    ContentType          application/json
    <Exec>
        $raw_event = escape_json($raw_event);
        $raw_event = '{"log_type": "BIND_DNS",' \
                     + '"entries": [{"log_text":"'+ $raw_event +'"}]}';
    </Exec>
</Output>
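With this configuration, the sample BIND DNS message shown above is wrapped into the following JSON payload before being sent to Chronicle.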
{
    "log_type": "BIND_DNS",
    "entries": [
        {
            "log_text": "16-Jan-2021 04:55:02.187 client 10.120.20.20#4238: query: yandex.com IN A + (100.90.80.102)"
        }
    ]
}
69.1.1.3. Forwarding structured logs to Chronicle
For log sources not included in the list of unstructured log sources, the Unified Data Model (UDM) can be used to forward structured data to Chronicle. Examples of such log sources might be events associated with sending an email or creating a new user.
Before forwarding, the event fields need to be incorporated into the JSON structure that Chronicle expects for incoming structured data.
This example shows how to collect a Windows Event Log event with Event ID 4688 (a new process has been created), format it to the PROCESS_LAUNCH type, and forward the results to Chronicle.
This XML sample event represents an event that matched the QueryXML and filter defined in the from_eventlog instance of the NXLog configuration below.
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
<System>
<Provider Name="Microsoft-Windows-Security-Auditing" Guid="{54849625-5478-4994-A5BA-3E3B0328C30D}" />
<EventID>4688</EventID>
<Version>2</Version>
<Level>0</Level>
<Task>13312</Task>
<Opcode>0</Opcode>
<Keywords>0x8020000000000000</Keywords>
<TimeCreated SystemTime="2021-01-16T05:50:13.788209900Z" />
<EventRecordID>1683</EventRecordID>
<Correlation />
<Execution ProcessID="4" ThreadID="260" />
<Channel>Security</Channel>
<Computer>WIN-ET85AK2E1J1</Computer>
<Security />
</System>
<EventData>
<Data Name="SubjectUserSid">S-1-5-21-3213787892-1493673803-1430668809-500</Data>
<Data Name="SubjectUserName">Administrator</Data>
<Data Name="SubjectDomainName">WIN-ET85AK2E1J1</Data>
<Data Name="SubjectLogonId">0x25369</Data>
<Data Name="NewProcessId">0xcc8</Data>
<Data Name="NewProcessName">C:\Windows\System32\ftp.exe</Data>
<Data Name="TokenElevationType">%%1936</Data>
<Data Name="ProcessId">0xb14</Data>
<Data Name="CommandLine">ftp.exe</Data>
<Data Name="TargetUserSid">S-1-0-0</Data>
<Data Name="TargetUserName">-</Data>
<Data Name="TargetDomainName">-</Data>
<Data Name="TargetLogonId">0x0</Data>
<Data Name="ParentProcessName">C:\Windows\System32\cmd.exe</Data>
<Data Name="MandatoryLabel">S-1-16-12288</Data>
</EventData>
</Event>
To parse Windows Event Log entries, the NXLog configuration uses the im_msvistalog module.
The BASE_URL, ENDPOINT, and API_KEY constants are used to construct the value needed for the URL directive in the to_chronicle instance of the om_http output module. The Exec block constructs the UDM-formatted payload, which is then forwarded to Chronicle.
define BASE_URL https://malachiteingestion-pa.googleapis.com/v1/
define ENDPOINT udmevents
define API_KEY key=<YOURAPIKEY>

<Input from_eventlog>
    Module  im_msvistalog
    <QueryXML>
        <QueryList>
            <Query Id="0">
                <Select Path="Security">
                    *[System[Level=0 and (EventID=4688)]]
                </Select>
            </Query>
        </QueryList>
    </QueryXML>
    <Exec>
        if not ($NewProcessName =~ /.*ftp.exe/) drop();
        delete($Message);
    </Exec>
</Input>

<Output to_chronicle>
    Module               om_http
    URL                  %BASE_URL%%ENDPOINT%?%API_KEY%
    HTTPSAllowUntrusted  TRUE
    ContentType          application/json
    <Exec>
        $timestamp = strftime($EventTime,'YYYY-MM-DDThh:mm:ss.sTZ');
        $raw_event = '{"events":[{"metadata":{"event_timestamp":"' \
                     + $timestamp +'","event_type":"PROCESS_LAUNCH",' \
                     + '"vendor_name":"Microsoft","product_name":"Windows"},' \
                     + '"principal":{"hostname":"'+ $Hostname +'"},' \
                     + '"target":{"process":{"pid":"'+ $NewProcessId +'",' \
                     + '"file":{"full_path":"'+ $NewProcessName +'"}}}}]}';
        $raw_event = replace($raw_event,'\','\\');
    </Exec>
</Output>
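Based on the sample event above, this configuration produces the following UDM-formatted payload.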
{
    "events": [
        {
            "metadata": {
                "event_timestamp": "2021-01-15T21:50:13.788209-08:00",
                "event_type": "PROCESS_LAUNCH",
                "vendor_name": "Microsoft",
                "product_name": "Windows"
            },
            "principal": {
                "hostname": "WIN-ET85AK2E1J1"
            },
            "target": {
                "process": {
                    "pid": "0xcc8",
                    "file": {
                        "full_path": "C:\\Windows\\System32\\ftp.exe"
                    }
                }
            }
        }
    ]
}
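A UDM payload of this form can also be submitted manually for testing. The curl sketch below assumes the payload above has been saved to a hypothetical file named udm_event.json and reuses the udmevents endpoint and key format defined in the configuration; replace <YOURAPIKEY> with your actual API key.
$ curl --header "Content-Type: application/json" --request POST \
       --data @udm_event.json \
       "https://malachiteingestion-pa.googleapis.com/v1/udmevents?key=<YOURAPIKEY>"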
69.1.2. Forwarding logs using a central NXLog agent
Any NXLog agent can be configured to function as a central NXLog agent, which then relays events to one or more destinations for further processing and analysis. This is known as centralized log collection. In the case of Chronicle, a central NXLog agent receives logs, both structured and unstructured, from other NXLog agents installed locally on their respective log source hosts. The central NXLog agent can then interface directly with Chronicle to forward logs on behalf of the other NXLog agents.
The example below demonstrates a universal configuration for a central log collector that can handle all types of logs.
This configuration demonstrates how to read events produced by osquery on a Linux host using the im_file input module. After each event is read and parsed by the from_file instance, the event is routed to the om_tcp module for further processing.
To send osquery events to Chronicle, further processing is required, which can be done in the to_tcp instance of the om_tcp output module. After consulting the list of unstructured log sources, it is apparent that the unstructured Chronicle log type OSQUERY_EDR exists.
The Exec block is used for constructing the JSON payload structure Chronicle requires. First, the value of $raw_event is assigned to the log_text key. This object is then defined as the sole element of the top-level JSON array entries. The top-level object is defined by two key-value pairs: entries and log_type, which is assigned a value of OSQUERY_EDR. This JSON object is then sent over TCP to the central NXLog agent for relaying to Chronicle.
<Input from_file>
    Module  im_file
    File    '/var/log/osquery/osqueryd.snapshots.log'
    Exec    parse_json();
</Input>

<Output to_tcp>
    Module  om_tcp
    Host    192.168.31.157:10500
    <Exec>
        $raw_event = escape_json($raw_event);
        $raw_event = '{"log_type": "OSQUERY_EDR",' \
                     + '"entries": [{"log_text":"'+ $raw_event +'"}]}';
    </Exec>
</Output>
The following configuration is for the central NXLog agent. The from_tcp instance of the im_tcp module is configured to listen for incoming events on TCP port 10500 using the network interface with the IP address 192.168.31.157.
The to_chronicle instance of the om_http output module forwards the events to Chronicle. The BASE_URL, ENDPOINT, and API_KEY constants are used to construct the value needed for the URL directive. Since the events received over TCP were already processed to meet the JSON payload structure Chronicle requires, they are forwarded directly to Chronicle without further processing.
define BASE_URL https://malachiteingestion-pa.googleapis.com/v1/
define ENDPOINT unstructuredlogentries
define API_KEY key=<YOURAPIKEY>

<Input from_tcp>
    Module      im_tcp
    ListenAddr  192.168.31.157:10500
</Input>

<Output to_chronicle>
    Module               om_http
    URL                  %BASE_URL%%ENDPOINT%?%API_KEY%
    HTTPSAllowUntrusted  TRUE
    ContentType          application/json
</Output>
69.1.3. Forwarding log data via Chronicle Forwarder
Chronicle Forwarder, as introduced above, can only forward unstructured data. However, it is able to:
- run a Syslog server,
- accept events from Splunk, and
- provide passive network monitoring.
Chronicle Forwarder forwards events to Chronicle as soon as they are received.
The NXLog configuration below uses the im_etw module to collect log data from the Microsoft-Windows-DNS-Client provider. The to_tcp instance of the om_tcp output module is configured to send the DNS events to a Chronicle Forwarder agent listening on TCP port 10514, installed on a host with the IP address 192.168.31.178.
<Input from_dns>
    Module    im_etw
    Provider  Microsoft-Windows-DNS-Client
</Input>

<Output to_tcp>
    Module  om_tcp
    Host    192.168.31.178:10514
</Output>
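The following sample shows the fields of a DNS query event collected by the from_dns instance, rendered here as JSON.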
{
"SourceName": "Microsoft-Windows-DNS-Client",
"ProviderGuid": "{1C95126E-7EEA-49A9-A3FE-A378B03DDB4D}",
"EventID": 3008,
"Version": 0,
"ChannelID": 16,
"OpcodeValue": 0,
"TaskValue": 0,
"Keywords": "9223372036854775808",
"EventTime": "2021-01-20T10:34:40.130607-08:00",
"ExecutionProcessID": 3324,
"ExecutionThreadID": 2160,
"EventType": "INFO",
"SeverityValue": 2,
"Severity": "INFO",
"Hostname": "WIN-ET85AK2E1J1",
"Domain": "WIN-ET85AK2E1J1",
"AccountName": "Administrator",
"UserID": "S-1-5-21-3213787892-1493673803-1430668809-500",
"AccountType": "User",
"Flags": "EXTENDED_INFO|IS_64_BIT_HEADER|PROCESSOR_INDEX (577)",
"QueryName": "pipeline-incoming-prod-elb-149169523.us-west-2.elb.amazonaws.com",
"QueryType": "28",
"QueryOptions": "140737489406145",
"QueryStatus": "9501",
"EventReceivedTime": "2021-01-20T10:34:40.711997-08:00",
"SourceModuleName": "from_dns",
"SourceModuleType": "im_etw"
}
Below is the sample configuration of Chronicle Forwarder.
- syslog:
    common:
      enabled: true
      data_type: WINDOWS_DNS
      data_hint:
      batch_n_seconds: 10
      batch_n_bytes: 1048576
    tcp_address: 192.168.31.178:10514
    udp_address: 192.168.31.178:10515
    connection_timeout_sec: 60
69.2. Verifying data in Google Chronicle
Upon receipt, log data can be observed in Google Chronicle’s web interface:
- In a web browser, navigate to the Google Chronicle instance using the https://<customername>.backstory.chronicle.security address.
- To search for all entries, type a period (.) in the search field and click SEARCH.
- In the Raw Log Scan dialog, specify the search interval and click SEARCH.
- On the Raw Log Scan page, click any event of interest in the Asset pane to see its details.
- The right-hand pane can be used to toggle (show/hide) the Raw Log and/or UDM Event details for the selected event.