65. Elasticsearch and Kibana
Elasticsearch is a search engine and document database that is commonly used to store logging data. Kibana is a popular user interface and querying front-end for Elasticsearch. Kibana is often used with the Logstash data collection engine—together forming the ELK stack (Elasticsearch, Logstash, and Kibana).
However, Logstash is not actually required to load data into Elasticsearch. NXLog can do this as well, and offers several advantages over Logstash—this is the KEN stack (Kibana, Elasticsearch, and NXLog).
- Because Logstash is written in Ruby and requires Java, it has high system resource requirements. NXLog has a small resource footprint and is recommended by many ELK users as the log collector of choice for Windows and Linux.
- Due to the Java dependency, Logstash requires system administrators to deploy the Java runtime onto their production servers and keep up with Java security updates. NXLog does not require Java.
- Elastic's Logstash wmi input plugin creates events based on the results of a WMI query. This method incurs a significant performance penalty. NXLog uses the native Windows Event Log API to capture Windows events more efficiently.
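As a sketch of the native approach, the im_msvistalog module reads events directly through the Windows Event Log API. The channel and query below are illustrative only; adjust them to the events you need to collect.

```
<Input eventlog>
    Module  im_msvistalog
    # Illustrative query: collect critical, error, and warning
    # events from the System channel
    <QueryXML>
        <QueryList>
            <Query Id="0">
                <Select Path="System">
                    *[System[(Level=1 or Level=2 or Level=3)]]
                </Select>
            </Query>
        </QueryList>
    </QueryXML>
</Input>
```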
The following sections explain how to configure NXLog to:
- send logs directly to Elasticsearch, replacing Logstash; or
- forward collected logs to Logstash, acting as a log collector for Logstash.
65.1. Sending logs to Elasticsearch
Consult the Elasticsearch Reference and the Kibana User Guide for more information about installing and configuring Elasticsearch and Kibana. For NXLog Enterprise Edition 3.x, see Using Elasticsearch With NXLog Enterprise Edition 3.x in the Reference Manual.
- Configure NXLog.
Example 289. Using om_elasticsearch

The om_elasticsearch module is only available in NXLog Enterprise Edition. Because it sends data in batches, it reduces the effect of the latency inherent in HTTP responses, allowing the Elasticsearch server to process the data much more quickly (10,000 EPS or more on low-end hardware).

```
<Extension _json>
    Module          xm_json
</Extension>

<Output out>
    Module          om_elasticsearch
    URL             http://localhost:9200/_bulk
    FlushInterval   2
    FlushLimit      100

    # Create an index daily
    Index           strftime($EventTime, "nxlog-%Y%m%d")

    # Use the following if you do not have $EventTime set
    #Index          strftime($EventReceivedTime, "nxlog-%Y%m%d")
</Output>
```
Example 290. Using om_http

For NXLog Community Edition, the om_http module can be used instead to send logs to Elasticsearch. Because it sends a request to the Elasticsearch HTTP REST API for each event, the maximum logging throughput is limited by HTTP request and response latency. Therefore this method is suitable only for low-volume logging scenarios.

```
<Extension _json>
    Module      xm_json
</Extension>

<Output out>
    Module      om_http
    URL         http://localhost:9200
    ContentType application/json
    <Exec>
        set_http_request_path(strftime($EventTime, "/nxlog-%Y%m%d/" + $SourceModuleName));
        rename_field("timestamp", "@timestamp");
        to_json();
    </Exec>
</Output>
```
- Restart NXLog, and make sure the event sources are sending data. This can be checked with `curl -X GET 'localhost:9200/_cat/indices?v&pretty'`. There should be an index matching `nxlog*`, and its `docs.count` counter should be increasing.
- Configure the appropriate index pattern for Kibana.
  - Open Management on the left panel and click on Index Patterns.
  - Set the Index pattern to `nxlog*`. A matching index should be listed. Click > Next step.
  - Set the Time Filter field name selector to `EventTime` (or `EventReceivedTime` if the `$EventTime` field is not set by the input module). Click Create index pattern.
- Test that the NXLog and Elasticsearch/Kibana configuration is working by opening Discover on the left panel.
65.2. Forwarding logs to Logstash
NXLog can be configured to act as a log collector, forwarding logs to Logstash in JSON format.
- Set up a configuration on the Logstash server to process incoming event data from NXLog.

  logstash.conf

```
input {
  tcp {
    codec => json_lines { charset => CP1252 }
    port => "3515"
    tags => [ "tcpjson" ]
  }
}

filter {
  date {
    locale => "en"
    timezone => "Etc/GMT"
    match => [ "EventTime", "YYYY-MM-dd HH:mm:ss" ]
  }
}

output {
  elasticsearch {
    host => localhost
  }
  stdout { codec => rubydebug }
}
```
Note: The `json` codec in Logstash sometimes fails to properly parse JSON; it will concatenate more than one JSON record into one event. Use the `json_lines` codec instead.

Although the im_msvistalog module converts data to UTF-8, Logstash seems to have trouble parsing that data. The `charset => CP1252` setting seems to help.
- Configure NXLog.
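The example configuration for this step is not shown here. As a minimal sketch matching the Logstash input above, NXLog can collect Windows events with im_msvistalog, convert each record to JSON with to_json(), and send it over TCP to port 3515. The hostname is a placeholder for your Logstash server.

```
<Extension _json>
    Module  xm_json
</Extension>

<Input eventlog>
    Module  im_msvistalog
</Input>

<Output logstash>
    Module  om_tcp
    # Placeholder host: replace with your Logstash server's address
    Host    logstash.example.com
    Port    3515
    Exec    to_json();
</Output>

<Route r>
    Path    eventlog => logstash
</Route>
```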
- Restart NXLog.