Azure Event Hubs is a big data streaming platform and event ingestion service from Microsoft. Data sent to an event hub can be transformed and stored by using any real-time analytics provider or batching/storage adapters.
NXLog can be configured to send data to Azure Event Hubs via the Kafka and HTTP protocols using the om_kafka and om_http modules. NXLog can also receive log data from Azure Event Hubs via the Kafka protocol using the im_kafka module.
Kafka requires at least the Standard tier, while HTTP works with any tier. For more information on tiers, see the What is the difference between Event Hubs Basic and Standard tiers? section in the Microsoft documentation. With both methods, a Shared Access Signature (SAS) is used for authentication.
In order to successfully forward and retrieve logs from Azure Event Hubs, an Azure account with an appropriate subscription is required.
Once created, the event hub can be found by browsing to the Home > Event Hubs > <YOURNAMESPACE> > Event Hubs page in the Azure portal. This page lists basic details about the event hub, along with graphs of the data flow. In addition, the left side panel serves as a control panel for managing the event hub.
NXLog can forward logs to an Event Hub via the Kafka protocol.
To configure NXLog, you need the following details:

- The value for the BrokerList directive. This is derived from the name of the namespace combined with a fixed URL and port number, for example <YOURNAMESPACE>.servicebus.windows.net:9093. Change the namespace to match your environment.
- The name of the event hub created in Azure, for the Topic directive.
- The name of your resource group, defined in an Option directive. Read more about What is a resource group in the Microsoft documentation.
- Either your primary key or your secondary key, needed to retrieve the connection string per the instructions in Get an Event Hubs connection string. The connection string is defined in an Option directive as the SASL password.
- A CA certificate, even though it is not listed as a requirement by Azure Event Hubs.
In this configuration, logs are forwarded to Azure Event Hubs by the om_kafka module.

```
<Output out>
    Module      om_kafka
    BrokerList  <YOURNAMESPACE>.servicebus.windows.net:9093
    Topic       <YOUREVENTHUB>
    Option      security.protocol SASL_SSL
    Option      group.id <YOURCONSUMERGROUP>
    Option      sasl.mechanisms PLAIN
    Option      sasl.username $ConnectionString
    Option      sasl.password <YOUR Connection string–primary key>
    CAFile      C:\Program Files\nxlog\cert\<ca.pem>
</Output>
```
NXLog can forward its collected logs to Azure Event Hubs via the HTTP protocol.
To configure NXLog, you need the following details:

- A shared access signature (SAS) token. The Microsoft documentation lists various scripts and methods for generating a SAS token. The example below uses the PowerShell script; the other methods and scripts were not tested.
- The name of the namespace you created, required for both the URL directive and the Host HTTP header set by the AddHeader directive.
Note: The PowerShell example can be executed in the Azure Cloud Shell using the Try it button.
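A SAS token can also be generated in other languages. The sketch below is a Python illustration of the signing scheme described in the Microsoft documentation (an HMAC-SHA256 signature over the URL-encoded resource URI and an expiry timestamp); the function name and parameters are placeholders, not part of NXLog or Azure tooling.

```python
import base64
import hashlib
import hmac
import time
import urllib.parse

def generate_sas_token(resource_uri, policy_name, key, ttl_seconds=3600):
    """Build an Event Hubs SAS token: an HMAC-SHA256 signature over the
    URL-encoded resource URI plus an expiry timestamp, signed with the
    shared access key of the given policy."""
    expiry = str(int(time.time()) + ttl_seconds)
    encoded_uri = urllib.parse.quote_plus(resource_uri)
    string_to_sign = (encoded_uri + "\n" + expiry).encode("utf-8")
    signature = hmac.new(key.encode("utf-8"), string_to_sign,
                         hashlib.sha256).digest()
    encoded_sig = urllib.parse.quote_plus(base64.b64encode(signature))
    return ("SharedAccessSignature sr={}&sig={}&se={}&skn={}"
            .format(encoded_uri, encoded_sig, expiry, policy_name))
```

The returned string is what goes into the Authorization header in the om_http configuration below.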
The om_http module also supports sending logs in batches via the BatchMode directive. The accepted values for this directive are multiline and multipart; however, Azure Event Hubs can only process logs sent with the multiline batching method.
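Conceptually, multiline batching concatenates several event records into one newline-delimited request body, so multiple events travel in a single HTTP POST. The snippet below is a minimal illustration of that idea only, not NXLog's actual implementation:

```python
def batch_multiline(events):
    """Illustrative only: join individual event records into one
    newline-delimited body, the shape a multiline batch takes when
    several records are sent in a single HTTP request."""
    return "\n".join(events).encode("utf-8")

# Two records become one request payload:
body = batch_multiline(['{"msg": "first"}', '{"msg": "second"}'])
```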
In this configuration, logs are sent to Azure Event Hubs using the om_http module.

```
<Output out>
    Module       om_http
    BatchMode    multiline
    URL          https://<YOURNAMESPACE>.servicebus.windows.net/nxlogeventhub/messages
    HTTPSCAFile  C:\cacert.pem
    AddHeader    Authorization: <YOURSASTOKEN>
    AddHeader    Content-Type: application/atom+xml;type=entry;charset=utf-8
    AddHeader    Host: <YOURNAMESPACE>.servicebus.windows.net
</Output>
```
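When troubleshooting, it can help to reproduce the same request outside NXLog. The hedged Python sketch below builds, but does not send, an HTTPS POST with the same URL and headers as the om_http configuration above; all names and values are placeholders.

```python
import urllib.request

def build_event_request(namespace, event_hub, sas_token, payload):
    """Construct an HTTPS POST mirroring the om_http settings:
    same URL shape, Authorization, Content-Type, and Host headers."""
    host = "{}.servicebus.windows.net".format(namespace)
    url = "https://{}/{}/messages".format(host, event_hub)
    return urllib.request.Request(
        url,
        data=payload,
        headers={
            "Authorization": sas_token,
            "Content-Type": "application/atom+xml;type=entry;charset=utf-8",
            "Host": host,
        },
        method="POST",
    )

# To actually send it, supply an SSL context with your CA file:
#   ctx = ssl.create_default_context(cafile="C:\\cacert.pem")
#   urllib.request.urlopen(req, context=ctx)
```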
There are several ways to confirm data reception in Azure Event Hubs.
The easiest way is to browse to the Home > Event Hubs > <YOURNAMESPACE> > Event Hubs page in the Azure portal, where Microsoft provides a chart displaying incoming and outgoing message counts as well as event throughput metrics.
Logs forwarded to Azure Event Hubs by NXLog can also be collected using the im_kafka module. The logs collected with this method are identical to the ones sent to Azure Event Hubs.
This configuration uses the same settings as the om_kafka configuration in the first example. The only difference is the direction of the log flow. This configuration collects the logs and writes them to a file.
```
<Input in>
    Module      im_kafka
    BrokerList  nxlognamespace.servicebus.windows.net:9093
    Topic       nxlogeventhub
    Option      security.protocol SASL_SSL
    Option      group.id nxlogconsumergroup
    Option      sasl.mechanisms PLAIN
    Option      sasl.username $ConnectionString
    Option      sasl.password <Connection string–primary key>
    CAFile      C:\Program Files\nxlog\cert\ca.pem
</Input>

<Output file>
    Module      om_file
    File        "C:\\logs\\logmsg.txt"
</Output>
```
Note: This section is for informational purposes only.
When deciding which method to use for sending logs to Azure Event Hubs, performance, throughput, and size can all be decisive factors. Note that measured throughput depends on a number of factors, including but not limited to:
The performance and resource availability of the node which NXLog runs on.
The performance and capability of any networking equipment between MS Azure Event Hubs and the machine NXLog runs on.
The quality of service provided by your ISP. This includes bandwidth restrictions as well.
Your geographic location and how you set up Azure Event Hubs.
The number of throughput units you have purchased with your Azure Event Hubs subscription.
In addition, it is worth considering which tier to use, since Kafka is not available in the Basic tier and therefore requires a more expensive subscription, according to the Event Hubs pricing.
In our tests, we used a single throughput unit and generated data with the im_testgen module. We concluded that both Kafka and HTTP work reliably, but HTTP offers better throughput, especially with batching enabled.
In any case, we strongly recommend thorough testing in your environment.