NXLog User Guide
Note: To examine the supported platforms, see the list of installer packages in the Available Modules chapter.

Warning: The om_kafka module is not supported on AIX because the underlying librdkafka library is unstable on that platform, which could make om_kafka unstable as well.
The module uses an internal persistent queue to back up event records that are to be pushed to a Kafka broker. Once the module receives an acknowledgment from the Kafka server that a message has been delivered successfully, it removes the corresponding message from the internal queue. If the module is unable to deliver a message to a Kafka broker (for example, due to connectivity issues or the Kafka server being down), the message is retained in the internal queue (it persists across NXLog restarts) and the module will attempt to deliver it again.
The number of re-delivery attempts can be specified by passing the message.send.max.retries property via the Option directive (for example, Option message.send.max.retries 5). By default, the number of retries is 2 and the interval between two subsequent retries is 5 minutes. Thus, by altering the number of retries, it is possible to control the total time a message remains in the internal queue. If a message cannot be delivered within the allowed retry attempts, it is dropped. The maximum size of the internal queue defaults to 100 messages; to increase it, use the LogqueueSize directive.
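As a sketch of how these directives fit together (the broker address, topic, and values below are illustrative placeholders, not defaults):

```
<Output to_kafka>
    Module        om_kafka
    BrokerList    localhost:9092
    Topic         nxlog
    # Allow up to 5 re-delivery attempts instead of the default 2
    Option        message.send.max.retries 5
    # Raise the internal queue limit from the default 100 messages
    LogqueueSize  50000
</Output>
```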
This mandatory directive specifies the list of Kafka brokers to connect to for publishing logs. The list should be comma-delimited and include port numbers (for example, localhost:9092,192.168.88.35:19092).
This mandatory directive specifies the Kafka topic to publish records to.
This specifies the path of the certificate authority (CA) certificate that will be used to verify the certificate presented by the remote brokers. A remote broker’s self-signed certificate (one not signed by a CA) can be trusted by specifying that certificate itself. For certificates signed by an intermediate CA, the specified file must contain the complete certificate chain (certificate bundle). CAFile is required if Protocol is set to ssl or sasl_ssl.
This specifies the path of the certificate file that will be presented to the remote broker during the SSL handshake.
This specifies the path of the private key file that was used to generate the certificate specified by the CertFile directive. This is used for the SSL handshake.
This directive specifies the compression type to use during transfer. The available types depend on the Kafka library and should include none, gzip, snappy, and lz4.
This directive specifies the passphrase of the private key specified by the CertKeyFile directive. A passphrase is required when the private key is encrypted. For example, the following OpenSSL command generates a private key with Triple DES encryption:
$ openssl genrsa -des3 -out server.key 2048
This directive is not needed for passwordless private keys.
This directive can be used to pass a custom configuration property to the Kafka library (librdkafka). For example, the group ID string can be set with Option group.id mygroup. This directive may be used more than once to specify multiple properties. For a list of configuration properties, see the librdkafka CONFIGURATION.md file.

Warning: Passing librdkafka configuration properties via the Option directive should be done with care, since these properties are used for fine-tuning librdkafka performance and may cause various side effects.
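For instance, a configuration passing two librdkafka properties could look like the following (the property values, broker, and topic are illustrative only):

```
<Output to_kafka>
    Module      om_kafka
    BrokerList  localhost:9092
    Topic       nxlog
    # Each Option directive passes one librdkafka property
    Option      group.id mygroup
    Option      compression.codec gzip
</Output>
```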
This optional integer directive specifies the topic partition to write to. If this directive is not given, messages are sent without a partition specified.
This optional directive specifies the protocol to use for connecting to the Kafka brokers. Accepted values include plaintext, ssl, sasl_plaintext, and sasl_ssl. If Protocol is set to ssl or sasl_ssl, then the CAFile directive must also be provided.
This directive specifies the Kerberos service name to be used for SASL authentication. The service name is required for the sasl_plaintext and sasl_ssl protocols.
This specifies the client’s Kerberos principal name for the sasl_plaintext and sasl_ssl protocols. This directive is only available and mandatory on Linux/UNIX. See the note below.
This specifies the path to the Kerberos keytab file that contains the client’s allocated principal name. This directive is only available and mandatory on Linux/UNIX.
The SASLKerberosServiceName and SASLKerberosPrincipal directives are only available on Linux/UNIX. On Windows, the login user’s principal name and credentials are used for SASL/Kerberos authentication.
For details about configuring Apache Kafka brokers to accept SASL/Kerberos authentication from clients, follow the instructions in the Apache Kafka documentation.
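A minimal sketch of a SASL/Kerberos setup on Linux/UNIX follows. The broker host, principal, and keytab path are placeholders for your environment, and SASLKerberosKeytab is assumed as the name of the keytab directive described above:

```
<Output to_kafka>
    Module                   om_kafka
    BrokerList               kafkabroker.example.com:9092
    Topic                    nxlog
    Protocol                 sasl_plaintext
    # Kerberos service name the brokers are configured with
    SASLKerberosServiceName  kafka
    # Client principal and its keytab (Linux/UNIX only)
    SASLKerberosPrincipal    nxlog-client@EXAMPLE.COM
    SASLKerberosKeytab       /etc/nxlog/nxlog-client.keytab
</Output>
```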
This configuration sends events to a Kafka cluster using the brokers specified. Events are published to the first partition of the nxlog topic.
<Output out>
    Module        om_kafka
    BrokerList    localhost:9092,192.168.88.35:19092
    Topic         nxlog
    LogqueueSize  100000
    Partition     0
    Protocol      ssl
    CAFile        /root/ssl/ca-cert
    CertFile      /root/ssl/client_debian-8.pem
    CertKeyFile   /root/ssl/client_debian-8.key
    KeyPass       thisisasecret
</Output>
The librdkafka library can produce performance statistics and format them in JSON. All fields in the JSON structure are explained on the STATISTICS.md page of the librdkafka project on the GitHub website. NXLog can be configured to poll this data at a specified fixed interval, and the result can be saved to the internal logger.
To read statistical data from the librdkafka library, the polling interval, in milliseconds, needs to be specified via the Option directive using the statistics.interval.ms property.
To have the librdkafka statistics produced and delivered synchronously, the statistics.interval.ms option and the Schedule block should specify the same interval.
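Putting the two together, such a configuration might look like the following sketch, with both the librdkafka property and the Schedule block set to 10 seconds. The broker and topic are illustrative, and the Exec body is a placeholder; consult the om_kafka reference for the exact statistics retrieval procedure:

```
<Output to_kafka>
    Module      om_kafka
    BrokerList  localhost:9092
    Topic       nxlog
    # Ask librdkafka to emit statistics every 10000 ms
    Option      statistics.interval.ms 10000
    <Schedule>
        # Poll at the same interval so statistics arrive synchronously
        Every   10 sec
        Exec    log_info("collecting librdkafka statistics");
    </Schedule>
</Output>
```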