Structured logging - why should you use it?

Structured logging is on the rise. A lot of tools and logging services are finally moving towards structured logging, and JSON seems to be the format of choice for it.

But what is structured logging? Traditionally, logs were generated as free-form text messages prepended with some basic metadata such as the time of the event, its severity, and its source. This is what the old RFC 3164 style syslog format looks like:

<30>Nov 21 11:40:27 myhost sshd[26459]: Accepted publickey for myhost from port 424242 ssh2

This traditional syslog format has a header consisting of the severity, facility, timestamp, hostname and process name, followed by a free-form text string that optionally contains additional metadata. The advantage of this format is that the log data can be easily stored in plain text files for humans to look at.
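With structured logging, the same event is emitted as a set of named fields instead of a single text string, so downstream tools can filter and query it without fragile regex parsing. The sshd example above could look roughly like this in JSON (the field names here are illustrative, not a fixed schema; the `<30>` priority decodes to facility daemon and severity info):

```json
{
  "EventTime": "Nov 21 11:40:27",
  "Hostname": "myhost",
  "Facility": "daemon",
  "Severity": "info",
  "SourceName": "sshd",
  "ProcessID": 26459,
  "Message": "Accepted publickey for myhost from port 424242 ssh2"
}
```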

Using NXLog with Elasticsearch and Kibana

The popularity of the ELK stack is steadily rising, and many NXLog users send their event data to Elasticsearch and Kibana for log monitoring and analytics.

There are many tutorials and configurations scattered around the web, and some come with configuration samples that will likely not work properly. For this reason, we have written a short document introducing the different options for using NXLog with Elasticsearch and Kibana; it is available on the documentation page.
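As a minimal sketch of one of those options, NXLog can post events to Elasticsearch over its HTTP API using the om_http module with to_json(). The file path, host, port and index name below are assumptions for illustration, not values from this post; see the documentation for complete, tested configurations:

```
# Minimal sketch only; adjust the path, host and index to your setup.
<Extension _syslog>
    Module  xm_syslog
</Extension>

<Extension _json>
    Module  xm_json
</Extension>

<Input messages>
    Module  im_file
    # Assumed input file; replace with your actual log source.
    File    "/var/log/messages"
    Exec    parse_syslog();
</Input>

<Output elasticsearch>
    Module      om_http
    # Assumed local Elasticsearch instance and index/type names.
    URL         http://localhost:9200/nxlog/events
    ContentType application/json
    Exec        to_json();
</Output>

<Route r>
    Path    messages => elasticsearch
</Route>
```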

Sending logs to Loggly

Loggly offers cloud-based storage and analytics services for log data. NXLog can be used to collect logs and send them to the Loggly service.

Below is a configuration that can be used as a starting point. Make sure to set the value of CUSTOMER_TOKEN properly. If you are unsure where to get this, see the article about the customer token in the Loggly support center.
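A minimal sketch along those lines might ship events as JSON to Loggly's HTTP event endpoint via om_http. The input file path and the "nxlog" tag below are assumptions for illustration; CUSTOMER_TOKEN is a placeholder you must replace with your own token:

```
# Minimal sketch only; replace CUSTOMER_TOKEN with your Loggly customer token.
<Extension _json>
    Module  xm_json
</Extension>

<Input in>
    Module  im_file
    # Assumed input file; point this at your actual log source.
    File    "/var/log/app.log"
</Input>

<Output loggly>
    Module      om_http
    # Loggly HTTP event endpoint; the "nxlog" tag is an example value.
    URL         http://logs-01.loggly.com/inputs/CUSTOMER_TOKEN/tag/nxlog/
    ContentType application/json
    Exec        to_json();
</Output>

<Route r>
    Path    in => loggly
</Route>
```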