
In a heavily loaded environment (around 250K values per minute through the Python module), NXLog leaks memory and runs into out-of-memory errors. Checked on Ubuntu 20 LTS and CentOS 8 Stream. After 15-20 minutes of running a Kafka Python script, nxlog consumes over 8 GB of RAM plus 3 GB of swap and crashes. Every restart or reload of the service frees the memory. Switching to the Perl module is not a solution either.

Has anyone seen the same behavior? Any tips on how to resolve it?

Asked September 6, 2022 - 9:18pm

Comments (2)

  • Klevin (NXLog)

    Hello Grzegorz,

    May I suggest creating a GitLab issue in our CE repository with all the details you have.

    Sharing your nxlog.conf and nxlog.log files will certainly help as well (please remember to redact sensitive information such as IPs, domains, etc.).

    Sincerely, Klevin

  • NenadM (NXLog)

    Hey Grzegorz,

    The time it takes for the RAM to be depleted seems too short for a typical memory leak. However, there are other things in NXLog that could cause this, especially if you have multiple output and processor modules. Please check the following paragraph from the NXLog documentation:

    The BatchSize and LogqueueSize directives affect the memory usage of NXLog. The following calculation shows the impact the LogqueueSize directive has on memory consumption:

    QueueMemoryConsumption = BatchSize × LogQueueSize × EventSize × (NumberOfOutputs + NumberOfProcessors)

    In addition, there is the unacknowledged network data in the output buffer, which may also amount to a considerable size. In an otherwise unobstructed network, the size of the unacknowledged data is the product of the event throughput and the round-trip latency:

    OutputBufferSize = EventSize × EPS × RoundTripLatency × NumberOfOutputs
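    The two formulas above can be turned into a quick back-of-the-envelope estimate. The sketch below is a hypothetical illustration, not part of NXLog itself: all input values (event size, batch size, queue size, latency) are placeholders you would substitute with the figures from your own nxlog.conf and environment.

    ```python
    # Rough estimate of NXLog queue and output-buffer memory usage,
    # following the two formulas quoted from the NXLog documentation.
    # All numbers below are hypothetical placeholders.

    def queue_memory(batch_size, log_queue_size, event_size, outputs, processors):
        """QueueMemoryConsumption = BatchSize * LogQueueSize * EventSize
        * (NumberOfOutputs + NumberOfProcessors)"""
        return batch_size * log_queue_size * event_size * (outputs + processors)

    def output_buffer(event_size, eps, round_trip_latency_s, outputs):
        """OutputBufferSize = EventSize * EPS * RoundTripLatency * NumberOfOutputs"""
        return event_size * eps * round_trip_latency_s * outputs

    if __name__ == "__main__":
        # Assumed example values: 512-byte events, BatchSize=50,
        # LogqueueSize=100, one output and one processor module,
        # ~4,167 EPS (250K events/minute), 50 ms network round trip.
        q = queue_memory(batch_size=50, log_queue_size=100, event_size=512,
                         outputs=1, processors=1)
        b = output_buffer(event_size=512, eps=250_000 / 60,
                          round_trip_latency_s=0.05, outputs=1)
        print(f"queue memory : {q / 1024**2:.1f} MiB")
        print(f"output buffer: {b / 1024:.1f} KiB")
    ```

    With these example numbers the queue accounts for only a few MiB, so an 8 GB footprint points at either very large values for these directives, many output/processor modules, or something outside these formulas (e.g. the embedded Python interpreter holding references).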

Answers (0)