Hello,
I have installed NXLog Community Edition to collect table data from a PostgreSQL database, but the table does not contain an ID column. As I understand it, NXLog requires this field for bookmarking, which we don't have, so I'm looking for a workaround. I found a workaround at the following link, where the ID is constructed in the SELECT statement, but that article is not about PostgreSQL. Could someone please help me do the same for PostgreSQL?
https://nxlog.co/documentation/nxlog-user-guide/mssql.html
The second question: can we define a specific column (such as eventtime) as the ID (bookmark) with the following sample data?
2020-02-11 15:00:00.0000
2020-02-11 15:00:01.0001
2020-02-11 15:00:02.0002
2020-02-11 15:00:03.0000
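One possible sketch for PostgreSQL, assuming the im_dbi module with the pgsql libdbi driver: derive a numeric id from the eventtime column in the SELECT itself, in the spirit of the MSSQL article. All table and column names below (logtable, eventtime) and the connection options are hypothetical.

```
# A sketch only, not a verified solution. The SELECT computes a numeric
# "id" from the timestamp so the module has a field to bookmark on;
# the epoch-based value stays strictly increasing for this sample data.
<Input pgsql>
    Module  im_dbi
    Driver  pgsql
    Option  host 127.0.0.1
    Option  username nxlog
    Option  password secret
    Option  dbname logs
    SQL     SELECT (EXTRACT(EPOCH FROM eventtime) * 10000)::bigint AS id, * FROM logtable
</Input>
```

Whether im_dbi accepts a computed id column this way is an assumption based on the MSSQL article; it is worth verifying against the im_dbi documentation.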
Thanks in Advance!
Best Regards
SD
seckindemir created
jaredtully created
Hi all,
According to the documentation found here, the generic RPM doesn't include all available modules, as opposed to the distribution-specific RPMs:
The generic RPM above contains all the libraries (such as libpcre and libexpat)
that are needed by NXLog, the only dependency is libc.
However, some modules are not available (im_checkpoint, for example).
The advantage of the generic RPM is that it can be installed on most RPM-based Linux distributions.
Is there documentation for what modules are not available?
Are there any issues with deploying this version that I should know about up front?
Thanks!!
casey1234 created
How can we rename the NXLog package? When we place both RPMs into a Spacewalk channel, they both upload as "nxlog-ce-2.10.2150-1.x86_64.rpm", so one is treated as a duplicate. I hope renaming the RPM files will help, for example:
nxlog-ce-2.10.2150-1_rhel6.x86_64.rpm nxlog-ce-2.10.2150-1_rhel7.x86_64.rpm
Thanks! Ela
elango1 created
Our project is planning to reissue certificates for a large number of agents. Is there an API for certificate management so we can reissue certificates on all of these agents at the same time, without doing it manually (one by one)?
ryangumba created
Hi,
We are planning to deploy NXLog to thousands of endpoints and need to know when an agent is no longer sending data regularly.
Is there an established method for determining that NXLog is working normally at scale?
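One approach worth considering (a sketch only; the collector address and port below are hypothetical) is to forward each agent's own internal log messages to a central receiver, then alert when an agent goes quiet:

```
# Forward NXLog's internal messages (warnings, errors, startup notices)
# to a central collector as a liveness signal.
<Extension _syslog>
    Module  xm_syslog
</Extension>

<Input internal>
    Module  im_internal
</Input>

<Output monitor>
    Module  om_tcp
    Host    192.168.0.10    # hypothetical collector address
    Port    1514
    Exec    to_syslog_ietf();
</Output>

<Route internal_to_monitor>
    Path    internal => monitor
</Route>
```

On the collector side, a simple "no messages from host X in N minutes" rule then flags dead or disconnected agents.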
Thanks!
casey1234 created
mrkey148 created
Nofox created
I am currently running into an issue receiving syslog over ssl/tls. I cannot figure it out for the life of me!
Version: CE-2.10.2150
Error:
INFO SSL connection accepted from IP_ADDRESS:PORT
ERROR SSL certificate verification failed: unsupported certificate purpose (err: 26)
WARNING SSL connection closed from IP_ADDRESS:PORT
Config:
<Input in>
    Module          im_ssl
    Host            0.0.0.0
    Port            516
    AllowUntrusted  TRUE
    CAFile          %CERTDIR%%CA-PEM%
    CertFile        %CERTDIR%%CRT%
    CertKeyFile     %CERTDIR%%KEY%
    KeyPass         %PASSWORD%
</Input>
jstock created
casey1234 created
Attempting to log to a syslog server based on a filtered set of event IDs. With this config, I do not get any errors in the error log when the service starts, but nothing arrives at my syslog server. I'm not sure whether the problem is with the input or the output and would love some feedback.
Panic Soft
#NoFreeOnExit TRUE

define ROOT     C:\Program Files (x86)\nxlog
define CERTDIR  %ROOT%\cert
define CONFDIR  %ROOT%\conf
define LOGDIR   %ROOT%\data
define LOGFILE  %LOGDIR%\nxlog.log
LogFile %LOGFILE%

Moduledir %ROOT%\modules
CacheDir  %ROOT%\data
Pidfile   %ROOT%\data\nxlog.pid
SpoolDir  %ROOT%\data

<Extension _syslog>
    Module xm_syslog
</Extension>

define HighEventIds 4618, 4649, 4719, 4765, 4766, 4794, 4897, 4964, 5124, 1102
define MediumEventIds 4621, 4675, 4692, 4693, 4706, 4713, 4714, 4715, 4716, 4724, \
                      4727, 4735, 4737, 4739, 4754, 4755, 4764, 4764, 4780, 4816, \
                      4865, 4866, 4867, 4868, 4870, 4882, 4885, 4890, 4892, 4896, \
                      4906, 4907, 4908, 4912, 4960, 4961, 4962, 4963, 4965, 4976, \
                      4977, 4978, 4983, 4984, 5027, 5028, 5029, 5030, 5035, 5037, \
                      5038, 5120, 5121, 5122, 5123, 5376, 5377, 5453, 5480, 5483, \
                      5484, 5485, 6145, 6273, 6274, 6275, 6276, 6277, 6278, 6279, \
                      6280, 24586, 24592, 24593, 24594
define LowEventIds 4608, 4609, 4610, 4611, 4612, 4614, 4615, 4616, 4624, 4625, \
                   4634, 4647, 4648, 4656, 4657, 4658, 4660, 4661, 4662, 4663, \
                   4672, 4673, 4674, 4688, 4689, 4690, 4691, 4696, 4697, 4698, \
                   4699, 4700, 4701, 4702, 4704, 4705, 4707, 4717, 4718, 4720, \
                   4722, 4723, 4725, 4726, 4728, 4729, 4730, 4731, 4732, 4733, \
                   4734, 4738, 4740, 4741, 4742, 4743, 4744, 4745, 4746, 4747, \
                   4748, 4749, 4750, 4751, 4752, 4753, 4756, 4757, 4758, 4759, \
                   4760, 4761, 4762, 4767, 4768, 4769, 4770, 4771, 4772, 4774, \
                   4775, 4776, 4778, 4779, 4781, 4783, 4785, 4786, 4787, 4788, \
                   4789, 4790, 4869, 4871, 4872, 4873, 4874, 4875, 4876, 4877, \
                   4878, 4879, 4880, 4881, 4883, 4884, 4886, 4887, 4888, 4889, \
                   4891, 4893, 4894, 4895, 4898, 5136, 5137
<Input events>
    Module im_msvistalog
    <QueryXML>
        <QueryList>
            <Query Id="0" Path="Directory Service">
                <Select Path="Directory Service">
                    *[System[Provider[@Name='Microsoft-Windows-ActiveDirectory_DomainService']]]
                </Select>
            </Query>
        </QueryList>
    </QueryXML>
    <Exec>
        if $EventID NOT IN (%HighEventIds%)
           and $EventID NOT IN (%MediumEventIds%)
           and $EventID NOT IN (%LowEventIds%)
            drop();
    </Exec>
</Input>

<Output udp>
    Module om_udp
    Host   172.17.103.13
    Port   514
    Exec   to_syslog_snare();
</Output>

<Route uds_to_udp>
    Path events => udp
</Route>

<Extension _charconv>
    Module             xm_charconv
    AutodetectCharsets iso8859-2, utf-8, utf-16, utf-32
</Extension>
<Extension _exec> Module xm_exec </Extension>
<Extension _fileop> Module xm_fileop
# Check the size of our log file hourly, rotate if larger than 5MB
<Schedule>
Every 1 hour
Exec if (file_exists('%LOGFILE%') and \
(file_size('%LOGFILE%') >= 5M)) \
file_cycle('%LOGFILE%', 8);
</Schedule>
# Rotate our log file every week on Sunday at midnight
<Schedule>
When @weekly
Exec if file_exists('%LOGFILE%') file_cycle('%LOGFILE%', 8);
</Schedule>
</Extension>
smplegge created
I am looking for a Debian buster package. I already tried the stretch version, but it complained about unmet dependencies related to libssl:
The following packages have unmet dependencies:
nxlog-ce : Depends: libperl5.24 (>= 5.24.0) but it is not installable
Depends: libssl1.0.2 (>= 1.0.2d) but it is not installable
root@debian:~# dpkg --search libssl
libssl1.1:amd64: /usr/share/doc/libssl1.1
libssl1.1:amd64: /usr/share/doc/libssl1.1/changelog.Debian.gz
libssl-dev:amd64: /usr/lib/x86_64-linux-gnu/pkgconfig/libssl.pc
libssl1.1:amd64: /usr/share/doc/libssl1.1/NEWS.Debian.gz
libssl-dev:amd64: /usr/share/doc/libssl-dev/changelog.gz
libssl1.1:amd64: /usr/lib/x86_64-linux-gnu/libssl.so.1.1
libssl-dev:amd64: /usr/share/doc/libssl-dev
libssl-dev:amd64: /usr/lib/x86_64-linux-gnu/libssl.a
libssl-dev:amd64: /usr/share/doc/libssl-dev/copyright
libssl1.1:amd64: /usr/share/doc/libssl1.1/copyright
libssl-dev:amd64: /usr/lib/x86_64-linux-gnu/libssl.so
libssl-dev:amd64: /usr/share/doc/libssl-dev/changelog.Debian.gz
android-libboringssl: /usr/lib/x86_64-linux-gnu/android/libssl.so.0
libssl1.1:amd64: /usr/share/doc/libssl1.1/changelog.gz
root@debian:~# dpkg --search libperl
libperl5.28:amd64: /usr/share/doc/libperl5.28/changelog.Debian.gz
libperl5.28:amd64: /usr/lib/x86_64-linux-gnu/libperl.so.5.28
libperl5.28:amd64: /usr/share/doc/libperl5.28
libperl5.28:amd64: /usr/lib/x86_64-linux-gnu/libperl.so.5.28.1
libperl5.28:amd64: /usr/share/doc/libperl5.28/copyright
I think it's a matter of how the package was compiled, so I tried to compile it myself on Debian buster, but I got stuck on ./configure, which cannot find libcrypto (libssl-dev is installed, the libraries exist in the lib path, etc.).
I would appreciate it if anyone could share a binary package for the buster release.
Thank you.
nxpart created
casey1234 created
I am following the NXLog-to-Splunk guide here: https://nxlog.co/documentation/nxlog-user-guide/splunk.html, specifically section '93.3. Sending Specific Log Types for Splunk to Parse'. When testing, even using the config from the page verbatim, I still get an error (see further below).
<Input eventxml>
    Module          im_msvistalog
    Channel         Security
    CaptureEventXML TRUE
    Exec            $raw_event = $EventXML;
</Input>
<Output splunk_hec>
    Module    om_http
    URL       https://127.0.0.1:8088/services/collector/raw
    AddHeader Authorization: Splunk c6580856-29e8-4abf-8bcb-ee07f06c80b3
</Output>
This generates the following error: ERROR invalid keyword: CaptureEventXML at C:\Program Files (x86)\nxlog\conf\nxlog.conf
Any ideas? thanks
cpkg created
Hello, I'm using NXLog Community Edition to send logs from my firewalls through syslog. My output looks like this:

<Output log_to_file>
    Module om_file
    File   'c:\datas\firewall_' + $MessageSourceAddress + '.log'
</Output>

If my firewalls 192.168.1.1 and 192.168.1.2 are correctly configured, the result is two files: c:\datas\firewall_192.168.1.1.log and c:\datas\firewall_192.168.1.2.log. My problem now is rotating these files on a daily basis. I have tried the rotate_to procedure, but it seems to apply only to the first file.
How can I rotate multiple files whose names are based on a variable?
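One workaround worth trying (a sketch; the path and the $MessageSourceAddress field come from the config above, and the use of now()/strftime() is based on the NXLog language reference) is to sidestep rotation entirely by embedding the date in the filename, so each day starts a fresh file per source:

```
<Output log_to_file>
    Module om_file
    # Embed the current date so each day writes to a new file; old
    # files can then be archived or purged by an external scheduled job.
    File   'c:\datas\firewall_' + $MessageSourceAddress + '_' + \
           strftime(now(), '%Y-%m-%d') + '.log'
</Output>
```

This trades in-place rotation for date-stamped files, which tends to be simpler when the set of filenames is dynamic.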
Thank you !
ddm70 created
nxlog-ce 2.9.1716 on Windows 10/Server 2016.
Using om_udp seems to cause nxlog.exe to listen on an ephemeral port; om_tcp does not cause this. I can't find anything in the documentation that explains this behavior.
Please help.
MK
mkangindep created
After using the KVP parser I get fields with spaces in their names, for example "$Event Time" or "$Source Name".
I'm interested in two things:
- How can I work with these field names? For example, I tried the construction "$EventTime = $Event Time;" with many quoting variations: ", ', ), ], etc., but none of them work.
- Is it possible to prevent this situation? A sample of the message format is below:
"DeviceEvent: Virus found,IP Address: 10.X.X.X,Computer name: xxx-xxx,Source: Auto-Protect scan,Risk name: Infostealer.Gampass,Occurrences: 1,File path: X:\xxxx_xxx.exe,Description: ,Actual action: Moved back,Requested action: Quarantined,Secondary action: Deleted,Event time: 2020-01-21 17:24:58,Event Insert Time: 2020-01-21 17:27:06,End Time: 2020-01-21 17:59:17,Last update time: 2020-01-21 18:01:07,Domain Name: xxxx,Group Name: XXXX,Server Name: xx-xxx,User Name: SYSTEM,Source Computer Name: ,Source Computer IP: ,Disposition: Reputation was not used in this detection.........."
Stanislav created
Hello, I am sending a message with the hostname to my syslog server. My config is as follows:
define ROOT C:\Program Files (x86)\nxlog

Moduledir %ROOT%\modules
CacheDir  %ROOT%\data
Pidfile   %ROOT%\data\nxlog.pid
SpoolDir  %ROOT%\data
LogFile   %ROOT%\data\nxlog.log

<Extension _syslog>
    Module xm_syslog
</Extension>

<Input in>
    Module im_msvistalog
    <Exec>
        parse_syslog();
        $Message = "hostnamexxx" + $Message;
        to_syslog_ietf();
    </Exec>
</Input>

<Output out>
    Module om_udp
    Host   xx.xxx.xx
    Port   514
    Exec   to_syslog_ietf();
</Output>

<Route 1>
    Path in => out
</Route>
My log is coming with the message correctly:
Feb 12 23:11:34 DESKTOP-XXXXX Microsoft-Windows-Eventlog [964] hostnamexxxxINFO 1102 The audit log was cleared. Subject: Security ID: # xxxxxxxx-1001 Account Name: Admin Domain Name: DESKTOP-XXXXX Logon ID: 0xD438A
However, the string "hostnamexxx" ends up in the middle of the log, as you can see above, which disturbs my parser. Is there any way I can put "hostnamexxx" last in my log? Example:
Feb 12 23:11:34 DESKTOP-XXXXX Microsoft-Windows-Eventlog [964] INFO 1102 The audit log was cleared. Subject: Security ID: # xxxxxxxx-1001 Account Name: Admin Domain Name: DESKTOP-XXXXX Logon ID: 0xD438A hostnamexxxx
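A minimal change that should place the marker at the end (a sketch based on the input config above): append the string to $Message instead of prepending it, before formatting.

```
<Input in>
    Module im_msvistalog
    <Exec>
        parse_syslog();
        # Append instead of prepend so the marker lands at the end
        # of the formatted message.
        $Message = $Message + " hostnamexxx";
        to_syslog_ietf();
    </Exec>
</Input>
```
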
Thanks
GustavoM created
Hello everyone,
I'm having trouble architecting something with HMAC verification; any help would be welcome.
I'm trying to set up an architecture with three clients/servers, using hmac/hmac_check to guarantee the integrity of the logs. Logs 1 are created by client 1 and sent to client 2, which checks their integrity; logs 2 are created by client 2; and both logs 1 and 2 are sent in the end to client 3, which finally checks the integrity of both of them. Here is a "beautiful" scheme to illustrate:
client 1 ---hmac(logs 1)---> client 2 ---hmac_check(logs 1) + hmac(logs 2)---> client 3 ---hmac_check(logs 1) + hmac_check(logs 2) + hmac(logs 3)-->...
I would have multiple routes on each client, and I would use a different instance of each processor on each route to avoid errors like "processor X already used in route A and was not loaded in route B". Also, I would be using batchcompress between clients 1 and 2 but UDP between clients 2 and 3.
I'm wondering how you would do this. Would you open multiple UDP ports on client 3 to receive the logs coming from client 1 and client 2 independently and check the HMACs independently, or would you send both log streams to the same network port and check them with the same hmac_check processor? And would you use multiple routes to process logs coming from different clients independently, because of the HMAC integrity check?
Thank you in advance, Kind Regards,
Jean created
casey1234 created