om_tcp - reconnect(); on a schedule
Jakauppila created
I have a series of file inputs and TCP outputs. The output targets are geo-balanced across data centers, and traffic is directed based on the originating request. If we need to take one of the two collectors down, all the agents would point at one side. Because of this, I want NXLog to reconnect to the output target at a particular interval. How do you properly use the 'reconnect();' procedure? We have a series of inputs using the same outputs.
<Output Tomcat_App_Logs_Out>
Module om_tcp
Host NP-NC-TOMCAT.Contoso.com
Port 4112
<Schedule>
Every 1 min
Exec reconnect();
</Schedule>
</Output>
ERROR maximum number of fields reached, limit is 50
Juan Andrés.Ramirez created
Hello Guys,
I downloaded and am using version nxlog-ce-2.9.1504.msi and everything is working well, but I have to parse logs from Hadoop and I hit the maximum-number-of-fields error; my Hadoop logs have 80 fields.
Here is my module to parse this:
<code>
<Extension hp>
Module xm_csv
Fields $date,$jobname,$jobid,$username,$jobpriority,$jobstatus,$totalmaps,$totalreduces,$failedmaps,$failedreduces,$submittime,$launchtime,$finishtime,$mapavgtime,$reduceavgtime,$mapmaprfsbytesread,$reducemaprfsbytesread,$mapmaprfsbyteswritten,$reducemaprfsbyteswritten,$mapfilebyteswritten,$reducefilebyteswritten,$mapinputrecords,$mapoutputbytes,$mapspilledrecords,$reduceshufflebytes,$reducespilledrecords,$mapcpumilliseconds,$reducecpumilliseconds,$combineinputrecords,$combineoutputrecords,$reduceinputrecords,$reduceinputgroups,$reduceoutputrecords,$mapgctimeelapsedmilliseconds,$reducegctimeelapsedmilliseconds,$mapphysicalmemorybytes,$reducephysicalmemorybytes,$mapvirtualmemorybytes,$reducevirtualmemorybytes,$maptaskmaxtime,$successattemptmaxtime_maptaskmaxtime,$allattemptmaxtime_maptaskmaxtime,$server_successattemptmaxtime_maptaskmaxtime,$server_allattemptmaxtime_maptaskmaxtime,$maptaskmintime,$maptaskmaxinput,$maptaskmininput,$maptaskinputformaxtime,$maptaskinputformintime,$reducetaskmaxtime,$successattemptmaxtime_reducetaskmaxtime,$allattemptmaxtime_reducetaskmaxtime,$server_successattemptmaxtime_reducetaskmaxtime,$server_allattemptmaxtime_reducetaskmaxtime,$reducetaskmintime,$reducetaskmaxinput,$reducetaskmininput,$reducetaskinputformaxtime,$reducetaskinputformintime,$jobpool,$io_sort_spill_percent,$shuffle_input_buffer_percent,$io_sort_mb,$io_sort_factor,$map_class,$reduce_class,$inputformat_class,$output_compress,$output_compression_codec,$compress_map_output,$map_output_compression_codec,$input_dir,$output_dir,$map_jvm,$reduce_jvm,$working_dir,$java_command,$job_submithost,$reduce_parallel_copies,$racklocalmaps,$datalocalmaps,$totallaunchedmaps,$fallowreduces,$fallowmaps,$mapoutputrecords,$dummy
FieldTypes text,text,text,text,text,text,integer,integer,integer,integer,integer,integer,integer,integer,integer,integer,integer,integer,integer,integer,integer,integer,integer,integer,integer,integer,integer,integer,integer,integer,integer,integer,integer,integer,integer,integer,integer,integer,integer,integer,integer,integer,text,text,integer,integer,integer,integer,integer,integer,integer,integer,text,text,integer,integer,integer,integer,integer,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,text,integer,integer,integer,text,text,integer,text
Delimiter \t
</Extension>
#
<Input hadoop>
Module im_file
File 'E:\\Hadoop\\analytics.sp_hadoop_stats.txt'
SavePos TRUE
Recursive TRUE
Exec if ( $raw_event =~ /^#/ or size($raw_event) == 0 ) drop(); \
else \
{ \
hp->parse_csv(); \
$EventTime = parsedate($date); \
$EventTime = strftime($EventTime, "%Y-%m-%d"); \
$SourceName = "Hadoop"; \
$hostname = hostname(); \
to_json(); \
}
</Input>
</code>
Would it be possible to increase the field limit in the source, or to add an option for it in the nxlog.conf file?
Thank you.
trying to post an existing Json file to a remote web api
gregB created
We use log4net to produce log files, with the JSON extension for log4net, so the file output is as follows:
{
"date":"2016-03-18T13:49:36.9504697-04:00","level":
"ERROR",
"appname":"log4net_json.vshost.exe",
"logger":"log4net_json.Program",
"thread":"9",
"ndc":"(null)",
"message":"System.DivideByZeroException: Attempted to divide by zero.\r\n at log4net_json.Program.Main() in c:\\temp\\tryMeOut\\log4net.Ext.J
son.Example-master\\log4net_json\\log4net_json\\Program.cs:line 20"
}
We want to use NXLog to consume the file and post the results to a remote web API that will insert the JSON values into our database. I have tried loads of configuration variations and spent hours on the internet / reading the documentation with no luck. Below is one of the config files I have used; however, each time I send it out I only get the first character of the text, the "{". The rest is missing and will not post.
define ROOT C:\Program Files (x86)\nxlog
Moduledir %ROOT%\modules
CacheDir %ROOT%\data
Pidfile %ROOT%\data\nxlog.pid
SpoolDir %ROOT%\data
LogFile %ROOT%\data\nxlog.log
LogLevel INFO
<Extension _syslog>
Module xm_syslog
</Extension>
<Input in>
Module im_file
File "C:/temp/WOWLog.xml"
InputType LineBased
SavePos FALSE
ReadFromLast FALSE
Exec if $raw_event =~ /^--/ drop();
Exec $raw_event = replace($raw_event, "\r\n", ";");
</Input>
<Output out>
Module om_http
URL http://localhost:51990/api/LogFile/
# HTTPSAllowUntrusted TRUE
</Output>
<Route 1>
Path in => out
</Route>
Looking for the correct settings for the configuration file. As a sanity check, I have been able to send the same file to the remote web API via a .NET console application with a web client.
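Since im_file with InputType LineBased treats each physical line as a separate event, the pretty-printed JSON shown above is split into one event per line, which would explain why only the opening "{" arrives at the API. A minimal sketch of one approach, assuming the xm_multiline extension is available in this build (the instance name json_multiline is made up for illustration):

```
# Join physical lines into one logical record; a new record starts at "{"
<Extension json_multiline>
    Module      xm_multiline
    HeaderLine  /^{/
</Extension>

<Input in>
    Module      im_file
    File        "C:/temp/WOWLog.xml"
    InputType   json_multiline
    SavePos     FALSE
    ReadFromLast FALSE
</Input>
```

Alternatively, configuring log4net to emit one JSON object per line would avoid multi-line parsing entirely.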
Help with connecting NXLog to Symantec MSS
Alex.Gregor created
Hi NXLog Helpers,
I am looking for some help getting NXLog connected to Symantec MSS (Managed Security Services) and am kind of at the end of my rope with this. Right now I am getting the following error and was wondering what I am missing. I am using this as my conf file:
define ROOT C:\Program Files (x86)\nxlog
Moduledir %ROOT%\modules
CacheDir %ROOT%\data
Pidfile %ROOT%\data\nxlog.pid
SpoolDir %ROOT%\data
LogFile %ROOT%\data\nxlog.log
<Extension syslog>
Module xm_syslog
</Extension>
<Input internal>
Module im_internal
</Input>
<Input in>
Module im_msvistalog
# For windows 2003 and earlier use the following:
# Module im_mseventlog
</Input>
<Output out>
Module om_udp
Host (IP Address has been removed)
Port 514
Exec to_syslog_snare();
</Output>
<Route 1>
Path eventlog, in => out
</Route>
I have also attached my log file; if you need any more information, let me know.
Any help on this would be amazing and help me out a ton.
Thank you NXLog helpers, you guys/gals will save my day.
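One thing that stands out in the config above: the route references an input named eventlog, but the only inputs declared are internal and in. A sketch of the route using only the declared inputs (an assumption about the intent, not a confirmed fix for the attached error):

```
# Route only the inputs that are actually declared above
<Route 1>
    Path internal, in => out
</Route>
```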
DateTime format conversion
Ascendo created
Hi all
I'm trying to forward logs to my Graylog server using nxlog, and it's working fine, except for one minor problem which I've been unable to fix:
The date/time format in the log is as follows:
2016/03/17 07:06:27 AM Message
I have been able to extract the date into $1 and time into $2 with regex (and message into $3) without an issue. However, I'm unable to parse the combination of the two as a date and get it into 24H format using parsedate or strptime.
Any ideas how I can populate $EventTime with the date + 24H time format from the above? Everything I try seems to result in the field being undefined.
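A minimal sketch of one approach, using strptime() with a 12-hour format string (%I for the 12-hour clock and %p for the AM/PM marker); the capture fields follow the regex described above and are an assumption:

```
# $1 = "2016/03/17", $2 = "07:06:27 AM" (captured by the regex described above)
Exec $EventTime = strptime($1 + " " + $2, "%Y/%m/%d %I:%M:%S %p");
```

Once $EventTime holds a datetime value, strftime($EventTime, "%H:%M:%S") should render it in 24-hour form.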
Thanks
Recursive file_remove
Jakauppila created
Is there any way to recursively delete files with file_remove?
I have applications logging in the following structure:
D:\Applogs\App1\Access-3172016.log
D:\Applogs\App2\Access-3162016.log
We're able to define an input and collect the logs no problem with the following definition:
<Input Access_Logs>
Module im_file
File "D:\\AppLogs\\Access*.log"
Recursive TRUE
...
</Input>
The number and variance of apps is large, so ideally I would want to schedule log deletion with the following:
<Extension fileop>
Module xm_fileop
<Schedule>
Every 1 min
Exec file_remove('D:\\AppLogs\\Access-*.log', (now() - 259200 ));
</Schedule>
</Extension>
But this looks only in the directory specified; it appears you cannot recursively search the directory hierarchy for files the way you can in the input.
Is there any way I can get this functionality?
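If file_remove() only matches the literal directory given, one workaround is to shell out on a schedule via xm_exec. This sketch assumes the Windows forfiles utility is acceptable (/S recurses, /D -3 matches files older than 3 days, matching the 259200-second cutoff above):

```
<Extension exec>
    Module  xm_exec
</Extension>

<Extension fileop>
    Module  xm_fileop
    <Schedule>
        Every 1 min
        # Recursively delete Access-*.log files older than 3 days under D:\AppLogs
        Exec exec_async("cmd.exe", "/c", 'forfiles /P D:\AppLogs /S /M Access-*.log /D -3 /C "cmd /c del @path"');
    </Schedule>
</Extension>
```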
ODBC ASSERTION FAILED
Marazm created
Hello everyone!
When I use im_odbc I get an error in the log file:
2016-03-17 11:57:38 INFO nxlog-2.10.1577-trial started
2016-03-17 11:57:45 ERROR ### ASSERTION FAILED at line 480 in im_odbc.c/im_odbc_execute(): "imconf->results.num_col > 0" ###
Does anybody know what this means and what I should do?
I am using nxlog-2.10.1577 with the following config:
define ROOT C:\Program Files (x86)\nxlog
Moduledir %ROOT%\modules
CacheDir %ROOT%\data
Pidfile %ROOT%\data\nxlog.pid
SpoolDir %ROOT%\data
LogFile %ROOT%\data\nxlog.log
LogLevel INFO
<Extension gl>
Module xm_gelf
</Extension>
# INPUTS ------------------------------------------------------
<Input in6>
Module im_odbc
ConnectionString DSN=PROD; database=BACAS_SRV;
PollInterval 250
# EXCEPTIONS IN DATA
SQL {CALL dbo.uspGetLoggingExtendedUpperLakes}
SavePos TRUE
Exec $Hostname = "UpperLakes";
Exec $SourceName = "UpperLakes";
</Input>
# OUTPUTS ------------------------------------------------------
<Output out6>
Module om_udp
Host 192.168.9.25
Port 4449
OutputType GELF_UDP
</Output>
# ROUTES ------------------------------------------------------
<Route 6>
Priority 6
Path in6 => out6
</Route>
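The assertion "imconf->results.num_col > 0" indicates that the first result set returned by the statement had no columns. With SQL Server stored procedures, one common cause is row-count messages being returned ahead of the actual SELECT; a hedged sketch, assuming the procedure is T-SQL and that this is indeed the cause, is to suppress them inside the procedure:

```
ALTER PROCEDURE dbo.uspGetLoggingExtendedUpperLakes
AS
BEGIN
    SET NOCOUNT ON;  -- suppress row-count result sets so the first result set has columns
    -- ... existing procedure body unchanged ...
END
```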
Transfer TLS Windows Server 2012R2 DNS logs by nxlog towards ELK pile on debian
OncleThorgal created
Hello everyone!
I'm new to the forum; I'm turning to you because I have a problem viewing my DNS logs in my ELK stack.
Here is my problem: I have a Windows Server 2012R2 VM with NXLog on it. The configuration file is the following:
define ROOT C:\Program Files (x86)\nxlog
define CERTDIR %ROOT%\cert
Moduledir %ROOT%\modules
CacheDir %ROOT%\data
Pidfile %ROOT%\data\nxlog.pid
SpoolDir %ROOT%\data
LogFile %ROOT%\data\nxlog.log
<Extension _json>
Module xm_json
</Extension>
<Input dnslog>
Module im_file
File "C:\\dns-log.log"
InputType LineBased
Exec $Message = $raw_event;
SavePos TRUE
</Input>
<Output out>
Module om_ssl
Host IP_DU_SERVEUR_LOGSTASH
Port PORT_DU_SERVEUR_LOGSTASH
CAFile %CERTDIR%\logstash-forwarder.crt
Exec $EventReceivedTime = integer($EventReceivedTime) / 1000000; to_json();
</Output>
<Route 1>
Path dnslog => out
</Route>
My ELK stack runs on Debian. These are the config files:
input {
tcp {
codec => line { charset => CP1252 }
port => PORT_DU_SERVEUR_LOGSTASH
ssl_verify => false
ssl_enable => true
ssl_cert => "/etc/pki/tls/certs/logstash-forwarder.crt"
ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
type => "nxlog"
}
}
filter {
if [type] == "nxlog" {
grok {
match => [ "message", "(?<date_n_time_us>%{DATE_US} %{TIME} (?:AM|PM))%{SPACE}%{WORD:dns_thread_id}%{SPACE}%{WORD:dns_context}%{SPACE}%{WORD:dns_internal_packet_identifier}%{SPACE}%{WORD:dns_protocol}%{SPACE}%{WORD:dns_direction}%{SPACE}%{IP:dns_ip}%{SPACE}%{WORD:dns_xid}%{SPACE}(?<dns_query_type>(?:Q|R Q))%{SPACE}[%{NUMBER:dns_flags_hex}%{SPACE}%{WORD:dns_flags_chars}%{SPACE}%{WORD:dns_response_code}]%{SPACE}%{WORD:dns_question_type}%{SPACE}%{GREEDYDATA:dns_question_name}" ]
}
}
}
output {
elasticsearch {
hosts => ["localhost:9200"]
sniffing => true
manage_template => false
index => "%{[@metadata][nxlog]}-%{+YYYY.MM.dd}"
document_type => "%{[@metadata][type]}"
}
stdout {
codec => rubydebug
}
}
Issue: I cannot view my DNS logs in Kibana, nor configure a dashboard. I'm not sure of my Logstash configuration files, especially the "filter" and "output" sections. However, when I run ngrep INTERFACE -d -t -W byline on my Debian box, I see queries that appear to come from my Windows Server, so my logs are being received. Could you help me?
Thank you very much for your time! And sorry for my English writing...
how to fix apr_sockaddr_info failed & not functional without input modules for splunk SIEM
Deval.Khatri created
1)
2016-03-11 12:03:01 ERROR apr_sockaddr_info failed for 192.168.1.253:514;The requested name is valid, but no data of the requested type was found.
2)
2016-03-11 13:21:37 ERROR module 'in' is not declared at C:\Program Files (x86)\nxlog\conf\nxlog.conf:43
2016-03-11 13:21:37 ERROR route 1 is not functional without input modules, ignored at C:\Program Files (x86)\nxlog\conf\nxlog.conf:43
2016-03-11 13:21:37 WARNING no routes defined!
2016-03-11 13:21:37 WARNING not starting unused module internal
2016-03-11 13:21:37 WARNING not starting unused module out
2016-03-11 13:21:37 INFO nxlog-ce-2.9.1504 started
My nxlog.conf file
#define ROOT C:\Program Files\nxlog
define ROOT C:\Program Files (x86)\nxlog
Moduledir %ROOT%\modules
CacheDir %ROOT%\data
Pidfile %ROOT%\data\nxlog.pid
SpoolDir %ROOT%\data
LogFile %ROOT%\data\nxlog.log
<Extension syslog>
Module xm_json
</Extension>
<Input internal>
Module im_internal
</Input>
<Output out>
Module om_tcp
Host 192.168.253.134
Port 9001
Exec _json();
</Output>
<Route 1>
Path in => out
</Route>
I have configured the receiving port on my Splunk server, which is 9001, and my Splunk server IP is 192.168.253.134.
I have set the receiving port on my Splunk server and am trying to get Windows 7 logs into it using NXLog, but I am getting these errors and am not able to interpret them. I would appreciate it if anyone has an answer for both.
Thanks!!
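The second pair of errors lines up with the config as posted: the route references an input named in, but the only input declared is internal, and _json(); is not a valid procedure call (the JSON conversion procedure registered by xm_json is to_json()). A corrected sketch under those assumptions:

```
<Extension json>
    Module  xm_json
</Extension>

<Input internal>
    Module  im_internal
</Input>

<Output out>
    Module  om_tcp
    Host    192.168.253.134
    Port    9001
    Exec    to_json();
</Output>

<Route 1>
    Path    internal => out
</Route>
```

The first error mentions 192.168.1.253:514, which does not appear in the posted config, so it presumably came from an earlier revision of the file.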
Thanks!!
Using NXlog as a server and filtering output log files with hostname.
ankit3sharma created
I am working on a centralised logging server using NXLog, both as a client on a Windows machine and as a server on RHEL 7.
I want to filter the incoming logs by Hostname and SourceName:
The hostname should be used to create a folder with that name, /var/log/$Hostname/, and
the filename should use the SourceName, like /var/log/$Hostname/$SourceName.log.
So the NXLog server should create the folder and file using $Hostname and $SourceName respectively.
Please help me with the config file for this.
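A minimal sketch for the server side, assuming this NXLog build evaluates om_file's File directive as a string expression per event and supports the CreateDir directive, and that $Hostname and $SourceName are already populated on each record (e.g. by parse_syslog() or xm_json on the input side):

```
<Output fileout>
    Module    om_file
    # Build the path from fields on each incoming record
    File      "/var/log/" + $Hostname + "/" + $SourceName + ".log"
    # Create /var/log/<hostname>/ if it does not exist yet
    CreateDir TRUE
</Output>
```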
How to convert local time to UTC before sending logs to Logstash
achechen created
I have the following output config:
<Output out>
Module om_tcp
Host 10.36.52.62
Port 12201
Exec $EventTime = strftime($EventTime, '%Y-%m-%d %H:%M:%S %Z'); \
to_json();
</Output>
This sends the EventTime in the local time zone of the server. This is how it looks at the Logstash side:
{
"message" => "{\"EventTime\":\"2016-03-03 03:07:29 Central Standard Time\",\"EventTimeWritten\":\"2016-03-03 03:07:29\",\"Hostname\":\"testwin2012\",\"EventType\":\"INFO\",\"SeverityValue\":2,\"Severity\":\"INFO\",\"SourceName\":\"Service Control Manager\",\"FileName\":\"System\",\"EventID\":7036,\"CategoryNumber\":0,\"RecordNumber\":34297,\"Message\":\"The nxlog service entered the running state. \",\"EventReceivedTime\":\"2016-03-03 03:07:30\",\"SourceModuleName\":\"eventlog\",\"SourceModuleType\":\"im_mseventlog\"}\r",
"@version" => "1",
"@timestamp" => "2016-03-03T09:07:34.479Z",
"host" => "testwin2012",
"port" => 49632,
"type" => "windows",
"EventTime" => "2016-03-03 03:07:29 Central Standard Time",
"EventTimeWritten" => "2016-03-03 03:07:29",
"SeverityValue" => 2,
"Severity" => "INFO",
"SourceName" => "Service Control Manager",
"FileName" => "System",
"EventID" => 7036,
"CategoryNumber" => 0,
"RecordNumber" => 34297,
"Message" => "The nxlog service entered the running state. "
}
I have to do a lot of expensive operations in Logstash to convert the timestamp to UTC: I have to convert "Central Standard Time" to a Joda ID, which requires me to take that string, put it into a separate field, prepare a dictionary, use an expensive translate operation on that new field, and put the result back into the timestamp field. Is there any way to make NXLog convert the EventTime field to UTC before sending?
How to drop the incoming logs based on the severity
arun.dharan created
I am fairly new to NXLog and am looking for help completing my task: how do I drop log messages based on log level (severity)? The incoming log messages have different log levels (debug, info, warning, error, critical).
For example, if I set the severity to warning, NXLog should drop info and debug messages. Please provide some example nxlog.conf snippets for this.
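A minimal sketch of one way to do this, assuming syslog input over TCP and that the severity is parsed into $SyslogSeverityValue by xm_syslog's parse_syslog(); lower values are more severe, so dropping anything above warning (4) discards info and debug:

```
<Extension syslog>
    Module  xm_syslog
</Extension>

<Input in>
    Module  im_tcp
    Host    0.0.0.0
    Port    1514
    # parse_syslog() fills $SyslogSeverityValue (0=emerg .. 4=warning, 6=info, 7=debug)
    Exec    parse_syslog(); if $SyslogSeverityValue > 4 drop();
</Input>
```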
Thanks for the help in advance.
Specific windows event 1102 not getting UserData
system0845 created
Hi,
We have the following configuration for event id 1102 (eventlog cleared):
<Input clearev>
Module im_msvistalog
Query <QueryList>\
<Query Id="3">\
<Select Path="Security">*[System[(EventID=1102)]]</Select>\
</Query>\
</QueryList>
Exec delete($Message);
Exec $Message = to_json();
Exec $SyslogFacilityValue = 17; $SyslogSeverityValue=6;
</Input>
The received message is like that:
Feb 29 10:37:17 XXXXXXXX.sdsd.local Microsoft-Windows-Eventlog[1004]: {"EventTime":"2016-02-29 10:37:17","Hostname":"XXXXXXXX.sdsd.local","Keywords":4620693217682128896,"EventType":"INFO","SeverityValue":2,"Severity":"INFO","EventID":1102,"SourceName":"Microsoft-Windows-Eventlog","ProviderGuid":"{FC65DDD8-D6EF-4962-83D5-6E5CFE9CE148}","Version":0,"Task":104,"OpcodeValue":0,"RecordNumber":124745,"ProcessID":1004,"ThreadID":7792,"Channel":"Security","Category":"Effacement de journal","Opcode":"Informations","EventReceivedTime":"2016-02-29 10:37:18","SourceModuleName":"clearev","SourceModuleType":"im_msvistalog"}
As you can see the SubjectUserName information is missing.
But if we look at the detailed view in the eventviewer we can find the information in the XML data:
<Provider Name="Microsoft-Windows-Eventlog" Guid="{fc65ddd8-d6ef-4962-83d5-6e5cfe9ce148}" />
<EventID>1102</EventID>
<Version>0</Version>
<Level>4</Level>
<Task>104</Task>
<Opcode>0</Opcode>
<Keywords>0x4020000000000000</Keywords>
<TimeCreated SystemTime="2016-02-29T09:37:17.602206200Z" />
<EventRecordID>124745</EventRecordID>
<Correlation />
<Execution ProcessID="1004" ThreadID="7792" />
<Channel>Security</Channel>
<Computer>XXXXXXXX.sdsd.local</Computer>
<Security />
</System>
<UserData>
<LogFileCleared xmlns:auto-ns3="http://schemas.microsoft.com/win/2004/08/events" xmlns="http://manifests.microsoft.com/win/2004/08/windows/eventlog">
<SubjectUserSid>S-1-5-21-1659004503-179605362-725345543-5237</SubjectUserSid>
<SubjectUserName>myuser</SubjectUserName>
<SubjectDomainName>SDSD</SubjectDomainName>
<SubjectLogonId>0xa5c77</SubjectLogonId>
</LogFileCleared>
</UserData>
</Event>
How could we get this information through the JSON format? Do we have to develop something for a specific XML view, and if so, how can we do that?
Please let me know.
Kind regards,
Log on papertrailapp from Windows 10
hamdy.aea created
I have changed the conf file as described by Papertrail, but I don't receive any logs from Windows 10. I have stopped and started the service, but nothing is received.
Filter out all messages, but the ones we want
yman182 created
Hello,
I have a config that I thought would work, but it does not. I would like the syslog service to send only specific messages it finds in the log file, and to ignore all others rather than sending them to the syslog server. Here is the config I currently have, but it seems to be sending everything. Any help would be great.
<Input watchfile_m_LOGFILENAME>
Module im_file
File 'C:\\logs\\log.log'
Exec $Message = $raw_event;
Exec if $raw_event =~ /has failed/ $SyslogSeverityValue = 3;
Exec if $raw_event =~ /Rx error in packet/ $SyslogSeverityValue = 3;
Exec if $raw_event =~ /LossCounter non zero in packet/ $SyslogSeverityValue = 3;
Exec $SyslogSeverityValue = 6;
Exec if file_name() =~ /.*\\(.*)/ $SourceName = $1;
</Input>
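Two things stand out in the config above: nothing is ever dropped, and the unconditional $SyslogSeverityValue = 6; runs after the conditional matches (Execs run in order), so it overwrites the 3 they set. A sketch that keeps only the wanted messages by dropping everything else first:

```
<Input watchfile_m_LOGFILENAME>
    Module  im_file
    File    'C:\\logs\\log.log'
    # Discard any line that matches none of the interesting patterns
    Exec    if not ($raw_event =~ /has failed/ or \
                    $raw_event =~ /Rx error in packet/ or \
                    $raw_event =~ /LossCounter non zero in packet/) drop();
    Exec    $Message = $raw_event;
    Exec    $SyslogSeverityValue = 3;
    Exec    if file_name() =~ /.*\\(.*)/ $SourceName = $1;
</Input>
```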
Thank You,
Yury
pm_repeat not avoiding log duplication
system0845 created
Dear all,
I have the following setup:
Only important part of the config has been extracted:
<Input screenlock>
Module im_msvistalog
Query <QueryList>\
<Query Id="2">\
<Select Path="Security">*[System[(EventID=4624)]]</Select>\
</Query>\
</QueryList>
Exec delete($Message);
Exec if string($EventID) =~ /^4624$/ and string($LogonType) =~ /^7$/ $Message = to_json();
Exec $SyslogFacilityValue = 17; $SyslogSeverityValue=6;
</Input>
<Processor norepeatscreen1>
Module pm_norepeat
CheckFields RecordNumber
</Processor>
<Processor norepeatscreen2>
Module pm_norepeat
CheckFields EventID, TargetUsername, TargetDomainName, LogonType
</Processor>
<Route screen>
Path screenlock => norepeatscreen2 => norepeatscreen1 => out
</Route>
Unfortunately I still receive the event twice if the previous event was a 4625... any reason / idea?
Feb 23 12:15:17 XXXXXXXXX.dsds.local Microsoft-Windows-Security-Auditing[636]: {"EventTime":"2016-02-23 12:15:17","Hostname":"XXXXXXXXX.dsds.local","Keywords":-9214364837600034816,"EventType":"AUDIT_SUCCESS","SeverityValue":2,"Severity":"INFO","EventID":4624,"SourceName":"Microsoft-Windows-Security-Auditing","ProviderGuid":"{54849625-5478-4994-A5BA-3E3B0328C30D}","Version":0,"Task":12544,"OpcodeValue":0,"RecordNumber":114161,"ProcessID":636,"ThreadID":12056,"Channel":"Security","Category":"Ouvrir la session","Opcode":"Informations","SubjectUserSid":"S-1-5-18","SubjectUserName":"XXXXXXXXX$","SubjectDomainName":"DFINET","SubjectLogonId":"0x3e7","TargetUserSid":"S-1-5-21-1659004503-179605362-725345543-5237","TargetUserName":"myuser","TargetDomainName":"DSDS","TargetLogonId":"0x33be1d17","LogonType":"7","LogonProcessName":"User32 ","AuthenticationPackageName":"Negotiate","WorkstationName":"XXXXXXXXXX","LogonGuid":"{35666711-DC67-5E5C-7155-C9DB261A1FE0}","TransmittedServices":"-","LmPackageName":"-","KeyLength":"0","ProcessName":"C:\\Windows\\System32\\winlogon.exe","IpAddress":"127.0.0.1","IpPort":"0","EventReceivedTime":"2016-02-23 12:15:17","SourceModuleName":"screenlock","SourceModuleType":"im_msvistalog"}
Feb 23 12:15:17 XXXXXXXXX.dsds.local Microsoft-Windows-Security-Auditing[636]: {"EventTime":"2016-02-23 12:15:17","Hostname":"XXXXXXXXX.dsds.local","Keywords":-9214364837600034816,"EventType":"AUDIT_SUCCESS","SeverityValue":2,"Severity":"INFO","EventID":4624,"SourceName":"Microsoft-Windows-Security-Auditing","ProviderGuid":"{54849625-5478-4994-A5BA-3E3B0328C30D}","Version":0,"Task":12544,"OpcodeValue":0,"RecordNumber":114161,"ProcessID":636,"ThreadID":12056,"Channel":"Security","Category":"Ouvrir la session","Opcode":"Informations","SubjectUserSid":"S-1-5-18","SubjectUserName":"XXXXXXXXXXXX$","SubjectDomainName":"DFINET","SubjectLogonId":"0x3e7","TargetUserSid":"S-1-5-21-1659004503-179605362-725345543-5237","TargetUserName":"myuser","TargetDomainName":"DSDS","TargetLogonId":"0x33be1d17","LogonType":"7","LogonProcessName":"User32 ","AuthenticationPackageName":"Negotiate","WorkstationName":"XXXXXXXXXXXX","LogonGuid":"{35666711-DC67-5E5C-7155-C9DB261A1FE0}","TransmittedServices":"-","LmPackageName":"-","KeyLength":"0","ProcessName":"C:\\Windows\\System32\\winlogon.exe","IpAddress":"127.0.0.1","IpPort":"0","EventReceivedTime":"2016-02-23 12:15:17","SourceModuleName":"screenlock","SourceModuleType":"im_msvistalog"}
Kind regards,
Detection of broken connection with syslog host
nauman73 created
Hi Guys
I am using NXLog CE for sending logs to syslog host. My output definition is as follows.
<Output out_WebAdmin>
Module om_tcp
Host 10.51.4.38
Port 5544
Exec to_syslog_bsd();
</Output>
I am looking for a way to raise an alert when the connection between the syslog host and NXLog CE breaks for some reason. I have looked in the NXLog documentation and have also searched the web, but so far I have not found a way. The only thing I see is a message in the NXLog log file.
2016-02-19 19:02:34 ERROR om_tcp send failed; An existing connection was forcibly closed by the remote host.
Any ideas?
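One approach is to feed NXLog's own internal log back through a route via im_internal and match on the failure message (the pattern below is taken from the log line quoted above; the alert output is just an illustration and could equally be a mail or exec action):

```
<Input internal>
    Module  im_internal
    # Keep only connection-failure messages from the tcp output
    Exec    if not ($Message =~ /om_tcp send failed/) drop();
</Input>

<Output alert>
    Module  om_file
    File    'C:\\nxlog-alerts.log'
</Output>

<Route alerts>
    Path    internal => alert
</Route>
```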
Regards
Nauman
Buffer is not working
Konstantin.Grudnev created
Hello,
Here is my config on Windows machine, running nxlog-ce-2.9.1504
#define ROOT C:\Program Files\nxlog
define ROOT C:\Program Files (x86)\nxlog
Moduledir %ROOT%\modules
CacheDir %ROOT%\data
Pidfile %ROOT%\data\nxlog.pid
SpoolDir %ROOT%\data
LogFile %ROOT%\data\nxlog.log
###############
# Extensions #
###############
<Extension syslog>
Module xm_syslog
</Extension>
<Extension json>
Module xm_json
</Extension>
###########
# Inputs #
###########
<Input some_input>
Module im_file
File 'C:\Logs\input.log'
SavePos TRUE
</Input>
###############
# Processors #
###############
<Processor buffer>
Module pm_buffer
# 1Gb disk buffer 1048576 kilo-bytes
MaxSize 1048576
Type Disk
Directory C:\Logs\buffer
</Processor>
############
# Outputs #
############
<Output tcpout>
Module om_tcp
Port 5170
Host fluentd.company.lan
</Output>
############
# Routes #
############
<Route file>
Path some_input => buffer => tcpout
</Route>
Here is the test case setup:
1. Service ' fluentd.company.lan' is up and running and listens on 5170
2. nxlog up and running with given config
3. Data coming to input.log is successfully routed to output via buffer and is seen in Kibana
Then
1. I change 'C:\Windows\system32\drivers\etc\hosts' file and add '127.0.0.1 fluentd.company.lan' line, saving file
2. Using TCPView tool from SysInternals close current TCP connection with 'fluentd.company.lan:5170'
3. See in nxlog.log, that it tries to connect to 'fluentd.company.lan:5170' and fails to connect
4. Wait for some new data in input.log
5. New data arrived and I see buffer file created 'buffer.1.q' in C:\Logs\buffer and see relevant data in it
6. Wait for some time (2-3 minutes)
7. Again I change 'C:\Windows\system32\drivers\etc\hosts' file and comment '127.0.0.1 fluentd.company.lan' line, saving file
8. nxlog successfully connects to fluentd.company.lan:5170
And here's the interesting part: nxlog sends the new data found in the input file, but I don't see the logs in Kibana from the buffer file with timestamps from the interval in step 6.
Please check this case; it looks like the buffer is not working on Windows, and this bug should be fixed.
BufferSize not change
macymin created
I am running an NXLog Enterprise Edition trial version. Although I set BufferSize for my input modules, it doesn't seem to take effect, as I still get the errors below:
2016-02-17 17:51:53 ERROR data size (1261963) is over the limit (65000), will be truncated
2016-02-17 17:53:41 ERROR data size (1271672) is over the limit (65000), will be truncated
2016-02-17 17:54:18 ERROR data size (687118) is over the limit (65000), will be truncated
2016-02-17 17:54:18 ERROR data size (687638) is over the limit (65000), will be truncated
2016-02-17 18:02:55 ERROR data size (689151) is over the limit (65000), will be truncated
2016-02-17 18:02:56 ERROR data size (689671) is over the limit (65000), will be truncated
What is the maximum limit for the BufferSize I can set? As you can see, I have lines that can go up to 1.3 MB.
Below is my conf file:
Panic Soft
#NoFreeOnExit TRUE
define ROOT C:\Program Files (x86)\nxlog
define CERTDIR %ROOT%\cert
define CONFDIR %ROOT%\conf
StringLimit 10485760
LogFile E:\logs\nxlog\nxlog.log
Moduledir %ROOT%\modules
CacheDir %ROOT%\data
Pidfile %ROOT%\data\nxlog.pid
SpoolDir %ROOT%\data
<Input beta2_SigmaSrv>
Module im_file
File '\\fswmesbeta2\E\MTAPPS\IS_Frontend\Sigma\SigmaSrv\beta\traces\SigmaSrv*POOL*.trc'
SavePos TRUE
ReadFromLast TRUE
BufferSize 1500000
</Input>
<Input beta2_SigmaNonTrackout>
Module im_file
File '\\fswmesbeta2\E\MTAPPS\IS_Frontend\Sigma\SigmaNonTrackoutSrv\beta\traces\SigmaNonTrackoutSrv*POOL*.trc'
SavePos TRUE
ReadFromLast TRUE
BufferSize 1500000
</Input>
<Input beta2_SigmaDUSrv>
Module im_file
File '\\fswmesbeta2\E\MTAPPS\IS_Frontend\Sigma\SigmaDUSrv\beta\traces\SigmaDUSrv*POOL*.trc'
SavePos TRUE
ReadFromLast TRUE
BufferSize 1500000
</Input>
<Output prod02_out>
Module om_tcp
Host fslelkprod02
Port 4500
</Output>
<Route 1>
Path beta2_SigmaSrv,beta2_SigmaNonTrackout,beta2_SigmaDUSrv => prod02_out
</Route>
Buffer is not doing buffering
mazharnazeer created
Hi,
I am implementing an NXLog configuration to read from a file and write to a UDP socket. I also want to detect when NXLog fails to forward log messages, and write a message to its own log in that case. For this, I am using a buffer and checking the buffer count to log the message. I tested it by unplugging the network cable, and it is not working. Please review the following config.
<Extension _syslog>
Module xm_syslog
</Extension>
<Extension _exec>
Module xm_exec
</Extension>
<Processor buffer_Check>
Module pm_buffer
MaxSize 2048
Type Mem
Exec log_info("In Buffer" + buffer_count());
Exec if buffer_count() > 2 \
{\
log_info("Route Failover");\
}
</Processor>
<Input in_WebAdmin>
# Exec log_info("Reading File");
Module im_file
SavePos TRUE
PollInterval 0.5
File 'C:\\ProgramData\\Cisco\\CUACA\\Server\\Logging\\WAD\\AACM_*.log'
</Input>
<Output out_WebAdmin>
Module om_udp
Host 10.110.22.6
Port 514
Exec to_syslog_ietf();
</Output>
<Route route_CTServer>
Path in_WebAdmin => buffer_Check => out_WebAdmin
</Route>
In the above configuration, it should log the message "Route Failover" whenever NXLog is not forwarding log messages and is storing them in the buffer. Please review it and let me know the solution as soon as possible.
Thanks,
Mazhar Nazeer