
Transferring Windows Server 2012R2 DNS logs over TLS with NXLog to an ELK stack on Debian
Hello everyone! I'm new to the forum. I'm posting because I have a problem viewing my DNS logs on my ELK stack. Here is my setup: a Windows Server 2012R2 VM with NXLog installed. The configuration file is the following:

    define ROOT C:\Program Files (x86)\nxlog
    define CERTDIR %ROOT%\cert

    Moduledir %ROOT%\modules
    CacheDir %ROOT%\data
    Pidfile %ROOT%\data\nxlog.pid
    SpoolDir %ROOT%\data
    LogFile %ROOT%\data\nxlog.log

    <Extension _json>
        Module      xm_json
    </Extension>

    <Input dnslog>
        Module      im_file
        File        "C:\\dns-log.log"
        InputType   LineBased
        Exec        $Message = $raw_event;
        SavePos     TRUE
    </Input>

    <Output out>
        Module      om_ssl
        Host        IP_DU_SERVEUR_LOGSTASH
        Port        PORT_DU_SERVEUR_LOGSTASH
        CAFile      %CERTDIR%\logstash-forwarder.crt
        Exec        $EventReceivedTime = integer($EventReceivedTime) / 1000000; to_json();
    </Output>

    <Route 1>
        Path        dnslog => out
    </Route>

And when I start it:

My ELK stack runs on Debian. These are the config files:

    input {
      tcp {
        codec => line { charset => CP1252 }
        port => PORT_DU_SERVEUR_LOGSTASH
        ssl_verify => false
        ssl_enable => true
        ssl_cert => "/etc/pki/tls/certs/logstash-forwarder.crt"
        ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
        type => "nxlog"
      }
    }

    filter {
      if [type] == "nxlog" {
        grok {
          match => [ "message", "(?<date_n_time_us>%{DATE_US} %{TIME} (?:AM|PM))%{SPACE}%{WORD:dns_thread_id}%{SPACE}%{WORD:dns_context}%{SPACE}%{WORD:dns_internal_packet_identifier}%{SPACE}%{WORD:dns_protocol}%{SPACE}%{WORD:dns_direction}%{SPACE}%{IP:dns_ip}%{SPACE}%{WORD:dns_xid}%{SPACE}(?<dns_query_type>(?:Q|R Q))%{SPACE}[%{NUMBER:dns_flags_hex}%{SPACE}%{WORD:dns_flags_chars}%{SPACE}%{WORD:dns_response_code}]%{SPACE}%{WORD:dns_question_type}%{SPACE}%{GREEDYDATA:dns_question_name}" ]
        }
      }
    }

    output {
      elasticsearch {
        hosts => ["localhost:9200"]
        sniffing => true
        manage_template => false
        index => "%{[@metadata][nxlog]}-%{+YYYY.MM.dd}"
        document_type => "%{[@metadata][type]}"
      }
      stdout {
        codec => rubydebug
      }
    }

The issue: I cannot view my DNS logs in Kibana (and I also still need to configure a dashboard). I'm not sure about my Logstash configuration files, especially the "filter" and "output" sections. However, when I run the command ngrep INTERFACE -d -t -W byline on my Debian host, I see queries that appear to come from my Windows Server, so my logs are being received. Could you help me? Thank you very much for your time! And sorry for my English writing...
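A note and a minimal sketch, offered on the assumption that the rest of the pipeline stays as posted: the NXLog output wraps each record in JSON with to_json(), while the Logstash filter runs a grok pattern that expects the raw DNS debug line in the "message" field, so the pattern never matches the JSON envelope. One option is to ship the raw line unchanged and keep the existing grok; the host, port and certificate placeholders are the same as above:

    <Output out>
        Module      om_ssl
        Host        IP_DU_SERVEUR_LOGSTASH
        Port        PORT_DU_SERVEUR_LOGSTASH
        CAFile      %CERTDIR%\logstash-forwarder.crt
        # No Exec here: om_ssl sends $raw_event, i.e. the original DNS debug
        # line read by im_file, which is what the grok pattern expects in
        # "message". Keep to_json() instead if the extra NXLog fields are
        # needed, and switch the Logstash side to a json codec/filter.
    </Output>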

OncleThorgal created
Replies: 1
How to fix "apr_sockaddr_info failed" and "not functional without input modules" errors for Splunk SIEM
1) 2016-03-11 12:03:01 ERROR apr_sockaddr_info failed for 192.168.1.253:514; The requested name is valid, but no data of the requested type was found.

2)
    2016-03-11 13:21:37 ERROR module 'in' is not declared at C:\Program Files (x86)\nxlog\conf\nxlog.conf:43
    2016-03-11 13:21:37 ERROR route 1 is not functional without input modules, ignored at C:\Program Files (x86)\nxlog\conf\nxlog.conf:43
    2016-03-11 13:21:37 WARNING no routes defined!
    2016-03-11 13:21:37 WARNING not starting unused module internal
    2016-03-11 13:21:37 WARNING not starting unused module out
    2016-03-11 13:21:37 INFO nxlog-ce-2.9.1504 started

My nxlog.conf file:

    #define ROOT C:\Program Files\nxlog
    define ROOT C:\Program Files (x86)\nxlog

    Moduledir %ROOT%\modules
    CacheDir %ROOT%\data
    Pidfile %ROOT%\data\nxlog.pid
    SpoolDir %ROOT%\data
    LogFile %ROOT%\data\nxlog.log

    <Extension syslog>
        Module xm_json
    </Extension>

    <Input internal>
        Module im_internal
    </Input>

    <Output out>
        Module om_tcp
        Host 192.168.253.134
        Port 9001
        Exec _json();
    </Output>

    <Route 1>
        Path   in => out
    </Route>

I have configured a receiving port on the Splunk server, which is 9001, and my Splunk server IP is 192.168.253.134. I have set the receiving port on my Splunk server and I am trying to get Windows 7 logs into Splunk using this NXLog configuration, but I am getting these errors. I am not able to interpret either of them. I'd appreciate it if anyone has an answer for both. Thanks!!
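A minimal sketch addressing the second error, on the assumption that Windows event log collection was the intent: the route references an input called "in", but the posted config only declares an input called "internal", so the names have to be brought in line. (The first error usually means a Host value of a network module could not be resolved to a usable address, so it is worth checking which part of the config points at 192.168.1.253:514.)

    <Input in>
        # Assumption: Windows event log collection via im_msvistalog was the
        # goal; the only requirement for route 1 to become functional is that
        # an input with this exact name exists.
        Module im_msvistalog
    </Input>

    <Route 1>
        Path in => out
    </Route>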

Deval.Khatri created
Replies: 1
Using NXLog as a server and splitting output log files by hostname
I am working on a centralised logging server using NXLog both as a client on a Windows machine and as a server on RHEL 7. I want to split the incoming logs by Hostname and SourceName: the hostname should be used to create a folder under /var/log/$Hostname/, and the file name should use the SourceName, like /var/log/$Hostname/$SourceName.log. So the NXLog server should create the folder and the file from $Hostname and $SourceName respectively. Please help me with the config file for this; a sketch is included below for reference.
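A minimal sketch of the server side, assuming the Windows client ships events as JSON over TCP (so $Hostname and $SourceName are available after parse_json()); the listener port and the block names are placeholders:

    <Extension _json>
        Module  xm_json
    </Extension>

    <Input from_windows>
        Module  im_tcp
        Host    0.0.0.0
        Port    1514
        # Assumes the client side uses to_json(); parsing restores the fields.
        Exec    parse_json();
    </Input>

    <Output perhost_files>
        Module    om_file
        # om_file accepts an expression for the file name, so the path can be
        # built from event fields at runtime.
        File      '/var/log/' + $Hostname + '/' + $SourceName + '.log'
        # Create the per-host directory if it does not exist yet.
        CreateDir TRUE
    </Output>

    <Route windows_to_files>
        Path    from_windows => perhost_files
    </Route>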

ankit3sharma created
How to convert local time to UTC before sending logs to Logstash
I have the following output config:

    <Output out>
        Module      om_tcp
        Host        10.36.52.62
        Port        12201
        Exec        $EventTime = strftime($EventTime, '%Y-%m-%d %H:%M:%S %Z'); \
                    to_json();
    </Output>

This sends EventTime in the local time zone of the server. This is how it looks on the Logstash side:

    {
        "message" => "{\"EventTime\":\"2016-03-03 03:07:29 Central Standard Time\",\"EventTimeWritten\":\"2016-03-03 03:07:29\",\"Hostname\":\"testwin2012\",\"EventType\":\"INFO\",\"SeverityValue\":2,\"Severity\":\"INFO\",\"SourceName\":\"Service Control Manager\",\"FileName\":\"System\",\"EventID\":7036,\"CategoryNumber\":0,\"RecordNumber\":34297,\"Message\":\"The nxlog service entered the running state. \",\"EventReceivedTime\":\"2016-03-03 03:07:30\",\"SourceModuleName\":\"eventlog\",\"SourceModuleType\":\"im_mseventlog\"}\r",
        "@version" => "1",
        "@timestamp" => "2016-03-03T09:07:34.479Z",
        "host" => "testwin2012",
        "port" => 49632,
        "type" => "windows",
        "EventTime" => "2016-03-03 03:07:29 Central Standard Time",
        "EventTimeWritten" => "2016-03-03 03:07:29",
        "SeverityValue" => 2,
        "Severity" => "INFO",
        "SourceName" => "Service Control Manager",
        "FileName" => "System",
        "EventID" => 7036,
        "CategoryNumber" => 0,
        "RecordNumber" => 34297,
        "Message" => "The nxlog service entered the running state. "
    }

I have to do a lot of expensive operations in Logstash to convert the timestamp into UTC: I have to map "Central Standard Time" to a Joda time zone, which requires taking that string, putting it into a separate field, preparing a dictionary, running an expensive translate operation on the new field, and putting the result back into the timestamp field. Is there any way to make NXLog convert the EventTime field into UTC before sending?
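One possible approach, borrowing the integer()/1000000 conversion used in the DNS/ELK post above: converting a datetime field to an integer yields microseconds since the epoch, which is time-zone independent, so the zone-name-to-Joda dance in Logstash could be replaced by a plain UNIX-timestamp date filter. A minimal sketch; whether the renamed field fits the rest of the pipeline is an assumption:

    <Output out>
        Module      om_tcp
        Host        10.36.52.62
        Port        12201
        # integer() on a datetime gives microseconds since the epoch;
        # dividing by 1000000 turns it into a UNIX timestamp in seconds.
        Exec        $EventTimeEpoch = integer($EventTime) / 1000000; \
                    delete($EventTime); \
                    to_json();
    </Output>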

achechen created
Replies: 1
How to drop incoming logs based on severity
I am fairly new to NXLog and am looking for help to complete a task: how do I drop log messages based on log level (severity)? The incoming log messages have different log levels (debug, info, warning, error, critical). For example, if I set the severity threshold to warning, NXLog should drop the info and debug messages. Please provide some example nxlog.conf snippets showing how to do this. Thanks for the help in advance.
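A minimal sketch of one way to do this, assuming the events carry NXLog's normalized $SeverityValue field (1=DEBUG, 2=INFO, 3=WARNING, 4=ERROR, 5=CRITICAL); the input module and its name are placeholders for whatever source is actually in use, and the same drop() logic works in any input, processor or output Exec:

    <Input messages>
        # Placeholder input module.
        Module  im_msvistalog
        # Keep warning and above; drop debug and info.
        Exec    if $SeverityValue < 3 drop();
    </Input>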

arun.dharan created
Replies: 1
Specific Windows event 1102 not including UserData
Hi, we have the following configuration for event ID 1102 (event log cleared):

    <Input clearev>
        Module      im_msvistalog
        Query       <QueryList>\
                        <Query Id="3">\
                            <Select Path="Security">*[System[(EventID=1102)]]</Select>\
                        </Query>\
                    </QueryList>
        Exec delete($Message);
        Exec $Message = to_json();
        Exec $SyslogFacilityValue = 17; $SyslogSeverityValue=6;
    </Input>

The received message looks like this:

    Feb 29 10:37:17 XXXXXXXX.sdsd.local Microsoft-Windows-Eventlog[1004]: {"EventTime":"2016-02-29 10:37:17","Hostname":"XXXXXXXX.sdsd.local","Keywords":4620693217682128896,"EventType":"INFO","SeverityValue":2,"Severity":"INFO","EventID":1102,"SourceName":"Microsoft-Windows-Eventlog","ProviderGuid":"{FC65DDD8-D6EF-4962-83D5-6E5CFE9CE148}","Version":0,"Task":104,"OpcodeValue":0,"RecordNumber":124745,"ProcessID":1004,"ThreadID":7792,"Channel":"Security","Category":"Effacement de journal","Opcode":"Informations","EventReceivedTime":"2016-02-29 10:37:18","SourceModuleName":"clearev","SourceModuleType":"im_msvistalog"}

As you can see, the SubjectUserName information is missing. But if we look at the detailed view in Event Viewer, we can find the information in the XML data:

    <Provider Name="Microsoft-Windows-Eventlog" Guid="{fc65ddd8-d6ef-4962-83d5-6e5cfe9ce148}" />
    <EventID>1102</EventID>
    <Version>0</Version>
    <Level>4</Level>
    <Task>104</Task>
    <Opcode>0</Opcode>
    <Keywords>0x4020000000000000</Keywords>
    <TimeCreated SystemTime="2016-02-29T09:37:17.602206200Z" />
    <EventRecordID>124745</EventRecordID>
    <Correlation />
    <Execution ProcessID="1004" ThreadID="7792" />
    <Channel>Security</Channel>
    <Computer>XXXXXXXX.sdsd.local</Computer>
    <Security />
    </System>
    <UserData>
      <LogFileCleared xmlns:auto-ns3="http://schemas.microsoft.com/win/2004/08/events" xmlns="http://manifests.microsoft.com/win/2004/08/windows/eventlog">
        <SubjectUserSid>S-1-5-21-1659004503-179605362-725345543-5237</SubjectUserSid>
        <SubjectUserName>myuser</SubjectUserName>
        <SubjectDomainName>SDSD</SubjectDomainName>
        <SubjectLogonId>0xa5c77</SubjectLogonId>
      </LogFileCleared>
    </UserData>
    </Event>

How could we get this information through the JSON format? Do we have to develop something for a specific XML view, and if yes, how can we do that? Please let me know. Kind regards,

system0845 created
Replies: 1
Logging to Papertrail from Windows 10
I changed the conf file as described in the Papertrail documentation, but I don't receive any logs from Windows 10. I have stopped and started the service, but nothing is received.
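For comparison, a minimal sketch of the kind of config Papertrail setups usually boil down to: Windows event log in, syslog out over TCP. The host and port below are placeholders for the log destination shown in your Papertrail account, and it is worth checking %ROOT%\data\nxlog.log for connection or parse errors after restarting the service:

    <Extension _syslog>
        Module  xm_syslog
    </Extension>

    <Input eventlog>
        Module  im_msvistalog
    </Input>

    <Output papertrail>
        Module  om_tcp
        # Placeholders: use the log destination host and port from your
        # Papertrail account settings.
        Host    logsN.papertrailapp.com
        Port    12345
        Exec    to_syslog_ietf();
    </Output>

    <Route to_papertrail>
        Path    eventlog => papertrail
    </Route>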

hamdy.aea created
Filter out all messages but the ones we want
Hello, I have a config that I thought would work, but it does not. I would like the NXLog service to send only specific messages it finds in the log file to the syslog server and ignore all the others. Here is the config I currently have, but it seems to be sending everything. Any help would be great.

    <Input watchfile_m_LOGFILENAME>
        Module im_file
        File 'C:\\logs\\log.log'
        Exec $Message = $raw_event;
        Exec if $raw_event =~ /has failed/ $SyslogSeverityValue = 3;
        Exec if $raw_event =~ /Rx error in packet/ $SyslogSeverityValue = 3;
        Exec if $raw_event =~ /LossCounter non zero in packet/ $SyslogSeverityValue = 3;
        Exec $SyslogSeverityValue = 6;
        Exec if file_name() =~ /.*\\(.*)/ $SourceName = $1;
    </Input>

Thank you, Yury
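Nothing in the posted input ever calls drop(), so every line is forwarded, and the unconditional Exec $SyslogSeverityValue = 6; overwrites the severity set by the matches above it. A minimal sketch of the same input that keeps only the matching lines (same file path and patterns, everything else discarded):

    <Input watchfile_m_LOGFILENAME>
        Module im_file
        File 'C:\\logs\\log.log'
        Exec $Message = $raw_event;
        # Keep the interesting lines at severity 3 and drop the rest.
        Exec if $raw_event =~ /has failed|Rx error in packet|LossCounter non zero in packet/ \
             { \
                 $SyslogSeverityValue = 3; \
             } \
             else \
             { \
                 drop(); \
             }
        Exec if file_name() =~ /.*\\(.*)/ $SourceName = $1;
    </Input>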

yman182 created
Replies: 1
pm_norepeat not preventing log duplication
Dear all, I have the following setup (only the relevant part of the config is shown):

    <Input screenlock>
        Module      im_msvistalog
        Query       <QueryList>\
                        <Query Id="2">\
                            <Select Path="Security">*[System[(EventID=4624)]]</Select>\
                        </Query>\
                    </QueryList>
        Exec delete($Message);
        Exec if string($EventID) =~ /^4624$/ and string($LogonType) =~ /^7$/ $Message = to_json();
        Exec $SyslogFacilityValue = 17; $SyslogSeverityValue=6;
    </Input>

    <Processor norepeatscreen1>
        Module pm_norepeat
        CheckFields RecordNumber
    </Processor>

    <Processor norepeatscreen2>
        Module pm_norepeat
        CheckFields EventID, TargetUsername, TargetDomainName, LogonType
    </Processor>

    <Route screen>
        Path        screenlock => norepeatscreen2 => norepeatscreen1 => out
    </Route>

Unfortunately I still receive the event twice if the previous event was a 4625... any reason / idea?

    Feb 23 12:15:17 XXXXXXXXX.dsds.local Microsoft-Windows-Security-Auditing[636]: {"EventTime":"2016-02-23 12:15:17","Hostname":"XXXXXXXXX.dsds.local","Keywords":-9214364837600034816,"EventType":"AUDIT_SUCCESS","SeverityValue":2,"Severity":"INFO","EventID":4624,"SourceName":"Microsoft-Windows-Security-Auditing","ProviderGuid":"{54849625-5478-4994-A5BA-3E3B0328C30D}","Version":0,"Task":12544,"OpcodeValue":0,"RecordNumber":114161,"ProcessID":636,"ThreadID":12056,"Channel":"Security","Category":"Ouvrir la session","Opcode":"Informations","SubjectUserSid":"S-1-5-18","SubjectUserName":"XXXXXXXXX$","SubjectDomainName":"DFINET","SubjectLogonId":"0x3e7","TargetUserSid":"S-1-5-21-1659004503-179605362-725345543-5237","TargetUserName":"myuser","TargetDomainName":"DSDS","TargetLogonId":"0x33be1d17","LogonType":"7","LogonProcessName":"User32 ","AuthenticationPackageName":"Negotiate","WorkstationName":"XXXXXXXXXX","LogonGuid":"{35666711-DC67-5E5C-7155-C9DB261A1FE0}","TransmittedServices":"-","LmPackageName":"-","KeyLength":"0","ProcessName":"C:\\Windows\\System32\\winlogon.exe","IpAddress":"127.0.0.1","IpPort":"0","EventReceivedTime":"2016-02-23 12:15:17","SourceModuleName":"screenlock","SourceModuleType":"im_msvistalog"}
    Feb 23 12:15:17 XXXXXXXXX.dsds.local Microsoft-Windows-Security-Auditing[636]: {"EventTime":"2016-02-23 12:15:17","Hostname":"XXXXXXXXX.dsds.local","Keywords":-9214364837600034816,"EventType":"AUDIT_SUCCESS","SeverityValue":2,"Severity":"INFO","EventID":4624,"SourceName":"Microsoft-Windows-Security-Auditing","ProviderGuid":"{54849625-5478-4994-A5BA-3E3B0328C30D}","Version":0,"Task":12544,"OpcodeValue":0,"RecordNumber":114161,"ProcessID":636,"ThreadID":12056,"Channel":"Security","Category":"Ouvrir la session","Opcode":"Informations","SubjectUserSid":"S-1-5-18","SubjectUserName":"XXXXXXXXXXXX$","SubjectDomainName":"DFINET","SubjectLogonId":"0x3e7","TargetUserSid":"S-1-5-21-1659004503-179605362-725345543-5237","TargetUserName":"myuser","TargetDomainName":"DSDS","TargetLogonId":"0x33be1d17","LogonType":"7","LogonProcessName":"User32 ","AuthenticationPackageName":"Negotiate","WorkstationName":"XXXXXXXXXXXX","LogonGuid":"{35666711-DC67-5E5C-7155-C9DB261A1FE0}","TransmittedServices":"-","LmPackageName":"-","KeyLength":"0","ProcessName":"C:\\Windows\\System32\\winlogon.exe","IpAddress":"127.0.0.1","IpPort":"0","EventReceivedTime":"2016-02-23 12:15:17","SourceModuleName":"screenlock","SourceModuleType":"im_msvistalog"}

Kind regards,

system0845 created
Replies: 1
Detection of broken connection with syslog host
Hi guys, I am using NXLog CE to send logs to a syslog host. My output definition is as follows:

    <Output out_WebAdmin>
        Module om_tcp
        Host 10.51.4.38
        Port 5544
        Exec to_syslog_bsd();
    </Output>

I am looking for a way to raise an alert when the connection between the syslog host and NXLog CE breaks for some reason. I have looked in the NXLog documentation and have also searched the web, but so far I have not found a way. The only thing I see is a message in the NXLog log file:

    2016-02-19 19:02:34 ERROR om_tcp send failed; An existing connection was forcibly closed by the remote host.

Any ideas? Regards, Nauman
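A minimal sketch of one way to surface this, assuming the send failures keep appearing in NXLog's internal log as shown above: the im_internal module re-injects NXLog's own messages as events, so they can be matched and routed to a dedicated alert file (or to any other output that triggers your alerting). The file path and block names are placeholders:

    <Input internal>
        Module  im_internal
        # Keep only ERROR-level internal messages about the TCP output.
        Exec    if not ($SeverityValue >= 4 and $Message =~ /om_tcp/) drop();
    </Input>

    <Output alertfile>
        Module  om_file
        File    'C:\\nxlog-alerts.log'
    </Output>

    <Route alerts>
        Path    internal => alertfile
    </Route>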

nauman73 created
Replies: 1
Buffer is not working
Hello, here is my config on a Windows machine running nxlog-ce-2.9.1504:

    #define ROOT C:\Program Files\nxlog
    define ROOT C:\Program Files (x86)\nxlog

    Moduledir %ROOT%\modules
    CacheDir %ROOT%\data
    Pidfile %ROOT%\data\nxlog.pid
    SpoolDir %ROOT%\data
    LogFile %ROOT%\data\nxlog.log

    ###############
    # Extensions  #
    ###############
    <Extension syslog>
        Module    xm_syslog
    </Extension>

    <Extension json>
        Module    xm_json
    </Extension>

    ###########
    # Inputs  #
    ###########
    <Input some_input>
        Module    im_file
        File      'C:\Logs\input.log'
        SavePos   TRUE
    </Input>

    ###############
    # Processors  #
    ###############
    <Processor buffer>
        Module      pm_buffer
        # 1GB disk buffer (1048576 kilobytes)
        MaxSize     1048576
        Type        Disk
        Directory   C:\Logs\buffer
    </Processor>

    ############
    # Outputs  #
    ############
    <Output tcpout>
        Module    om_tcp
        Port      5170
        Host      fluentd.company.lan
    </Output>

    ############
    # Routes   #
    ############
    <Route file>
        Path   some_input => buffer => tcpout
    </Route>

Here is the initial state of the test case:
1. The service fluentd.company.lan is up and running and listens on 5170.
2. nxlog is up and running with the given config.
3. Data coming into input.log is successfully routed to the output via the buffer and is visible in Kibana.

Then:
1. I edit C:\Windows\system32\drivers\etc\hosts, add the line "127.0.0.1 fluentd.company.lan" and save the file.
2. Using the TCPView tool from SysInternals, I close the current TCP connection to fluentd.company.lan:5170.
3. I see in nxlog.log that it tries to connect to fluentd.company.lan:5170 and fails.
4. I wait for some new data in input.log.
5. New data arrives and I see the buffer file buffer.1.q created in C:\Logs\buffer, with the relevant data in it.
6. I wait for some time (2-3 minutes).
7. I edit C:\Windows\system32\drivers\etc\hosts again, comment out the "127.0.0.1 fluentd.company.lan" line and save the file.
8. nxlog successfully reconnects to fluentd.company.lan:5170.

And here is the interesting part: nxlog forwards the new data found in the input file, but I don't see the buffered logs in Kibana with timestamps from the interval in step 6. Please check this case; it looks like the disk buffer is not working on Windows, and if so, please fix this bug.

Konstantin.Grudnev created
Replies: 5
BufferSize does not take effect
I am running an NXLog Enterprise Edition trial version. Although I set BufferSize for my input modules, it does not seem to take effect, as I still get the errors below:

    2016-02-17 17:51:53 ERROR data size (1261963) is over the limit (65000), will be truncated
    2016-02-17 17:53:41 ERROR data size (1271672) is over the limit (65000), will be truncated
    2016-02-17 17:54:18 ERROR data size (687118) is over the limit (65000), will be truncated
    2016-02-17 17:54:18 ERROR data size (687638) is over the limit (65000), will be truncated
    2016-02-17 18:02:55 ERROR data size (689151) is over the limit (65000), will be truncated
    2016-02-17 18:02:56 ERROR data size (689671) is over the limit (65000), will be truncated

What is the maximum limit I can set for BufferSize? As you can see, I have lines that can go up to 1.3 MB. Below is my conf file:

    Panic Soft
    #NoFreeOnExit TRUE

    define ROOT C:\Program Files (x86)\nxlog
    define CERTDIR %ROOT%\cert
    define CONFDIR %ROOT%\conf

    StringLimit 10485760

    LogFile E:\logs\nxlog\nxlog.log
    Moduledir %ROOT%\modules
    CacheDir %ROOT%\data
    Pidfile %ROOT%\data\nxlog.pid
    SpoolDir %ROOT%\data

    <Input beta2_SigmaSrv>
        Module       im_file
        File         '\\fswmesbeta2\E\MTAPPS\IS_Frontend\Sigma\SigmaSrv\beta\traces\SigmaSrv*POOL*.trc'
        SavePos      TRUE
        ReadFromLast TRUE
        BufferSize   1500000
    </Input>

    <Input beta2_SigmaNonTrackout>
        Module       im_file
        File         '\\fswmesbeta2\E\MTAPPS\IS_Frontend\Sigma\SigmaNonTrackoutSrv\beta\traces\SigmaNonTrackoutSrv*POOL*.trc'
        SavePos      TRUE
        ReadFromLast TRUE
        BufferSize   1500000
    </Input>

    <Input beta2_SigmaDUSrv>
        Module       im_file
        File         '\\fswmesbeta2\E\MTAPPS\IS_Frontend\Sigma\SigmaDUSrv\beta\traces\SigmaDUSrv*POOL*.trc'
        SavePos      TRUE
        ReadFromLast TRUE
        BufferSize   1500000
    </Input>

    <Output prod02_out>
        Module    om_tcp
        Host      fslelkprod02
        Port      4500
    </Output>

    <Route 1>
        Path      beta2_SigmaSrv,beta2_SigmaNonTrackout,beta2_SigmaDUSrv => prod02_out
    </Route>

macymin created
Replies: 1
Buffer is not buffering
Hi, I am writing an NXLog configuration that reads from a file and writes to a UDP socket. I also want to detect when NXLog fails to forward log messages and record that in its own log. For this I am using a buffer and checking the buffer count to log the message. I tested it by unplugging the network cable and it is not working. Please review the following config:

    <Extension _syslog>
        Module      xm_syslog
    </Extension>

    <Extension _exec>
        Module    xm_exec
    </Extension>

    <Processor buffer_Check>
        Module pm_buffer
        MaxSize 2048
        Type Mem
        Exec log_info("In Buffer" + buffer_count());
        Exec if buffer_count() > 2 \
            {\
            log_info("Route Failover");\
            }\
    </Processor>

    <Input in_WebAdmin>
        # Exec     log_info("Reading File");
        Module       im_file
        SavePos      TRUE
        PollInterval 0.5
        File 'C:\\ProgramData\\Cisco\\CUACA\\Server\\Logging\\WAD\\AACM_*.log'
    </Input>

    <Output out_WebAdmin>
        Module om_udp
        Host 10.110.22.6
        Port 514
        Exec to_syslog_ietf();
    </Output>

    <Route route_CTServer>
        Path in_WebAdmin => buffer_Check => out_WebAdmin
    </Route>

With this configuration, it should log the message "Route Failover" whenever NXLog cannot forward log messages and is storing them in the buffer. Please review it and let me know the solution as soon as possible. Thanks, Mazhar Nazeer
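A side note that may or may not explain the behaviour: om_udp gives no delivery feedback, so unplugging the cable does not normally make the output block, and pm_buffer only grows while the output is actually blocked (as om_tcp is on a broken connection). If the goal is simply to be warned when the buffer starts filling, pm_buffer also has a built-in directive for that; a minimal sketch, assuming the CE version in use supports it:

    <Processor buffer_Check>
        Module    pm_buffer
        Type      Mem
        MaxSize   2048
        # Logs a warning of its own once the buffered data exceeds this many
        # kilobytes, without needing an Exec-based check.
        WarnLimit 1024
    </Processor>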

mazharnazeer created
Replies: 1
regex delimiter
Dear all, I have a lot of text logs and I want to parse them with a regex in the NXLog Community Edition. I have more or less figured out how to match the lines, but I don't know how to put the captured values into variables in the config file. In my example I parse the first four fields (date, time, ID, product code) with a regex. A sample line:

    01/29/2016  09:13:01.000000  1344 140334835169024     49  2          0  Target data state for connection 1 from ip://10.10.100.72 : 1500 has changed because a mirror has been stopped.

What is the delimiter separating the fields in the regex? My regex is:

    [0-2][0-9]\/[[1-2][0-9]\/[1-2]0[1-2][1-9]\s\s\w+:\w+:\w+\.\w+\s\s\w+\s\w+

I want to do something like this in my conf file:

    $Time = $1;\
    $CStatus = $2;\
    $Process = $3;\
    $Process_result = $4;\

Thank you.
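A minimal sketch of how captured groups become fields: there is no separate delimiter concept, the runs of spaces between the fields are matched with \s+ inside the regex, and each capturing group is then available as $1..$n inside the if-block. The file path and the field names are only illustrative:

    <Input doubletake>
        Module  im_file
        # Placeholder path; point it at the actual log file.
        File    '/var/log/doubletake.log'
        # \s+ plays the role of the field delimiter in the pattern.
        Exec    if $raw_event =~ /^(\d{2}\/\d{2}\/\d{4})\s+(\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+)\s+(\d+)/ \
                { \
                    $Date = $1; \
                    $Time = $2; \
                    $ProcessId = $3; \
                    $ThreadId = $4; \
                }
    </Input>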

mbuyukkarakas created
Weird NXLOG behavior sending wrong data
I installed NXLog on our Windows server and set up an input to send C:\squid\var\logs\access.log to our Graylog server, then restarted the NXLog service. On the Graylog side I still keep getting Windows event log entries instead of the Squid proxy logs. Has anyone encountered this before?
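One thing worth double-checking is whether the active route actually references the Squid file input rather than an eventlog input left over from the default or sample configuration. A minimal sketch (the output name is a placeholder for whatever GELF/Graylog output the existing config defines):

    <Input squid_access>
        Module  im_file
        File    'C:\\squid\\var\\logs\\access.log'
        SavePos TRUE
    </Input>

    <Route squid_to_graylog>
        Path    squid_access => graylog_out
    </Route>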

wilsonchua created
Replies: 1
ERROR invalid keyword when trying to parse logs with a regex
Hello, I'm trying to extract specific data from some Hadoop logs with a regex and I get this error:

    ERROR invalid keyword: Output at C:\Program Files (x86)\nxlog\conf\nxlog.conf:45

Here is my config file:

    define ROOT C:\Program Files (x86)\nxlog
    #
    Moduledir %ROOT%\modules
    CacheDir %ROOT%\data
    Pidfile %ROOT%\data\nxlog.pid
    SpoolDir %ROOT%\data
    LogFile %ROOT%\data\nxlog.log
    #
    <Extension gelf>
        Module         xm_gelf
    </Extension>

    <Extension fileop>
        Module         xm_fileop
    </Extension>

    <Extension json>
        Module      xm_json
    </Extension>

    <Extension multi>
        Module      xm_multiline
        HeaderLine  /^(\d+-\d+-\d+\s\d+:\d+:\d+,\d+)/
        EndLine     /(.*)/
    </Extension>
    #
    <Input hadoop>
        Module       im_file
        File         "E:\\Hadoop\\test\\*.*"
        SavePos      TRUE
        Recursive    TRUE
        InputType    multi
        Exec      if $raw_event =~/^(\d+-\d+-\d+\s\d+:\d+:\d+,\d+)\s(?:INFO|ERROR|WARN)\s(org.apache.hadoop.\w+.\w+):\s(.*)/g\
                  {\
                      $Time = $1;\
                      $CStatus = $2;\
                      $Process = $3;\
                      $Process_result = $4;\
                      to_json();\
                  }\
                  else\
                  {\
                      drop();\
                  }\
    </Input>

    <Output graylog>
        Module      om_udp
        Host        10.101.78.224
        Port        12201
        OutputType  GELF
        # Use the following line for debugging (uncomment the fileop extension above as well)
        #Exec file_write("C:\\Program Files (x86)\\nxlog\\data\\nxlog_output.log", $raw_event);
    </Output>

    <Route eventlog>
        Path        hadoop => graylog
    </Route>

Does anyone know what is wrong with this config file? Thank you.
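A likely culprit, offered as a guess: the final drop();\ and }\ lines still end with line-continuation backslashes, so the </Input> tag gets folded into the Exec directive and the parser is still inside the input block when it reaches <Output graylog>, which is exactly what an "invalid keyword: Output" error looks like. A minimal sketch of the same Exec with no backslash after the last closing brace (two further changes to the original are assumed here: the /g modifier is dropped, and the severity group is made capturing so that $2 through $4 line up with the four assignments):

    <Input hadoop>
        Module       im_file
        File         "E:\\Hadoop\\test\\*.*"
        SavePos      TRUE
        Recursive    TRUE
        InputType    multi
        # No backslash after the final "}", so </Input> is parsed normally.
        Exec    if $raw_event =~ /^(\d+-\d+-\d+\s\d+:\d+:\d+,\d+)\s(INFO|ERROR|WARN)\s(org\.apache\.hadoop\.\w+\.\w+):\s(.*)/ \
                { \
                    $Time = $1; \
                    $CStatus = $2; \
                    $Process = $3; \
                    $Process_result = $4; \
                    to_json(); \
                } \
                else \
                { \
                    drop(); \
                }
    </Input>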

Juan Andrés.Ramirez created
Replies: 1
Eventlog UserID doesn't contain the SID but the user name
There is a bug in im_msvistalog.c (around line 560):

    if (ConvertSidToStringSid(imconf->renderbuf[EvtSystemUserID].SidVal, &sidstr))
    {
        nx_logdata_set_string(logdata, "UserID", user);   <<< there should be sidstr instead of user
        LocalFree(sidstr);
    }

Petr.Řehoř created
Replies: 1
Nxlog memory issue?
We are using a Windows Event Collector which is pulling in from over 400 servers. We have configured both disk and memory buffers, but nxlog peaks at 2 GB of memory, then starts to crash and no longer sends logs. I am seeing the following messages in the nxlog log.

When using a memory-only buffer:

    2016-01-29 17:46:52 ERROR EvtNext failed with error 14: Not enough storage is available to complete this operation.
    2016-01-29 17:46:52 ERROR EvtUpdateBookmark failed: The handle is invalid.
    2016-01-29 17:46:52 ERROR EvtNext failed with error 14: Not enough storage is available to complete this operation.
    2016-01-29 17:46:52 ERROR EvtUpdateBookmark failed: The handle is invalid.
    2016-01-29 17:46:52 ERROR EvtCreateRenderContext failed; Not enough storage is available to complete this operation.

I've adjusted the buffer to use both disk and memory and am now getting this:

    2016-01-29 18:03:51 ERROR couldn't connect to tcp socket on IP:3515; An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full.

Did we reach some sort of limitation, or is there perhaps just too much incoming log volume for it to handle? Using version nxlog-ce-2.9.1504.

optimusb created
Replies: 1
How to install on AIX
Hello, is there any guideline for installing the Community Edition on AIX?

Chris.Leung created
Replies: 5
NXLog GELF UDP is cutting my Doubletake logs
Hello everybody, I'm trying to collect Doubletake logs from a Centos 6.x server. I have that kind of logs ; 01/31/2016  21:15:26.000000 25786 140456885212928     48  2          0  Ops Skipped:        308390 01/31/2016  21:15:26.000000 25786 140456885212928     49  2          0  Total Mirror Ops:   321057 01/31/2016  21:15:26.000000 25786 140456885212928     50  2          0  Elapsed Time:          2318 seconds 01/31/2016  21:15:26.000000 25786 140456885212928     51  2          0  Paused Time:          0 seconds 01/31/2016  21:15:26.000000 25786 140456885212928     52  2          0  Total number of pauses:          0 01/31/2016  21:44:31.000000 25786 140456771974912     53  2         77  Connection lost with IP address ip://127.0.0.1 : 1501 01/31/2016  21:44:32.000000 25786 140457057756928     54  2     700000  Server Monitor was successfully stopped 01/31/2016  21:44:32.000000 25786 140457057756928     55  2      51503  Source module Stopped 01/31/2016  21:44:32.000000 25786 140457057756928     56  2      52503  Stopping all targets 01/31/2016  21:44:32.000000 25786 140457057756928     57  2      52503  Target module Stopped 00/00/0000  00:00:00.0000 Start of logfile  00/00/0000  00:00:00.0000 Application starting 01/31/2016  21:44:44.000000  9976 139690482448128      1  2          0  Buffer allocator limit is: 67108864 bytes 01/31/2016  21:44:44.000000  9976 139690482448128      2  2          0  QMemoryBufferMax size is: 268435456 bytes 01/31/2016  21:44:45.000000  9976 139690482448128      3  2          0  ActivationCode is valid: "6uvb-kyqa-tgar-wpeu-0t52-ubuv". 01/31/2016  21:44:45.000000  9976 139690482448128      4  2          0  Source Failover is allowed 01/31/2016  21:44:45.000000  9976 139690482448128      5  2          0  Target Failover is allowed 01/31/2016  21:44:45.000000  9976 139690482448128      6  2          0  Source Full Server Failover is allowed 01/31/2016  21:44:45.000000  9976 139690482448128      7  2          0  Target Full Server Failover is allowed 01/31/2016  21:44:45.000000  9976 139690482448128      8  2          0  Source Replication is allowed 01/31/2016  21:44:45.000000  9976 139690482448128      9  2          0  Target Replication is allowed 01/31/2016  21:44:45.000000  9976 139690482448128     10  2          0  Heartbeat Transmission started on port 1500 (interval=3seconds) 01/31/2016  21:44:48.000000  9976 139690482448128     11  2         69  Kernel Started on bl-db01.marsathletic.com  ip://10.10.100.75 : 1500  Version: 7.1.1.1255.0 01/31/2016  21:44:48.000000  9976 139690482448128     12  2     504002  Double-Take has successfully found /dev/dtrep0 01/31/2016  21:44:48.000000  9976 139690482448128     13  2      52501  Target module loaded successfully 01/31/2016  21:44:48.000000  9976 139690482448128     14  2          0  Disabling all replication from the driver 01/31/2016  21:44:48.000000  9976 139690482448128     15  2          0  Returning default addr: 10.10.100.75 : 1500 01/31/2016  21:44:48.000000  9976 139690482448128     16  2         71  Originator Attempting ip://10.10.150.10 : 1500 01/31/2016  21:44:48.000000  9976 139689629570816     17  2         73  Connected to  IP address ip://10.10.150.10 : 1500 01/31/2016  21:44:48.000000  9976 139690413819648     18  2         75  Connection resumed with IP address ip://10.10.150.10 : 1500 01/31/2016  21:44:48.000000  9976 139690482448128     19  2         80  Auto-reconnecting Lvra_0d341950-6cba-412c-834d-8afa3426cc83 to ip://10.10.150.10 : 1500::/var/lib/mysql/ -> 
/opt/dbtk/mnt/.job-0d341950-6cba-412c-834d-8afa3426cc83/var/lib/mysql/;/boot/ -> /opt/dbtk/mnt/.job-0d341950-6cba-412c-834d-8afa3426cc83/boot/;/ -> /opt/dbtk/mnt/.job-0d341950-6cba-412c-834d-8afa3426cc83/; 01/31/2016  21:44:48.000000  9976 139690482448128     20  2        800  Transmission manually resumed by client 01/31/2016  21:44:48.000000  9976 139690482448128     21  2          0  Returning default addr: 10.10.100.75 : 1500 01/31/2016  21:44:48.000000  9976 139690482448128     22  2         87  Starting replication of set Lvra_0d341950-6cba-412c-834d-8afa3426cc83 for connection 1 01/31/2016  21:44:48.000000  9976 139690482448128     23  2          0  Activating replication on / 01/31/2016  21:44:48.000000  9976 139690482448128     24  2          0  Activating replication on /boot 01/31/2016  21:44:48.000000  9976 139690482448128     25  2          0  Activating replication on /var/lib/mysql 01/31/2016  21:44:48.000000  9976 139690482448128     26  2          0  Disabling replication on /var/log/DT 01/31/2016  21:44:48.000000  9976 139690482448128     27  2          0  Disabling replication on /var/cache/DT 01/31/2016  21:44:48.000000  9976 139690482448128     28  2     500000  Starting a connection for a Linux Virtual Recovery job. 01/31/2016  21:44:48.000000  9976 139690482448128     29  2     500000  Lvra_0d341950-6cba-412c-834d-8afa3426cc83 is connected to ip://10.10.150.10 : 1500::/var/lib/mysql/ -> /opt/dbtk/mnt/.job-0d341950-6cba-412c-834d-8afa3426cc83/var/lib/mysql/;/boot/ -> /opt/dbtk/mnt/.job-0d341950-6cba-412c-834d-8afa3426cc83/boot/;/ -> /opt/dbtk/mnt/.job-0d341950-6cba-412c-834d-8afa3426cc83/; using compression level 1 (1) 01/31/2016  21:44:48.000000  9976 139690482448128     30  2     500000  Auto-Reconnect success. ConID = 1 01/31/2016  21:44:48.000000  9976 139690482448128     31  2      51501  Source module loaded successfully 01/31/2016  21:44:48.000000  9976 139690482448128     32  2          0  Detected RedHat configuration for failover persistence. 01/31/2016  21:44:53.000000  9976 139690311149312     33  2         72  Connection request from IP address 10.10.150.10 01/31/2016  21:44:53.000000  9976 139690311149312     34  2      99001  Telling peer IP: ip://10.10.150.10 : 1500 that conditions are OK to proceed. 01/31/2016  21:45:01.000000  9976 139690269189888     35  2        800  Local connection accepted, spinning up new local listen thread 01/31/2016  21:45:01.000000  9976 139690269189888     36  2         72  Responding to request from IP address 127.0.0.1 : 1502 using 127.0.0.1 : 1501 01/31/2016  21:45:01.000000  9976 139690269189888     37  2     600002  User :lms: has FULL access (2) 01/31/2016  21:45:04.000000  9976 139689361135360     38  2          0  Repset contains 41751738977 byte(s) to mirror 01/31/2016  21:45:04.000000  9976 139689361135360     39  2          0  Repset requires 324078 ops to mirror 01/31/2016  21:45:04.000000  9976 139689361135360     40  2         94  Delete Orphans Started <1> 01/31/2016  21:45:04.000000  9976 139689361135360     41  2         89  Mirror Started, Differences, Block Checksum <1> Nxlog is able to send the logs to Graylog via GELF UDP but I cant display the complete line on Graylog. All I can see is a cutted piece of line like ; 01/31/2016  21:15:26.000000 25786 140456885212928 I cant display more than this. I will be very happy if somebody can help to solve this. Thank you. 
Mehmet

Here is my nxlog.conf:

    ########################################
    # Global directives                    #
    ########################################
    User root
    Group nxlog
    LogFile /var/log/nxlog/nxlog.log
    LogLevel INFO

    ########################################
    # Modules                              #
    ########################################
    <Extension gelf>
        Module      xm_gelf
    </Extension>

    <Input Doubletake>
        Module      im_file
        File        "/tmp/2.log"
        SavePos     TRUE
    </Input>

    <Output graylog_out>
        Module      om_udp
        Host        192.168.2.94
        Port        12201
        OutputType  GELF
    </Output>

    ########################################
    # Routes                               #
    ########################################
    <Route 1>
        Path        Doubletake => graylog_out
    </Route>
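A minimal sketch of something worth trying, based on the assumption that the GELF output builds its short_message from the ShortMessage field and truncates the line when it has to generate one itself, which would explain why Graylog's message column shows only the beginning of each line:

    <Input Doubletake>
        Module  im_file
        File    "/tmp/2.log"
        SavePos TRUE
        # Assumption: setting ShortMessage/FullMessage explicitly keeps the
        # whole line in the GELF message shown by Graylog instead of a
        # truncated fragment.
        Exec    $ShortMessage = $raw_event; $FullMessage = $raw_event;
    </Input>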

mbuyukkarakas created
Replies: 1