Hi,
I want to buffer data (on disk and in memory) before sending it to a TCP syslog destination. For that I created two processors (diskBuffer and memoryBuffer) that I use in a route: IN => diskBuffer => memoryBuffer => out.
When I try to create another route with one or more different processors, but which also uses the buffers (IN2 => P1 => P2 => diskBuffer => memoryBuffer => out), I get an error message in the log:
2016-07-23 13:28:51 ERROR cannot add processor module 'diskBuffer' to route 'XXX' because it is already added to route 'YYYY'
2016-07-23 13:28:51 ERROR cannot add processor module 'memoryBuffer' to route 'XXX' because it is already added to route 'YYYY'
This limitation is not really explained in the community documentation, and the example given on page 18 even suggests the opposite:
Example 4.14 Different routes
<Input in1>
Module im_null
</Input>
<Input in2>
Module im_null
</Input>
<Processor p1>
Module pm_null
</Processor>
<Processor p2>
Module pm_null
</Processor>
<Output out1>
Module om_null
</Output>
<Output out2>
Module om_null
</Output>
<Route 1>
# no processor modules
Path in1 => out1
</Route>
<Route 2>
# one processor module
Path in1 => p1 => out1
</Route>
<Route 3>
# multiple modules
Path in1, in2 => p1 => p2 => out1, out2
</Route>
We have the same error: 2016-07-23 13:36:09 ERROR cannot add processor module 'p1' to route '3' because it is already added to route '2'.
Why is a processor limited to one route? Is it a bug or a mistake in the documentation?
I use the latest version of nxlog-ce : V2.9.1716.
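For anyone hitting the same error: since a processor instance apparently cannot be attached to more than one route, a workaround is to declare a separate pair of buffer instances per route. A minimal sketch, assuming the buffers are pm_buffer instances (instance names and sizes below are placeholders):

```
<Processor diskBuffer2>
    Module   pm_buffer
    Type     Disk
    MaxSize  102400
</Processor>

<Processor memoryBuffer2>
    Module   pm_buffer
    Type     Mem
    MaxSize  102400
</Processor>

<Route 2>
    # Each route gets its own buffer instances
    Path IN2 => P1 => P2 => diskBuffer2 => memoryBuffer2 => out
</Route>
```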
Best regards
Popote created
Hi,
I'm trying to fit my output into the GELF format, and I'd like to follow the specification's rule that user fields carry an underscore prefix. From what I read in the NXLog docs, fields with an underscore prefix would not be preserved by xm_json or xm_gelf.
Is that true?
Is there any way around this?
dls314 created
My company is looking to set up NXLog. We are having issues reading multiline exception logs from applications such as Tomcat, Java, Apache, etc. I am able to read in the files, but unfortunately the output in our Graylog application shows each line as a separate event. I tried to implement the xm_multiline module, but I seem to be having issues getting it to work.
I installed NXLog, changed my configuration to the one below, restarted the service, and let it run all night, but the output is still the same as shown below.
Sample Input Log:
07/07/2016 13:35:11.654 [tomcat-http--43] [ERROR] [4114723 ms] Warning - unprocessed rows in esolutions.care.assess.WeAssessment
esolutions.EsolutionsException: There were 83 unprocessed rows out of 84
at esolutions.base.WeObject.sleep(WeObject.java:2767)
at esolutions.base.WeObject.clear(WeObject.java:3250)
at esolutions.care.assess.WeAssessment.clear(WeAssessment.java:7699)
at esolutions.base.WeObject.close(WeObject.java:2815)
at esolutions.util.WeHTMLTable.getTableHTML(WeHTMLTable.java:541)
at esolutions.util.WeHTMLTable.toHTML(WeHTMLTable.java:508)
at org.apache.jsp.admin.client.cp_005fassessment_jsp._jspService(cp_005fassessment_jsp.java:4412)
at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70)
07/07/2016 13:36:21.828 [tomcat-http--26] [ERROR] [4184897 ms] Warning - unprocessed rows in esolutions.care.assess.WeAssessment
esolutions.EsolutionsException: There were 82 unprocessed rows out of 83
at esolutions.base.WeObject.sleep(WeObject.java:2767)
at esolutions.base.WeObject.clear(WeObject.java:3250)
at esolutions.care.assess.WeAssessment.clear(WeAssessment.java:7699)
at esolutions.base.WeObject.close(WeObject.java:2815)
at esolutions.util.WeHTMLTable.getTableHTML(WeHTMLTable.java:541)
at esolutions.util.WeHTMLTable.toHTML(WeHTMLTable.java:508)
at org.apache.jsp.admin.client.cp_005fassessment_jsp._jspService(cp_005fassessment_jsp.java:4412)
at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:432)
Sample output from Graylog in CSV format. In the web UI, each event appears as it shows in the "message" column.
timestamp, source, EventReceivedTime, level, message, SourceModuleName, SourceModuleType
2016-07-19T21:27:08.000Z, GDPCCA02, 07/19/16 17:27, 6, "2016/07/19 17:27:08.032 | srvmain | INFO | 07/19/2016 17:27:08", pcc-wrapper-log, im_file
2016-07-19T21:27:08.000Z, GDPCCA02, 07/19/16 17:27, 6, "2016/07/19 17:27:08.032 | srvmain | INFO | java.lang.NumberFor", pcc-wrapper-log, im_file
2016-07-19T21:27:08.000Z, GDPCCA02, 07/19/16 17:27, 6, "2016/07/19 17:27:08.032 | srvmain | INFO | at com.pointclickc", pcc-wrapper-log, im_file
2016-07-19T21:27:08.000Z, GDPCCA02, 07/19/16 17:27, 6, "2016/07/19 17:27:08.032 | srvmain | INFO | at org.apache.cata", pcc-wrapper-log, im_file
Configuration File. I tried multiple regular expressions with no success.
## This is a sample configuration file. See the nxlog reference manual about the
## configuration options. It should be installed locally and is also available
## online at http://nxlog.org/docs/
## Please set the ROOT to the folder your nxlog was installed into,
## otherwise it will not start.
#define ROOT C:\Program Files\nxlog
define ROOT C:\Program Files (x86)\nxlog
Moduledir %ROOT%\modules
CacheDir %ROOT%\data
Pidfile %ROOT%\data\nxlog.pid
SpoolDir %ROOT%\data
LogFile %ROOT%\data\nxlog.log
<Extension gelf>
Module xm_gelf
</Extension>
<Extension fileop>
Module xm_fileop
</Extension>
<Extension multiline>
Module xm_multiline
HeaderLine /^\d{0,2}\/\d{0,2}\/\d{0,4}/
# HeaderLine '^\d{0,2}\/\d{0,2}\/\d{0,4}\ \d{0,3}\:\d{0,3}\:\d{0,3}\.\d{0,4}\ \['
</Extension>
<Input pcc-wrapper-log>
Module im_file
File "C:\\pivotal-tc-server-standard-3.1.0.RELEASE\\pccweb\\logs\\wrapper.log"
SavePos TRUE
InputType multiline
</Input>
<Input pcc-mdstrace-log>
Module im_file
File "C:\\pivotal-tc-server-standard-3.1.0.RELEASE\\pccweb\\logs\\mdstrace.log"
SavePos TRUE
InputType multiline
</Input>
<Input pcc-exceptionHidingUtil-log>
Module im_file
File "C:\\pivotal-tc-server-standard-3.1.0.RELEASE\\pccweb\\logs\\exceptionHidingUtil.log"
SavePos TRUE
InputType multiline
</Input>
<Input pcc-esolutions-log>
Module im_file
File "C:\\pivotal-tc-server-standard-3.1.0.RELEASE\\pccweb\\logs\\esolutions.log"
SavePos TRUE
InputType multiline
</Input>
#<Input pcc-localHostAccess-log>
# Module im_file
# File "C:\\pivotal-tc-server-standard-3.1.0.RELEASE\\pccweb\\logs\\localhost_access_log.*"
# SavePos TRUE
# InputType multiline
#</Input>
<Output graylog>
Module om_udp
Host graylog.genesishcc.com
Port 12201
OutputType GELF
</Output>
<Route PCC>
Path pcc-wrapper-log => pcc-mdstrace-log => pcc-exceptionHidingUtil-log => pcc-esolutions-log => graylog
## Path pcc-wrapper-log => pcc-mdstrace-log => pcc-exceptionHidingUtil-log => pcc-esolutions-log => pcc-localHostAccess-log => graylog
</Route>
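A guess at the problem (untested): `\d{0,2}` also matches an empty string, so the HeaderLine pattern can match stack-trace continuation lines as well as headers. Anchoring the pattern to the full timestamp with exact digit counts might help:

```
<Extension multiline>
    Module     xm_multiline
    # Match exactly "MM/DD/YYYY HH:MM:SS.mmm" at line start; the original
    # \d{0,2} quantifiers could also match zero digits, i.e. any line.
    HeaderLine /^\d{2}\/\d{2}\/\d{4} \d{2}:\d{2}:\d{2}\.\d{3}/
</Extension>
```

Note also that the wrapper.log lines shown in the Graylog output start with a different timestamp format (`2016/07/19 17:27:08.032 | srvmain | ...`), so that file would need its own header pattern.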
gmelasecca created
Hello
Help me, please.
I want to collect events from a database using a time-based column (not an id column).
What can I do?
Thanks
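In case im_dbi is an option: it tracks its read position through a numeric `id` column, so one possible workaround (a sketch with hypothetical table and column names, and with the caveat that several rows within the same second may be missed or re-read) is to expose the epoch value of the time column as `id`:

```
<Input db>
    Module  im_dbi
    Driver  mysql
    # 'logs' and 'created_at' are hypothetical; im_dbi resumes from the
    # highest 'id' seen, so the timestamp's epoch value stands in for it.
    SQL     SELECT UNIX_TIMESTAMP(created_at) AS id, message FROM logs
</Input>
```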
toreno93 created
Team,
As part of an assessment, I need to provide a list of the open source components NXLog uses and their licensing information, so that our software legal team can do its review. Is there any link or documentation available that explains all the open source components currently used in NXLog?
Thanks,
Imran
enghouse created
Hi all
I would be glad to know whether it is possible to use the NXLog community edition with Elasticsearch.
In the documentation I have read that the om_elasticsearch module is needed, and it exists in the Enterprise Edition only.
Thanks in advance.
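For what it's worth, one workaround I have seen suggested is to POST each event as JSON with om_http, which the community edition does include. This is only a sketch; the host, port, and index path are placeholders, and I have not verified it against a live Elasticsearch:

```
<Extension json>
    Module  xm_json
</Extension>

<Output es>
    Module       om_http
    # Placeholder endpoint: one JSON document is POSTed per event
    URL          http://elasticsearch.example.com:9200/logs/events
    ContentType  application/json
    Exec         to_json();
</Output>
```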
ehsanTC created
How can I send some messages from a single source to one output and others to a different output, based on the syslog content? I'm using the community edition; I have RTFMed but haven't found anything describing how to do this. I've tried using the Route block to send to multiple outputs and then using drop() inside Exec directives in the outputs, but it doesn't seem to work: I end up with the same stuff in both outputs.
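To illustrate what I expected to work: two outputs fed by the same route, each dropping the events meant for the other (hosts and the match pattern are placeholders):

```
<Output match_a>
    Module  om_udp
    Host    10.0.0.1
    Port    514
    # keep only events containing FOO (placeholder pattern)
    Exec    if $raw_event !~ /FOO/ drop();
</Output>

<Output match_b>
    Module  om_udp
    Host    10.0.0.2
    Port    514
    # keep only events NOT containing FOO
    Exec    if $raw_event =~ /FOO/ drop();
</Output>

<Route r>
    Path in => match_a, match_b
</Route>
```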
Preston.Taylor created
Hi, I'm trying to add some info to my logs via xm_perl before sending them to Elasticsearch (using JSON format). Ideally, I would like to add some fields from my Perl code in a nested way. Is it possible to use something like set_field_XXX($event, "myAddedfield.myAddedSubfield", "value")?
In the end, I want to create nested fields inside my JSON object.
Thanks.
zz created
Hello, I am testing nxlog to see if it works for sending security logs to our SIEM. I only want to send the Security events from our servers; our config file is shown below:
define ROOT C:\Program Files (x86)\nxlog
Moduledir %ROOT%\modules
CacheDir %ROOT%\data
Pidfile %ROOT%\data\nxlog.pid
SpoolDir %ROOT%\data
LogFile %ROOT%\data\nxlog.log
<Extension syslog>
Module xm_syslog
</Extension>
<Input internal>
Module im_internal
</Input>
<Input eventlog>
Module im_msvistalog
# Uncomment the following to collect specific event logs only
Query <QueryList>\
<Query Id="0">\
# <Select Path="Application">*</Select>\
# <Select Path="System">*</Select>\
<Select Path="Security">*</Select>\
</Query>\
</QueryList>
</Input>
<Output out>
Module om_udp
Host 10.250.254.19
Port 514
Exec to_syslog_snare();
</Output>
<Route 1>
Path eventlog, internal => out
</Route>
I get some security logs, but many are missing, such as logon/logoff events (4624, 4634).
1. Why are these events missing?
2. Eventually, I would like to send only certain Event IDs to our SIEM, and would appreciate an example of what the query would look like with the specific Event IDs needed.
I want to send just the PCI-relevant Event IDs to our SIEM for retention.
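For question 2, the standard Windows XPath syntax for selecting specific Event IDs would look something like this (the ID list here is just an example; substitute the PCI-relevant IDs). Regarding question 1, note also that 4624/4634 only appear if the local audit policy actually audits logon/logoff events.

```
<Input eventlog>
    Module im_msvistalog
    # Select only the listed Security Event IDs (example list)
    Query <QueryList>\
        <Query Id="0">\
            <Select Path="Security">*[System[(EventID=4624 or EventID=4634 or EventID=4672)]]</Select>\
        </Query>\
    </QueryList>
</Input>
```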
cwalter created
I want to be able to take into account the version of the operating system (which is unknown at time of installation) in the configuration.
For example, I might want to output to a different server based on whether the installation is running on a server or on a workstation.
Thanks
loomsystems created
I am attempting to get NXLog installed on Ubuntu. It appears that libperl5.18 is required but no longer available. Any suggestions?
sudo dpkg -i nxlog-ce_2.9.1504_ubuntu_1404_amd64.deb
[sudo] password for
(Reading database ... 59404 files and directories currently installed.)
Preparing to unpack nxlog-ce_2.9.1504_ubuntu_1404_amd64.deb ...
Unpacking nxlog-ce (2.9.1504) over (2.9.1504) ...
dpkg: dependency problems prevent configuration of nxlog-ce:
nxlog-ce depends on libperl5.18 (>= 5.18.2); however:
Package libperl5.18 is not installed.
sudo apt-get install libperl5.18
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package libperl5.18 is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
Linux ubuntunxlog 4.4.0-21-generic #37-Ubuntu SMP Mon Apr 18 18:33:37 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
bobbyclarke663 created
Hello,
We're testing nxlog, and at least a few times per day it throws this error. When this happens, the error repeats in the logs for a few hours until either the log fills up the partition or the kernel kills nxlog when memory runs out (Out of memory: Kill process 2422 (nxlog) score 762 or sacrifice child).
I'm not a C programmer, but looking at the om_dbi driver source it seems to me as if this assertion might be checking whether the length of the SQL query exceeds a predefined constant. Could this be the case? Is there a limit on the size of the SQL query nxlog can send? I suspect I'm wrong, because I've hardcoded SQL queries I believe are longer than the ones triggering the assertion, and they went through just fine.
I came across a thread on the same error in which the poster stated they resolved the issue by recompiling nxlog from source. I did that, but unfortunately the latest available source is from an older version, and I couldn't get our nxlog.conf to work with it.
Any ideas what this assertion is checking?
BTW, NXLog is running on CentOS 6.8 and there's plenty of free memory (8 GB) and CPU power available.
Thank you,
Babak B.
onyxbb created
Hello
I need gelf_tcp support on my Gentoo servers, but the latest source available on your website is nxlog-ce-2.8.1248.tar.gz, and that version does not support gelf_tcp.
Where can I download the latest NXLog source (at least NXLog-CE 2.9.1347)?
Thanks so much
ckogel created
Hi everyone,
I've configured a Windows EventLog collection server and set up a handful of custom event log channels per the following article:
https://blogs.technet.microsoft.com/russellt/2016/05/18/creating-custom-windows-event-forwarding-logs/
My custom event log channels are receiving the correct logs, and everything is working as expected as far as event collection goes.
http://i133.photobucket.com/albums/q54/1point3liter/misc/WEC_zpsscp5bw2s.png
I'm now trying to configure nxlog to pick up the event logs from my custom channels and forward them to a syslog server, but it doesn’t seem to be working.
nxlog does forward events if I query the built-in "Security" channel, but not from my custom channels (or even "Forwarded Events").
Any ideas?
Bryan
Here is a copy of my NXlog configuration file:
define ROOT C:\Program Files (x86)\nxlog
define ROOT_STRING C:\Program Files (x86)\nxlog
define CERTDIR %ROOT%\cert
Moduledir %ROOT%\modules
CacheDir %ROOT%\data
Pidfile %ROOT%\data\nxlog.pid
SpoolDir %ROOT%\data
LogFile %ROOT%\data\nxlog.log
<Extension _syslog>
Module xm_syslog
</Extension>
<Extension json>
Module xm_json
</Extension>
#<Input eventlog>
# Module im_msvistalog
# SavePos TRUE
# #Query <QueryList><Query Id="0"><Select Path="_ApplicationServers">*</Select></Query></QueryList>
# #Exec $EventReceivedTime = integer($EventReceivedTime) / 1000000; to_json();
# Exec $Message = to_json();
#</Input>
<Input eventlog>
Module im_msvistalog
Query <QueryList>\
<Query Id="0">\
<Select Path="WEC/AllServers">*</Select>\
</Query>\
</QueryList>
</Input>
<Output out>
Module om_tcp
Host x.x.x.x
Port 514
</Output>
<Route 1>
Path internal, eventlog => out
</Route>
BryanMahin created
Hello,
I'm trying to convert a date in NXLog from 06/15/16 to 2016-06-15, because NXLog is not able to parse the original format (DEBUG couldn't parse date: 06/14/16).
I created a regular expression ($Date =~ s/(\d+)\/(\d+)\/(\d+)/20$3-$2-$1/;) in my module to convert the date. See the module below:
Exec if $raw_event =~ /^[0-9][0-9],/ \{ \
ParseDHCP->parse_csv(); \
if $raw_event =~ /^00/ $IDdef = "The log was started."; \
if $raw_event =~ /^01/ $IDdef = "The log was stopped."; \
if $raw_event =~ /^02/ $IDdef = "The log was temporarily paused due to low disk space."; \
if $raw_event =~ /^10/ $IDdef = "A new IP address was leased to a client."; \
if $raw_event =~ /^11/ $IDdef = "A lease was renewed by a client."; \
if $raw_event =~ /^12/ $IDdef = "A lease was released by a client."; \
if $raw_event =~ /^13/ $IDdef = "An IP address was found to be in use on the network."; \
if $raw_event =~ /^14/ $IDdef = "A lease request could not be satisfied because the scope's address pool was exhausted."; \
if $raw_event =~ /^15/ $IDdef = "A lease was denied."; \
if $raw_event =~ /^16/ $IDdef = "A lease was deleted."; \
if $raw_event =~ /^17/ $IDdef = "A lease was expired and DNS records for an expired leases have not been deleted."; \
if $raw_event =~ /^18/ $IDdef = "A lease was expired and DNS records were deleted."; \
if $raw_event =~ /^20/ $IDdef = "A BOOTP address was leased to a client."; \
if $raw_event =~ /^21/ $IDdef = "A dynamic BOOTP address was leased to a client."; \
if $raw_event =~ /^22/ $IDdef = "A BOOTP request could not be satisfied because the scope's address pool for BOOTP was exhausted."; \
if $raw_event =~ /^23/ $IDdef = "A BOOTP IP address was deleted after checking to see it was not in use."; \
if $raw_event =~ /^24/ $IDdef = "IP address cleanup operation has began."; \
if $raw_event =~ /^25/ $IDdef = "IP address cleanup statistics."; \
if $raw_event =~ /^30/ $IDdef = "DNS update request to the named DNS server."; \
if $raw_event =~ /^31/ $IDdef = "DNS update failed."; \
if $raw_event =~ /^32/ $IDdef = "DNS update successful."; \
if $raw_event =~ /^33/ $IDdef = "Packet dropped due to NAP policy."; \
if $raw_event =~ /^34/ $IDdef = "DNS update request failed.as the DNS update request queue limit exceeded."; \
if $raw_event =~ /^35/ $IDdef = "DNS update request failed."; \
if $raw_event =~ /^36/ $IDdef = "Packet dropped because the server is in failover standby role or the hash of the client ID does not match."; \
if $raw_event =~ /^[5-9][0-9]/ $IDdef = "Codes above 50 are used for Rogue Server Detection information."; \
if $raw_event =~ /^.+,.+,.+,.+,.+,.+,.+,.+,0,/ $QResultDef = "NoQuarantine"; \
if $raw_event =~ /^.+,.+,.+,.+,.+,.+,.+,.+,1,/ $QResultDef = "Quarantine"; \
if $raw_event =~ /^.+,.+,.+,.+,.+,.+,.+,.+,2,/ $QResultDef = "Drop Packet"; \
if $raw_event =~ /^.+,.+,.+,.+,.+,.+,.+,.+,3,/ $QResultDef = "Probation"; \
if $raw_event =~ /^.+,.+,.+,.+,.+,.+,.+,.+,6,/ $QResultDef = "No Quarantine Information ProbationTime:Year-Month-Day Hour:Minute:Second:MilliSecond."; \
$host = hostname_fqdn(); \
$Date =~ s/(\d+)\/(\d+)\/(\d+)/20$3-$2-$1/; \
$EventTime = parsedate($Date + " " + $Time); \
$SourceName = "DHCPEvents"; \
$Message = to_json(); \
} \
else \
drop();
However, it returns: 2016-06-15 17:37:29 INFO EventTime: 20$3-$2-$1
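It looks as if captured groups are not interpolated into the s/// replacement string. A possible workaround (untested sketch) is to capture first and rebuild the value; note that for an MM/DD/YY input such as 06/15/16, the desired 2016-06-15 is year-month-day, i.e. "20" + $3 + "-" + $1 + "-" + $2:

```
# Capture MM/DD/YY and rebuild as YYYY-MM-DD (assumes years >= 2000)
Exec if $Date =~ /^(\d+)\/(\d+)\/(\d+)$/ $Date = "20" + $3 + "-" + $1 + "-" + $2;
```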
Jan Henk.Veldman created
Hi,
I use multiple input files.
I wish to log the filename of the current input file periodically (every minute) in nxlog.log, to monitor the whole processing chain, so I added an Exec log_info("Current InputFile : " + $InputFileName1); line in a Schedule block in the Output section.
But it seems that this doesn't work in a Schedule block; I get an error in nxlog.log: "...field not available in this context...".
My config :
<Input in>
Module im_file
SavePos TRUE
ReadFromLast FALSE
ActiveFiles 20
CloseWhenIdle TRUE
File "/var/log/MUP10/sac/APMUZS4WBS04*.log"
Exec $InputFileName1 = file_name();
</Input>
<Output logstash>
Module om_tcp
Port 6002
Host 10.x.y.z
Exec create_stat("stat", "RATE", 60); add_stat("stat", 1);
<Schedule>
Every 60 sec
Exec log_info("Events send to logstash for the last minute: " + get_stat("stat"));
Exec log_info("Current InputFile : " + $InputFileName1);
</Schedule>
</Output>
If I move my Exec log_info("Current InputFile : " + $InputFileName1); line into a Schedule block in my Input section, I get the same error.
The only way I have found is to put the line in the Input section without using a Schedule block, like this:
<Input in>
Module im_file
SavePos TRUE
ReadFromLast FALSE
ActiveFiles 20
CloseWhenIdle TRUE
File "/var/log/MUP10/sac/APMUZS4WBS04*.log"
Exec $InputFileName1 = file_name();
Exec log_info("Current InputFile : " + $InputFileName1);
</Input>
But that writes a log line for every event, which is far too many.
Any idea ?
Thanks in advance
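A possible workaround (untested sketch): fields such as $InputFileName1 exist only while an event is being processed, which is presumably why the Schedule block cannot see them, but module variables persist between events. If set_var()/get_var() are available, the file name could be stashed in the Input module and read back from a Schedule block inside the same module:

```
<Input in>
    Module         im_file
    SavePos        TRUE
    ReadFromLast   FALSE
    ActiveFiles    20
    CloseWhenIdle  TRUE
    File           "/var/log/MUP10/sac/APMUZS4WBS04*.log"
    # Remember the current file name in a module variable
    Exec           $InputFileName1 = file_name(); set_var('current_file', $InputFileName1);
    <Schedule>
        Every 60 sec
        # Module variables do not need an event context
        Exec log_info("Current InputFile : " + get_var('current_file'));
    </Schedule>
</Input>
```

One caveat: before the first event arrives, get_var() would return an undefined value, so a defined() guard around the log_info() call may be needed.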
RemyVeo created
Hi All
Could you please confirm that file_remove works with wildcards AND a file-creation-date condition?
I'm trying the file_remove commands below.
The first two work properly, so there is no problem with file_remove itself, whether "simple" or with wildcards.
But the last one, using "now() - 18000" to remove files older than 5 hours, never works, and there is no error in nxlog.log... it is as if the line didn't exist.
<Schedule>
Every 1 min
Exec file_remove('/var/log/MUP10/sac/APMUZS4WBS03-2016061300.log');
Exec file_remove('/var/log/MUP10/sac/APMUZS4WBS04-201606130*', now());
Exec file_remove('/var/log/MUP10/sac/APMUZS4WBS*.log', (now() - 18000));
</Schedule>
Thanks
From the NXLog documentation:
file_remove(string file, datetime older);
Description: Remove the file 'file' if its creation time is older than the value specified in 'older'. It is possible to specify a wildcard in filenames (but not in the path). If you use backslash as the directory separator with wildcards, make sure to escape this (e.g. 'C:\\test\\*.log'). This procedure will reopen the LogFile if it is removed. An error is logged if the operation fails.
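One way to narrow this down (untested sketch) might be to log what the cutoff expression actually evaluates to, to verify that subtracting an integer from now() really moves the datetime back by 18000 seconds rather than by some other unit:

```
<Schedule>
    Every 1 min
    # Log the computed cutoff next to now() to check the unit of the offset
    Exec log_info("now: " + now() + " cutoff: " + (now() - 18000));
    Exec file_remove('/var/log/MUP10/sac/APMUZS4WBS*.log', (now() - 18000));
</Schedule>
```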
RemyVeo created
Hi,
I am using the om_http module to send Windows event logs to AWS API Gateway for further processing. I have HTTPSAllowUntrusted set to TRUE, but I keep getting "<cloudfront_hostname>:443 connection failure, reconnecting in ## seconds". I can POST data to the URI using curl just fine. I believe it is related to SNI support, which was also limiting other tools like wrk, ab, and siege: https://github.com/wg/wrk/issues/149 .
Is there any workaround or fix to support SNI?
Thanks,
Shri
shribigb created
My Setup:
- Graylog2 server to collect logs
- Ubuntu machine running Zimbra sending logs from various Zimbra logfiles in GELF format
My problem:
- Messages received by Graylog are truncated.
Here is an actual message as it appeared on my Ubuntu server's "mailbox.log" file (please note that I have X'd out the email address):
2016-06-12 08:51:17,832 INFO [ImapSSLServer-95] [name=XXXXXXX@XXX.org;ip=10.10.48.74;ua=iPod touch Mail/13C75;] imap - ID elapsed=0
Here is the log as received by Graylog:
2016-06-12 08:51:17,832 INFO [ImapSSLServer-95] [name=XXXXXXX
All messages seem to be truncated after exactly the same number of characters. I cannot figure this out and would love some help. Below I have pasted my nxlog.conf file:
## This is a sample configuration file. See the nxlog reference manual about the
## configuration options. It should be installed locally under
## /usr/share/doc/nxlog-ce/ and is also available online at
## http://nxlog.org/docs
########################################
# Global directives #
########################################
User nxlog
Group nxlog
LogFile /var/log/nxlog/nxlog.log
LogLevel INFO
########################################
# Modules #
########################################
<Extension gelf>
Module xm_gelf
</Extension>
<Extension syslog2>
Module xm_syslog
</Extension>
<Input mailbox.log>
Module im_file
File "/opt/zimbra/log/mailbox.log"
InputType LineBased
SavePos TRUE
</Input>
<Input access_log>
Module im_file
File "/opt/zimbra/log/access_log*"
SavePos True
</Input>
<Input audit.log>
Module im_file
File "/opt/zimbra/log/audit.log"
SavePos TRUE
</Input>
<Input clamd.log>
Module im_file
File "/opt/zimbra/log/clamd.log"
SavePos TRUE
</Input>
<Input freshclam.log>
Module im_file
File "/opt/zimbra/log/freshclam.log"
SavePos TRUE
</Input>
<Input mysql_error.log>
Module im_file
File "/opt/zimbra/log/mysql_error.log"
SavePos TRUE
</Input>
<Input mail.log>
Module im_file
File "/var/log/mail.log"
SavePos TRUE
</Input>
<Input zimbra.log>
Module im_file
File "/var/log/zimbra.log"
SavePos TRUE
</Input>
<Input syslog>
Module im_file
File "/var/log/syslog"
SavePos TRUE
</Input>
<Input zimbra-stats.log>
Module im_file
File "/var/log/zimbra-stats.log"
SavePos TRUE
</Input>
<Output out>
Module om_udp
Host 10.10.90.45
Port 5407
Exec to_syslog_snare();
</Output>
<Output out2>
Module om_udp
Host 10.10.90.45
Port 5413
OutputType GELF
</Output>
########################################
# Routes #
########################################
<Route 1>
Path mailbox.log => out
</Route>
<Route 2>
Path access_log => out
</Route>
<Route 3>
Path audit.log => out
</Route>
<Route 4>
Path clamd.log => out
</Route>
<Route 5>
Path freshclam.log => out
</Route>
<Route 6>
Path mysql_error.log => out
</Route>
<Route 7>
Path mail.log => out
</Route>
<Route 8>
Path zimbra.log => out
</Route>
<Route 9>
Path syslog => out2
</Route>
<Route 10>
Path zimbra-stats.log => out
</Route>
dtilly created