As promised earlier on the mailing list, I am designing the new message rewrite capabilities in syslog-ng.
As you probably know, syslog-ng currently supports message templates for each destination, and these templates can be used to rewrite the message payload. Each template may contain literal text and macro references; macros either expand to parts of the original message or to parts matched by a regular expression.
Here's an example:
destination d_file { file("/var/log/messages" template("<$PRI> $HOST $MSG -- literal text $1\n")); };
The example above uses the format string given in template() to define the structure of the log file. Words starting with '$' are macros and expand to well-defined parts of the original message; numbered macros like $1 above are substituted with the corresponding groups of the last regular expression match, and all other characters are copied to the result intact.
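To make that concrete: suppose a match() filter earlier in the log path captured "joe" into $1, and the message arrived from host myhost with a PRI value of 38 (the message, host and PRI value here are all made up for illustration). The template above would then produce:

<38> myhost su: 'su root' failed for joe -- literal text joe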
While this functionality is indeed useful, it is somewhat limited: you cannot perform the sed-like search-and-replace operations that some users have requested.
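For illustration, a sed-like rewrite rule could look something like the sketch below; the subst() form is close to what the rewrite support eventually ended up looking like, but take the syntax as illustrative, and the name and pattern as made up:

rewrite r_anonymize {
    # replace the username in the message part with a placeholder
    subst("user=[a-zA-Z0-9]+", "user=XXXX", value("MESSAGE"));
};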
My problem with rewriting message contents was somewhat fundamental: my original intention was to keep message pipelines independent of each other. If a message were changed while traversing one pipe, the change would propagate to the pipelines processed later.
This behaviour is sometimes desirable and sometimes outright unwanted. In the case of anonymization the changes would have to be global, i.e. all log paths would receive the anonymized messages; but if you want to store an unanonymized copy of the logs for troubleshooting, you want the original message, not a stripped version.
The solution I came up with is to generalize the log pipeline concept. Currently a pipe connects one or more sources with one or more destinations with some filtering added. In the new concept everything becomes a pipe element:
- a filter is a pipe that either drops or forwards messages
- a destination is a pipe that sends the message to a specific destination and then forwards the message to the next node
source -> filter1 -> filter2 -> ... -> filterN -> destination1 -> destination2 -> ... -> destinationN
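In configuration terms such a linear chain is simply a log statement listing its elements in order; all the names below are made up:

log { source(s_net); filter(f_filter1); filter(f_filter2); destination(d_file1); destination(d_file2); };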
Each pipeline may fork into several pipes, e.g. it is possible to do the following:
                                                 destination1 -> destination2 -> ... -> destinationN
                                                /
source -> filter1 -> filter2 -> ... -> filterN -
                                                \
                                                 destination1' -> destination2' -> ... -> destinationN'
This is still nothing new, but consider this:
                                             destination1 -> destination2 -> ... -> destinationN
                                            /
source -> filter1 -> ... -> ... -> rewrite -
                                            \
                                             destination1' -> destination2' -> ... -> destinationN'
This means that the rewrite happens before forking to the two sets of destinations, so both receive the rewritten message. However, if the user had another global pipeline in her configuration, it would start with the original, unchanged message.
In syslog-ng configuration file speak, this would be something like this:
log { source(s_all); rewrite(r_anonymize);
    log { filter(f_anonymized_files); destination(d_files); flags(final); };
    log { filter(f_anonymized_rest); destination(d_rest_log); };
};
log { source(s_all); destination(d_troubleshoot_logs); };
That is: you can have log statements embedded in other log statements, log statements at the same level receive the same log message, and you retain the full power of filters and log pipe construction at each level.
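To make the scoping concrete, here is a hypothetical variation in the same style: the outer filter gates everything embedded below it, so both file destinations only ever see messages matching f_auth, while only the second branch gets an anonymized copy (all names are made up):

log { source(s_all); filter(f_auth);
    log { destination(d_auth_file); };
    log { rewrite(r_anonymize); destination(d_auth_anon_file); };
};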
Not to mention that message pipelines are a natural place for parallelization: each log statement could be processed by a separate thread, which becomes necessary if the message transformations are CPU intensive.
Whew, this was a long post; expect another one about the message parsing capability, which I have basically finished already.
Comments
program("/root/test.sh $MSG");
In test.sh I try to pick up the message as $1, but it doesn't work. Is it possible to pass macro values to an external program?
The program() destination starts its command line only once, when syslog-ng starts up, and then writes each message to the program's standard input. So you cannot pass macros as arguments.
template t_essai { template("$HOSTµ$FACILITYµ$PRIORITYµ$LEVELµ$TAGµ$YEAR-$MONTH-$DAY $HOUR:$MIN:$SECµ$PROGRAMµ$MSG\n"); };
program("/root/getIdPostfix.sh " template(t_essai));
In my getIdPostfix.sh I use the 'read' command to get the macros from STDIN. It works! But I can't receive all the logs this way: out of 14 real logs, I lose 8.
You could increase the buffer size (log_fifo_size) if it is only peaks that your script cannot handle, or you could enable flow-control, although that can reduce the performance of your applications, or you could rewrite the script in something faster than shell.
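In configuration terms the first two suggestions would look roughly like this, reusing the names from the comments above (d_postfix and s_all stand in for whatever the real configuration uses):

destination d_postfix {
    # a larger output buffer to absorb peaks
    program("/root/getIdPostfix.sh" template(t_essai) log_fifo_size(10000));
};
# flow-control makes the source slow down instead of dropping messages when the buffer fills
log { source(s_all); destination(d_postfix); flags(flow-control); };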