Thursday, October 30, 2008

syslog-ng message parsing

Earlier this month, I announced the new syslog-ng 3.0 git tree, which adds a lot of new features to syslog-ng Open Source Edition. I thought it would be useful to describe the new features in more detail, so this time I'll write about message parsing.

First of all, the message structure has been generalized in syslog-ng. Earlier it encapsulated a syslog message and had little room for anything beyond that. That is, every log message that syslog-ng handled had date, host, program and message fields, but syslog-ng didn't care about the message contents.

This has changed: a LogMessage is now a set of name-value pairs, with some "built-in" pairs that correspond to the parts of a syslog message.

The aim of this change is that new name-value pairs can be associated with messages through parsing. It is now possible to parse non-syslog logs and use the extracted columns the same way you would use syslog fields: in the names of files, SQL tables, or columns in an SQL table.

Here is an example:


parser p_parse_apache_logs { ... };

destination d_peruser {
    file("/var/log/apache/${APACHE.USER_NAME}.log");
};

log {
    source(s_local);
    parser(p_parse_apache_logs);
    destination(d_peruser);
};


This means that you can "extract" information from the message payload and use it to name destination files or SQL tables, basically anywhere a template can be used.
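For instance, an SQL destination could use a parsed value in its table() option. This is just a sketch: the connection parameters and the APACHE.USER_NAME pair are illustrative, not a ready-made configuration.


destination d_sql {
    sql(type(mysql)
        host("localhost") username("syslog") password("secret")
        database("logs")
        # the table name is built from a parsed name-value pair
        table("apache_${APACHE.USER_NAME}")
        columns("date", "message")
        values("${R_DATE}", "${MESSAGE}"));
};


Each distinct value of the parsed field then ends up in its own table, just like it would end up in its own file with the file destination above.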

There are currently two parsers implemented in syslog-ng:
  • a generic CSV (comma-separated values) parser, which can be parameterized to accept basically any kind of regularly formatted input (so tab- or space-separated data is also fine)
  • a database-based parser, which uses a log pattern database to recognize messages belonging to specific applications and extract information from them.
Since the database-based parser is quite complex, it deserves its own post, so I'll skip it for now. The CSV parser has the following options:

  • template: defines the input to be used for parsing, can use macros
  • columns: list of strings, the names to be associated with the columns parsed
  • delimiters: the set of characters that delimit columns
  • quotes or quote_pairs: the quote characters to support; quote_pairs makes it possible to use different opening and closing quotes (like enclosing fields in braces)
  • null: the null value, which, if found, is substituted with an empty string
  • flags: see the documentation
The CSV parser is capable of parsing real CSV data, e.g. it knows about quoting rules. So if you have an application that logs to files using space- or comma-separated data, you can be almost certain that you can process it with the CSV parser.
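To see why the quoting rules matter, take a hypothetical Apache access log line (the host, user and URL here are made up):


127.0.0.1 - frank [10/Oct/2008:13:55:36 +0200] "GET /index.html HTTP/1.0" 200 2326


A CSV parser configured with delimiters(" ") and quote-pairs('""[]') keeps the bracketed timestamp and the quoted request line together as single columns, instead of splitting them at the embedded spaces.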

Here is an example that parses Apache logs, so that each field in the message becomes a name-value pair:


parser p_apache {
    csv-parser(columns("APACHE.CLIENT_IP",
                       "APACHE.IDENT_NAME",
                       "APACHE.USER_NAME",
                       "APACHE.TIMESTAMP",
                       "APACHE.REQUEST_URL",
                       "APACHE.REQUEST_STATUS",
                       "APACHE.CONTENT_LENGTH",
                       "APACHE.REFERER",
                       "APACHE.USER_AGENT",
                       "APACHE.PROCESS_TIME",
                       "APACHE.SERVER_NAME")
               # possible flags:
               #   escape-none, escape-backslash, escape-double-char,
               #   strip-whitespace
               flags(escape-double-char, strip-whitespace)
               delimiters(" ")
               quote-pairs('""[]')
    );
};

parser p_apache_timestamp {
    csv-parser(columns("APACHE.TIMESTAMP.DAY",
                       "APACHE.TIMESTAMP.MONTH",
                       "APACHE.TIMESTAMP.YEAR",
                       "APACHE.TIMESTAMP.HOUR",
                       "APACHE.TIMESTAMP.MIN",
                       "APACHE.TIMESTAMP.SEC",
                       "APACHE.TIMESTAMP.ZONE")
               delimiters("/: ")
               flags(escape-none)
               template("${APACHE.TIMESTAMP}"));
};
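To illustrate with a made-up timestamp: if ${APACHE.TIMESTAMP} contains "10/Oct/2008:13:55:36 +0200", splitting it on the "/: " delimiter set yields seven tokens, which are assigned to the columns above in order:


"10/Oct/2008:13:55:36 +0200"
  → 10 | Oct | 2008 | 13 | 55 | 36 | +0200


Note that any of "/", ":" and " " acts as a delimiter here, which is exactly why a single delimiter set can take the timestamp apart.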


The first parser splits the major fields, and the second splits the timestamp into manageable pieces. You can then bind these parsers to a log path of your choosing:


log {
    source(s_apache);
    parser(p_apache);
    parser(p_apache_timestamp);
    destination(d_apache);
};


As you can see, the second parser uses a value created by the first one via its template() option. Once parsing is done, you can use any of the values created this way in your d_apache destination, be it in the name of a file or a column in an SQL table.
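As an illustration, a d_apache destination could sort requests into per-day, per-status files. This is only a sketch; the directory layout is made up.


destination d_apache {
    # one file per HTTP response status, grouped by day
    file("/var/log/apache/${APACHE.TIMESTAMP.YEAR}.${APACHE.TIMESTAMP.MONTH}.${APACHE.TIMESTAMP.DAY}/status-${APACHE.REQUEST_STATUS}.log");
};


Both parsers must run before this destination in the log path, since the file name refers to values produced by both.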

Wednesday, October 08, 2008

6th Netfilter workshop

I spent last week in Paris, where this year's Netfilter Workshop was held. I'd like to take this opportunity to thank Eric of INL for the organization. It was a wonderful and useful event, and I enjoyed it a lot. It is always nice to meet these great guys.

Finally, we got Transparent Proxying merged; it is now queued for 2.6.28.

Wednesday, October 01, 2008

syslog-ng OSE 3.0 git tree published

I could finally publish my syslog-ng 3.0 OSE tree at git.balabit.hu. There are no nightly snapshots yet, and I still have to prepare a formal announcement to post on the mailing list, but for those I teased with features from the 3.0 branch, here it comes.

Off the top of my head, OSE 3.0 supports:
  • TLS-encrypted channels,
  • syslog message rewriting,
  • parsing parts of the syslog message and using the parsed parts in macros,
  • PCRE and glob filters (in addition to POSIX regexps),
  • support for the new IETF syslog protocols,
  • program sources,
  • a new statistics framework that can be queried over UNIX domain sockets,
  • etc.
I just wanted to get the word out. Success/failure reports would be appreciated.