resolved performance bug in CDATA handling of revision text
Previously, when appending character data to long text fields, each call
to charhndl required a strlen over the text field to which the new
character data was being appended, so building up a field of length N cost
O(N^2) in total. The problem was most severe for vandalized articles with
inlined HTML, but it ultimately affected all data parsing, since expat
calls the charhndl function every time it resolves a default XML entity.
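
For illustration, the old pattern looked roughly like the sketch below.
The struct layout, buffer size, and names here are assumptions for the
example, not the actual project code; expat hands the handler a
pointer/length pair, so each call had to rediscover the end of the buffer
with strlen:

    /* Hypothetical sketch of the slow append pattern. */
    #include <string.h>

    #define TEXT_BUF_SIZE (1 << 20)   /* illustrative size */

    struct revisionData {             /* assumed subset of the real struct */
        char text[TEXT_BUF_SIZE];
    };

    /* expat character-data handlers receive (userData, s, len);
     * s is NOT NUL-terminated, so the old code appended by first
     * finding the end of the buffer with strlen() on every call. */
    static void charhndl_slow(void *userData, const char *s, int len)
    {
        struct revisionData *rev = userData;
        size_t cur = strlen(rev->text);        /* O(current length) each call */
        if (cur + (size_t)len < sizeof rev->text) {
            memcpy(rev->text + cur, s, (size_t)len);
            rev->text[cur + len] = '\0';
        }
    }
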
By tracking the length of each field in our revisionData structure and
using a custom strncat-type function, it's possible to avoid this
overhead entirely. Now we are pipe-buffer-bound when processing a
7z-compressed MediaWiki dump. The current Simple English Wikipedia dump
takes about 3 minutes to process on my 2x2.4 GHz laptop, even when
handling all text data; just decompressing it to /dev/null takes around
1 minute.
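
A minimal sketch of the new approach, under the same assumptions as the
example above (illustrative names and sizes, not the project's actual
code):

    /* Hypothetical sketch of the length-tracked append. */
    #include <string.h>

    #define TEXT_BUF_SIZE (1 << 20)   /* illustrative size */

    struct revisionData {
        char   text[TEXT_BUF_SIZE];
        size_t text_len;     /* running length, reset to 0 per revision */
    };

    /* strncat-type helper that trusts the tracked length instead of
     * calling strlen, so each append costs only the new chunk's length. */
    static void append_tracked(char *dst, size_t *dst_len, size_t dst_cap,
                               const char *src, size_t src_len)
    {
        if (*dst_len + src_len >= dst_cap)
            src_len = dst_cap - *dst_len - 1;  /* truncate, never overflow */
        memcpy(dst + *dst_len, src, src_len);
        *dst_len += src_len;
        dst[*dst_len] = '\0';
    }

    static void charhndl(void *userData, const char *s, int len)
    {
        struct revisionData *rev = userData;
        append_tracked(rev->text, &rev->text_len, sizeof rev->text,
                       s, (size_t)len);
    }

The only bookkeeping this adds is zeroing text_len whenever a new
revision's text field starts.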