Comparing DOM and other object models

By Michael Kay on September 12, 2012 at 04:22p.m.

Saxon has for a long time supported a number of implementations of the XDM tree model: its own native TinyTree (in standard and condensed variants) and LinkedTree, together with wrappers around the external object models DOM, JDOM, DOM4J, and XOM.

What we haven't done is any serious comparison of their performance, apart from occasional bouts of intensive tuning of one model or another while it was under development.

The addition of two more external models (JDOM2 and Axiom) suggested it was time we did some measurements. We adapted our test driver for the XMark benchmark suite to work with any of the above models, and ran the queries against each. The results quoted below are for the nominal 1MB source document (actual size 1,182,530 bytes).

DOM of course is an interface rather than an implementation; the DOM used here was the Apache Xerces DOM.

First, memory usage: the table below gives the "expansion factor", that is, the amount of memory used divided by the number of bytes in the source XML document:

Model               Expansion factor
TinyTree            3.79
TinyTreeCondensed   3.12
LinkedTree          5.86
DOM                 6.17
JDOM                6.88
JDOM2               6.57
DOM4J               6.31
XOM                 4.68
Axiom               7.79

No great surprises there.
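The article doesn't show the measurement harness behind these figures. As a rough illustration of how such an expansion factor can be estimated with nothing but the JDK, the sketch below builds a DOM tree and compares heap usage before and after; the helper names are mine, and heap deltas measured this way are approximate at best (GC is only a hint), so treat it as a sketch of the idea rather than the authors' method.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class ExpansionFactor {
    /** Bytes of heap currently in use (approximate; System.gc() is only a hint). */
    static long usedHeap() {
        System.gc();
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    /** Expansion factor = heap consumed by the tree / source document size in bytes. */
    static double factor(long heapBefore, long heapAfter, long docBytes) {
        return (heapAfter - heapBefore) / (double) docBytes;
    }

    public static void main(String[] args) throws Exception {
        // Small generated stand-in document; the article measured the ~1.2MB XMark file.
        StringBuilder sb = new StringBuilder("<site>");
        for (int i = 0; i < 20000; i++) sb.append("<item>text</item>");
        byte[] xml = sb.append("</site>").toString().getBytes(StandardCharsets.UTF_8);

        long before = usedHeap();
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml));
        long after = usedHeap();

        System.out.printf("approx expansion factor: %.2f%n", factor(before, after, xml.length));
        System.out.println(doc.getDocumentElement().getNodeName()); // keep the tree live
    }
}
```

For a number like the TinyTree's 3.79 above, that corresponds to roughly 1,182,530 × 3.79 ≈ 4.5MB of heap for the 1MB document.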

As for query performance, our first measurements showed some alarmingly bad results on many of the external models for Q7. This query is dominated by traversal of the descendant axis, and on some of the models it was taking almost 100 times as long as on the TinyTree. We know that the TinyTree is particularly good at traversing the descendant axis, but this ratio was worse than we expected. We found that the recursive algorithm used by some of the models was particularly inefficient, and replaced it with the non-recursive algorithm used by the LinkedTree, which significantly improved the figures. After this tuning exercise, the following table shows, for each model, the ratio of elapsed time to the TinyTree figure: the average across the queries, together with the best and worst values of the ratio.

Model               Average   Best    Worst
TinyTree            1.00      1.00    1.00
TinyTreeCondensed   1.03      0.97    1.23
LinkedTree          2.29      1.20    6.35
DOM                 8.05      1.98    23.05
JDOM                12.17     2.45    71.07
JDOM2               6.51      2.08    28.47
DOM4J               17.30     2.11    98.38
XOM                 3.80      1.99    6.09
Axiom               11.15     2.61    60.18

This shows that of the external object models, XOM clearly outperforms the others. But even XOM is nearly four times slower than the TinyTree on average.
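The non-recursive descendant traversal mentioned above isn't shown in the article. A minimal sketch of one such algorithm, written here against the standard W3C DOM interfaces, looks like this; the structure (descend to first child, else climb until a following sibling exists) is my reconstruction of the general technique, not necessarily Saxon's exact code.

```java
import java.io.ByteArrayInputStream;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Node;

public class DescendantAxis {
    /** Collect all descendants of start, in document order, without recursion. */
    static List<Node> descendants(Node start) {
        List<Node> result = new ArrayList<>();
        Node node = start.getFirstChild();
        while (node != null) {
            result.add(node);
            if (node.getFirstChild() != null) {
                node = node.getFirstChild();              // descend
            } else {
                // Climb until a node with a following sibling is found,
                // stopping when we are back at the starting node.
                while (node != null && node != start && node.getNextSibling() == null) {
                    node = node.getParentNode();
                }
                node = (node == null || node == start) ? null : node.getNextSibling();
            }
        }
        return result;
    }

    public static void main(String[] args) throws Exception {
        Node root = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream("<a><b><c/></b><d/></a>".getBytes("UTF-8")))
                .getDocumentElement();
        for (Node n : descendants(root)) System.out.println(n.getNodeName()); // b, c, d
    }
}
```

Each step here costs a constant number of pointer dereferences, which is why replacing a naive recursive walk can matter so much when a query is dominated by this axis.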

Finally, we measured build time: the time taken to parse the source document and construct the tree in memory. The figures below are in ms:

Model               Build time
TinyTree            44
TinyTreeCondensed   139
LinkedTree          205
DOM                 184
JDOM                177
JDOM2               183
DOM4J               187
XOM                 283
Axiom               675

Nothing dramatic here, except to note that Axiom appears to be very slow to build the tree.
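The article doesn't describe its timing methodology. For readers wanting to reproduce this kind of figure, a minimal stdlib-only sketch is below; taking the best of several runs, so the JIT has warmed up, is one common convention (whether the published figures were produced this way is my assumption).

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class BuildTime {
    /** Parse the document repeatedly and return the best observed build time in ms. */
    static double bestBuildMillis(byte[] xml, int runs) throws Exception {
        DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        long best = Long.MAX_VALUE;
        Document doc = null;
        for (int i = 0; i < runs; i++) {   // repeat so JIT compilation has settled
            long t0 = System.nanoTime();
            doc = builder.parse(new ByteArrayInputStream(xml));
            best = Math.min(best, System.nanoTime() - t0);
        }
        if (doc == null) throw new IllegalStateException("no parse completed");
        return best / 1e6;
    }

    public static void main(String[] args) throws Exception {
        byte[] xml = "<site><item>x</item></site>".getBytes(StandardCharsets.UTF_8);
        System.out.printf("best DOM build time: %.3f ms%n", bestBuildMillis(xml, 20));
    }
}
```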

The results here are of course not necessarily representative of other workloads. Notoriously, the XMark data does not use namespaces, and that could seriously distort the results. The XMark queries are also heavily optimized; for example, they probably do less sorting-into-document-order than your average workload, and that's an operation that can be very expensive with some object models. Nevertheless, any data is better than none, and publishing this may encourage some people to take another look at XOM. Unfortunately, most of the people who use DOM with Saxon do so out of ignorance, and they are the people who are least likely to read this article.

Footnote

A more detailed look at the XMark performance data showed that in many cases the worst-performing code in a number of the external model wrappers (XOM being a notable exception) was when navigating the descendant axis. We've changed some of the wrappers, notably DOM and DOM4J, to use the same algorithm as XOM was using for the descendant axis, and the result is a notable improvement. However, one concern about this algorithm is that it has the potential for very poor performance in trees with a high fan-out, for example, where one parent node has tens of thousands of immediate children. In the case of DOM the problem is compounded by the fact that DOM is an interface, not an implementation, so different implementations of DOM might exhibit very different performance characteristics: the right approach for one implementation is not necessarily the right approach for others.
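To see why high fan-out worries us, consider a sibling step in a model where a node does not hold a direct pointer to its next sibling and must instead be located within its parent's child list. The sketch below uses a hypothetical list-backed node class (SimpleNode is mine, not XOM's or any real model's API) to show the cost: each indexOf call scans the parent's children, so stepping through all k children of one parent costs O(k²) overall.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical minimal list-backed tree node, for illustration only.
class SimpleNode {
    final String name;
    SimpleNode parent;
    final List<SimpleNode> children = new ArrayList<>();

    SimpleNode(String name) { this.name = name; }

    void append(SimpleNode child) { child.parent = this; children.add(child); }

    // Sibling step via the parent's child list: indexOf() scans the list,
    // so this is O(k) for a parent with k children, and walking all k
    // children this way costs O(k^2) in total.
    SimpleNode nextSibling() {
        if (parent == null) return null;
        int i = parent.children.indexOf(this);
        return i + 1 < parent.children.size() ? parent.children.get(i + 1) : null;
    }
}

public class FanOut {
    public static void main(String[] args) {
        SimpleNode parent = new SimpleNode("parent");
        for (int i = 0; i < 5; i++) parent.append(new SimpleNode("c" + i));
        int steps = 0;
        for (SimpleNode n = parent.children.get(0); n != null; n = n.nextSibling()) steps++;
        System.out.println("visited " + steps + " children"); // visited 5 children
    }
}
```

With tens of thousands of children under one parent, that quadratic term dominates, which is why an algorithm that performs well on typical documents can degrade badly on this shape of tree.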