Benchmark results always depend on the individual setup. Normally it is not useful to generalize such results for every use case, but they can give you a hint. However, if you really need maximum performance, you should create your own benchmark with your own objects, or even use a profiler to detect the real hot spots in your application.
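As a starting point for such an own benchmark, the sketch below simply times repeated unmarshalling of a small object graph. The Payload class, the iteration counts and the chosen Xpp3Driver are only assumptions for illustration; substitute your own classes and the driver you actually intend to use. Depending on your XStream version you may also need to configure the security framework before unmarshalling.

```java
import com.thoughtworks.xstream.XStream;
import com.thoughtworks.xstream.io.xml.Xpp3Driver;

public class OwnParserBenchmark {

    // Hypothetical payload; replace with an object graph from your application.
    static class Payload {
        String name = "example";
        int[] numbers = {1, 2, 3, 4, 5};
    }

    public static void main(String[] args) {
        XStream xstream = new XStream(new Xpp3Driver());
        // Needed on newer XStream versions with the restrictive security framework;
        // not available (and not required) on very old 1.4.x releases.
        xstream.allowTypes(new Class[] {Payload.class});
        String xml = xstream.toXML(new Payload());

        // Warm up the JIT before measuring.
        for (int i = 0; i < 100; i++) {
            xstream.fromXML(xml);
        }

        long start = System.currentTimeMillis();
        for (int i = 0; i < 1000; i++) {
            xstream.fromXML(xml);
        }
        System.out.println("1000 unmarshalling operations took "
            + (System.currentTimeMillis() - start) + " ms");
    }
}
```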
Parser | Single Value Converters (ms) | Standard Converters (ms) | Reflection Converter (ms) | Serializable Converter (ms) |
---|---|---|---|---|
W3C DOM | 1823.0 | 1495.0 | 1437.0 | 1758.0 |
JDOM | 2620.0 | 1796.0 | 1776.0 | 2030.0 |
DOM4J | 1824.0 | 1811.0 | 2636.0 | 2177.0 |
XOM | 571.0 | 716.0 | 995.0 | 958.0 |
StAX (BEA) | 430.0 | 359.0 | 531.0 | 737.0 |
StAX (Woodstox) | 357.0 | 344.0 | 535.0 | 725.0 |
StAX (SJSXP) | 332.0 | 445.0 | 491.0 | 667.0 |
XPP (Xpp3) | 351.0 | 395.0 | 544.0 | 743.0 |
XPP (kXML2) | 299.0 | 353.0 | 517.0 | 761.0 |
XppDom (Xpp3) | 351.0 | 395.0 | 544.0 | 743.0 |
XppDom (kXML2) | 299.0 | 353.0 | 517.0 | 761.0 |
The values were generated by running the ParserBenchmark harness in the test code of the XStream benchmark module.
The values above are given in milliseconds and were measured over 1000 unmarshalling operations with the object graphs described in the setup, using XStream 1.4. The benchmark was run on an AMD Athlon at 2.1 GHz with a Sun JDK 1.6.0_13 (32-bit) JVM on Gentoo Linux. Note again that these values are no replacement for real profiler results, and they may vary from run to run by ~100 ms due to background processes on this single-CPU machine. However, they can give you some idea of what to expect from the different parser technologies.
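For orientation, the parser used for a given row is selected by passing the corresponding driver to the XStream constructor. The sketch below shows a few of the driver classes shipped with XStream 1.4; it is not an exhaustive mapping of the table, and driver names may differ in other versions.

```java
import com.thoughtworks.xstream.XStream;
import com.thoughtworks.xstream.io.xml.DomDriver;
import com.thoughtworks.xstream.io.xml.KXml2Driver;
import com.thoughtworks.xstream.io.xml.StaxDriver;
import com.thoughtworks.xstream.io.xml.Xpp3Driver;

public class DriverSelection {
    public static void main(String[] args) {
        // W3C DOM based parsing
        XStream domBased = new XStream(new DomDriver());
        // StAX parser resolved through the standard StAX discovery mechanism
        XStream staxBased = new XStream(new StaxDriver());
        // Xpp3 pull parser
        XStream xppBased = new XStream(new Xpp3Driver());
        // kXML2 pull parser, one of the fastest rows in the table above
        XStream kxmlBased = new XStream(new KXml2Driver());

        System.out.println(domBased.toXML("hello"));
    }
}
```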