Over the years we have run Virtuoso on a variety of hardware. Here we give a few figures that help identify the best price point for machines running Virtuoso.
Our test is very simple: load 20 warehouses of TPC-C data, then run one client per warehouse, each making 10,000 new orders. The way this is set up, disk I/O does not play a role, and lock contention between the clients is minimal.
The test essentially has 20 server and 20 client threads running the same workload in parallel. The load time gives the single-threaded number; the 20-client run gives the multi-threaded number. The test uses about 2-3 GB of data, so everything is in RAM, but the working set is large enough not to fit in processor cache.
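For concreteness, here is a minimal sketch of one client's loop in Virtuoso/PL. The `NEW_ORDER` procedure is a hypothetical stand-in for the TPC-C new order transaction; the actual test harness is not shown here.

```sql
-- Sketch only: one client driving n_orders TPC-C new order transactions
-- against its own warehouse.  NEW_ORDER is a hypothetical stand-in for
-- the real new order stored procedure.
CREATE PROCEDURE RUN_CLIENT (IN w_id INTEGER, IN n_orders INTEGER)
{
  DECLARE i INTEGER;
  i := 0;
  WHILE (i < n_orders)
    {
      NEW_ORDER (w_id);  -- one new order against warehouse w_id
      i := i + 1;
    }
}
```

In the test, 20 such clients run concurrently, one per warehouse.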
All times reported are real times, starting from the start of the first client and ending with the completion of the last client.
Do not confuse these results with official TPC-C results; the measurement protocols are entirely incomparable.
| Test | Platform | Load (seconds) | Run (seconds) | Clock / cores / threads per core |
|------|----------|----------------|---------------|----------------------------------|
| 1 | Amazon EC2 Extra Large (4 virtual cores) | 340 | 42 | 1.2 GHz? / 4 / 1 |
| 1 | Amazon EC2 Extra Large (4 virtual cores) | 305 | 43.3 | 1.2 GHz? / 4 / 1 |
| 2 | 1 x dual-core AMD 5900 | 263 | 58.2 | 2.9 GHz / 2 / 1 |
| 3 | 2 x dual-core Xeon 5130 ("Woodcrest") | 245 | 35.7 | 2.0 GHz / 4 / 1 |
| 4 | 2 x quad-core Xeon 5410 ("Harpertown") | 237 | 18.0 | 2.33 GHz / 8 / 1 |
| 5 | 2 x quad-core Xeon 5520 ("Nehalem") | 162 | 18.3 | 2.26 GHz / 8 / 2 |
We tried two different EC2 instances to see if there would be variation; it was quite small. The tested EC2 instance type costs 20 US cents per hour. The AMD dual-core costs 550 US dollars with 8 GB RAM. The three Xeon configurations are Supermicro boards, with 667 MHz memory for the Xeon 5130 ("Woodcrest") and Xeon 5410 ("Harpertown"), and 800 MHz memory for the Nehalem. The Xeon systems cost between 4,000 and 7,000 US dollars; 5,000 buys a configuration with 2 x Xeon 5520 ("Nehalem"), 72 GB RAM, and 8 x 500 GB SATA disks.
Caveat: Due to slow memory (we could not get faster within the available time), the results for the Nehalem do not take full advantage of its principal edge over the previous generation, i.e., the memory subsystem. We will revisit this with faster memory.
The operating systems were various 64-bit Linux distributions.
We did some further measurements comparing the Harpertown and Nehalem processors. The Nehalem chip was a bit faster at a slightly lower clock, but we did not see any of the twofold-and-greater differences advertised by Intel.
We tried some RDF operations on the last two systems; a note on the text index build follows the table:
| Operation | Harpertown | Nehalem |
|-----------|------------|---------|
| Build text index for DBpedia | 1080 s | 770 s |
| Entity Rank iteration | 263 s | 251 s |
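Virtuoso's full-text index over RDF object literals is controlled by indexing rules; enabling it for all graphs looks like the following. Whether the DBpedia build above used exactly this invocation is our assumption.

```sql
-- Index all RDF object literals for free-text search; the text index is
-- then built over the loaded DBpedia data.  (Assumed invocation; the
-- exact command used for the test is not shown in this article.)
DB.DBA.RDF_OBJ_FT_RULE_ADD (null, null, 'All');
```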
Then we tried to see if the core multithreading of Nehalem could be seen anywhere. To this end, we ran the Fibonacci function in SQL as an example of an all-in-cache integer operation. Sixteen concurrent executions took exactly twice as long as eight concurrent ones, as expected.
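The function in question is the classic naive recursion, shown here as a Virtuoso/PL sketch (the exact procedure used in the test may have differed):

```sql
-- Naive recursive Fibonacci: pure in-cache integer work with essentially
-- no memory traffic, so core multithreading has no memory latency to hide.
CREATE PROCEDURE FIB (IN n INTEGER)
{
  IF (n < 2)
    RETURN n;
  RETURN FIB (n - 1) + FIB (n - 2);
}
```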
For something that used memory, we took a count of RDF quads on two different indices, getting the same count. The database was a cluster setup with one process per core, so a count involved one thread per core. Run in series, the counts took 5.02 s; run in parallel, 4.27 s.
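The two counts were of this general shape, reading the same RDF_QUAD table through two different indices; the index names below reflect Virtuoso's default RDF layout and are an assumption on our part:

```sql
-- The same logical count through two different indices; both return the
-- same number of quads.  (Index names assumed: the GSPO primary key and
-- the OGPS auxiliary index.)
SELECT COUNT (*) FROM DB.DBA.RDF_QUAD TABLE OPTION (INDEX RDF_QUAD);
SELECT COUNT (*) FROM DB.DBA.RDF_QUAD TABLE OPTION (INDEX RDF_QUAD_OGPS);
```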
Then we took a more memory-intensive piece that read the RDF quads table in the order of one index and, for each row, checked that the equal row exists on another, differently partitioned index. This is a cross-partition join. One of the indices is read sequentially and the other at random, so the throughput can be reported as random lookups per second. The data was the English DBpedia, about 140M triples. One such query takes a couple of minutes at around 650% CPU utilization. Running multiple such queries should show the effects of core multithreading, since we expect frequent cache misses.
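A sketch of the query form, again assuming the default GSPO and OGPS indices (the actual query text is not shown in this article):

```sql
-- Cross-partition join sketch: scan RDF_QUAD in GSPO order and verify
-- that each row also exists in the differently partitioned OGPS index.
SELECT COUNT (*)
  FROM DB.DBA.RDF_QUAD a TABLE OPTION (INDEX RDF_QUAD)
 WHERE EXISTS
   (SELECT 1 FROM DB.DBA.RDF_QUAD b TABLE OPTION (INDEX RDF_QUAD_OGPS)
     WHERE b.G = a.G AND b.S = a.S AND b.P = a.P AND b.O = a.O);
```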
On the host OS of the Nehalem system:

| Queries | CPU % | Rows per second |
|---------|-------|-----------------|
| 1 query | 503 | 906,413 |
| 2 queries | 1263 | 1,578,585 |
| 3 queries | 1204 | 1,566,849 |
In a VM under Xen, on the Nehalem system:

| Queries | CPU % | Rows per second |
|---------|-------|-----------------|
| 1 query | 652 | 799,293 |
| 2 queries | 1266 | 1,486,710 |
| 3 queries | 1222 | 1,484,093 |
On the host OS of the Harpertown system:

| Queries | CPU % | Rows per second |
|---------|-------|-----------------|
| 1 query | 648 | 1,041,448 |
| 2 queries | 708 | 1,124,866 |
The CPU percentages are as reported by the OS: user plus system CPU divided by real time. For example, 503% means about five cores' worth of CPU time per unit of real time.
So Nehalem is in general somewhat faster than Harpertown, by around 20-30%. The effect of core multithreading can be noticed but is not huge: another 20% or so in situations with more threads than cores. The join where Harpertown did better could be attributed to its larger cache (12 MB vs. 8 MB).
We see that Xen has a measurable but not prohibitive overhead; count on a little under 10% across the board, even for tasks with no I/O. The VM was set up to have all CPUs available for the test, and the queries did not do disk I/O.
The executables were compiled with gcc with default settings. Specifying -march=nocona (the 64-bit Pentium 4 / early Xeon target) dropped the cross-partition join time mentioned above from 128 s to 122 s on Harpertown. We did not try this on Nehalem but presume the effect would be the same, since the out-of-order unit is not much different. We did not do anything about process-to-memory affinity on Nehalem, which has a non-uniform memory architecture (NUMA). We would expect this to increase performance, since we have many equal-size processes with even load.
The mainstay of the Nehalem value proposition is a better memory subsystem. Since the unit we got came with 800 MHz memory, we did not see any great improvement. So if you buy Nehalem, make sure it comes with 1333 MHz memory; otherwise the best case will be no more than 50% better than a 667 MHz Core 2-based Xeon.
Nehalem remains a better deal for us because of more memory per board. One Nehalem box with 72 GB costs less than two Harpertown boxes with 32 GB and offers almost the same performance. Having a lot of memory in a small space is key. With faster memory, it might even outperform two Harpertown boxes, but this remains to be seen.
If space were not a constraint, we could make a cluster of 12 small workstations for the price of our largest system and get still more memory, and more processor power per unit of memory. The Nehalem box was almost 4x faster than the AMD box, but it has 9x the memory, so the CPU-to-memory ratio might be better with the smaller boxes.