I recently read Oracle's papers about RAC, Real Application Clusters. This is relevant as we are presently working on the Virtuoso equivalent.

Caveat: The following is quite technical and not the final word on the matter.

Oracle's claim is roughly as follows: Take a number of machines with access to a shared pool of disks and get scalability in processing power and memory without having to explicitly partition the data or perform other complicated configuration.

This works through implementing a cache consistency protocol between the participating boxes and by parallelizing queries just as one would do on a shared memory SMP box. Each disk page has a box assigned to keep track of it and the responsibility migrates so that the box most often needing the page gets to be the page's guardian, so as not to have to ask anybody else for permission to write the page.
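
To make the page-mastering idea concrete, here is a minimal sketch in Python of one plausible way such a directory could behave. It is not Oracle's actual protocol; the names (PageDirectory, record_access) and the migration threshold are made up for illustration.

    from collections import defaultdict

    class PageDirectory:
        def __init__(self):
            self.master = {}   # page id -> box currently responsible for the page
            self.hits = defaultdict(lambda: defaultdict(int))   # page id -> box -> access count

        def record_access(self, page, box):
            self.hits[page][box] += 1
            current = self.master.setdefault(page, box)
            busiest = max(self.hits[page], key=lambda b: self.hits[page][b])
            # Migrate responsibility when another box clearly uses the page more
            # (the factor of 2 is an arbitrary illustrative threshold).
            if busiest != current and self.hits[page][busiest] > 2 * self.hits[page][current]:
                self.master[page] = busiest

        def needs_round_trip(self, page, box):
            # A write from a box that is not the page's guardian costs at least one round trip.
            return self.master.get(page, box) != box

    d = PageDirectory()
    for _ in range(5):
        d.record_access("page42", "box1")
    d.record_access("page42", "box2")
    print(d.needs_round_trip("page42", "box2"))   # True: box2 must ask box1 for write access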

This is a compelling proposition. Surely, it must be unrealistic to expect people to manually partition databases. This would require some understanding of first principles, which is scarce out there.

So, should we implement clustering a la Oracle?

Let's look at some basics. If we have an OLTP workload like TPC-C, we usually have affinity between clients and the data they access. This will make each client's pages migrate to be managed by the box the client is connected to. This will work pretty well, no worse than with a single box. If two clients are updating the same data but are connected to two different boxes, this is quite bad since the box that does not have responsibility for the page must ask the other box for write access. This is a round trip, at least tens of microseconds (µs). Consider in comparison that finding a row out of a million takes some 3µs.

Would it not be better to have each partition in a known place and leave all processing to that place? The write contention would be resolved in the box owning the partition, and there would still be a message, but now for requesting the update, not for maintaining cache consistency. At what level should one communicate between cluster nodes? Should they talk about disk pages or about logical operations? If there is complete affinity between boxes and data, the RAC style shared cache needs no messages: each box ends up managing the pages of its clients and everything works just as in the local case. If, on the other hand, any client may update any page at random, most updates must request write permission from another node. I will presume here that index tree tops eventually get cached on all nodes; if this were not so, even index lookups would usually have to request index pages from a remote box. Never forget that with a tree index, it takes about 1µs to descend one level and 50µs for a message round trip between processes, not counting any transport latency.
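
To put numbers on this, here is a back-of-the-envelope cost model using the figures just quoted (about 1µs per local level, about 50µs per round trip). The function is only an illustration of the arithmetic, not a description of any engine's code.

    LOCAL_LEVEL_US = 1.0     # ~1µs to descend one index level locally (figure from the text)
    ROUND_TRIP_US = 50.0     # ~50µs per message round trip between processes

    def lookup_cost_us(depth, remote_levels):
        # Cost of one index lookup when 'remote_levels' of the 'depth' levels
        # require fetching the page from another box.
        return depth * LOCAL_LEVEL_US + remote_levels * ROUND_TRIP_US

    print(lookup_cost_us(4, 0))   # all four levels cached locally: ~4µs
    print(lookup_cost_us(4, 2))   # two remote pages: ~104µs, dominated by the messages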

What of RDF query workloads? After all, we are in the first instance concerned with winning the RDF storage race and after this the TPC ones. We design for both but do RDF first since this is our chosen specialty.

The disadvantage of having to specify partitioning is less weighty with RDF since there are only a few big tables and they will be at default settings, pretty much always. We do not expect the application developer to ever change these settings although it is in principle possible.

What about queries? The RDF workload is mostly random access and loop joins. How would these run on RAC? For now, let's make a thought experiment and compare cache fusion to a hash partitioned cluster. In the following, I do not describe how Oracle actually works; I just describe how I would do things if I implemented RAC style clustering. With a RAC style cluster, I'd split the outer loop into equal partitions and run them in parallel on different boxes. Each would build a working set for its part of the query, and pages needed by more than one box would be read once from disk and a second time from the node that had them in cache. The top nodes of index trees would end up cached on all boxes. It would seem that all boxes would fill their caches with the same data. Now it may be that RAC makes it so that a page is cached on only one box and other boxes wanting the page must go to that box for access. But this would be a disaster for index lookups: a local index tree level costs less than a microsecond, but with a round trip it would be at least 50µs per level. I don't know for sure about Oracle, but if I did RAC, I'd have to allow duplicate read content in caches. This would have the effect that the aggregate cache size would be closer to the single cache size than to the sum of the cache sizes. A physically partitioned database would not ship pages, so caches would not overlap and the aggregate cache would indeed be the sum of the sizes. This is good only insofar as all boxes participate, but with evenly distributed data this is a good possibility.
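
The cache-overlap point can be put in rough numbers. The 90% overlap figure below is purely an assumed value for illustration, not a measurement of RAC or anything else.

    nodes = 10
    cache_per_node_gb = 8
    overlap = 0.9    # assumed fraction of each cache duplicating pages held elsewhere

    # With page shipping and duplicated read content, the caches largely repeat each other;
    # with hash partitioning, they do not overlap at all.
    rac_style_gb = cache_per_node_gb + (nodes - 1) * cache_per_node_gb * (1 - overlap)
    partitioned_gb = nodes * cache_per_node_gb
    print(rac_style_gb, partitioned_gb)   # ~15.2 GB effective vs 80 GB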

Of course, if RAC knew how to split queries so that data and nodes had real affinity, the problem would be smaller. For indices, one would need a map of key values to boxes, a little like a top level index shared among all nodes. The key value would give the node, just as with partitioning.

This would amount to partitioning on the fly. The most frequently made joins would cause migration that would make them co-located.

We must optimize the number of messages needed to execute long series of loop joins. For parallelizing single queries, the most obvious approach is to partition the first (outermost) loop that has more than one iteration into equal-size chunks. With RDF data, the join keys will mostly begin with GS or GO, with a possible P appended. If GS or GO determines the partition, partitioning by hash will yield the node that will provide the result.
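
A minimal sketch of what hash partitioning on the leading key parts could look like, assuming a 10-node cluster and made-up identifiers; the actual Virtuoso scheme may well differ.

    from zlib import crc32

    def node_for(key_parts, n_nodes):
        # A stable hash of the leading key parts, e.g. (G, S) or (G, O), picks the node.
        return crc32("|".join(key_parts).encode("utf8")) % n_nodes

    n_nodes = 10
    g, s, o = "graph1", "person123", "city42"
    print(node_for((g, s), n_nodes))   # node holding the rows keyed by this G, S
    print(node_for((g, o), n_nodes))   # node holding the rows keyed by this G, O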

The number of messages can be reduced to a minimum of the number of join steps times the number of boxes minus one, if the loops are short enough and multiple operations are carried in one message.
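
Reading that as (number of join steps) x (number of boxes - 1) batched requests, optionally doubled to count the replies (my reading; the parenthesization is not spelled out above), a tiny illustration would be:

    def min_messages(join_steps, boxes, count_replies=True):
        # One batched request (and optionally its reply) per join step to each other box.
        per_step = boxes - 1
        return join_steps * per_step * (2 if count_replies else 1)

    print(min_messages(1, 10))          # 18: matches the 10-node example further down
    print(min_messages(3, 10, False))   # 27 requests for a three-step join chain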

With RAC style clustering, each index lookup would have to be sent to the node most likely to hold the answer. If pages have to be fetched from other nodes, we have disastrous performance, at least 50µs for each non-local page. If a lookup touches two non-local pages, the overhead already exceeds that of delegating the whole lookup. Index page accesses within a lookup cannot easily be batched the way index lookups going to the same node can be batched. Batching multiple, hopefully long, operations into a single message is the only way to defeat the extreme cost of sending messages. An index lookup does not know what page it will need until it needs it. A way of batching these would be to run multiple lookups in parallel and to combine remote page requests, grouping them by destination. This would not be impossible; we would simply have to run 100 index lookups in parallel on a thread: 100 first levels, 100 second levels, and so forth.

Suppose an outer loop that gives 100 rows and then an inner loop that retrieves 1 row for each. A query to get the email addresses of the Marys would do this, supposing 100 Marys in the db: { ?person firstName "Mary" . ?person email ?m . }. Suppose a cluster of 10 nodes. The first node gets the 100 rows of the outer loop, splits these into 10x10, 10 for each node, and then each node does its 10 lookups in parallel, meaning 10 first levels, 10 second levels, 10 third levels. The index tree would be 4 deep, branching 300 ways. Running the query a second time would find all data in memory and run with only 18 messages after getting the 100 rows of the first loop. The first run would send lots of messages, almost two per page, for about 800 messages after getting the 100 rows of the first loop.
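
Here is a minimal sketch of the batching-by-destination idea for the 100-Mary example, with assumed names and a made-up stable hash; it only shows the message counting, not any real engine code.

    from collections import defaultdict
    from zlib import crc32

    N_NODES = 10
    LOCAL_NODE = 0   # assume the coordinating thread runs on node 0

    def node_for(key):
        return crc32(key.encode("utf8")) % N_NODES

    def batch_by_node(keys):
        # Group the lookups by the node that owns each key.
        batches = defaultdict(list)
        for k in keys:
            batches[node_for(k)].append(k)
        return batches

    marys = ["person%d" % i for i in range(100)]   # the 100 rows of the outer loop
    batches = batch_by_node(marys)
    remote = [n for n in batches if n != LOCAL_NODE]
    # One request and one reply per remote batch, instead of a round trip per index page.
    print(2 * len(remote))   # 18 when all ten nodes get a share of the work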

With partitioning, the situation would be a constant 18 messages: 9 batches of 10 index lookups and their replies. The latency is 50µs per round trip and a lookup takes 4µs. We would in fact gain in real time: counting 50µs for the messages and 4µs per lookup, the elapsed time through the whole exercise of 10x10 random lookups would be about 90µs.

If I did RAC style clustering, I'd have to allow replicating the tops of index trees to all caches, and I'd have to batch page request messages from index lookups, effectively doing the lookups vector-processing style: 100 first levels, 100 second levels, etc. Given a key beginning, I'd have to know which node to send it to, meaning pretty much doing the first levels of the lookup before deciding where to send it, only to have the work redone by the box that ends up with the lookup. Doing things this way would make Oracle RAC style clustering work with the use case.

Given this, it appears that hash partitioning is easier to implement. Cache fusion clustering without the above-mentioned gimmicks would be easiest of all, but it would either have a disastrous number of messages or fill all the caches with the same data. Avoiding this is possible but hard, as described above.

We will have to experiment with Oracle RAC itself a bit farther down the road. Deciding to use partitioning instead of cache fusion does bring along conversion cost and a very high cost for repartitioning.

Now let us look at the issue of co-location of joins. In a loop join, this means that the node that holds the row from the outer loop also holds the matching row of the inner loop. For example, if order and order line are partitioned on order id, joining them on order id will be a co-located join. Such joins require no messages in partitioned clusters. In RAC, they do not involve messages if the pages have migrated to be managed by the node doing the join; otherwise they do, up to 20 or so in the worst case.
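
As a small illustration of why partitioning both tables on order id co-locates the join (assumed hash and node count, not actual schema code):

    from zlib import crc32

    N_NODES = 10

    def orders_node(order_id):
        return crc32(str(order_id).encode()) % N_NODES

    def order_line_node(order_id):
        # Same partitioning column and same hash as the orders table, so the rows co-locate.
        return crc32(str(order_id).encode()) % N_NODES

    # Joining orders to order lines on order id never crosses nodes ...
    assert orders_node(123456) == order_line_node(123456)
    # ... whereas a join keyed on some other column gives no such guarantee.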

Do we get any benefit from co-location with RDF? Supposing joins that go from S to O to S (e.g., population of the city where Mary lives), we do not get much guarantee of co-location.

Suppose the indices GSPO and OGPS, partitioned on GS and OG respectively. We know the box with the Marys, and based on GS we'd know the box holding the residence of each Mary. Given the city as S, we would again know the box that had the population. All three triples could be on different boxes. This cannot be helped at design time. At run time it can be helped by batching messages that go to the same node. Let's see how this fans out. We get 100 Marys from the first node. To get the city, we send 10 batches of 10 lookups. We get the 100 cities and then we get their populations, again 10 batches of 10. In this scenario, the scheduling is done centrally by one thread. Suppose instead that it were done by the nodes that each received a batch of 10 city-of-Mary lookups: for its 10 cities, each would send 10 lookups for the population, each potentially to a different node. For this case, managing the execution by one thread instead of several makes bigger batches and fewer messages, as one would expect.
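
To make the batch counting explicit, here is a small sketch with the same illustrative figures as above (100 Marys, 10 nodes); the numbers mirror the prose, they are not measurements.

    n_nodes = 10
    marys = 100
    per_node = marys // n_nodes          # 10 lookups in each batch

    # Central scheduling: one thread collects each intermediate result and
    # re-batches it by destination node.
    city_batches = n_nodes               # 100 city lookups -> at most one batch per node
    population_batches = n_nodes         # 100 population lookups -> again one batch per node
    central = city_batches + population_batches            # 20 batched messages (plus replies)

    # Decentralized: each node that resolved its 10 cities forwards the population
    # lookups itself, each potentially to a different node.
    decentralized = city_batches + n_nodes * per_node       # up to 10 + 100 = 110 messages
    print(central, decentralized)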

It seems that with the RDF case, one may as well forget co-location. In the relational case, one must take advantage of it where it exists and, where it does not, try to compensate with longer batches of function shipping.

Excellent as some of the RAC claims are, it still seems that making it work well for an RDF workload would take such magical heuristics of location choice that implementing them would be hard and the result not altogether certain. I could get it to work eventually, but hash partitioning seems by far the more predictable route. Also, hash partitioning will work in shared nothing scenarios, whereas RAC requires shared disks. Shared nothing does not require a SAN, which may make it somewhat lower cost. Also, if messages are grouped in large batches, the performance of the interconnect is not so critical, meaning that maybe even gigabit ethernet might do in some cases. RAC style cache maintenance is more sensitive to interconnect latency than batched function shipping. Batched cache consistency is conceivable, as discussed above, but tough to do.

For recovery and hot software updates, things can be arranged if there is non-local disk access or if partitions are mirrored. A RAC type cluster could use a SAN with internal mirroring. A hash partitioned system could mirror partitions to more than one box with local disks, thus needing no mirrored disks. Repartitioning remains the bane of partitioning, and not much can be done about that. The only easy repartitioning is doubling the cluster size. So it seems.