Elaborating on my previous post, as food for thought for an RDF store benchmarking activity under the W3C, I present the following rough sketch. Toward the end, I propose some common business questions that a social web aggregator should be able to answer.

The problem with these questions is that it is not really possible to ask interesting questions of a large database without involving some sort of counting and grouping. I feel that we simply cannot make a representative benchmark without these, quite regardless of the fact that SPARQL in its present form does not have such features. Hence I have simply stated the questions and left the implementation open. If this seems like an interesting direction, the nascent W3C benchmarking XG (Incubator Group) can refine the business questions, relative query frequencies, exact data set composition, and so forth.

Social Web RDF Benchmark

by Orri Erling

Goals

This benchmark models the use of RDF for representing and analyzing the use of social software by user communities. The benchmark consists of a scalable synthetic data set, a feed of updates to the data set, and a query mix. The data set reflects the common characteristics of the social web, with realistic distributions of connections, user-contributed content, commenting, tagging, and other social web activities. The data set is expressed in the FOAF and SIOC vocabularies. The query mix is divided between relatively short, dashboard- or search-engine-style lookups and longer-running analytics queries.
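
To make the data set more concrete, here is a rough sketch of how one user, one forum, and one post might be represented. The exact property choices, the example IRIs, and the SPARQL-style INSERT DATA syntax (an update extension that, as noted, SPARQL in its present form does not offer) are illustrative assumptions on my part, not part of the proposal.

    PREFIX foaf:    <http://xmlns.com/foaf/0.1/>
    PREFIX sioc:    <http://rdfs.org/sioc/ns#>
    PREFIX dcterms: <http://purl.org/dc/terms/>
    PREFIX xsd:     <http://www.w3.org/2001/XMLSchema#>

    INSERT DATA {
      # a simulated user with an interest and a contact
      <http://example.org/user/1001> a foaf:Person ;
          foaf:name     "User 1001" ;
          foaf:interest <http://example.org/topic/rdf> ;
          foaf:knows    <http://example.org/user/2002> .

      # a forum tagged with a topic, and one post in it by that user
      <http://example.org/forum/42> a sioc:Forum ;
          sioc:topic <http://example.org/topic/rdf> .

      <http://example.org/post/555> a sioc:Post ;
          sioc:has_creator   <http://example.org/user/1001> ;
          sioc:has_container <http://example.org/forum/42> ;
          sioc:topic         <http://example.org/topic/rdf> ;
          dcterms:created    "2009-02-10T12:00:00Z"^^xsd:dateTime .
    }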

The system being modeled is an aggregator of social web content; we could liken it to an RDF-based Technorati with some extra features.

Users can publish their favorite queries or mash-ups as logical views served by the system. In this manner, queries come to depend on other queries, somewhat like SQL VIEWs can reference each other.

There is a small qualification data set that can be tested against the queries to validate that the system under test (SUT) produces the correct results.

The benchmark is scaled by the number of users. To facilitate comparison, some predefined scales are offered, i.e., 100K, 300K, 1M, 3M, and 10M users. Each simulated user both produces and consumes content. The level of activity is unevenly distributed across users.

There are two work mixes — the browsing mix, which consists of a mix of lookups and content contributions, and the analytics mix, which consists of long-running queries for tracking the state of the network. For each 100 browsing mixes, one analytics mix is performed.

A benchmark run is at least one hour of real time in duration. The metric is calculated from the number of browsing mixes completed during the test window. This simulates 10% of the users being online at any one time; thus, for a scale of 1M users, 100K browsing mixes will be proceeding simultaneously.

The test driver submits the work via HTTP. The load balancing and the degree of parallelism used in serving the requests are left up to the SUT.

The metric is expressed as queries per second: the total number of queries executed by completed browsing mixes divided by the real time of the measurement window. For example, if completed browsing mixes account for 360,000 queries over a 3,600-second window, the result is 100. The metric is called qpsSW, for queries per second, social web. The cost metric is $/qpsSW, calculated by the costing rules of the TPC. If compute-on-demand infrastructure is used, the costing will be $/qpsSW/day.

The test sponsor is the party contributing the result. The contribution consists of the metric and a full disclosure report (FDR), written following a template given in the benchmark specification. The disclosure requirements follow TPC practices, including publishing any configuration scripts, data definition language statements, timing of the warm-up and test window, times for individual queries, etc. All details of the hardware and software are disclosed.

Test Support Software

The software consists of the data generator and the test driver. The test driver calls functions supplied by the test sponsor for performing the various operations in the test. Source code for any modifications to the test driver is to be published as part of the FDR.

Rules for SUT

Any hardware/software combination — including single machines, clusters, and clusters rented from compute-on-demand providers like Amazon EC2 — is eligible.

The SUT must produce correct answers for the validation queries against the validation data set.

The implementation of the queries is not restricted. These can be SPARQL or other queries, application-server logic, stored procedures, or anything else, in any language, provided that full source code is included in the FDR.

The data set is provided as serialized RDF. The means of storage are left up to the SUT. The basic intention is to use a triple store of some form, but the specific indexing, use of property tables, materialized views, and so forth, is left up to the test sponsor. All tuning and configuration is to be published in the FDR.

Simulated Workload

For each operation of each mix, the specification shall present:

  1. The logical intent of the operation, the business question, e.g., What is the hot topic among my friends?

  2. The question or update expressed in terms of the data in the data set.

  3. Sample text of a query answering the question or pseudo-code for deriving the answer.

  4. Result set layout, if applicable.

The relative frequencies of the queries are given in the query mix summary.

Browsing Mix

The browsing mix consists of the following operations:

Updates

  • Make a blog post.

  • Make a blog comment.

  • Make a new social contact.

For each new social contact, there are 10 posts and 20 comments. A sketch of what such updates might look like follows.
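
As a sketch of what these update operations might look like on the wire, the following shows a blog comment (a reply to the post inserted earlier) and a new social contact. The INSERT DATA syntax, the IRIs, and the property choices are again illustrative assumptions rather than part of the proposal.

    PREFIX foaf:    <http://xmlns.com/foaf/0.1/>
    PREFIX sioc:    <http://rdfs.org/sioc/ns#>
    PREFIX dcterms: <http://purl.org/dc/terms/>
    PREFIX xsd:     <http://www.w3.org/2001/XMLSchema#>

    # "Make a blog comment": a reply to an existing post
    INSERT DATA {
      <http://example.org/post/556> a sioc:Post ;
          sioc:has_creator <http://example.org/user/2002> ;
          sioc:reply_of    <http://example.org/post/555> ;
          sioc:content     "I agree." ;
          dcterms:created  "2009-02-10T13:00:00Z"^^xsd:dateTime .
    } ;

    # "Make a new social contact": a new foaf:knows link
    INSERT DATA {
      <http://example.org/user/2002> foaf:knows <http://example.org/user/3003> .
    }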

Queries
  • What are the 10 most recent posts by my friends or their friends? This would be a typical dashboard item; a query sketch is given after this list.

  • What are the authoritative bloggers on topic x? This is a moderately complex ad hoc query. Take posts tagged with the topic, count links to them, take the blogs containing them, and show the 10 most-cited blogs with their most recent posts carrying the tag. This would be typical of a stored query, like a parameterizable report.

  • How do I contact person x? Calculate the chain of common acquaintances best suited for reaching person x. For practicality, we do not do a full walk of anything, but simply take the distinct persons within 2 steps of the user and within 2 steps of x and look at the intersection.

  • Who are the people like me? Find the top 10 people ranked by the count of tags in common with the person's tag cloud. The tag cloud is the set of interests plus the set of tags in the person's blog posts.

  • Who reacts to or talks about me? Count of replies to material by the user, grouped by the commenting user and the site of the comment, top 20, sorted by count descending. (A query sketch is given after this list.)

  • Who are my fans that I do not know? Same as above, excluding people within 2 steps.

  • Who are my competitors? The most prolific posters on topics of my interest who do not cite me.

  • Where is the action? On forums where I participate, what are the top 5 threads, as measured by posts in the last day? Show the count of posts in the last day and the day before that.

  • How do I get there? Who are the people active around both topics x and y? This is defined as a person having participated during the last year in forums of x as well as of y. Forums are tagged by topics. The most active users come first; the ranking is proportional to the sum of the number of posts in x and y.
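
As promised above, here are rough query sketches for two of the browsing-mix questions. The property choices and the fixed IRI standing for the logged-in user are illustrative assumptions; the second sketch also uses counting and grouping, which, as noted at the start, SPARQL in its present form does not provide.

    PREFIX foaf:    <http://xmlns.com/foaf/0.1/>
    PREFIX sioc:    <http://rdfs.org/sioc/ns#>
    PREFIX dcterms: <http://purl.org/dc/terms/>

    # 10 most recent posts by my friends or their friends
    SELECT DISTINCT ?post ?author ?date
    WHERE {
      { <http://example.org/user/1001> foaf:knows ?author . }
      UNION
      { <http://example.org/user/1001> foaf:knows ?f .
        ?f foaf:knows ?author . }
      ?post sioc:has_creator ?author ;
            dcterms:created  ?date .
    }
    ORDER BY DESC(?date)
    LIMIT 10

The "Who reacts to or talks about me?" question is one of those that needs the aggregates discussed above:

    PREFIX sioc: <http://rdfs.org/sioc/ns#>

    # replies to the user's material, grouped by the replying user
    # and the forum of the reply, top 20 by descending count
    SELECT ?commenter ?forum (COUNT(?reply) AS ?replies)
    WHERE {
      ?post  sioc:has_creator   <http://example.org/user/1001> .
      ?reply sioc:reply_of      ?post ;
             sioc:has_creator   ?commenter ;
             sioc:has_container ?forum .
    }
    GROUP BY ?commenter ?forum
    ORDER BY DESC(?replies)
    LIMIT 20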

Analytic Mix

These queries are typical questions about the state of the conversation space as a whole and can for example be published as a weekly summary page.

  • The fastest propagating idea - What is the topic with the most users who have joined in the last day? A user is considered to have joined if the user was not discussing the topic during the preceding 10 days. (A query sketch is given after this list.)

  • Prime movers - Which users start conversations? A conversation is the set of material in reply to or citing a post. The reply distance can be arbitrarily long; the citing distance is a direct link to the original post or a reply thereto. The number and extent of the conversations a user starts contribute to the score.

  • Geography - Over the last 10 days, for each geographic area, show the top 50 tags. The location is the location of the poster.

  • Social hubs - For each community, get the top 5 people who are central to it in terms of number of links to other members of the same community and in terms of being linked from posts. A community is the set of forums that have a specific topic.
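
As one sketch from this mix, the "fastest propagating idea" question could be approached as below. The fixed date literals stand in for "now minus one day" and "now minus eleven days", and the aggregates and the NOT EXISTS filter are, once more, assumed extensions beyond SPARQL as it stands today; the property choices are likewise illustrative.

    PREFIX sioc:    <http://rdfs.org/sioc/ns#>
    PREFIX dcterms: <http://purl.org/dc/terms/>
    PREFIX xsd:     <http://www.w3.org/2001/XMLSchema#>

    # topic with the most users posting about it in the last day
    # who did not post about it in the preceding 10 days
    SELECT ?topic (COUNT(DISTINCT ?user) AS ?newUsers)
    WHERE {
      ?post sioc:topic       ?topic ;
            sioc:has_creator ?user ;
            dcterms:created  ?d .
      FILTER ( ?d >= "2009-02-09T00:00:00Z"^^xsd:dateTime )
      FILTER NOT EXISTS {
        ?old sioc:topic       ?topic ;
             sioc:has_creator ?user ;
             dcterms:created  ?d2 .
        FILTER ( ?d2 >= "2009-01-30T00:00:00Z"^^xsd:dateTime
              && ?d2 <  "2009-02-09T00:00:00Z"^^xsd:dateTime )
      }
    }
    GROUP BY ?topic
    ORDER BY DESC(?newUsers)
    LIMIT 1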