http://www.openlinksw.com/weblog/oerling/
Orri Erling's Weblog
Orri Erling
oerling@openlinksw.com
2024-03-19T10:32:14Z
Virtuoso Universal Server 08.03.3327
http://www.openlinksw.com/weblog/public/images/vbloglogo.gif
New Semantic Publishing Benchmark Record
http://www.openlinksw.com/weblog/oerling/?id=1866
2016-01-11T15:22:09Z
<p>There is a new <a href="http://ldbcouncil.org/developer/spb" id="link-id0x7fd87214c348">SPB (Semantic Publishing Benchmark)</a> 256 Mtriple record with <a href="http://dbpedia.org/resource/Virtuoso_Universal_Server" id="link-id0x7fd870d26a48">Virtuoso</a>.</p> <p>As before, the result has been measured with the <a href="https://github.com/v7fasttrack/virtuoso-opensource/tree/feature/analytics" id="link-id0x7fd870d26be8">feature/analytics</a> branch of the <a href="https://github.com/v7fasttrack/virtuoso-opensource/" id="link-id0x7fd870d26d58">v7fasttrack</a> open source distribution, and it will soon be available as a preconfigured Amazon EC2 image. The updated <a href="http://www.openlinksw.com/weblogs/oerling/?id=1849" id="link-id0x7fd87217aec8">benchmarks AMI</a> with this version of the software will be available within the next week, to be announced on this blog.</p> <h2>On the Cost of RDF Query Optimization</h2> <p>RDF query optimization is harder than the relational equivalent: first, because there are more joins, hence an NP-complete explosion of the plan search space; and second, because cardinality estimation is harder and usually less reliable. The work on characteristic sets, pioneered by <a href="https://www.linkedin.com/pub/thomas-neumann/2/35/189" id="link-id0x7fd870c3e918">Thomas Neumann</a> in <a href="https://domino.mpi-inf.mpg.de/intranet/ag5/ag5publ.nsf/0/AD3DBAFA6FB90DD2C1257593002FF3DF/$file/rdf3x.pdf" id="link-id0x7fd870c3eae8">RDF3X</a>, exploits structural regularities to treat properties that usually occur in the same subject as columns of a table. 
The same idea is applied for tuning physical representation in <a href="http://homepages.cwi.nl/~duc/papers/emergentschema_www15.pdf" id="link-id0x7fd8721c1ea8">the joint Virtuoso / MonetDB work published at WWW 2015</a>.</p> <p>The Virtuoso results discussed here, however, are all based on a single RDF quad table with Virtuoso's default index configuration.</p> <p>Introducing query plan caching raises the Virtuoso score from 80 qps to 144 qps at the 256 Mtriple scale. The SPB queries are not extremely complex; lookups with many more triple patterns exist in actual workloads, e.g., <a href="http://dbpedia.org/resource/OpenPHACTS" id="link-id0x7fd8721313e8">Open PHACTS</a>. In such applications, query optimization indeed dominates execution times. In SPB, data volumes touched by queries grow near-linearly with data scale. At the 256 Mtriple scale, nearly half of CPU cycles are spent deciding a query plan. Below are the CPU cycles for execution and compilation per query type, sorted by descending sum of the times, scaled to milliseconds per execution. These are taken from a one-minute sample of running at full throughput.</p> <p>The test system is the same as used before in the <a href="http://www.openlinksw.com/weblog/oerling/?id=1739" id="link-id0x7fd872131738">TPC-H series</a>: dual Xeon E5-2630 Sandy Bridge, 2 x 6 cores x 2 threads, 2.3GHz, 192 GB RAM.</p> <p>We measure the compile and execute times, with and without using hash join. When considering hash join, the throughput is 80 qps. When not considering hash join, the throughput is 110 qps. With query plan caching, the throughput is 145 qps whether or not hash join is considered. 
Using hash join is not significant for the workload but considering its use in query optimization leads to significant extra work.</p> <table style="padding:10px;border-spacing:10px;margin-left:auto;margin-right:auto;"> <tr style="text-align:center;"> <td> <h3>With hash join</h3> <table style="padding:10px;border-spacing:10px;margin-left:auto;margin-right:auto;"> <tr style="text-align:center;"> <th>Compile</th> <th>Execute</th> <th>Total</th> <th>Query</th> </tr> <tr> <td style="text-align:right;"> <code><i> 3156 ms </i></code> </td> <td style="text-align:right;"> <code><i> 1181 ms </i></code> </td> <td style="text-align:right;"> <code><i> 4337 ms </i></code> </td> <td style="text-align:right;"> <code><i> Total </i></code></td> </tr> <tr> <td style="text-align:right;"> <code> 1327 ms </code> </td> <td style="text-align:right;"> <code> 28 ms </code> </td> <td style="text-align:right;"> <code> 1355 ms </code> </td> <td style="text-align:right;"> <code> query 01 </code></td> </tr> <tr> <td style="text-align:right;"> <code> 444 ms </code> </td> <td style="text-align:right;"> <code> 460 ms </code> </td> <td style="text-align:right;"> <code> 904 ms </code> </td> <td style="text-align:right;"> <code> query 08 </code></td> </tr> <tr> <td style="text-align:right;"> <code> 466 ms </code> </td> <td style="text-align:right;"> <code> 54 ms </code> </td> <td style="text-align:right;"> <code> 520 ms </code> </td> <td style="text-align:right;"> <code> query 06 </code></td> </tr> <tr> <td style="text-align:right;"> <code> 123 ms </code> </td> <td style="text-align:right;"> <code> 268 ms </code> </td> <td style="text-align:right;"> <code> 391 ms </code> </td> <td style="text-align:right;"> <code> query 05 </code></td> </tr> <tr> <td style="text-align:right;"> <code> 257 ms </code> </td> <td style="text-align:right;"> <code> 5 ms </code> </td> <td style="text-align:right;"> <code> 262 ms </code> </td> <td style="text-align:right;"> <code> query 11 </code></td> </tr> <tr> <td 
style="text-align:right;"> <code> 191 ms </code> </td> <td style="text-align:right;"> <code> 59 ms </code> </td> <td style="text-align:right;"> <code> 250 ms </code> </td> <td style="text-align:right;"> <code> query 10 </code></td> </tr> <tr> <td style="text-align:right;"> <code> 9 ms </code> </td> <td style="text-align:right;"> <code> 179 ms </code> </td> <td style="text-align:right;"> <code> 188 ms </code> </td> <td style="text-align:right;"> <code> query 04 </code></td> </tr> <tr> <td style="text-align:right;"> <code> 114 ms </code> </td> <td style="text-align:right;"> <code> 26 ms </code> </td> <td style="text-align:right;"> <code> 140 ms </code> </td> <td style="text-align:right;"> <code> query 07 </code></td> </tr> <tr> <td style="text-align:right;"> <code> 46 ms </code> </td> <td style="text-align:right;"> <code> 62 ms </code> </td> <td style="text-align:right;"> <code> 108 ms </code> </td> <td style="text-align:right;"> <code> query 09 </code></td> </tr> <tr> <td style="text-align:right;"> <code> 71 ms </code> </td> <td style="text-align:right;"> <code> 25 ms </code> </td> <td style="text-align:right;"> <code> 96 ms </code> </td> <td style="text-align:right;"> <code> query 12 </code></td> </tr> <tr> <td style="text-align:right;"> <code> 61 ms </code> </td> <td style="text-align:right;"> <code> 13 ms </code> </td> <td style="text-align:right;"> <code> 74 ms </code> </td> <td style="text-align:right;"> <code> query 03 </code></td> </tr> <tr> <td style="text-align:right;"> <code> 47 ms </code> </td> <td style="text-align:right;"> <code> 2 ms </code> </td> <td style="text-align:right;"> <code> 49 ms </code> </td> <td style="text-align:right;"> <code> query 02 </code></td> </tr> </table> </td> <td style="text-align:center;"> </td> <td style="text-align:center;"> <h3>Without hash join</h3> <table style="padding:10px;border-spacing:10px;margin-left:auto;margin-right:auto;"> <tr style="text-align:center;"> <th>Compile</th> <th>Execute</th> <th>Total</th> 
<th>Query</th> </tr> <tr> <td style="text-align:right;"> <code><i> 1816 ms </i></code> </td> <td style="text-align:right;"> <code><i> 1019 ms </i></code> </td> <td style="text-align:right;"> <code><i> 2835 ms </i></code> </td> <td style="text-align:right;"> <code><i> Total </i></code></td> </tr> <tr> <td style="text-align:right;"> <code> 197 ms </code> </td> <td style="text-align:right;"> <code> 466 ms </code> </td> <td style="text-align:right;"> <code> 663 ms </code> </td> <td style="text-align:right;"> <code> query 08 </code></td> </tr> <tr> <td style="text-align:right;"> <code> 609 ms </code> </td> <td style="text-align:right;"> <code> 32 ms </code> </td> <td style="text-align:right;"> <code> 641 ms </code> </td> <td style="text-align:right;"> <code> query 01 </code></td> </tr> <tr> <td style="text-align:right;"> <code> 188 ms </code> </td> <td style="text-align:right;"> <code> 293 ms </code> </td> <td style="text-align:right;"> <code> 481 ms </code> </td> <td style="text-align:right;"> <code> query 05 </code></td> </tr> <tr> <td style="text-align:right;"> <code> 275 ms </code> </td> <td style="text-align:right;"> <code> 61 ms </code> </td> <td style="text-align:right;"> <code> 336 ms </code> </td> <td style="text-align:right;"> <code> query 09 </code></td> </tr> <tr> <td style="text-align:right;"> <code> 163 ms </code> </td> <td style="text-align:right;"> <code> 10 ms </code> </td> <td style="text-align:right;"> <code> 173 ms </code> </td> <td style="text-align:right;"> <code> query 03 </code></td> </tr> <tr> <td style="text-align:right;"> <code> 128 ms </code> </td> <td style="text-align:right;"> <code> 38 ms </code> </td> <td style="text-align:right;"> <code> 166 ms </code> </td> <td style="text-align:right;"> <code> query 10 </code></td> </tr> <tr> <td style="text-align:right;"> <code> 102 ms </code> </td> <td style="text-align:right;"> <code> 5 ms </code> </td> <td style="text-align:right;"> <code> 107 ms </code> </td> <td style="text-align:right;"> <code> 
query 11 </code></td> </tr> <tr> <td style="text-align:right;"> <code> 63 ms </code> </td> <td style="text-align:right;"> <code> 27 ms </code> </td> <td style="text-align:right;"> <code> 90 ms </code> </td> <td style="text-align:right;"> <code> query 12 </code></td> </tr> <tr> <td style="text-align:right;"> <code> 24 ms </code> </td> <td style="text-align:right;"> <code> 57 ms </code> </td> <td style="text-align:right;"> <code> 81 ms </code> </td> <td style="text-align:right;"> <code> query 06 </code></td> </tr> <tr> <td style="text-align:right;"> <code> 47 ms </code> </td> <td style="text-align:right;"> <code> 1 ms </code> </td> <td style="text-align:right;"> <code> 48 ms </code> </td> <td style="text-align:right;"> <code> query 02 </code></td> </tr> <tr> <td style="text-align:right;"> <code> 15 ms </code> </td> <td style="text-align:right;"> <code> 24 ms </code> </td> <td style="text-align:right;"> <code> 39 ms </code> </td> <td style="text-align:right;"> <code> query 07 </code></td> </tr> <tr> <td style="text-align:right;"> <code> 5 ms </code> </td> <td style="text-align:right;"> <code> 5 ms </code> </td> <td style="text-align:right;"> <code> 10 ms </code> </td> <td style="text-align:right;"> <code> query 04 </code></td> </tr> </table> </td> </tr> </table> <p>Considering hash join always slows down compilation, and sometimes improves and sometimes worsens execution. Some improvement in the cost model and in plan-space traversal order is possible, but removing compilation altogether via caching is better still. The results are as expected, since a lookup workload such as SPB has little use for hash join by nature.</p> <p>The rationale for considering hash join in the first place is that analytical workloads rely heavily on this. 
A good <a href="http://www.tpc.org/tpch/" id="link-id0x7fd8721ce268">TPC-H</a> score is simply infeasible without it, as <a href="http://www.openlinksw.com/weblog/oerling/?id=1739" id="link-id0x7fd8720f3f48">previously discussed on this blog</a>. If RDF is to be a serious contender beyond serving lookups, then hash join is indispensable. The decision to use it, however, depends on accurate cardinality estimates on either side of the join.</p> <p>Previous work (e.g., papers from <a href="http://www.forth.gr/" id="link-id0x7fd8720f8548">FORTH</a> around <a href="http://dbpedia.org/resource/MonetDB" id="link-id0x7fd8721bfe38">MonetDB</a>) advocates doing away with a cost model altogether, since one is hard and unreliable with RDF anyway. The idea is not without its attraction, but it means missing out on analytics or relying on query hints for hash join.</p> <p>The present Virtuoso thinking is that the preferred solution is not rule-based optimization, but rather using characteristic sets to reduce triples into wider tables, which also cuts down on plan search space and increases the reliability of cost estimation.</p> <p>When looking at execution alone, we see that actual database operations are low in the profile, with memory management taking the top 19%. This is due to <code>CONSTRUCT</code> queries allocating small blocks for returning graphs, which is entirely avoidable.</p>
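<p>To make the characteristic-set idea discussed above concrete, here is a minimal sketch (illustrative Python, not Virtuoso code; the data and function name are hypothetical) that groups subjects by the set of properties they carry. Each distinct property set identifies a candidate relational table whose columns are those properties:</p>

```python
from collections import defaultdict

def characteristic_sets(triples):
    """Group subjects by the set of properties they carry.

    Each distinct property set (a "characteristic set") is a candidate
    table whose columns are the properties in the set.
    `triples` is an iterable of (subject, property, object).
    """
    props_by_subject = defaultdict(set)
    for s, p, _o in triples:
        props_by_subject[s].add(p)
    sets = defaultdict(list)
    for s, props in props_by_subject.items():
        sets[frozenset(props)].append(s)
    return sets

# Toy data: two "document-like" subjects and one "label-only" subject.
triples = [
    ("e1", "title", "A"), ("e1", "author", "x"),
    ("e2", "title", "B"), ("e2", "author", "y"),
    ("e3", "label", "C"),
]
cs = characteristic_sets(triples)
# Two characteristic sets: {title, author} covering e1 and e2, {label} covering e3.
```

<p>In a real system, frequency counts over such sets guide both cardinality estimation and the physical reduction of triples into wider tables.</p>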
Orri Erling
oerling@openlinksw.com
2016-01-11T15:25:01.739223
DBpedia Usage Report, August 2015
http://www.openlinksw.com/weblog/oerling/?id=1864
2015-08-11T16:58:21Z
<p>We recently published the latest <a href="http://bit.ly/1IL35Xu" id="link-id0x2aabbfb1e308">DBpedia Usage Report</a>, covering v3.3 (released July, 2009) to v3.10 (sometimes called "DBpedia 2014"; released September, 2014).</p> <p>The new report has usage data through July 31, 2015, and brought a few surprises to our eyes. What do you think?</p>
Orri Erling
oerling@openlinksw.com
2015-08-11T12:58:21.429167-04:00
Big Data, Part 2: Virtuoso Meets Impala
http://www.openlinksw.com/weblog/oerling/?id=1862
2015-07-15T20:12:27Z
<p>In this article we will look at <a href="http://dbpedia.org/resource/Virtuoso_Universal_Server" id="link-id0x2aac0250bde8">Virtuoso</a> vs. <a href="https://dbpedia.org/resource/Cloudera_Impala" id="link-id0x2aac01e3c6d8">Impala</a> with 100G TPC-H on two R3.8 EC2 instances. We get a single-user win for Virtuoso by a factor of 136, and a five-user win by a factor of 55. The details and analysis follow.</p> <p>The load setup is the same as ever, with copying from CSV files attached as external tables into <a href="http://www.cloudera.com/content/cloudera/en/documentation/cloudera-impala/v2-0-x/topics/impala_parquet.html" id="link-id0x2aac014886d8">Parquet tables</a>. We get <code>lineitem</code> split over 88 Parquet files, which should provide enough parallelism for the platform. The Impala documentation states that there can be up to one thread per file, and here we wish to see maximum parallelism for a single query stream. We use the schema from the <a href="https://github.com/cloudera/impala" id="link-id0x2aac01cb0518">Impala github</a> checkout, with <code>string</code> for <code>string</code> and <code>date</code> columns, and <code>decimal</code> for <code>numbers</code>. We suppose the authors know what works best.</p> <p>The execution behavior is surprising. Sometimes we get full platform utilization, but quite often only 200% CPU per box. The query plan for Q1, for example, says 2 cores per box. This makes no sense, as the same plan full well knows the table cardinality. 
The settings for scanner threads and cores to use (in <code>impala-shell</code>) can be changed, but the behavior does not seem to change.</p> <p>Following are the run times for one query stream.</p> <table style="padding:10px;border-spacing:10px;margin-left:auto;margin-right:auto;"> <tr style="text-align:center;"> <th>Query</th> <th>Virtuoso</th> <th>Impala</th> <th>Notes</th> </tr> <tr> <th style="text-align:center;"> — </th> <td style="text-align:right;"> <code> 332 s </code> </td> <td style="text-align:right;"> <code> 841 s </code> </td> <td style="text-align:left;"> Data Load </td> </tr> <tr> <th style="text-align:center;"> Q1 </th> <td style="text-align:right;"> <code> 1.098 s </code> </td> <td style="text-align:right;"> <code> 164.61 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q2 </th> <td style="text-align:right;"> <code> 0.187 s </code> </td> <td style="text-align:right;"> <code> 24.19 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q3 </th> <td style="text-align:right;"> <code> 0.761 s </code> </td> <td style="text-align:right;"> <code> 105.70 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q4 </th> <td style="text-align:right;"> <code> 0.205 s </code> </td> <td style="text-align:right;"> <code> 179.67 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q5 </th> <td style="text-align:right;"> <code> 0.808 s </code> </td> <td style="text-align:right;"> <code> 84.51 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q6 </th> <td style="text-align:right;"> <code> 2.403 s </code> </td> <td style="text-align:right;"> <code> 4.43 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q7 </th> <td style="text-align:right;"> <code> 0.59 s </code> </td> <td style="text-align:right;"> <code> 270.88 
s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q8 </th> <td style="text-align:right;"> <code> 0.775 s </code> </td> <td style="text-align:right;"> <code> 51.89 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q9 </th> <td style="text-align:right;"> <code> 1.836 s </code> </td> <td style="text-align:right;"> <code> 177.72 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q10 </th> <td style="text-align:right;"> <code> 3.165 s </code> </td> <td style="text-align:right;"> <code> 39.85 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q11 </th> <td style="text-align:right;"> <code> 1.37 s </code> </td> <td style="text-align:right;"> <code> 22.56 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q12 </th> <td style="text-align:right;"> <code> 0.356 s </code> </td> <td style="text-align:right;"> <code> 17.03 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q13 </th> <td style="text-align:right;"> <code> 2.233 s </code> </td> <td style="text-align:right;"> <code> 103.67 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q14 </th> <td style="text-align:right;"> <code> 0.488 s </code> </td> <td style="text-align:right;"> <code> 10.86 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q15 </th> <td style="text-align:right;"> <code> 0.72 s </code> </td> <td style="text-align:right;"> <code> 11.49 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q16 </th> <td style="text-align:right;"> <code> 0.814 s </code> </td> <td style="text-align:right;"> <code> 23.93 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q17 </th> <td 
style="text-align:right;"> <code> 0.681 s </code> </td> <td style="text-align:right;"> <code> 276.06 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q18 </th> <td style="text-align:right;"> <code> 1.324 s </code> </td> <td style="text-align:right;"> <code> 267.13 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q19 </th> <td style="text-align:right;"> <code> 0.417 s </code> </td> <td style="text-align:right;"> <code> 368.80 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q20 </th> <td style="text-align:right;"> <code> 0.792 s </code> </td> <td style="text-align:right;"> <code> 60.45 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q21 </th> <td style="text-align:right;"> <code> 0.720 s </code> </td> <td style="text-align:right;"> <code> 418.09 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q22 </th> <td style="text-align:right;"> <code> 0.155 s </code> </td> <td style="text-align:right;"> <code> 40.59 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Total </th> <td style="text-align:right;"> <code> 20 s </code> </td> <td style="text-align:right;"> <code> 2724 s </code> </td> <td style="text-align:left;"> </td> </tr> </table> <p>Because the platform utilization was often low, we made a second experiment running the same queries in five parallel sessions. We show the average execution time for each query. We then compare this with the Virtuoso throughput run average times. We permute the single query stream used in the first tests in 5 different orders, as per the TPC-H spec. The results are not entirely comparable, because Virtuoso is doing the refreshes in parallel. 
According to Impala documentation, there is no random delete operation, so the refreshes cannot be implemented.</p> <p>Just to establish a baseline, we do <code>SELECT COUNT (*) FROM lineitem</code>. This takes 20s when run by itself. When run in five parallel sessions, the fastest terminates in 64s and the slowest in 69s. Looking at <code>top</code>, the platform utilization is indeed about 5x more in CPU%, but the concurrency does not add much to throughput. This is odd, considering that there is no synchronization requirement worth mentioning between the operations.</p> <p>Following are the average times for each query in the 5 stream experiment.</p> <table style="padding:10px;border-spacing:10px;margin-left:auto;margin-right:auto;"> <tr style="text-align:center;"> <th>Query</th> <th>Virtuoso</th> <th>Impala</th> <th>Notes</th> </tr> <tr> <th style="text-align:center;"> Q1 </th> <td style="text-align:right;"> <code> 1.95 s </code> </td> <td style="text-align:right;"> <code> 191.81 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q2 </th> <td style="text-align:right;"> <code> 0.70 s </code> </td> <td style="text-align:right;"> <code> 40.40 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q3 </th> <td style="text-align:right;"> <code> 2.01 s </code> </td> <td style="text-align:right;"> <code> 95.67 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q4 </th> <td style="text-align:right;"> <code> 0.71 s </code> </td> <td style="text-align:right;"> <code> 345.11 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q5 </th> <td style="text-align:right;"> <code> 2.93 s </code> </td> <td style="text-align:right;"> <code> 112.29 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q6 </th> <td style="text-align:right;"> <code> 4.76 s </code> </td> <td 
style="text-align:right;"> <code> 14.41 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q7 </th> <td style="text-align:right;"> <code> 2.08 s </code> </td> <td style="text-align:right;"> <code> 329.25 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q8 </th> <td style="text-align:right;"> <code> 3.00 s </code> </td> <td style="text-align:right;"> <code> 98.91 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q9 </th> <td style="text-align:right;"> <code> 5.58 s </code> </td> <td style="text-align:right;"> <code> 250.88 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q10 </th> <td style="text-align:right;"> <code> 8.23 s </code> </td> <td style="text-align:right;"> <code> 55.23 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q11 </th> <td style="text-align:right;"> <code> 4.26 s </code> </td> <td style="text-align:right;"> <code> 27.84 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q12 </th> <td style="text-align:right;"> <code> 1.74 s </code> </td> <td style="text-align:right;"> <code> 37.66 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q13 </th> <td style="text-align:right;"> <code> 6.07 s </code> </td> <td style="text-align:right;"> <code> 147.69 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q14 </th> <td style="text-align:right;"> <code> 1.73 s </code> </td> <td style="text-align:right;"> <code> 23.91 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q15 </th> <td style="text-align:right;"> <code> 2.27 s </code> </td> <td style="text-align:right;"> <code> 23.79 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th 
style="text-align:center;"> Q16 </th> <td style="text-align:right;"> <code> 2.41 s </code> </td> <td style="text-align:right;"> <code> 34.76 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q17 </th> <td style="text-align:right;"> <code> 3.92 s </code> </td> <td style="text-align:right;"> <code> 362.43 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q18 </th> <td style="text-align:right;"> <code> 3.02 s </code> </td> <td style="text-align:right;"> <code> 348.08 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q19 </th> <td style="text-align:right;"> <code> 2.27 s </code> </td> <td style="text-align:right;"> <code> 443.94 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q20 </th> <td style="text-align:right;"> <code> 3.05 s </code> </td> <td style="text-align:right;"> <code> 92.50 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q21 </th> <td style="text-align:right;"> <code> 2.00 s </code> </td> <td style="text-align:right;"> <code> 623.69 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q22 </th> <td style="text-align:right;"> <code> 0.37 s </code> </td> <td style="text-align:right;"> <code> 61.36 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Total for <br /> Slowest Stream </th> <td style="text-align:right;"> <code> 67 s </code> </td> <td style="text-align:right;"> <code> 3740 s </code> </td> <td style="text-align:left;"> </td> </tr> </table> <p>There are 4 queries in Impala that terminated with an error (<code>memory limit exceeded</code>). These were two Q21s, one Q19, one Q4. One stream executed without errors, so this stream is reported as the slowest stream. 
Q21 will, in the absence of indexed access, build a hash-join build side from half of <code>lineitem</code>, which explains running out of memory. Virtuoso does Q21 mostly by index.</p> <p>Looking at the 5 streams, we see CPU between 1000% and 2000% on either box. This looks about 5x more than the 250% per box that we were seeing with, for instance, Q1. The process sizes for <code>impalad</code> are over 160G, certainly enough to have the working set in memory. <code>iostat</code> also does not show any <code>I/O</code>, so we seem to be running from memory, as intended.</p> <p>We observe that Impala does not store tables in any specific order. Therefore a merge join of <code>orders</code> and <code>lineitem</code> is not possible. Thus we always get a hash join with a potentially large build side, e.g., half of <code>orders</code> and half of <code>lineitem</code> in Q21, and all <code>orders</code> in Q9. This explains in part why these take so long. <a href="http://www.tpc.org/tpcds/" id="link-id0x2aac01486778">TPC-DS</a> does not pose this particular problem though, as there are no tables in the DS schema where the primary key of one would be the prefix of that of another.</p> <p>However, the <code>lineitem/orders</code> join does not explain the scores on Q1, Q20, or Q19. A simple hash join of <code>lineitem</code> and <code>part</code> was about 90s, with a replicated <code>part</code> hash table. In the profile, the hash probe was 74s, which seems excessive. One would have to single-step through the hash probe to find out what actually happens. Maybe there are prohibitive numbers of collisions, which would throw off the results across the board. We would have to ask the Impala community about this.</p> <p>Anyway, <i><b>Impala experts out there are invited to set the record straight.</b></i> We have attached the results and the output of the Impala <code>profile</code> statement for each query for the single-stream run. 
<code><a href="http://www.openlinksw.com/weblog/oerling/media/impala_stream0.zip" title="impala_stream0.zip" alt="impala_stream0.zip" id="link-id0x2aac0170e038">impala_stream0.zip</a></code> contains the evidence for the single-stream run; <code><a href="http://www.openlinksw.com/weblog/oerling/media/impala-stream1-5.zip" title="impala-stream1-5.zip" alt="impala-stream1-5.zip" id="link-id0x2aac0170e258">impala-stream1-5.zip</a></code> holds the 5-stream run.</p> <p>To be more Big Data-like, we should probably run with significantly larger data than memory; for example, 3T in 0.5T RAM. At EC2, we could do this with 2 I3.8 instances (6.4T SSD each). With Virtuoso, we'd be done in 8 hours or so, counting 2x for the I/O and 30x for the greater scale (the 100G experiment goes in 8 minutes or so, all included). With Impala, we could be running for weeks, so at the very least we'd like to do this with an Impala expert, to make sure things are done right and will not have to be retried. Some of the hash joins would have to be done in multiple passes and with partitioning.</p> <p>In subsequent articles, we will look at other players in this space, and possibly some other benchmarks, like the TPC-DS subset that <a href="http://www.actian.com/" id="link-id0x2aac0045dc78">Actian</a> uses to beat Impala.</p>
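<p>For reference, the headline factors quoted at the top of this article follow directly from the totals in the two tables:</p>

```python
# Totals taken from the tables above, in seconds.
virtuoso_single, impala_single = 20, 2724   # single query stream, total
virtuoso_multi, impala_multi = 67, 3740     # slowest of the 5 streams

single_user_factor = impala_single / virtuoso_single  # 136.2
five_user_factor = impala_multi / virtuoso_multi      # ~55.8
```

<p>Rounding down the five-stream ratio gives the "factor of 55" quoted in the introduction.</p>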
Orri Erling
oerling@openlinksw.com
2015-07-15T16:12:27.639584-04:00
Vectored Execution in Column/Row Stores
http://www.openlinksw.com/weblog/oerling/?id=1860
2015-07-13T17:46:03Z
<p>This article discusses the relationship between vectored execution and column- and row-wise data representations. <a href="http://dbpedia.org/resource/Column-oriented_DBMS" id="link-id0x2aab8a00d208">Column stores</a> are traditionally considered to be good for big scans but poor at indexed access. This is not necessarily so, though. We take <a href="http://www.openlinksw.com/weblog/oerling/?id=1789" id="link-id0x2aab8a00d398">TPC-H Q9</a> as a starting point, working with different row- and column-wise data representations and index choices. The goal of the article is to provide a primer on the performance implications of different physical designs.</p> <p>All the experiments are against the TPC-H 100G dataset hosted in Virtuoso on the test system used before in the <a href="http://www.openlinksw.com/weblog/oerling/?id=1739" id="link-id0x2aab8bcec9d8">TPC-H series</a>: dual Xeon E5-2630, 2x6 cores x 2 threads, 2.3GHz, 192 GB RAM. The Virtuoso version corresponds to the <a href="https://github.com/v7fasttrack/virtuoso-opensource/tree/feature/analytics" id="link-id0x2aab8bcecbd8">feature/analytics branch</a> in the <a href="https://github.com/v7fasttrack/virtuoso-opensource/" id="link-id0x2aab8bceccb8">v7fasttrack github project</a>. All run times are from memory, and queries generally run at full platform, 24 concurrent threads.</p> <p>We note that RDF stores and graph databases usually do not have secondary indices with multiple key parts. However, they predominantly do index-based access, as opposed to big scans and hash joins. 
To explore the impact of this, we have decomposed the tables into projections with a single dependent column, which approximates a triple store or a vertically-decomposed graph database like <a href="http://dbpedia.org/resource/DEX_(Graph_database)" id="link-id0x2aab8bced038">Sparksee</a>.</p> <p>So, in these experiments, we store the relevant data four times over, as follows:</p> <ul> <li> <p>100G TPC-H dataset in the column-wise schema as discussed in the TPC-H series, now complemented with indices on <code>l_partkey</code> and on <code>l_partkey, l_suppkey</code> </p> </li> <li> <p>The same in row-wise data representation </p> </li> <li> <p>Column-wise tables with a single dependent column for <code>l_partkey, l_suppkey, l_extendedprice, l_quantity, l_discount, ps_supplycost, s_nationkey, p_name</code>. These all have the original table's primary key, e.g., <code>l_orderkey, l_linenumber</code> for the <code>l_</code>-prefixed tables </p> </li> <li> <p>The same with row-wise tables</p> </li> </ul> <p>The column-wise structures are in the <code>DB</code> qualifier, and the row-wise are in the <code>R</code> qualifier. There is a summary of space consumption at the end of the article. This is relevant for scalability, since even if row-wise structures can be faster for scattered random access, they will fit less data in RAM, typically 2 to 3x less. Thus, if "faster" rows cause the working set not to fit, "slower" columns will still win.</p> <p>As a starting point, we know that the best Q9 is the one in the Virtuoso TPC-H implementation, which is described in <a href="http://www.openlinksw.com/weblog/oerling/?id=1789" id="link-id0x2aab8a78d208">Part 10 of the TPC-H blog series</a>. This is a scan of <code>lineitem</code> with a selective hash join followed by ordered index access of <code>orders</code>, then hash joins against the smaller tables. 
There are special tricks to keep the hash tables small by propagating restrictions from the probe side to the build side.</p> <p>The query texts are <a href="http://virtuoso.openlinksw.com/dataspace/doc/dav/wiki/Main/BlogFiles20150713VectoredExecution/" id="link-id0x2aab8bc59538">available here</a>, along with the table declarations and scripts for populating the single-column projections. <code><a href="http://virtuoso.openlinksw.com/dataspace/doc/dav/wiki/Main/BlogFiles20150713VectoredExecution/rs.sql" id="link-id0x2aab8bc59778">rs.sql</a></code> makes the tables and indices, <code><a href="http://virtuoso.openlinksw.com/dataspace/doc/dav/wiki/Main/BlogFiles20150713VectoredExecution/rsload.sql" id="link-id0x2aab8bc59918">rsload.sql</a></code> copies the data from the TPC-H tables.</p> <p>The business question is to calculate the profit from sale of selected <code>parts</code> grouped by <code>year</code> and <code>country</code> of the <code>supplier</code>. This touches most of the tables, aggregates over 1/17 of all sales, and touches at least every page of the tables concerned, if not every row.</p> <blockquote> <code><pre>SELECT n_name AS nation, EXTRACT(year FROM o_orderdate) AS o_year, SUM (l_extendedprice * (1 - l_discount) - ps_supplycost * l_quantity) AS sum_profit FROM lineitem, part, partsupp, orders, supplier, nation WHERE s_suppkey = l_suppkey AND ps_suppkey = l_suppkey AND ps_partkey = l_partkey AND p_partkey = l_partkey AND o_orderkey = l_orderkey AND s_nationkey = n_nationkey AND p_name LIKE '%green%' GROUP BY nation, o_year ORDER BY nation, o_year DESC </pre> </code> </blockquote> <h2>Query Variants</h2> <p>The query variants discussed here are:</p> <ol> <li> <p>Hash based, the best plan -- <code><a href="http://virtuoso.openlinksw.com/dataspace/doc/dav/wiki/Main/BlogFiles20150713VectoredExecution/9h.sql" id="link-id0x2aab8bd04788">9h.sql</a></code> </p> </li> <li> <p>Index based with multicolumn rows, with <code>lineitem</code> index on 
<code>l_partkey</code> -- <code><a href="http://virtuoso.openlinksw.com/dataspace/doc/dav/wiki/Main/BlogFiles20150713VectoredExecution/9i.sql" id="link-id0x2aab8bd04bb8">9i.sql</a>, <a href="http://virtuoso.openlinksw.com/dataspace/doc/dav/wiki/Main/BlogFiles20150713VectoredExecution/9ir.sql" id="link-id0x2aab88894bf8">9ir.sql</a></code> </p> </li> <li> <p>Index based with multicolumn rows, <code>lineitem</code> index on <code>l_partkey, l_suppkey</code> -- <code><a href="http://virtuoso.openlinksw.com/dataspace/doc/dav/wiki/Main/BlogFiles20150713VectoredExecution/9ip.sql" id="link-id0x2aab88894fc8">9ip.sql</a>, <a href="http://virtuoso.openlinksw.com/dataspace/doc/dav/wiki/Main/BlogFiles20150713VectoredExecution/9ipr.sql" id="link-id0x2aab888950c8">9ipr.sql</a></code> </p> </li> <li> <p>Index based with one table per dependent column, index on <code>l_partkey</code> -- <code><a href="http://virtuoso.openlinksw.com/dataspace/doc/dav/wiki/Main/BlogFiles20150713VectoredExecution/9p.sql" id="link-id0x2aab8a4e42c8">9p.sql</a></code> </p> </li> <li> <p>Index based with one table per dependent column, with materialized <code>l_partkey, l_suppkey</code> -> <code>l_orderkey, l_linenumber</code> -- <code><a href="http://virtuoso.openlinksw.com/dataspace/doc/dav/wiki/Main/BlogFiles20150713VectoredExecution/9pp.sql" id="link-id0x2aab8a4e4708">9pp.sql</a>, <a href="http://virtuoso.openlinksw.com/dataspace/doc/dav/wiki/Main/BlogFiles20150713VectoredExecution/9ppr.sql" id="link-id0x2aab8a4e4808">9ppr.sql</a></code> </p> </li> </ol> <p>These are done against row- and column-wise data representations with 3 different vectorization settings. The dynamic vector size starts at 10,000 values in a vector, and adaptively upgrades this to 1,000,000 if it finds that index access is too sparse. 
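</p> <p>The adaptive choice can be modeled with a simple density heuristic. The following is an illustrative sketch only, not Virtuoso's actual logic; the function name and the gap threshold are invented for the example:</p> <blockquote> <code><pre>
# Toy model of adaptive vector sizing: if vectored index lookups land too far
# apart, grow the vector so that more hits fall on the same pages.
def next_vector_size(total_rows, vector_size, max_size=1_000_000, target_gap=1_000):
    """Pick the vector size for the next batch of random index lookups."""
    expected_gap = total_rows // vector_size   # average distance between hits
    if expected_gap > target_gap and max_size > vector_size:
        return max_size    # sparse access: upgrade for a denser pattern
    return vector_size

# 600M-row lineitem: 10K lookups hit every 60,000th row, so upgrade;
# 1M lookups hit every 600th row, which is dense enough.
assert next_vector_size(600_000_000, 10_000) == 1_000_000
assert next_vector_size(600_000_000, 1_000_000) == 1_000_000
</pre> </code> </blockquote> <p>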
Accessing rows close to each other is more efficient than accessing widely scattered rows in vectored index access, so using a larger vector will likely cause a denser, hence more efficient, access pattern.</p> <p>The 10K vector size corresponds to running with a fixed vector size. The Vector 1 setting sets the vector size to 1, effectively running a tuple at a time, which corresponds to a non-vectorized engine.</p> <p>We note that <code>lineitem</code> and its single column projections contain 600M rows. So, a vector of 10K values will hit, on average, every 60,000th row. A vector of 1,000,000 will thus hit every 600th row. This is when doing random lookups that are in no specific order, e.g., getting <code>lineitems</code> by a secondary index on <code>l_partkey</code>.</p> <h3>1 — Hash-based plan</h3> <table style="padding:10px;border-spacing:10px;margin-left:auto;margin-right:auto;"> <tr style="text-align:center;"> <th>Vector</th> <th>Dynamic</th> <th>10k</th> <th>1</th> </tr> <tr> <th style="text-align:left;">Column-wise </th> <td style="text-align:right;"><code>4.1 s</code> </td> <td style="text-align:right;"><code>4.1 s</code> </td> <td style="text-align:right;"><code>145 s</code> </td> </tr> <tr> <th style="text-align:left;">Row-wise </th> <td style="text-align:right;"><code>25.6 s</code> </td> <td style="text-align:right;"><code>25.9 s</code> </td> <td style="text-align:right;"><code>45.4 s</code> </td> </tr> </table> <p>Dynamic vector size has no effect here, as there is no indexed access that would gain from more locality. The column store is much faster because of less memory access (just scan the <code>l_partkey</code> column, filter this with a Bloom filter, and then do a hash table lookup to pick only items with the desired <code>part</code>). The other columns are accessed only for the matching rows. The hash lookup is vectored, since there are hundreds of compressed <code>l_partkey</code> values available at a time. 
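</p> <p>In outline, a vectored probe first screens a whole batch of keys with the Bloom filter and only then touches the hash table for the survivors. The following is an illustrative Python sketch under that assumption, not Virtuoso's implementation:</p> <blockquote> <code><pre>
# Sketch of a vectored, Bloom-filtered hash probe. The build side holds the
# selected part keys; the probe side supplies l_partkey values in batches.
def build_side(keys, nbits=64):
    table = set(keys)                          # stand-in for the hash table
    bloom = {hash(k) % nbits for k in keys}    # stand-in for the Bloom filter
    return table, bloom

def probe_vectored(table, bloom, probe_keys, nbits=64):
    # Pass 1: cheap, cache-friendly Bloom test over the whole vector.
    candidates = [k for k in probe_keys if hash(k) % nbits in bloom]
    # Pass 2: hash table lookups only for the few surviving keys.
    return [k for k in candidates if k in table]

table, bloom = build_side(range(0, 100, 17))
assert probe_vectored(table, bloom, list(range(40))) == [0, 17, 34]
</pre> </code> </blockquote> <p>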
The row store does the hash lookup row by row, hence losing cache locality and instruction-level parallelism.</p> <p>Without vectorization, we have a situation where the <code>lineitem</code> scan emits one row at a time. Restarting the scan with the column store takes much longer, since 5 buffers have to be located and pinned instead of one for the row store. The row store is thus slowed down less, but it too suffers by almost a factor of 2 from interpretation overhead.</p> <h3>2 — Index-based, <code>lineitem</code> indexed on <code>l_partkey</code> </h3> <table style="padding:10px;border-spacing:10px;margin-left:auto;margin-right:auto;"> <tr style="text-align:center;"> <th>Vector</th> <th>Dynamic</th> <th>10k</th> <th>1</th> </tr> <tr> <th style="text-align:left;">Column-wise </th> <td style="text-align:right;"><code> 30.4 s </code> </td> <td style="text-align:right;"><code> 62.3 s </code> </td> <td style="text-align:right;"><code> 321 s</code> </td> </tr> <tr> <th style="text-align:left;">Row-wise </th> <td style="text-align:right;"><code> 31.8 s</code> </td> <td style="text-align:right;"><code> 27.7 s</code> </td> <td style="text-align:right;"><code> 122 s </code> </td> </tr> </table> <p>Here the plan scans <code>part</code>, then <code>partsupp</code>, which shares ordering with <code>part</code>; both are ordered on <code>partkey</code>. Then <code>lineitem</code> is fetched by a secondary index on <code>l_partkey</code>. This produces <code>l_orderkey, l_linenumber</code>, which are used to get the <code>l_suppkey</code>. We then check if the <code>l_suppkey</code> matches the <code>ps_suppkey</code> from <code>partsupp</code>, which drops 3/4 of the rows. The next join is on <code>orders</code>, which shares ordering with <code>lineitem</code>; both are ordered on <code>orderkey</code>.</p> <p>There is a narrow win for columns with dynamic vector size. 
When access becomes scattered, rows win by 2.5x, because there is only one page to access instead of 1 + 3 for columns. This is compensated for if the next item is found on the same page, which happens if the access pattern is denser. </p> <h3>3 — Index-based, <code>lineitem</code> indexed on <code>l_partkey, l_suppkey</code> </h3> <table style="padding:10px;border-spacing:10px;margin-left:auto;margin-right:auto;"> <tr style="text-align:center;"> <th>Vector</th> <th>Dynamic</th> <th>10k</th> <th>1</th> </tr> <tr> <th style="text-align:left;">Column-wise </th> <td style="text-align:right;"><code> 16.9 s</code> </td> <td style="text-align:right;"><code> 47.2 s </code> </td> <td style="text-align:right;"><code> 151 s </code> </td> </tr> <tr> <th style="text-align:left;">Row-wise </th> <td style="text-align:right;"><code> 22.4 s</code> </td> <td style="text-align:right;"><code> 20.7 s</code> </td> <td style="text-align:right;"><code> 89 s </code> </td> </tr> </table> <p>This is similar to the previous case, except that now only <code>lineitems</code> that match <code>ps_partkey, ps_suppkey</code> are accessed, as the secondary index has two columns. Access is more local. 
Columns thus win more with dynamic vector size.</p> <h3>4 — Decomposed, index on <code>l_partkey</code> </h3> <table style="padding:10px;border-spacing:10px;margin-left:auto;margin-right:auto;"> <tr style="text-align:center;"> <th>Vector</th> <th>Dynamic</th> <th>10k</th> <th>1</th> </tr> <tr> <th style="text-align:left;">Column-wise </th> <td style="text-align:right;"><code> 35.7 s </code> </td> <td style="text-align:right;"><code> 170 s </code> </td> <td style="text-align:right;"><code> 601 s </code> </td> </tr> <tr> <th style="text-align:left;">Row-wise </th> <td style="text-align:right;"><code> 44.5 s </code> </td> <td style="text-align:right;"><code> 56.2 s </code> </td> <td style="text-align:right;"><code> 130 s </code> </td> </tr> </table> <p>Now, each of the <code>l_extendedprice, l_discount, l_quantity</code> and <code>l_suppkey</code> is a separate index lookup. The times are slightly higher but the dynamic is the same.</p> <p>The non-vectored columns case is hit the hardest.</p> <h3>5 — Decomposed, index on <code>l_partkey, l_suppkey</code> </h3> <table style="padding:10px;border-spacing:10px;margin-left:auto;margin-right:auto;"> <tr style="text-align:center;"> <th>Vector</th> <th>Dynamic</th> <th>10k</th> <th>1</th> </tr> <tr> <th style="text-align:left;">Column-wise </th> <td style="text-align:right;"><code> 19.6 s </code> </td> <td style="text-align:right;"><code> 111 s </code> </td> <td style="text-align:right;"><code> 257 s </code> </td> </tr> <tr> <th style="text-align:left;">Row-wise </th> <td style="text-align:right;"><code> 32.0 s </code> </td> <td style="text-align:right;"><code> 37 s </code> </td> <td style="text-align:right;"><code> 74.9 s </code> </td> </tr> </table> <p>Again, we see the same dynamic as with a multicolumn table. 
Columns win slightly more at long vector sizes because of overall better index performance in the presence of locality.</p> <h2>Space Utilization </h2> <p>The following tables list the space consumption in megabytes of allocated pages. Unallocated space in database files is not counted.</p> <p>The row-wise table also contains entries for column-wise structures (<code>DB.*</code>) since these have a row-wise sparse index. The size of this is however negligible, under 1% of the column-wise structures.</p> <table style="padding:10px;border-spacing:10px;margin-left:auto;margin-right:auto;"> <tr> <th style="text-align: center;">Row-Wise</th> <th> </th> <th style="text-align: center;"> Column-Wise</th> </tr> <tr> <td style="vertical-align:top;"> <table style="padding:10px;border-spacing:10px;margin-left:auto;margin-right:auto;"> <tr style="text-align:center;"> <th>MB</th> <th>structure</th> </tr> <tr> <td style="text-align:right;"><code>73515</code> </td> <td style="text-align:left;"><code>R.DBA.LINEITEM</code> </td> </tr> <tr> <td style="text-align:right;"><code>14768</code> </td> <td style="text-align:left;"><code>R.DBA.ORDERS</code> </td> </tr> <tr> <td style="text-align:right;"><code>11728</code> </td> <td style="text-align:left;"><code>R.DBA.PARTSUPP</code> </td> </tr> <tr> <td style="text-align:right;"><code>10161</code></td> <td style="text-align:left;"><code>r_lpk_pk</code> </td> </tr> <tr> <td style="text-align:right;"><code>10003</code></td> <td style="text-align:left;"><code>r_l_pksk</code> </td> </tr> <tr> <td style="text-align:right;"><code>9908</code></td> <td style="text-align:left;"><code>R.DBA.l_partkey</code> </td> </tr> <tr> <td style="text-align:right;"><code>8761</code></td> <td style="text-align:left;"><code>R.DBA.l_extendedprice</code> </td> </tr> <tr> <td style="text-align:right;"><code>8745</code></td> <td style="text-align:left;"><code>R.DBA.l_discount</code> </td> </tr> <tr> <td style="text-align:right;"><code>8738</code></td> <td 
style="text-align:left;"><code>r_l_pk</code> </td> </tr> <tr> <td style="text-align:right;"><code>8713</code></td> <td style="text-align:left;"><code>R.DBA.l_suppkey</code> </td> </tr> <tr> <td style="text-align:right;"><code>6267</code></td> <td style="text-align:left;"><code>R.DBA.l_quantity</code> </td> </tr> <tr> <td style="text-align:right;"><code>2223</code></td> <td style="text-align:left;"><code>R.DBA.CUSTOMER</code> </td> </tr> <tr> <td style="text-align:right;"><code>2180</code></td> <td style="text-align:left;"><code>R.DBA.o_orderdate</code> </td> </tr> <tr> <td style="text-align:right;"><code>2041</code></td> <td style="text-align:left;"><code>r_O_CK</code> </td> </tr> <tr> <td style="text-align:right;"><code>1911</code></td> <td style="text-align:left;"><code>R.DBA.PART</code> </td> </tr> <tr> <td style="text-align:right;"><code>1281</code></td> <td style="text-align:left;"><code>R.DBA.ps_supplycost</code> </td> </tr> <tr> <td style="text-align:right;"><code>811</code></td> <td style="text-align:left;"><code>R.DBA.p_name</code> </td> </tr> <tr> <td style="text-align:right;"><code>127</code></td> <td style="text-align:left;"><code>R.DBA.SUPPLIER</code> </td> </tr> <tr> <td style="text-align:right;"><code>88</code></td> <td style="text-align:left;"><code>DB.DBA.LINEITEM</code> </td> </tr> <tr> <td style="text-align:right;"><code>24</code></td> <td style="text-align:left;"><code>DB.DBA.ORDERS</code> </td> </tr> <tr> <td style="text-align:right;"><code>11</code></td> <td style="text-align:left;"><code>DB.DBA.PARTSUPP</code> </td> </tr> <tr> <td style="text-align:right;"><code>9</code></td> <td style="text-align:left;"><code>R.DBA.s_nationkey</code> </td> </tr> <tr> <td style="text-align:right;"><code>5</code></td> <td style="text-align:left;"><code>l_pksk</code> </td> </tr> <tr> <td style="text-align:right;"><code>4</code></td> <td style="text-align:left;"><code>DB.DBA.l_partkey</code> </td> </tr> <tr> <td style="text-align:right;"><code>4</code></td> <td 
style="text-align:left;"><code>lpk_pk</code> </td> </tr> <tr> <td style="text-align:right;"><code>4</code></td> <td style="text-align:left;"><code>DB.DBA.l_extendedprice</code> </td> </tr> <tr> <td style="text-align:right;"><code>3</code></td> <td style="text-align:left;"><code>l_pk</code> </td> </tr> <tr> <td style="text-align:right;"><code>3</code></td> <td style="text-align:left;"><code>DB.DBA.l_suppkey</code> </td> </tr> <tr> <td style="text-align:right;"><code>2</code></td> <td style="text-align:left;"><code>DB.DBA.CUSTOMER</code> </td> </tr> <tr> <td style="text-align:right;"><code>2</code></td> <td style="text-align:left;"><code>DB.DBA.l_quantity</code> </td> </tr> <tr> <td style="text-align:right;"><code>1</code></td> <td style="text-align:left;"><code>DB.DBA.PART</code> </td> </tr> <tr> <td style="text-align:right;"><code>1</code></td> <td style="text-align:left;"><code>O_CK</code> </td> </tr> <tr> <td style="text-align:right;"><code>1</code></td> <td style="text-align:left;"><code>DB.DBA.l_discount</code> </td> </tr> </table> </td> <td> </td> <td style="vertical-align:top;"> <table style="padding:10px;border-spacing:10px;margin-left:auto;margin-right:auto;"> <tr style="text-align:center;"> <th>MB</th> <th>structure</th> </tr> <tr> <td style="text-align:right;"><code>36482</code> </td> <td style="text-align:left;"><code>DB.DBA.LINEITEM</code> </td> </tr> <tr> <td style="text-align:right;"><code>13087</code></td> <td style="text-align:left;"><code>DB.DBA.ORDERS</code> </td> </tr> <tr> <td style="text-align:right;"><code>11587</code></td> <td style="text-align:left;"><code>DB.DBA.PARTSUPP</code> </td> </tr> <tr> <td style="text-align:right;"><code>5181</code></td> <td style="text-align:left;"><code>DB.DBA.l_extendedprice</code> </td> </tr> <tr> <td style="text-align:right;"><code>4431</code></td> <td style="text-align:left;"><code>l_pksk</code> </td> </tr> <tr> <td style="text-align:right;"><code>3072</code></td> <td 
style="text-align:left;"><code>DB.DBA.l_partkey</code> </td> </tr> <tr> <td style="text-align:right;"><code>2958</code></td> <td style="text-align:left;"><code>lpk_pk</code> </td> </tr> <tr> <td style="text-align:right;"><code>2918</code></td> <td style="text-align:left;"><code>l_pk</code> </td> </tr> <tr> <td style="text-align:right;"><code>2835</code></td> <td style="text-align:left;"><code>DB.DBA.l_suppkey</code> </td> </tr> <tr> <td style="text-align:right;"><code>2067</code></td> <td style="text-align:left;"><code>DB.DBA.CUSTOMER</code> </td> </tr> <tr> <td style="text-align:right;"><code>1618</code></td> <td style="text-align:left;"><code>DB.DBA.PART</code> </td> </tr> <tr> <td style="text-align:right;"><code>1156</code></td> <td style="text-align:left;"><code>DB.DBA.l_quantity</code> </td> </tr> <tr> <td style="text-align:right;"><code>961</code></td> <td style="text-align:left;"><code>DB.DBA.ps_supplycost</code> </td> </tr> <tr> <td style="text-align:right;"><code>814</code></td> <td style="text-align:left;"><code>O_CK</code> </td> </tr> <tr> <td style="text-align:right;"><code>798</code></td> <td style="text-align:left;"><code>DB.DBA.l_discount</code> </td> </tr> <tr> <td style="text-align:right;"><code>724</code></td> <td style="text-align:left;"><code>DB.DBA.p_name</code> </td> </tr> <tr> <td style="text-align:right;"><code>436</code></td> <td style="text-align:left;"><code>DB.DBA.o_orderdate</code> </td> </tr> <tr> <td style="text-align:right;"><code>126</code></td> <td style="text-align:left;"><code>DB.DBA.SUPPLIER</code> </td> </tr> <tr> <td style="text-align:right;"><code>1</code></td> <td style="text-align:left;"><code>DB.DBA.s_nationkey</code> </td> </tr> </table> </td> </tr> </table> <p>In both cases, the large tables are on top, but the column-wise case takes only half the space due to compression. </p> <p>We note that the single column projections are smaller column-wise. 
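</p> <p>The compression gain on ordered key columns can be reproduced with a small experiment. This is a generic sketch (delta encoding plus a general-purpose compressor), not Virtuoso's actual compression scheme; the column sizes and value ranges are invented for the illustration:</p> <blockquote> <code><pre>
# Ordered keys delta-encode into tiny, highly repetitive values, while a
# price-like column of quasi-random values stays nearly incompressible.
import random, zlib

def packed(col):
    return b"".join(v.to_bytes(8, "little", signed=True) for v in col)

random.seed(42)
keys = list(range(1, 100_001))                                # ordered, like l_orderkey
prices = [random.randint(90_000, 10_500_000) for _ in keys]   # like l_extendedprice

deltas = [b - a for a, b in zip([0] + keys, keys)]            # all 1s
key_bytes = len(zlib.compress(packed(deltas)))
price_bytes = len(zlib.compress(packed(prices)))
assert price_bytes > 10 * key_bytes   # ordered keys compress far better
</pre> </code> </blockquote> <p>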
The <code>l_extendedprice</code> column is not very compressible, hence it takes much more space column-wise than <code>l_quantity</code>; the row-wise difference is smaller. Since the leading key parts <code>l_orderkey, l_linenumber</code> are ordered and very compressible, the column-wise structures are in all cases noticeably more compact.</p> <p>The same applies to the multipart index <code>l_pksk</code> and <code>r_l_pksk</code> (<code>l_partkey, l_suppkey, l_orderkey, l_linenumber</code>) in column- and row-wise representations.</p> <p>Note that <code>STRING</code> columns (e.g., <code>l_comment</code>) are not compressed. If they were, the overall space ratio would be even more to the advantage of the column store.</p> <h2>Conclusions </h2> <p>Column stores and vectorization inextricably belong together. Column-wise compression yields great gains also for indices, since sorted data is easy to compress. Also for non-sorted data, adaptive use of dictionaries, run lengths, etc., produces great space savings. Columns also win with indexed access if there is locality.</p> <p>Row stores depend less on locality, but they too win by about a factor of 3 from dropping interpretation overhead and exploiting join locality.</p> <p>For point lookups, columns lose by a factor of 2 or more, but considering their better space efficiency, they will still win if the space savings prevent going to secondary storage. For bulk random access, as in graph analytics, columns will win because they can operate on a large vector of keys to fetch.</p> <p>For many workloads, from TPC-H to the LDBC social network, multi-part keys are a necessary component of physical design for performance if indexed access predominates. Triple stores and most graph databases do not have such indices and are therefore at a disadvantage. Self-joining, as in RDF or other vertically decomposed structures, can cost up to a factor of 10-20 over a column-wise multicolumn table. 
This depends, however, on the density of access.</p> <p>For analytical workloads, where the dominant join pattern is the scan with selective hash join, column stores are unbeatable, as per common wisdom. There are good physical reasons for this, and the row store, even with well-implemented vectorization, loses by a factor of 5.</p> <p>For decomposed structures, like RDF quads or single-column projections of tables, column stores are relatively more advantageous because the key columns are extensively repeated, and these compress better with columns than with rows. In all the RDF workloads we have tried, columns never lose, but there is often a draw between rows and columns for lookup workloads. The longer the query, the more columns win.</p>
Orri Erling
oerling@openlinksw.com
2015-07-13T13:56:22.323548-04:00
Virtuoso at SIGMOD 2015
http://www.openlinksw.com/weblog/oerling/?id=1858
2015-07-13T16:52:45Z
<p>Two papers presented at <a href="http://dbpedia.org/resource/SIGMOD" id="link-id0x10f5b4448">SIGMOD</a> <a href="http://www.sigmod2015.org/" id="link-id0x10f84b6e8">2015</a> have been added to the <a href="http://virtuoso.openlinksw.com/dataspace/doc/dav/wiki/Main/VirtuosoScienceLibrary" id="link-id0x10f84b8c8">Virtuoso Science Library</a>.</p> <ul> <li> <p> <b>Orri Erling (OpenLink Software); Alex Averbuch (Neo Technology); Josep Larriba-Pey (Sparsity Technologies); Hassan Chafi (Oracle Labs); Andrey Gubichev (TU Munich); Arnau Prat-Pérez (Universitat Politècnica de Catalunya); Minh-Duc Pham (VU University Amsterdam); Peter Boncz (CWI): <a href="http://dl.acm.org/authorize.cfm?key=N97179" id="link-id0x10f7147b8">The LDBC Social Network Benchmark: Interactive Workload</a>. <a href="http://www.sigmod2015.org/toc_sigmod.shtml" id="link-id0x10f7148d8">Proceedings of SIGMOD 2015, Melbourne</a>.</b> </p> <p>This paper is an overview of the challenges posed in the LDBC social network benchmark, from data generation to the interactive workload.</p> </li> <li> <p> <b>Mihai Capotă (Delft University of Technology), Tim Hegeman (Delft University of Technology), Alexandru Iosup (Delft University of Technology), Arnau Prat-Pérez (Universitat Politècnica de Catalunya), Orri Erling (OpenLink Software), Peter Boncz (CWI): <a href="http://dl.acm.org/authorize.cfm?key=N97204" id="link-id0x110864288">Graphalytics: A Big Data Benchmark for Graph-Processing Platforms</a>. <a href="http://www.sigmod2015.org/toc_grades.html" id="link-id0x1108643a8">Sigmod GRADES 2015</a>.</b> </p> <p>This paper discusses the future evolution of the LDBC Social Network Benchmark and gives a preview of Virtuoso graph traversal performance.</p> </li> </ul>
Orri Erling
oerling@openlinksw.com
2015-07-13T12:52:45.067773-04:00
Big Data, Part 1: Virtuoso Meets Hive
http://www.openlinksw.com/weblog/oerling/?id=1856
2015-07-13T16:16:31Z
<p>In this series, we will look at <a href="http://dbpedia.org/resource/Virtuoso_Universal_Server" id="link-id0x2aab3b3c1df8">Virtuoso</a> and some of the big data technologies out there. <a href="http://dbpedia.org/resource/SQL" id="link-id0x2aab3b3c1f08">SQL</a> on <a href="http://dbpedia.org/resource/Apache_Hadoop" id="link-id0x2aab3b3c2028">Hadoop</a> is of interest, as well as <a href="http://dbpedia.org/resource/NoSQL" id="link-id0x2aab3b3c2148">NoSQL</a> technologies.</p> <p>We begin at the beginning, with <a href="http://dbpedia.org/resource/Apache_Hive" id="link-id0x2aab3b3c2168">Hive</a>, the grand-daddy of SQL on Hadoop.</p> <p>The test platform is two Amazon R3.8 <a href="http://dbpedia.org/resource/Amazon_Machine_Image" id="link-id0x2aab62f44bf8">AMI</a> instances. We compared Hive with the Virtuoso 100G TPC-H experiment on the same platform, <a href="http://www.openlinksw.com/weblog/oerling/?id=1845" id="link-id0x2aab62f44d78">published earlier on this blog</a>. The runs follow a bulk load in both cases, with all data served from memory. The platform has 2x244GB RAM with only 40GB or so of working set.</p> <p>The Virtuoso version and settings are as in the <a href="http://www.openlinksw.com/weblogs/oerling/?id=1849" id="link-id0x2aab3be6df88">Virtuoso Cluster test AMI</a>.</p> <p>The Hive version is <a href="http://hortonworks.com/blog/announcing-apache-hive-0-14/" id="link-id0x2aab3be6e148">0.14</a> from the <a href="http://dbpedia.org/resource/Hortonworks" id="link-id0x2aab3be6e298">Hortonworks</a> <a href="http://hortonworks.com/hdp/downloads/" id="link-id0x2aab3b149c48">HDP 2.2 distribution</a>. The Hive schema and query formulations are the ones from <a href="https://github.com/hortonworks/hive-testbench" id="link-id0x2aab3b1498c8"><code>hive-testbench</code> on GitHub</a>. 
The Hive configuration parameters are as set by <a href="https://cwiki.apache.org/confluence/display/AMBARI/Installation+Guide+for+Ambari+2.0.1" id="link-id0x2aab3b149aa8">Ambari 2.0.1</a>. These are different from the ones in <code>hive-testbench</code>, but the Ambari choices offer higher performance on the platform. We did run statistics with Hive, and did not specify any settings beyond those in <code>hive-testbench</code>. Thus we assume the query plans were as good as Hive will make them. Platform utilization was even across both machines, and varied between 30% and 100% of the 2 x 32 hardware threads. </p> <p>Load time with Hive was 742 seconds against 232 seconds with Virtuoso. In both cases, this was a copy from 32 <a href="http://dbpedia.org/resource/Comma-separated_values" id="link-id0x2aab3be6e328">CSV</a> files into native database format; for Hive, this is <a href="https://cwiki.apache.org/confluence/display/Hive/LanguageManual+ORC" id="link-id0x2aab3b149868">ORC (Optimized Row Columnar)</a>. 
In Virtuoso, there is one index, (<code>o_custkey</code>); in Hive, there are no indices.</p> <table style="padding:10px;border-spacing:10px;margin-left:auto;margin-right:auto;"> <tr style="text-align:center;"> <th>Query</th> <th>Virtuoso</th> <th>Hive</th> <th>Notes</th> </tr> <tr> <th style="text-align:center;"> — </th> <td style="text-align:right;"> <code> 332 s </code> </td> <td style="text-align:right;"> <code> 742 s </code> </td> <td style="text-align:left;"> Data Load </td> </tr> <tr> <th style="text-align:center;"> Q1 </th> <td style="text-align:right;"> <code> 1.098 s </code> </td> <td style="text-align:right;"> <code> 296.636 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q2 </th> <td style="text-align:right;"> <code> 0.187 s </code> </td> <td style="text-align:right;"> <code> >3600 s </code> </td> <td style="text-align:left;"> Hive Timeout </td> </tr> <tr> <th style="text-align:center;"> Q3 </th> <td style="text-align:right;"> <code> 0.761 s </code> </td> <td style="text-align:right;"> <code> 98.652 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q4 </th> <td style="text-align:right;"> <code> 0.205 s </code> </td> <td style="text-align:right;"> <code> 147.867 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q5 </th> <td style="text-align:right;"> <code> 0.808 s </code> </td> <td style="text-align:right;"> <code> 114.782 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q6 </th> <td style="text-align:right;"> <code> 2.403 s </code> </td> <td style="text-align:right;"> <code> 71.789 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q7 </th> <td style="text-align:right;"> <code> 0.59 s </code> </td> <td style="text-align:right;"> <code> 394.201 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th 
style="text-align:center;"> Q8 </th> <td style="text-align:right;"> <code> 0.775 s </code> </td> <td style="text-align:right;"> <code> >3600 s </code> </td> <td style="text-align:left;"> Hive Timeout </td> </tr> <tr> <th style="text-align:center;"> Q9 </th> <td style="text-align:right;"> <code> 1.836 s </code> </td> <td style="text-align:right;"> <code> >3600 s </code> </td> <td style="text-align:left;"> Hive Timeout </td> </tr> <tr> <th style="text-align:center;"> Q10 </th> <td style="text-align:right;"> <code> 3.165 s </code> </td> <td style="text-align:right;"> <code> 179.646 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q11 </th> <td style="text-align:right;"> <code> 1.37 s </code> </td> <td style="text-align:right;"> <code> 43.094 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q12 </th> <td style="text-align:right;"> <code> 0.356 s </code> </td> <td style="text-align:right;"> <code> 101.193 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q13 </th> <td style="text-align:right;"> <code> 2.233 s </code> </td> <td style="text-align:right;"> <code> 208.476 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q14 </th> <td style="text-align:right;"> <code> 0.488 s </code> </td> <td style="text-align:right;"> <code> 89.047 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q15 </th> <td style="text-align:right;"> <code> 0.72 s </code> </td> <td style="text-align:right;"> <code> 136.431 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q16 </th> <td style="text-align:right;"> <code> 0.814 s </code> </td> <td style="text-align:right;"> <code> 105.652 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q17 </th> <td style="text-align:right;"> <code> 0.681 s 
</code> </td> <td style="text-align:right;"> <code> 255.848 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q18 </th> <td style="text-align:right;"> <code> 1.324 s </code> </td> <td style="text-align:right;"> <code> 337.921 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q19 </th> <td style="text-align:right;"> <code> 0.417 s </code> </td> <td style="text-align:right;"> <code> >3600 s </code> </td> <td style="text-align:left;"> Hive Timeout </td> </tr> <tr> <th style="text-align:center;"> Q20 </th> <td style="text-align:right;"> <code> 0.792 s </code> </td> <td style="text-align:right;"> <code> 193.965 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q21 </th> <td style="text-align:right;"> <code> 0.720 s </code> </td> <td style="text-align:right;"> <code> 670.718 s </code> </td> <td style="text-align:left;"> </td> </tr> <tr> <th style="text-align:center;"> Q22 </th> <td style="text-align:right;"> <code> 0.155 s </code> </td> <td style="text-align:right;"> <code> 68.462 s </code> </td> <td style="text-align:left;"> </td> </tr> </table> <p>Hive does relatively best on bulk load. This is understandable since this is a sequential read of many files in parallel with just compression to do.</p> <p>Hive's query times are obviously affected by not having a persistent memory image of the data, as this is always streamed from the storage files into other files as <a href="http://dbpedia.org/resource/MapReduce" id="link-id0x2aab3b57ad38">MapReduce</a> intermediate results. 
This seems to be an operator-at-a-time business as opposed to Virtuoso's vectorized streaming.</p> <p>The queries that would do partitioned <a href="http://dbpedia.org/resource/Hash_join" id="link-id0x2aab3bec82d8">hash joins</a> (e.g., Q9) did not finish within an hour in Hive, so we do not have a good metric of a cross-partition hash join.</p> <p>One could argue that one should benchmark Hive only in disk-bound circumstances. We may yet get to this.</p> <p>Our next stop will probably be <a href="https://en.wikipedia.org/wiki/Cloudera_Impala" id="link-id0x2aab3b4f7bc8">Impala</a>, which ought to do much better than Hive, as it does not have the MapReduce overheads.</p> <p> <i><b>If you are a Hive expert</b> and believe that Hive should have done much better, please let us know how to improve the Hive scores, and we will retry.</i> </p>
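<p>The execution-model difference mentioned above can be sketched as follows. This is a toy illustration of per-row interpretation overhead versus per-batch dispatch, not a model of either engine:</p> <blockquote> <code><pre>
# Tuple-at-a-time pays one interpreter dispatch per row; vectored execution
# pays one dispatch per batch and runs a tight loop inside it.
def total_tuple_at_a_time(rows):
    total = 0
    for r in rows:                   # per-row dispatch
        total += r
    return total

def total_vectored(rows, vector_size=10_000):
    total = 0
    for i in range(0, len(rows), vector_size):    # per-batch dispatch
        total += sum(rows[i:i + vector_size])     # tight inner loop
    return total

data = list(range(1_000_000))
assert total_tuple_at_a_time(data) == total_vectored(data) == 499999500000
</pre> </code> </blockquote>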
Orri Erling
oerling@openlinksw.com
2015-07-13T12:16:31.280965-04:00
Rethink Big and Europe’s Position in Big Data
http://www.openlinksw.com/weblog/oerling/?id=1854
2015-06-29T19:36:03Z
<p>I will here take a break from core database and talk a bit about <a href="http://dbpedia.org/page/European_Union" id="link-id0x2aab76cf53a8">EU</a> policies for research funding.</p> <p>I had lunch with <a href="http://homepages.cwi.nl/~manegold/" id="link-id0x2aab74e7ff38">Stefan Manegold</a> of <a href="http://www.cwi.nl/" id="link-id0x2aab76fdf4a8">CWI</a> last week, where we talked about where European research should go. Stefan is involved in <i><a href="http://www.rethinkbig-project.eu/" id="link-id0x2aab767d4d88">RETHINK big</a>,</i> a European research project for compiling policy advice regarding big data for EC funding agencies. As part of this, he is interviewing various stakeholders such as end user organizations and developers of technology.</p> <p> <i>RETHINK big</i> wants to come up with a research agenda primarily for hardware, anything from faster networks to greener data centers. CWI represents software expertise in the consortium.</p> <p>So, we went through a regular questionnaire about how we see the landscape. I will summarize this below, as this is anyway informative.</p> <h3>Core competence</h3> <p>My own core competence is in core database functionality, specifically in high performance query processing, scale-out, and managing schema-less data. Most of the <a href="http://dbpedia.org/page/Virtuoso_Universal_Server" id="link-id0x2aab766406e8">Virtuoso</a> installed base is in the <a href="http://dbpedia.org/page/Resource_Description_Framework" id="link-id0x2aab74871c28">RDF</a> space, but most potential applications are in fact outside of this niche.</p> <h3>User challenges</h3> <p>The life sciences vertical is the one in which I have the most application insight, from going to <a href="http://dbpedia.org/resource/OpenPHACTS" id="link-id0x2aab765598c8">Open PHACTS</a> meetings and holding extensive conversations with domain specialists. 
We have users in many other verticals, from manufacturing to financial services, but there I do not have as much exposure to the actual applications.</p> <p>Having said this, the challenges throughout tend to be in diversity of data. Every researcher has their <a href="http://dbpedia.org/page/MySQL" id="link-id0x2aab76bff778">MySQL</a> database or <a href="http://dbpedia.org/page/Spreadsheet" id="link-id0x2aab75ceeaa8">spreadsheet</a>, and there may not even be a top level catalogue of everything. Data formats are diverse. Some people use <a href="http://dbpedia.org/resource/Linked_data" id="link-id0x2aab74e675e8">linked data</a> (most commonly RDF) as a top level metadata format. The application data, such as gene sequences or microarray assays, reside in their native file formats, and there is little point in RDF-izing these.</p> <p>There are also public data resources that are published in RDF serializations as a vendor-neutral, self-describing format. Having everything as triples, without <i>a priori</i> schema, makes things easier to integrate and in some cases easier to describe and query.</p> <p>So, the challenge is in the labor-intensive nature of data integration. Data comes with different levels of quantity and quality, from hand-curated to <a href="http://dbpedia.org/resource/Natural_language_processing" id="link-id0x2aab76dfe0c8">NLP</a> extractions. Querying in the single- or double-digit terabyte range with RDF is quite possible, as we have shown many times on this blog, but most use cases do not even go that far. Anyway, what we see in the field is primarily a data diversity game. The scenario is data integration; the technology we provide is a database. 
The data transformation proper, data cleansing, units of measure, entity de-duplication, and such core data-integration functions are performed using diverse, user-specific means.</p> <p> <a href="https://ch.linkedin.com/in/jervenbolleman" id="link-id0x2aab754e5e38">Jerven Bolleman</a> of the <a href="https://www.linkedin.com/company/sib-swiss-institute-of-bioinformatics" id="link-id0x2aab75d891d8">Swiss Institute of Bioinformatics</a> is a user of ours with whom we have long-standing discussions on the virtues of federated data and querying. I advised Stefan to go talk to him; he has fresh views about the volume challenges with unexpected usage patterns. Designing for performance is tough if the usage pattern is out of the blue, like correlating air humidity on the day of measurement with the presence of some genomic patterns. Building a warehouse just for that might not be the preferred choice, so the problem space is not exhausted. Generally, I’d go for warehousing though.</p> <h3>What technology would you like to have? Network or power efficiency?</h3> <p>OK. Even a fast network is a network. A set of processes on a single shared-memory box is also a kind of network. InfiniBand is maybe half the throughput and 3x the latency of single-threaded interprocess communication within one box. The operative word is latency. Making large systems always involves a network or something very much like one in large scale-up scenarios.</p> <p>On the software side, next to nobody understands latency and contention; yet these are the core factors in any pursuit of scalability. 
Because of this situation, paradigms like <a href="http://dbpedia.org/page/MapReduce" id="link-id0x2aab76bea288">MapReduce</a> and <a href="http://dbpedia.org/page/Bulk_synchronous_parallel" id="link-id0x2aab75907c48">bulk synchronous parallel (BSP)</a> processing have become popular because these take the communication out of the program flow, so the programmer cannot muck this up, as otherwise would happen with the inevitability of destiny. Of course, our beloved SQL or declarative query in general does give scalability in many tasks without programmer participation. <a href="http://dbpedia.org/resource/Datalog" id="link-id0x2aab75e63cc8">Datalog</a> has also been used as a means of shipping computation around, as in the work of <a href="http://www.linkedin.com/in/joehellerstein" id="link-id0x2aab75483c38">Hellerstein</a>.</p> <p>There are no easy solutions. We have built scale-out conscious, vectorized extensions to SQL procedures where one can express complex parallel, distributed flows, but people do not use or understand these. These are very useful, even indispensable, but only on the inside, not as a programmer-facing construct. MapReduce and BSP are the limit of what a development culture will absorb. MapReduce and BSP do not hide the fact of distributed processing. What about things that do? Parallel, partitioned extensions to <a href="http://dbpedia.org/resource/Fortran" id="link-id0x2aab76254d78">Fortran</a> <a href="http://dbpedia.org/page/Array_programming" id="link-id0x2aab75db6098">arrays</a>? <a href="http://dbpedia.org/page/Functional_programming" id="link-id0x2aab75ffb7b8">Functional languages</a>? I think that all the obvious aids to parallel/distributed programming have been conceived of. No silver bullet; just hard work. And above all the discernment of what paradigm fits what problem. Since these are always changing, there is no finite set of rules, and no substitute for understanding and insight, and the latter are vanishingly scarce. 
"<a href="http://dissoiblogoi.blogspot.com/2005/11/aristotles-paradigmatism.html" id="link-id0x2aab764f6738">Paradigmatism</a>," i.e., the belief that one particular programming model is a panacea outside of its original niche, is a common source of complexity and inefficiency. It is a frequent form of enthusiastic naïveté. </p> <p>If you look at power efficiency, the clusters that are the easiest to program consist of relatively few high power machines and a fast network. A typical node size is 16+ cores and 256G or more RAM. Amazon has these in entirely workable configurations, as <a href="http://www.openlinksw.com/weblogs/oerling/?id=1843" id="link-id0x2aab76396be8">documented earlier on this blog</a>. The leading edge in power efficiency is in a larger number of smaller units, which makes life again harder. This exacerbates latency and forces one to partition the data more often, whereas one can play with replication of key parts of data more freely if the node size is larger.</p> <p>One very specific item where research might help without having to rebuild the hardware stack would be better, lower-latency exposure of networks to software. Lightweight threads and user-space access, bypassing slow protocol stacks, etc. <a href="https://en.wikipedia.org/wiki/Message_Passing_Interface" id="link-id0x2aab754d0208">MPI</a> has some of this, but maybe more could be done.</p> <p>So, I will take a cluster of such 16-core, 256GB machines on a faster network, over a cluster of 1024 x 4G mobile phones connected via USB. 
Very selfish and unecological, but one has to stay alive and life is tough enough as is.</p> <h3>Are there pressures to adapt business models based on big data?</h3> <p>The transition from <a href="http://dbpedia.org/page/Capital_expenditure" id="link-id0x2aab767114e8">capex</a> to <a href="http://dbpedia.org/page/Operating_expense" id="link-id0x2aab76254b28">opex</a> may be approaching maturity, as there have been workable cloud configurations for the past couple of years. The <a href="http://dbpedia.org/page/Amazon_Elastic_Compute_Cloud" id="link-id0x2aab75235fa8">EC2</a> from way back, with at best a 4 core 16G VM and a horrible network for $2/hr, is long gone. It remains the case that 4 months of 24x7 rent in the cloud equals the purchase price of physical hardware. So, for this to be economical long-term at scale, the average utilization should be about 10% of the peak, and peaks should not be on for more than 10% of the time.</p> <p>So, database software should be rented by the hour. A 100-150% markup for the $2.80 a large EC2 instance costs would be reasonable. Consider that 70% of the cost in TPC benchmarks is database software.</p> <p>There will be different pricing models combining different up-front and per-usage costs, just as there are for clouds now. If the platform business goes that way and the market accepts this, then systems software will follow. Price/performance quotes should probably be expressed as speed/price/hour instead of speed/price.</p> <p>The above is rather uncontroversial but there is no harm restating these facts. Reinforce often.</p> <h3>Well, the question is raised, what should Europe do that would have tangible impact in the next 5 years?</h3> <p>This is a harder question. There is some European business in wide area and mobile infrastructures. Competing against <a href="http://dbpedia.org/resource/Huawei" id="link-id0x2aab76c8a288">Huawei</a> will keep them busy. 
<a href="http://dbpedia.org/page/Intel" id="link-id0x2aab752b4178">Intel</a> and <a href="http://dbpedia.org/page/Mellanox_Technologies" id="link-id0x2aab753a84b8">Mellanox</a> will continue making faster networks regardless of European policies. Intel will continue building denser compute nodes, e.g., integrated Knight’s Corner with dual IB network and 16G fast RAM on chip. Clouds will continue making these available on demand once the technology is in mass production.</p> <p>What’s the next big innovation? <a href="http://dbpedia.org/page/Neuromorphic_engineering" id="link-id0x2aab76156358">Neuromorphic computing</a>? <a href="http://dbpedia.org/page/Quantum_computer" id="link-id0x2aab74efc458">Quantum computing</a>? Maybe. For now, I’d just do more engineering along the core competence discussed above, with emphasis on good marketing and scalable execution. By this I mean trained people who know something about deployment. There is a huge training gap. In the would-be "Age of Data," knowledge of how things actually work and scale is near-absent. I have offered to do some courses on this to partners and public alike, but I need somebody to drive this show; I have other things to do.</p> <p>I have been to many, many project review meetings, mostly as a project partner but also as reviewer. For the past year, the <a href="http://dbpedia.org/resource/European_Commission" id="link-id0x2aab7628cb88">EC</a> has used an innovation questionnaire at the end of the meetings. It is quite vague, and I don’t think it delivers much actionable intelligence.</p> <p>What would deliver this would be a venture capital type activity, with well-developed networks and active participation in developing a business. The EC is not now set up to perform this role, though. But the EC is a fairly large and wealthy entity, so it could invest some money via this type of channel. Also there should be higher individual incentives and rewards for speed and excellence. 
Getting the next Horizon 2020 research grant may be good, but better exists. The grants are competitive enough and the calls are not bad; they follow the times.</p> <p>In the projects I have seen, productization does get some attention, e.g., the <a href="http://stack.lod2.eu/blog/" id="link-id0x2aab768be028">LOD2 stack</a>, but it is not something that is really ongoing or with dedicated commercial backing. It may also be that there is no market to justify such dedicated backing. Much of the RDF work has been "me, too" — let’s do what the real database and data integration people do, but let’s just do this with triples. Innovation? Well, I took the best of the real DB world and adapted this to RDF, which did produce a competent piece of work with broad applicability, extending outside RDF. Is there better than this? Well, some of the data integration work (e.g., <a href="http://svn.aksw.org/papers/2011/WWW_LIMES/public.pdf" id="link-id0x2aab76423568">LIMES</a>) is not bad, and it might be picked up by some of the players that do this sort of thing in the broader world, e.g., <a href="http://dbpedia.org/resource/Informatica" id="link-id0x2aab753c3e68">Informatica</a>, the DI suites of big DB vendors, <a href="https://www.crunchbase.com/organization/tamr" id="link-id0x2aab76fc6e78">Tamr</a>, etc. I would not know if this in fact adds value to the non-RDF equivalents; I do not know the field well enough, but there could be a possibility.</p> <p>The recent emphasis for benchmarking, spearheaded by <a href="https://www.linkedin.com/in/stefanobertolo" id="link-id0x2aab76c45018">Stefano Bertolo</a> is good, as exemplified by the <a href="http://ldbcouncil.org/industry/organization/origins" id="link-id0x2aab75f4e288">LDBC FP7</a>. There should probably be one or two projects of this sort going at all times. 
These make challenges known and are an effective means of guiding research, with a large multiplier: Once a benchmark gets adopted, infinitely more work goes into solving the problem than in stating it in the first place.</p> <p>The aims and calls are good. The execution by projects is variable. For 1% of excellence, there apparently must be 99% of so-and-so, but this is just a fact of life and not specific to this context. The projects are rather diffuse. There is not a single outcome that gets all the effort. In this, the level of engagement of participants is less and focus is much more scattered than in startups. A really hungry, go-getter mood is mostly absent. I am a believer in core competence. Well, most people will agree that core competence is nice. But the projects I have seen do not drive for it hard enough.</p> <p>It is hard to say exactly what kinds of incentives could be offered to encourage truly exceptional work. The American startup scene does offer high rewards and something of this could be transplanted into the EC project world. I would not know exactly what form this could take, though.</p>
Orri Erling
oerling@openlinksw.com
2015-06-29T15:36:03.387872-04:00
Virtuoso Elastic Cluster Benchmarks AMI on Amazon EC2
http://www.openlinksw.com/weblog/oerling/?id=1849
2015-06-16T21:53:29Z
<p>We have another new Amazon machine image, this time for deploying your own Virtuoso Elastic Cluster on the cloud. The <a href="http://www.openlinksw.com/weblog/oerling/?id=1845" id="link-id0x10cd1dc38">previous post</a> gave a summary of running TPC-H on this image. This post is about what the AMI consists of and how to set it up.</p> <p> <i><b>Note:</b> This AMI is running a pre-release build of Virtuoso 7.5, Commercial Edition. Features are subject to change, and this build is not licensed for any use other than the AMI-based benchmarking described herein.</i> </p> <p>There are two preconfigured cluster setups; one is for two (2) machines/instances and one is for four (4). Generation and loading of TPC-H data, as well as the benchmark run itself, is preconfigured, so you can do it by entering just a few commands. The whole sequence of doing a terabyte (1000G) scale TPC-H takes under two hours, with 30 minutes to generate the data, 35 minutes to load, and 35 minutes to do three benchmark runs. 
The 100G scale is several times faster still.</p> <p>To experiment with this AMI, you will need a set of license files, one per machine/instance, which <a href="http://www.openlinksw.com/contact/" id="link-id0x10ca15da8">our Sales Team can provide</a>.</p> <p>Detailed instructions are on the AMI, in <code>/home/ec2-user/<a href="http://virtuoso.openlinksw.com/dataspace/doc/dav/wiki/Main/BlogFiles20150616ElasticClusterSetup/cluster_instructions.txt" title="cluster_instructions.txt" alt="Detailed Cluster Setup Instructions" id="link-id0x2aab9a4158b8">cluster_instructions.txt</a></code>, but the basic steps to get up and running are as follows:</p> <ol> <li> <p>Instantiate machine image <b>ami-811becea</b> (AMI ID is subject to change; you should be able to find the latest by searching for "OpenLink Virtuoso Benchmarks" in "Community AMIs"; this one is short-named <code>virtuoso-bench-cl</code>) with two or four (2 or 4) R3.8xlarge instances within one virtual private cloud and placement group. Make sure the VPC security is set to allow all connections.</p> </li> <li> <p>Log in to the first instance, and fill in the configuration file with the internal IP addresses of all machines instantiated in step 1.</p> </li> <li> <p>Distribute the license files to the instances, and start the OpenLink License Manager on each machine.</p> </li> <li> <p>Run 3 shell commands to set up the file systems and the Virtuoso configuration files.</p> </li> <li> <p>If you do not plan to run one of these benchmarks, you can simply start and work with the Virtuoso cluster now. 
It is ready for use with an empty database.</p> </li> <li> <p>Before running one of these benchmarks, generate the appropriate dataset with the <code>dbgen.sh</code> command.</p> </li> <li> <p>Bulk load the data with <code>load.sh</code>.</p> </li> <li> <p>Run the benchmark with <code>run.sh</code>.</p> </li> </ol> <p>Right now the cluster benchmarks are limited to TPC-H, but cluster versions of the LDBC Social Network and Semantic Publishing benchmarks will follow soon.</p>
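Condensed into a hypothetical shell session (host addresses are placeholders, and the three unnamed setup commands live in <code>cluster_instructions.txt</code>, so they are not reproduced here), the sequence above looks roughly like this:

```shell
# Log in to the first instance (step 2)
ssh ec2-user@<first-instance-public-ip>

# Consult the detailed instructions; then fill in the cluster config with
# internal IPs, distribute licenses, start the license manager, and run
# the three setup commands (steps 2-4)
less /home/ec2-user/cluster_instructions.txt

# Benchmark phases (steps 6-8)
./dbgen.sh    # generate the dataset
./load.sh     # bulk load
./run.sh      # run the benchmark
```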
Orri Erling
oerling@openlinksw.com
2015-06-17T10:13:37.325753-04:00
In Hoc Signo Vinces (part 21 of n): Running TPC-H on Virtuoso Elastic Cluster on Amazon EC2
http://www.openlinksw.com/weblog/oerling/?id=1845
2015-06-10T16:03:13Z
<p>We have made an Amazon EC2 deployment of <a href="http://virtuoso.openlinksw.com/features-comparison-matrix/#cluster" id="link-id0x2aabd14557d8">Virtuoso 7 Commercial Edition, configured to use the Elastic Cluster Module</a> with TPC-H preconfigured, similar to the <a href="http://www.openlinksw.com/weblogs/oerling/?id=1843" id="link-id0x2aabd235a378">recently published OpenLink Virtuoso Benchmark AMI</a> running the Open Source Edition. The details of the new Elastic Cluster AMI and steps to use it will be published in a forthcoming post. Here we will simply look at results of running TPC-H 100G scale on two machines, and 1000G scale on four machines. This shows how Virtuoso provides great performance on a cloud platform. The extremely fast bulk load — 33 minutes for a terabyte! — means that you can get straight to work even with on-demand infrastructure.</p> <p>In the following, the Amazon instance type is R3.8xlarge, each with dual Xeon E5-2670 v2, 244G RAM, and 2 x 300G SSD. The image is made from the Amazon Linux with built-in network optimization. We first tried a RedHat image without network optimization and had considerable trouble with the interconnect. Using network-optimized Amazon Linux images inside a virtual private cloud has resolved all these problems.</p> <p>The network optimized 10GE interconnect at Amazon offers throughput close to the QDR InfiniBand running TCP-IP; thus the Amazon platform is suitable for running cluster databases. 
The execution that we have seen is not seriously network bound.</p> <h3>100G on 2 machines, with a total of 32 cores, 64 threads, 488 GB RAM, 4 x 300 GB SSD</h3> <b>Load time:</b> 3m 52s <table style="padding:10px;border-spacing:10px;margin-left:auto;margin-right:auto;"> <tr style="text-align:center;"> <th>Run</th> <th>Power</th> <th>Throughput</th> <th>Composite</th> </tr> <tr> <th style="text-align:center;"><code>1</code> </th> <td style="text-align:right;"><code>523,554.3</code> </td> <td style="text-align:right;"><code>590,692.6</code> </td> <td style="text-align:right;"><code>556,111.2</code> </td> </tr> <tr> <th style="text-align:center;"><code>2</code> </th> <td style="text-align:right;"><code>565,353.3</code> </td> <td style="text-align:right;"><code>642,503.0</code> </td> <td style="text-align:right;"><code>602,694.9</code> </td> </tr> </table> <h3>1000G on 4 machines, with a total of 64 cores, 128 threads, 976 GB RAM, 8 x 300 GB SSD</h3> <b>Load time:</b> 32m 47s <table style="padding:10px;border-spacing:10px;margin-left:auto;margin-right:auto;"> <tr style="text-align:center;"> <th>Run</th> <th>Power</th> <th>Throughput</th> <th>Composite</th> </tr> <tr> <th style="text-align:center;"><code>1</code> </th> <td style="text-align:right;"><code>592,013.9</code> </td> <td style="text-align:right;"><code>754,107.6</code> </td> <td style="text-align:right;"><code>668,163.3</code> </td> </tr> <tr> <th style="text-align:center;"><code>2</code> </th> <td style="text-align:right;"><code>896,564.1</code> </td> <td style="text-align:right;"><code>828,265.4</code> </td> <td style="text-align:right;"><code>861,738.4</code> </td> </tr> <tr> <th style="text-align:center;"><code>3</code> </th> <td style="text-align:right;"><code>883,736.9</code> </td> <td style="text-align:right;"><code>829,609.0</code> </td> <td style="text-align:right;"><code>856,245.3</code> </td> </tr> </table> <p>For the larger scale we did 3 sets of power + throughput tests to measure consistency of 
performance. By the TPC-H rules, the worst (first) score should be reported. Even after bulk load, this is markedly less than the next power score due to working set effects. This is seen to a lesser degree with the first throughput score also.</p> <p>The numerical quantities summaries are available <a href="http://virtuoso.openlinksw.com/dataspace/doc/dav/wiki/Main/BlogFiles20150610TPCHonClusterAMI/report.zip" id="link-id0x2aabafe59058">in a report.zip file</a>, or individually --</p> <ul> <li> <code><a href="http://virtuoso.openlinksw.com/dataspace/doc/dav/wiki/Main/BlogFiles20150610TPCHonClusterAMI/report-100-1.txt" style="wikiautogen" id="link-id0x2aab623aa418">report-100-1.txt</a> </code> </li> <li> <code><a href="http://virtuoso.openlinksw.com/dataspace/doc/dav/wiki/Main/BlogFiles20150610TPCHonClusterAMI/report-100-2.txt" style="wikiautogen" id="link-id0x2aac0dc9e6d8">report-100-2.txt</a> </code> </li> <li> <code><a href="http://virtuoso.openlinksw.com/dataspace/doc/dav/wiki/Main/BlogFiles20150610TPCHonClusterAMI/report-1000-1.txt" style="wikiautogen" id="link-id0x2aab63598398">report-1000-1.txt</a> </code> </li> <li> <code><a href="http://virtuoso.openlinksw.com/dataspace/doc/dav/wiki/Main/BlogFiles20150610TPCHonClusterAMI/report-1000-2.txt" style="wikiautogen" id="link-id0x2aac0c70f3a8">report-1000-2.txt</a> </code> </li> <li> <code><a href="http://virtuoso.openlinksw.com/dataspace/doc/dav/wiki/Main/BlogFiles20150610TPCHonClusterAMI/report-1000-3.txt" style="wikiautogen" id="link-id0x2aac0d939808">report-1000-3.txt</a> </code> </li> </ul> <p>Subsequent posts will explain how to deploy Virtuoso Elastic Clusters on AWS.</p> <h3> <i>In Hoc Signo Vinces</i> (TPC-H) Series</h3> <ul> <li> <a href="http://www.openlinksw.com/weblog/oerling/?id=1739" id="link-id0x2aab52abc0b8"> In Hoc Signo Vinces (part 1): Virtuoso meets TPC-H</a> </li> <li> <a href="http://www.openlinksw.com/weblog/oerling/?id=1741" id="link-id0x2aab53935178"> In Hoc Signo Vinces (part 2): TPC-H 
Schema Choices</a> </li> <li> <a href="http://www.openlinksw.com/weblog/oerling/?id=1742" id="link-id0x2aab537ea348"> In Hoc Signo Vinces (part 3): Benchmark Configuration Settings</a> </li> <li> <a href="http://www.openlinksw.com/weblog/oerling/?id=1744" id="link-id0x2aab516a28b8"> In Hoc Signo Vinces (part 4): Bulk Load and Refresh</a> </li> <li> <a href="http://www.openlinksw.com/weblog/oerling/?id=1747" id="link-id0x2aab514b53d8"> In Hoc Signo Vinces (part 5): The Return of SQL Federation</a> </li> <li> <a href="http://www.openlinksw.com/weblog/oerling/?id=1753" id="link-id0x2aaba8e12aa8"> In Hoc Signo Vinces (part 6): TPC-H Q1 and Q3: An Introduction to Query Plans</a> </li> <li> <a href="http://www.openlinksw.com/weblog/oerling/?id=1755" id="link-id0x2aab537ea308"> In Hoc Signo Vinces (part 7): TPC-H Q13: The Good and the Bad Plans</a> </li> <li> <a href="http://www.openlinksw.com/weblog/oerling/?id=1756" id="link-id0x2aaba97e18d8"> In Hoc Signo Vinces (part 8): TPC-H: INs, Expressions, ORs</a> </li> <li> <a href="http://www.openlinksw.com/weblog/oerling/?id=1779" id="link-id0x2aab52a6add8"> In Hoc Signo Vinces (part 9): TPC-H Q18, Ordered Aggregation, and Top K</a> </li> <li> <a href="http://www.openlinksw.com/weblog/oerling/?id=1789" id="link-id0x2aaba9725b78"> In Hoc Signo Vinces (part 10): TPC-H Q9, Q17, Q20 - Predicate Games </a> </li> <li> <a href="http://www.openlinksw.com/weblog/oerling/?id=1793" id="link-id0x2aab51109e68"> In Hoc Signo Vinces (part 11): TPC-H Q2, Q10 - Late Projection </a> </li> <li> <a href="http://www.openlinksw.com/weblog/oerling/?id=1796" id="link-id0x2aaba97e1898"> In Hoc Signo Vinces (part 12): TPC-H: Result Preview </a> </li> <li> <a href="http://www.openlinksw.com/weblog/oerling/?id=1798" id="link-id0x2aab51a706a8"> In Hoc Signo Vinces (part 13): Virtuoso TPC-H Kit Now on V7 Fast Track </a> </li> <li> <a href="http://www.openlinksw.com/weblog/oerling/?id=1800" id="link-id0x2aaba97e18b8"> In Hoc Signo Vinces (part 14): 
Virtuoso TPC-H Implementation Analysis </a> </li> <li> <a href="http://www.openlinksw.com/weblog/oerling/?id=1802" id="link-id0x2aaba9725b98"> In Hoc Signo Vinces (part 15): TPC-H and the Science of Hash </a> </li> <li> <a href="http://www.openlinksw.com/weblog/oerling/?id=1816" id="link-id0x2aab52abc098"> In Hoc Signo Vinces (part 16): Introduction to Scale-Out </a> </li> <li> <a href="http://www.openlinksw.com/weblog/oerling/?id=1818" id="link-id0x2aab5130e068"> In Hoc Signo Vinces (part 17): 100G and 300G Runs on Dual Xeon E5 2650v2 </a> </li> <li> <a href="http://www.openlinksw.com/weblog/oerling/?id=1819" id="link-id0x2aab52979f88"> In Hoc Signo Vinces (part 18): Cluster Dynamics </a> </li> <li> <a href="http://www.openlinksw.com/weblog/oerling/?id=1822" id="link-id0x2aab53015408"> In Hoc Signo Vinces (part 19): Scalability, 1000G, and 3000G </a> </li> <li> <a href="http://www.openlinksw.com/weblog/oerling/?id=1822" id="link-id0x2aab53015408"> In Hoc Signo Vinces (part 20): 100G and 1000G With Cluster; When is Cluster Worthwhile; Effects of I/O </a> </li> <li> In Hoc Signo Vinces (part 21): Running TPC-H on Virtuoso Cluster on Amazon EC2<i> (this post)</i> </li> </ul>
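As a sanity check on the tables above: the TPC-H Composite score is, per the benchmark's rules, the geometric mean of the Power and Throughput scores. A quick verification against the run 1 rows of each scale (a sketch, not part of the benchmark kit):

```python
from math import sqrt

# TPC-H Composite = geometric mean of the Power and Throughput scores.
# Figures taken from run 1 of each scale in the tables above:
composite_100 = sqrt(523_554.3 * 590_692.6)    # 100G on 2 machines
composite_1000 = sqrt(592_013.9 * 754_107.6)   # 1000G on 4 machines

print(round(composite_100, 1), round(composite_1000, 1))
# matches the reported 556,111.2 and 668,163.3
```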
Orri Erling
oerling@openlinksw.com
2015-06-10T12:49:39.451863-04:00
Introducing the OpenLink Virtuoso Benchmarks AMI on Amazon EC2
http://www.openlinksw.com/weblog/oerling/?id=1843
2015-06-09T15:51:22Z
<p>The <b>OpenLink Virtuoso Benchmarks</b> AMI is an Amazon EC2 machine image with the latest Virtuoso open source technology preconfigured to run —</p> <ul> <li> <p> <i><a href="http://www.openlinksw.com/weblog/oerling/?id=1739" id="link-id0xf6c2bae8">TPC-H</a> </i>, the classic of SQL data warehousing</p> </li> <li> <p> <i><a href="http://www.openlinksw.com/weblog/oerling/?id=1834" id="link-id0xfdfb2aa8">LDBC SNB</a>,</i> the new Social Network Benchmark from the Linked Data Benchmark Council</p> </li> <li> <p> <i><a href="http://www.openlinksw.com/weblog/oerling/?id=1831" id="link-id0x104b01368">LDBC SPB</a>,</i> the RDF/SPARQL Semantic Publishing Benchmark from LDBC</p> </li> </ul> <p>This package is ideal for technology evaluators and developers interested in getting the most performance out of Virtuoso. This is also an all-in-one solution to any questions about reproducing claimed benchmark results. All necessary tools for building and running are included; thus any developer can use this model installation as a starting point. The benchmark drivers are preconfigured with appropriate settings, and benchmark qualification tests can be run with a single command.</p> <p>The Benchmarks AMI includes a precompiled, preconfigured checkout of the <a href="https://github.com/v7fasttrack/virtuoso-opensource/" id="link-id0x1051c7358">v7fasttrack github repository</a>, checkouts of the github repositories of the benchmarks, and a number of running directories with all configuration files preset and optimized. The image is intended to be instantiated on a R3.8xlarge Amazon instance with 244G RAM, dual Xeon E5-2670 v2, and 600G SSD.</p> <p>Benchmark datasets and preloaded database files can be downloaded from S3 when large, and generated as needed on the instance when small. 
As an alternative, the instance is also set up to do all phases of data generation and database bulk load.</p> <p>The following benchmark setups are included:</p> <ul> <li>TPC-H 100G</li> <li>TPC-H 300G</li> <li>LDBC SNB Validation </li> <li>LDBC SNB Interactive 100G</li> <li>LDBC SNB Interactive 300G (SF3)</li> <li>LDBC SPB Validation </li> <li>LDBC SPB Basic 256 Mtriples (SF5)</li> <li>LDBC SPB Basic 1 Gtriple</li> </ul> <p>The AMI will be expanded as new benchmarks are introduced, for example, the <i>LDBC Social Network Business Intelligence</i> or <i>Graph Analytics.</i> </p> <p>To get started: </p> <ol> <li> <p>Instantiate machine image <b>ami-eb789280</b> (AMI ID is subject to change; you should be able to find the latest by searching for "OpenLink Virtuoso Benchmarks" in "Community AMIs"; this one is short-named <code>virtuoso-bench-6</code>) with a R3.8xlarge instance. </p> </li> <li> <p>Connect via <code>ssh</code>.</p> </li> <li> <p>See the <b><a href="https://github.com/v7fasttrack/virtuoso-opensource/blob/feature/analytics/binsrc/samples/ldami/ec2-user/README" id="link-id0xfca2bab8">README</a></b> (also found in the <b><code>ec2-user</code></b>'s home directory) for full instructions on getting up and running.</p> </li> </ol>
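For those using the AWS CLI rather than the console, the three steps above might look like the following sketch; the key name and host addresses are placeholders, and, as noted, the AMI ID may have changed:

```shell
# Find the current AMI by name (alternative to the console search in step 1)
aws ec2 describe-images \
    --filters "Name=name,Values=*OpenLink Virtuoso Benchmarks*"

# Launch one R3.8xlarge instance from it (step 1)
aws ec2 run-instances --image-id ami-eb789280 \
    --instance-type r3.8xlarge --key-name <key-name>

# Connect (step 2) and read the instructions (step 3)
ssh -i <key-name>.pem ec2-user@<instance-public-dns>
less ~/README
```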
Orri Erling
oerling@openlinksw.com
2015-06-18T14:55:56.642858-04:00
SNB Interactive, Part 3: Choke Points and Initial Run on Virtuoso
http://www.openlinksw.com/weblog/oerling/?id=1841
2015-06-09T15:24:58Z
<p>In this post we will look at running the <a href="http://ldbcouncil.org/developer/snb" id="link-id0x2aab62c37078">LDBC SNB</a> on <a href="http://virtuoso.openlinksw.com/" id="link-id0x2aab629da5c8">Virtuoso</a>.</p> <p>First, let's recap what the benchmark is about: </p> <ol> <li>fairly frequent short updates, with no update contention worth mentioning</li> <li>short random lookups</li> <li>medium complex queries centered around a person's social environment</li> </ol> <p>The updates exist so as to invalidate strategies that rely too heavily on precomputation. The short lookups exist for the sake of realism; after all, an online social application does lookups for the most part. The medium complex queries are to challenge the DBMS.</p> <p>The DBMS challenges have to do firstly with query optimization, and secondly with execution with a lot of non-local random access patterns. Query optimization is not a requirement, <i>per se,</i> since imperative implementations are allowed, but we will see that these are no more free of the laws of nature than the declarative ones.</p> <p>The workload is arbitrarily parallel, so intra-query parallelization is not particularly useful, if also not harmful. There are latency constraints on operations which strongly encourage implementations to stay within a predictable time envelope regardless of specific query parameters. The parameters are a combination of person and date range, and sometimes tags or countries. The hardest queries have the potential to access all content created by people within 2 steps of a central person, so possibly thousands of people, times 2000 posts per person, times up to 4 tags per post. We are talking in the millions of key lookups, aiming for sub-second single-threaded execution.</p> <p>The test system is the same as used in the <a href="http://www.openlinksw.com/weblog/oerling/?id=1739" id="link-id0x2aab62e7f368">TPC-H series</a>: dual Xeon E5-2630, 2x6 cores x 2 threads, 2.3GHz, 192 GB RAM. 
The software is the <a href="https://github.com/v7fasttrack/virtuoso-opensource/tree/feature/analytics" id="link-id0x2aab62bf6de8">feature/analytics branch</a> of <a href="https://github.com/v7fasttrack/virtuoso-opensource/" id="link-id0x2aab623b1358">v7fasttrack, available from www.github.com</a>.</p> <p>The dataset is the SNB 300G set, with:</p> <blockquote> <table style="padding:10px;border-spacing:10px;margin-left:auto;margin-right:auto;"> <tr> <td style="text-align:right;"><code>1,136,127</code> </td> <td> <b><code>persons</code> </b> </td> </tr> <tr> <td style="text-align:right;"><code>125,249,604</code> </td> <td> <b><code>knows</code> </b> edges</td> </tr> <tr> <td style="text-align:right;"><code>847,886,644</code> </td> <td> <b><code>posts</code> </b>, including replies</td> </tr> <tr> <td style="text-align:right;"><code>1,145,893,841</code> </td> <td> <b><code>tags</code> </b> of posts or replies</td> </tr> <tr> <td style="text-align:right;"><code>1,140,226,235</code> </td> <td> <b><code>likes</code> </b> of posts or replies</td> </tr> </table> </blockquote> <p>As an initial step, we run the benchmark as fast as it will go. We use 32 threads on the driver side for 24 hardware threads.</p> <p>Below are the numerical quantities for a 400K operation run after 150K operations worth of warmup.</p> <blockquote> <table style="padding:10px;border-spacing:10px;margin-left:auto;margin-right:auto;"> <tr> <td style="text-align:right;"><b>Duration:</b> </td> <td><code>10:41.251</code> </td> </tr> <tr> <td style="text-align:right;"><b>Throughput:</b> </td> <td><code>623.71 (op/s)</code> </td> </tr> </table> </blockquote> <p>The statistics that matter are detailed below, with operations ranked in order of descending client-side wait-time. 
All times are in milliseconds.</p> <blockquote> <table style="padding:10px;border-spacing:10px;margin-left:auto;margin-right:auto;"> <tr style="text-align:center;"> <th>% of total</th> <th>total_wait</th> <th>name</th> <th>count</th> <th>mean</th> <th>min</th> <th>max</th> </tr> <tr> <td style="text-align:right;"><code>20 %</code> </td> <td style="text-align:right;"><code>4,231,130</code> </td> <td><code>LdbcQuery5 </code> </td> <td style="text-align:right;"><code> 656</code> </td> <td style="text-align:right;"><code>6,449.89 </code> </td> <td style="text-align:right;"><code> 245</code> </td> <td style="text-align:right;"><code>10,311</code> </td> </tr> <tr> <td style="text-align:right;"><code>11 %</code> </td> <td style="text-align:right;"><code>2,272,954</code> </td> <td><code>LdbcQuery8 </code> </td> <td style="text-align:right;"><code>18,354</code> </td> <td style="text-align:right;"><code> 123.84 </code> </td> <td style="text-align:right;"><code> 14</code> </td> <td style="text-align:right;"><code> 2,240</code> </td> </tr> <tr> <td style="text-align:right;"><code>10 %</code> </td> <td style="text-align:right;"><code>2,200,718</code> </td> <td><code>LdbcQuery3 </code> </td> <td style="text-align:right;"><code> 388</code> </td> <td style="text-align:right;"><code>5,671.95 </code> </td> <td style="text-align:right;"><code> 468</code> </td> <td style="text-align:right;"><code>17,368</code> </td> </tr> <tr> <td style="text-align:right;"><code> 7.3 %</code> </td> <td style="text-align:right;"><code>1,561,382</code> </td> <td><code>LdbcQuery14 </code> </td> <td style="text-align:right;"><code> 1,124</code> </td> <td style="text-align:right;"><code>1,389.13 </code> </td> <td style="text-align:right;"><code> 4</code> </td> <td style="text-align:right;"><code> 5,724</code> </td> </tr> <tr> <td style="text-align:right;"><code> 6.7 %</code> </td> <td style="text-align:right;"><code>1,441,575</code> </td> <td><code>LdbcQuery12 </code> </td> <td 
style="text-align:right;"><code> 1,252</code> </td> <td style="text-align:right;"><code>1,151.42 </code> </td> <td style="text-align:right;"><code> 15</code> </td> <td style="text-align:right;"><code> 3,273</code> </td> </tr> <tr> <td style="text-align:right;"><code> 6.5 %</code> </td> <td style="text-align:right;"><code>1,396,932</code> </td> <td><code>LdbcQuery10 </code> </td> <td style="text-align:right;"><code> 1,252</code> </td> <td style="text-align:right;"><code>1,115.76 </code> </td> <td style="text-align:right;"><code> 13</code> </td> <td style="text-align:right;"><code> 4,743</code> </td> </tr> <tr> <td style="text-align:right;"><code> 5 %</code> </td> <td style="text-align:right;"><code>1,064,457</code> </td> <td><code>LdbcShortQuery3PersonFriends </code> </td> <td style="text-align:right;"><code>46,285</code> </td> <td style="text-align:right;"><code> 22.9979 </code> </td> <td style="text-align:right;"><code> 0</code> </td> <td style="text-align:right;"><code> 2,287</code> </td> </tr> <tr> <td style="text-align:right;"><code> 4.9 %</code> </td> <td style="text-align:right;"><code>1,047,536</code> </td> <td><code>LdbcShortQuery2PersonPosts </code> </td> <td style="text-align:right;"><code>46,285</code> </td> <td style="text-align:right;"><code> 22.6323 </code> </td> <td style="text-align:right;"><code> 0</code> </td> <td style="text-align:right;"><code> 2,156</code> </td> </tr> <tr> <td style="text-align:right;"><code> 4.1 %</code> </td> <td style="text-align:right;"><code> 885,102</code> </td> <td><code>LdbcQuery6 </code> </td> <td style="text-align:right;"><code> 1,721</code> </td> <td style="text-align:right;"><code> 514.295 </code> </td> <td style="text-align:right;"><code> 8</code> </td> <td style="text-align:right;"><code> 5,227</code> </td> </tr> <tr> <td style="text-align:right;"><code> 3.3 %</code> </td> <td style="text-align:right;"><code> 707,901</code> </td> <td><code>LdbcQuery1 </code> </td> <td style="text-align:right;"><code> 2,117</code> 
</td> <td style="text-align:right;"><code> 334.389 </code> </td> <td style="text-align:right;"><code> 28</code> </td> <td style="text-align:right;"><code> 3,467</code> </td> </tr> <tr> <td style="text-align:right;"><code> 2.4 %</code> </td> <td style="text-align:right;"><code> 521,738</code> </td> <td><code>LdbcQuery4 </code> </td> <td style="text-align:right;"><code> 1,530</code> </td> <td style="text-align:right;"><code> 341.005 </code> </td> <td style="text-align:right;"><code> 49</code> </td> <td style="text-align:right;"><code> 2,774</code> </td> </tr> <tr> <td style="text-align:right;"><code> 2.1 %</code> </td> <td style="text-align:right;"><code> 440,197</code> </td> <td><code>LdbcShortQuery4MessageContent </code> </td> <td style="text-align:right;"><code>46,302</code> </td> <td style="text-align:right;"><code> 9.50708</code> </td> <td style="text-align:right;"><code> 0</code> </td> <td style="text-align:right;"><code> 2,015</code> </td> </tr> <tr> <td style="text-align:right;"><code> 1.9 %</code> </td> <td style="text-align:right;"><code> 407,450</code> </td> <td><code>LdbcUpdate5AddForumMembership </code> </td> <td style="text-align:right;"><code>14,338</code> </td> <td style="text-align:right;"><code> 28.4175 </code> </td> <td style="text-align:right;"><code> 0</code> </td> <td style="text-align:right;"><code> 2,008</code> </td> </tr> <tr> <td style="text-align:right;"><code> 1.9 %</code> </td> <td style="text-align:right;"><code> 405,243</code> </td> <td><code>LdbcShortQuery7MessageReplies </code> </td> <td style="text-align:right;"><code>46,302</code> </td> <td style="text-align:right;"><code> 8.75217</code> </td> <td style="text-align:right;"><code> 0</code> </td> <td style="text-align:right;"><code> 2,112</code> </td> </tr> <tr> <td style="text-align:right;"><code> 1.9 %</code> </td> <td style="text-align:right;"><code> 404,002</code> </td> <td><code>LdbcShortQuery6MessageForum </code> </td> <td style="text-align:right;"><code>46,302</code> </td> <td 
style="text-align:right;"><code> 8.72537</code> </td> <td style="text-align:right;"><code> 0</code> </td> <td style="text-align:right;"><code> 1,968</code> </td> </tr> <tr> <td style="text-align:right;"><code> 1.8 %</code> </td> <td style="text-align:right;"><code> 387,044</code> </td> <td><code>LdbcUpdate3AddCommentLike </code> </td> <td style="text-align:right;"><code>12,659</code> </td> <td style="text-align:right;"><code> 30.5746 </code> </td> <td style="text-align:right;"><code> 0</code> </td> <td style="text-align:right;"><code> 2,060</code> </td> </tr> <tr> <td style="text-align:right;"><code> 1.7 %</code> </td> <td style="text-align:right;"><code> 361,290</code> </td> <td><code>LdbcShortQuery1PersonProfile </code> </td> <td style="text-align:right;"><code>46,285</code> </td> <td style="text-align:right;"><code> 7.80577</code> </td> <td style="text-align:right;"><code> 0</code> </td> <td style="text-align:right;"><code> 2,015</code> </td> </tr> <tr> <td style="text-align:right;"><code> 1.6 %</code> </td> <td style="text-align:right;"><code> 334,409</code> </td> <td><code>LdbcShortQuery5MessageCreator </code> </td> <td style="text-align:right;"><code>46,302</code> </td> <td style="text-align:right;"><code> 7.22234</code> </td> <td style="text-align:right;"><code> 0</code> </td> <td style="text-align:right;"><code> 2,055</code> </td> </tr> <tr> <td style="text-align:right;"><code> 1 %</code> </td> <td style="text-align:right;"><code> 220,740</code> </td> <td><code>LdbcQuery2 </code> </td> <td style="text-align:right;"><code> 1,488</code> </td> <td style="text-align:right;"><code> 148.347 </code> </td> <td style="text-align:right;"><code> 2</code> </td> <td style="text-align:right;"><code> 2,504</code> </td> </tr> <tr> <td style="text-align:right;"><code> 0.96 %</code> </td> <td style="text-align:right;"><code> 205,910</code> </td> <td><code>LdbcQuery7 </code> </td> <td style="text-align:right;"><code> 1,721</code> </td> <td style="text-align:right;"><code> 
119.646 </code> </td> <td style="text-align:right;"><code> 11</code> </td> <td style="text-align:right;"><code> 2,295</code> </td> </tr> <tr> <td style="text-align:right;"><code> 0.93 %</code> </td> <td style="text-align:right;"><code> 198,971</code> </td> <td><code>LdbcUpdate2AddPostLike </code> </td> <td style="text-align:right;"><code> 5,974</code> </td> <td style="text-align:right;"><code> 33.3062 </code> </td> <td style="text-align:right;"><code> 0</code> </td> <td style="text-align:right;"><code> 1,987</code> </td> </tr> <tr> <td style="text-align:right;"><code> 0.88 %</code> </td> <td style="text-align:right;"><code> 189,871</code> </td> <td><code>LdbcQuery11 </code> </td> <td style="text-align:right;"><code> 2,294</code> </td> <td style="text-align:right;"><code> 82.7685 </code> </td> <td style="text-align:right;"><code> 4</code> </td> <td style="text-align:right;"><code> 2,219</code> </td> </tr> <tr> <td style="text-align:right;"><code> 0.85 %</code> </td> <td style="text-align:right;"><code> 182,964</code> </td> <td><code>LdbcQuery13 </code> </td> <td style="text-align:right;"><code> 2,898</code> </td> <td style="text-align:right;"><code> 63.1346 </code> </td> <td style="text-align:right;"><code> 1</code> </td> <td style="text-align:right;"><code> 2,201</code> </td> </tr> <tr> <td style="text-align:right;"><code> 0.74 %</code> </td> <td style="text-align:right;"><code> 158,188</code> </td> <td><code>LdbcQuery9 </code> </td> <td style="text-align:right;"><code> 78</code> </td> <td style="text-align:right;"><code>2,028.05 </code> </td> <td style="text-align:right;"><code>1,108</code> </td> <td style="text-align:right;"><code> 4,183</code> </td> </tr> <tr> <td style="text-align:right;"><code> 0.67 %</code> </td> <td style="text-align:right;"><code> 143,457</code> </td> <td><code>LdbcUpdate7AddComment </code> </td> <td style="text-align:right;"><code> 3,986</code> </td> <td style="text-align:right;"><code> 35.9902 </code> </td> <td 
style="text-align:right;"><code> 1</code> </td> <td style="text-align:right;"><code> 1,912</code> </td> </tr> <tr> <td style="text-align:right;"><code> 0.26 %</code> </td> <td style="text-align:right;"><code> 54,947</code> </td> <td><code>LdbcUpdate8AddFriendship </code> </td> <td style="text-align:right;"><code> 571</code> </td> <td style="text-align:right;"><code> 96.2294 </code> </td> <td style="text-align:right;"><code> 1</code> </td> <td style="text-align:right;"><code> 988</code> </td> </tr> <tr> <td style="text-align:right;"><code> 0.2 %</code> </td> <td style="text-align:right;"><code> 43,451</code> </td> <td><code>LdbcUpdate6AddPost </code> </td> <td style="text-align:right;"><code> 1,386</code> </td> <td style="text-align:right;"><code> 31.3499 </code> </td> <td style="text-align:right;"><code> 1</code> </td> <td style="text-align:right;"><code> 2,060</code> </td> </tr> <tr> <td style="text-align:right;"><code> 0.0086%</code> </td> <td style="text-align:right;"><code> 1,848</code> </td> <td><code>LdbcUpdate4AddForum </code> </td> <td style="text-align:right;"><code> 103</code> </td> <td style="text-align:right;"><code> 17.9417 </code> </td> <td style="text-align:right;"><code> 1</code> </td> <td style="text-align:right;"><code> 65</code> </td> </tr> <tr> <td style="text-align:right;"><code> 0.0002%</code> </td> <td style="text-align:right;"><code> 44</code> </td> <td><code>LdbcUpdate1AddPerson </code> </td> <td style="text-align:right;"><code> 2</code> </td> <td style="text-align:right;"><code> 22 </code> </td> <td style="text-align:right;"><code> 10</code> </td> <td style="text-align:right;"><code> 34</code> </td> </tr> </table> </blockquote> <p>At this point we have in-depth knowledge of the choke points the benchmark stresses, and we can give a first assessment of whether the design meets its objectives for setting an agenda for the coming years of graph database development.</p> <p>The implementation is well optimized in general but still has maybe 
30% room for improvement. We note that this is based on a compressed column store. One could think that alternative data representations, like in-memory graphs of structs and pointers between them, are better for the task. This is not necessarily so; at the least, a compressed column store is much more space efficient. Space efficiency is the root of cost efficiency, since as soon as the working set is not in memory, a random access workload is badly hit.</p> <p>The set of choke points (technical challenges) revealed so far by the benchmark is as follows:</p> <ul> <li> <p> <b>Cardinality estimation under heavy data skew —</b> Many queries take a <code>tag</code> or a <code>country</code> as a parameter. The cardinalities associated with <code>tags</code> vary from 29M <code>posts</code> for the most common to 1 for the least common. Q6 has a common <code>tag</code> (in the top few hundred) half the time and a random, most often very infrequent, one the rest of the time. A declarative implementation must recognize the cardinality implications from the literal and plan accordingly. An imperative one would have to count. Missing this makes Q6 take about 40% of total time, instead of the 4.1% seen when the plan adapts.</p> </li> <li> <p> <b>Covering indices —</b> Being able to make multi-column indices that duplicate some columns from the table often saves an entire table lookup. For example, an index on <code>post</code> by <code>author</code> can also contain the <code>post</code>'s <code>creation date</code>.</p> </li> <li> <p> <b>Multi-hop graph traversal —</b> Most queries access a two-hop environment starting at a <code>person</code>. Two queries look for shortest paths of unbounded length. For the two-hop case, it makes almost no difference whether this is done as a union or a special graph traversal operator. For shortest paths, this simply must be built into the engine; doing this client-side incurs prohibitive overheads. 
A bidirectional shortest path operation is a requirement for the benchmark.</p> </li> <li> <p> <b>Top <i>K</i> —</b> Most queries returning <code>posts</code> order results by descending <code>date</code>. Once there are at least <i>k</i> results, anything older than the <i>k</i>th can be dropped, adding a <code>date</code> selection as early as possible in the query. This interacts with vectored execution, so that starting with a short vector size more rapidly produces an initial top <i>k</i>.</p> </li> <li> <p> <b>Late projection —</b> Many queries access several columns and touch millions of rows but only return a few. The columns that are not used in sorting or selection can be retrieved only for the rows that are actually returned. This is especially useful with a column store, as this removes many large columns (e.g., text of a <code>post</code>) from the working set.</p> </li> <li> <p> <b>Materialization —</b> Q14 accesses an expensive-to-compute edge weight, the number of <code>post-reply</code> pairs between two <code>people</code>. Keeping this precomputed drops Q14 from the top place. Other materialization would be possible, for example Q2 (top 20 <code>posts</code> by friends), but since Q2 is just 1% of the load, there is no need. One could of course argue that this should be 20x more frequent, in which case there could be a point to this.</p> </li> <li> <p> <b>Concurrency control —</b> Read-write contention is rare, as updates are randomly spread over the database. However, some pages get read very frequently, e.g., some middle level index pages in the <code>post</code> table. Keeping a count of reading threads requires a mutex, and there is significant contention on this. Since the hot set can be one page, adding more mutexes does not always help. However, hash partitioning the index into many independent trees (as in the case of a cluster) helps for this. 
There is also contention on a mutex for assigning threads to client requests, as there are large numbers of short operations.</p> </li> </ul> <p>In subsequent posts, we will look at specific queries, what they in fact do, and what their theoretical performance limits would be. In this way we will have a precise understanding of which way SNB can steer the graph DB community.</p> <h3> SNB Interactive Series</h3> <ul> <li> <a href="http://www.openlinksw.com/weblog/oerling/?id=1834" id="link-id0x2aab62c9ee38"> SNB Interactive, Part 1: What is SNB Interactive Really About?</a> </li> <li> <a href="http://www.openlinksw.com/weblog/oerling/?id=1836" id="link-id0x2aab62e64818"> SNB Interactive, Part 2: Modeling Choices</a> </li> <li> SNB Interactive, Part 3: Choke Points and Initial Run on Virtuoso<i> (this post)</i> </li> </ul>
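The covering-index, top-k, and late-projection choke points above can be sketched with a small relational example. This is a minimal illustration using SQLite in place of Virtuoso; the `post` table and its `ps_*` column names are invented here for the sketch (following the naming used elsewhere in this series), not Virtuoso's actual schema.

```python
# Minimal sketch of the "covering index" and "top k" choke points, using
# SQLite in place of Virtuoso. The table and its ps_* column names are
# invented for illustration, not Virtuoso's actual schema.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE post (
                 ps_postid       INTEGER PRIMARY KEY,
                 ps_creatorid    INTEGER,
                 ps_creationdate INTEGER,  -- milliseconds since the epoch
                 ps_content      TEXT)""")

# The composite index duplicates ps_creationdate and ps_postid next to the
# creator key, so the 20 newest post ids of an author come straight out of
# the index: no base-table lookup, and ps_content never enters the working
# set (late projection would fetch it only for the returned rows).
con.execute("""CREATE INDEX post_creator_date
               ON post (ps_creatorid, ps_creationdate, ps_postid)""")

con.executemany("INSERT INTO post VALUES (?, ?, ?, ?)",
                [(i, i % 10, 1_000_000 + i, "post text %d" % i)
                 for i in range(10_000)])

top_k = """SELECT ps_postid FROM post
           WHERE ps_creatorid = ?
           ORDER BY ps_creationdate DESC LIMIT 20"""
newest = [r[0] for r in con.execute(top_k, (3,))]

# The plan confirms the query is answered from the index alone.
plan = " ".join(r[-1] for r in con.execute("EXPLAIN QUERY PLAN " + top_k, (3,)))
print(newest[:3], "|", plan)
```

The same idea carries over to the `top-k` pushdown: because the index is ordered on `(ps_creatorid, ps_creationdate)`, the engine can stop after 20 index entries instead of sorting all of an author's posts.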
Orri Erling
oerling@openlinksw.com
2015-06-09T11:24:58.727972-04:00
The Virtuoso Science Library
http://www.openlinksw.com/weblog/oerling/?id=1838
2015-06-03T16:51:04Z
<p>There is a lot of scientific material on Virtuoso, but it has not been presented all together in any one place. So I am compiling here the best resources, with a paragraph of introduction for each. Some of these are project deliverables from projects under the EU FP7 programme; some are peer-reviewed publications.</p> <p>For the future, <a href="http://virtuoso.openlinksw.com/dataspace/doc/dav/wiki/Main/VirtuosoScienceLibrary" id="link-id0x2aab62129648">an updated version of this list</a> may be found on the <a href="http://virtuoso.openlinksw.com/" id="link-id0x2aab614e7508">main Virtuoso site</a>.</p> <h2>European Project Deliverables</h2> <ul> <li> <p> <b><a href="http://svn.aksw.org/projects/GeoKnow/Public/D2.6.1_Prototype_of_Built-in_Complex_Geo_Problem_Solving.pdf" id="link-id0x2aab62a51908">GeoKnow D 2.6.1</a>: Graph Analytics in the DBMS</b> (2015-01-05)</p> <p>This introduces the idea of unbundling basic cluster DBMS functionality like cross-partition joins and partitioned group by to form a graph processing framework collocated with the data.</p> </li> <li> <p> <b><a href="http://svn.aksw.org/projects/GeoKnow/Public/D2.4.1_Geospatial_Clustering.pdf" id="link-id0x2aab61577e18">GeoKnow D2.4.1</a>: Geospatial Clustering and Characteristic Sets</b> (2015-01-06)</p> <p>This presents experimental results of structure-aware RDF applied to geospatial data. The regularly structured part of the data goes in tables; the rest is triples/quads. 
Furthermore, for the first time in the RDF space, physical storage location is correlated to properties of entities, in this case geo location, so that geospatially adjacent items are also likely adjacent in the physical data representation.</p> </li> <li> <p> <b><a href="http://lod2.eu/Deliverable/D2.1.5.html" id="link-id0x2aab617997f8">LOD2 D2.1.5</a>: 500 billion triple BSBM</b> (2014-08-18)</p> <p>This presents experimental results on lookup and BI workloads on a Virtuoso cluster with 12 nodes, for a total of 3T RAM and 192 cores. This also discusses bulk load, at up to 6M triples/s, and specifics of query optimization in scale-out settings.</p> </li> <li> <p> <b><a href="http://lod2.eu/Deliverable/D2.6.html" id="link-id0x2aab629cf8c8">LOD2 D2.6</a>: Parallel Programming in SQL</b> (2012-08-12)</p> <p>This discusses ways of making SQL procedures partitioning-aware, so that one can, map-reduce style, send parallel chunks of computation to each partition of the data.</p> </li> </ul> <h2>Publications</h2> <h3>2015</h3> <ul> <li> <p> <b>Pham, M.-D., Passing, L., Erling, O., and Boncz, P.A. "<a href="http://homepages.cwi.nl/~duc/papers/emergentschema_www15.pdf" id="link-id0x2aab61eb10a8">Deriving an Emergent Relational Schema from RDF Data</a>," WWW, 2015.</b> </p> <p>This paper shows how RDF is in fact structured and how this structure can be reconstructed. This reconstruction then serves to create a physical schema, reintroducing all the benefits of physical design to the schema-last world. Experiments with Virtuoso show marked gains in query speed and data compactness.</p> </li> </ul> <h3>2014</h3> <ul> <li> <p> <b>Peter A. Boncz, Orri Erling, Minh-Duc Pham: <a href="http://oai.cwi.nl/oai/asset/21394/21394B.pdf" id="link-id0x2aab62eb22f8">Experiences with Virtuoso Cluster RDF Column Store</a>. 
Linked Data Management 2014: 239-259</b> </p> <p>This book chapter gives an in-depth look at the performance dynamics of Virtuoso scale out.</p> </li> </ul> <h3>2013</h3> <ul> <li> <p> <b>P. A. Boncz, T. Neumann, and O. Erling. <a href="http://oai.cwi.nl/oai/asset/21424/21424B.pdf" id="link-id0x2aab621bc728">TPC-H Analyzed: Hidden Messages and Lessons Learned from an Influential Benchmark</a>. Proceedings of the TPC Technology Conference on Performance Evaluation & Benchmarking TPCTC, 2013.</b> </p> <p>This is a summary of all factors that make up analytics performance by those who know. The Virtuoso <a href="http://www.openlinksw.com/weblog/oerling/?id=1739" id="link-id0x2aab61dceb68">TPC-H blog series</a> is a further development and commentary on these same truths.</p> </li> </ul> <h3>2012</h3> <ul> <li> <p> <b>Orri Erling: <a href="http://sites.computer.org/debull/A12mar/vicol.pdf" id="link-id0x2aab61fac308">Virtuoso, a Hybrid RDBMS/Graph Column Store</a>. IEEE Data Eng. Bull. (DEBU) 35(1):3-8 (2012)</b> </p> <p>This paper introduces the Virtuoso column store architecture and design choices. One design is made to serve both random updates and lookups as well as the big scans where column stores traditionally excel. Examples are given from both TPC-H and the schema-less RDF world.</p> </li> <li> <p> <b>Minh-Duc Pham, Peter A. Boncz, Orri Erling: <a href="http://oai.cwi.nl/oai/asset/19919/19919D.pdf" id="link-id0x2aab617b0238">S3G2: A Scalable Structure-Correlated Social Graph Generator</a>. TPCTC 2012:156-172</b> </p> <p>This paper presents the basis of the social network benchmarking technology later used in the LDBC benchmarks.</p> </li> </ul> <h3>2011</h3> <ul> <li> <p> <b>Christian Bizer, Peter A. Boncz, Michael L. Brodie, Orri Erling: "<a href="http://www.sigmod.org/publications/sigmod-record/1112/pdfs/10.report.bizer.pdf" id="link-id0x2aab616b2678">The Meaningful Use of Big Data: Four Perspectives – Four Challenges</a>." 
SIGMOD Record (SIGMOD) 40(4):56-60 (2011)</b> </p> <p>This is an anthology of views by industry thought leaders on what semantics could or ought to contribute to the practice of data management.</p> </li> </ul> <h3>2009</h3> <ul> <li> <p> <b>Orri Erling, Ivan Mikhailov: <a href="http://www.openlinksw.com/weblog/oerling/lodw2.pdf" id="link-id0x2aab62629058">Faceted Views over Large-Scale Linked Data</a>. LDOW 2009</b> </p> <p>This paper introduces anytime query answering as an enabling technology for open-ended querying of large data on public service end points. While not every query can be run to completion, partial results can most often be returned within a constrained time window.</p> </li> <li> <p> <b>Orri Erling, Ivan Mikhailov: <a href="http://www.researchgate.net/publication/234196284_Virtuoso_RDF_Support_in_a_Native_RDBMS" id="link-id0x2aab6150f208">Virtuoso: RDF Support in a Native RDBMS</a>. Semantic Web Information Management 2009:501-519</b> </p> <p>This is a general presentation of how a SQL engine needs to be adapted to serve a run-time typed and schema-less workload.</p> </li> </ul> <h3>2008</h3> <ul> <li> <p> <b>Orri Erling, Ivan Mikhailov: <a href="http://www.openlinksw.com/weblog/oerling/RDFAndMapped_BI.pdf" id="link-id0x2aab62b107d8">Integrating Open Sources and Relational Data with SPARQL</a>. ESWC 2008:838-842</b> </p> <p>This paper introduces the still challenging RDF-H benchmark, an RDF translation of the classic TPC-H. Running this over SPARQL to SQL mapping is considered.</p> </li> </ul> <h3>2007</h3> <ul> <li> <p> <b>Orri Erling, Ivan Mikhailov: <a href="http://ceur-ws.org/Vol-301/Paper_5_Erling.pdf" id="link-id0x2aab616f18a8">RDF Support in the Virtuoso DBMS</a>. CSSW 2007:59-68</b> </p> <p>This is an initial discussion of RDF support in Virtuoso. Most specifics are by now different but this can give a historical perspective.</p> </li> </ul>
Orri Erling
oerling@openlinksw.com
2015-06-03T12:53:36.943110-04:00
SNB Interactive, Part 2: Modeling Choices
http://www.openlinksw.com/weblog/oerling/?id=1836
2015-05-14T15:37:50Z
<p> <a href="http://ldbcouncil.org/benchmarks/snb" id="link-id0x2aab76e5ac28">SNB Interactive</a> is the wild frontier, with very few rules. This is necessary, among other reasons, because there is no standard property graph data model, and because the contestants support a broad mix of programming models, ranging from in-process APIs to declarative query.</p> <p>In the case of <a href="http://dbpedia.org/resource/Virtuoso_Universal_Server" id="link-id0x2aab76e5af48">Virtuoso</a>, we have played with <a href="http://dbpedia.org/resource/SQL" id="link-id0x2aab76e5b088">SQL</a> and <a href="http://dbpedia.org/resource/SPARQL" id="link-id0x2aab77e969e8">SPARQL</a> implementations. For a fixed schema and well known workload, SQL will always win. The reason is that SQL allows materialization of multi-part indices and data orderings that make sense for the application. In other words, there is transparency into physical design. An RDF/SPARQL-based application may also have physical design by means of structure-aware storage, but this is more complex and here we are just concerned with speed and having things work precisely as we intend.</p> <h2>Schema Design</h2> <p>SNB has a regular schema described by a <a href="https://en.wikipedia.org/wiki/Unified_Modeling_Language" id="link-id0x2aab77e96da8">UML</a> diagram. This has a number of relationships, of which some have attributes. There are no heterogeneous sets, i.e., no need for run-time typed attributes or graph edges with the same label but heterogeneous end-points. Translation into SQL or SPARQL is straightforward. Edges with attributes (e.g., the <code><a href="http://xmlns.com/foaf/spec/#term_knows" id="link-id0x2aab77e97098">foaf:knows</a></code> relation between people) would end up represented as a subject with the end points and the effective date as properties. The relational implementation has a two-part primary key and the effective date as a dependent column. 
A native property graph database would use an edge with an extra property for this, as such are typically supported.</p> <p>The only table-level choice has to do with whether <code>posts</code> and <code>comments</code> are kept in the same or different data structures. The Virtuoso schema uses a single table for both, with nullable columns for the properties that occur only in one. This makes the queries more concise. There are cases where only non-reply <code>posts</code> of a given <code>author</code> are accessed. This is supported by having two <code>author</code> foreign key columns, each with its own index. There is a single nullable foreign key from the reply to the post/comment being replied to.</p> <p>The workload has some frequent access paths that need to be supported by index. Some queries reward placing extra columns in indices. For example, a common pattern is accessing the most recent posts of an author or a group of authors. There, having a composite key of <code>ps_creatorid, ps_creationdate, ps_postid</code> pays off since the <code>top-k</code> on <code>creationdate</code> can be pushed down into the index without needing a reference to the table.</p> <p>The implementation is free to choose data types for attributes, particularly <code>datetimes</code>. The Virtuoso implementation adopts the practice of the <a href="http://dbpedia.org/resource/DEX_(Graph_database)" id="link-id0x2aab77e97d88">Sparksee</a> and <a href="http://dbpedia.org/resource/Neo4j" id="link-id0x2aab77e97ed8">Neo4j</a> implementations and represents this as a count of milliseconds since the epoch. This is less confusing, faster to compare, and more compact than a native datetime datatype that may or may not have timezones, etc. Using a built-in datetime seems to be nearly always a bad idea. 
A dimension table or a number for a time dimension avoids the ambiguities of a calendar or at least makes these explicit.</p> <p>The benchmark allows procedurally maintained materializations of intermediate results for use by queries as long as these are maintained transaction-by-transaction. For example, each person could have the 20 newest posts by their immediate contacts precomputed. This would reduce Q2 "top of the wall" to a single lookup. This does not, however, appear to be worthwhile. The Virtuoso implementation does do one such materialization for Q14: A connection weight is calculated for every pair of persons that know each other. This is related to the count of replies by either person to content generated by the other. If there does not exist a single reply in either direction, the weight is taken to be 0. This weight is precomputed after bulk load and subsequently maintained each time a reply is added. The table for this is the only row-wise structure in the schema and represents a half-matrix of connected people, i.e., <code>person1, person2 -> weight</code>. <code>Person1</code> is by convention the one with the smaller <code>p_personid</code>. Note that comparing IDs in this way is useful but not normally supported by SPARQL/RDF systems. 
SPARQL would end up comparing strings of URIs with disastrous performance implications unless an implementation-specific trick were used.</p> <p>In <a href="http://www.openlinksw.com/weblog/oerling/?id=1841" id="link-id0x2aab6315d2b8">the next installment</a>, we will analyze an actual run.</p> <h3> SNB Interactive Series</h3> <ul> <li> <a href="http://www.openlinksw.com/weblog/oerling/?id=1834" id="link-id0x2aab62d2ef08"> SNB Interactive, Part 1: What is SNB Interactive Really About?</a> </li> <li> SNB Interactive, Part 2: Modeling Choices<i> (this post)</i> </li> <li> <a href="http://www.openlinksw.com/weblog/oerling/?id=1841" id="link-id0x2aab62df6248"> SNB Interactive, Part 3: Choke Points and Initial Run on Virtuoso</a> </li> </ul>
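The half-matrix bookkeeping described above for the Q14 connection weight can be sketched as follows. This is a hypothetical Python rendering of the idea, not the actual Virtuoso code: the function names are invented, and a plain reply count stands in for the real weight formula, which is only said to be related to the reply count.

```python
# Sketch of the half-matrix connection-weight materialization (hypothetical
# rendering, not the actual Virtuoso code). The weight for a pair of people
# is kept under the key (person1, person2) with person1 the smaller id, so
# each unordered pair has exactly one cell, and the cell is bumped each time
# either person replies to content generated by the other. A plain count
# stands in here for the real weight formula.

weights = {}  # (smaller_id, larger_id) -> weight

def pair_key(a, b):
    """Canonical half-matrix key: the smaller id comes first, by convention."""
    return (a, b) if a < b else (b, a)

def on_reply(author_id, original_author_id):
    """Maintain the weight when author_id replies to original_author_id."""
    if author_id == original_author_id:
        return  # a self-reply does not connect two people
    key = pair_key(author_id, original_author_id)
    weights[key] = weights.get(key, 0) + 1

def weight(a, b):
    """Pairs with no reply in either direction have weight 0."""
    return weights.get(pair_key(a, b), 0)

# Replies in either direction accumulate into the same cell.
on_reply(7, 3)
on_reply(3, 7)
on_reply(3, 7)
print(weight(7, 3))  # prints 3, regardless of argument order
```

Because the key is canonicalized on integer ids, the lookup is a single probe; this is the kind of id comparison that, as noted above, a stock SPARQL system cannot do without comparing URI strings.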
Orri Erling
oerling@openlinksw.com
2015-06-09T11:26:49.994102-04:00
SNB Interactive, Part 1: What is SNB Interactive Really About?
http://www.openlinksw.com/weblog/oerling/?id=1834
2015-05-14T15:37:01Z
<p>This is the first in a series of blog posts analyzing the Interactive workload of the <a href="http://ldbcouncil.org/benchmarks/snb" id="link-id0x2aab77caacf8">LDBC Social Network Benchmark</a>. This is written from the dual perspective of participating in the benchmark design, and of building the <a href="http://dbpedia.org/resource/Virtuoso_Universal_Server" id="link-id0x2aab76f21928">OpenLink Virtuoso</a> implementation of same.</p> <p>With two implementations of SNB Interactive at four different scales, we can take a first look at what the benchmark is really about. The hallmark of a benchmark implementation is that its performance characteristics are understood; even if these do not represent the maximum of the attainable, there are no glaring mistakes; and the implementation represents a reasonable best effort by those who ought to know such, namely the system vendors.</p> <p>The essence of a benchmark is a set of trick questions or "choke points," as <a href="http://ldbcouncil.org/" id="link-id0x2aab77ca9c88">LDBC</a> calls them. A number of these were planned from the start. It is then the role of experience to tell whether addressing these is really the key to winning the race. Unforeseen ones will also surface.</p> <p>So far, we see that SNB confronts the implementor with choices in the following areas:</p> <ul> <li> <p> <b>Data model —</b> Tabular relational (commonly known as SQL), graph relational (including RDF), property graph, etc.</p> </li> <li> <p> <b>Physical storage model —</b> Row-wise vs. 
column-wise, for instance.</p> </li> <li> <p> <b>Ordering of materialized data —</b> Sorted projections, composite keys, replicating columns in auxiliary data structures, etc.</p> </li> <li> <p> <b>Persistence of intermediate results — </b> Materialized views, triggers, precomputed temporary tables, etc.</p> </li> <li> <p> <b>Query optimization —</b> Join order/type, interesting physical data orderings, late projection, top k, etc.</p> </li> <li> <p> <b>Parameters vs. literals —</b> Sometimes different parameter values result in different optimal query plans.</p> </li> <li> <p> <b>Predictable, uniform latency —</b> Measurement rules stipulate that the <a href="http://dbpedia.org/resource/System_under_test" id="link-id0x2aab77de7a58">SUT (system under test)</a> must not fall behind the simulated workload.</p> </li> <li> <p> <b>Durability —</b> How to make data durable while maintaining steady throughput, e.g., logging, checkpointing, etc.</p> </li> </ul> <p>In the process of making a benchmark implementation, one naturally encounters questions about the validity, reasonability, and rationale of the benchmark definition itself. Additionally, even though the benchmark might not directly measure certain aspects of a system, making an implementation will take a system past its usual envelope and highlight some operational aspects.</p> <ul> <li> <p> <b>Data generation —</b> Generating a mid-size dataset takes time, e.g., 8 hours for 300G. In a cloud situation, keeping the dataset in <a href="http://dbpedia.org/resource/Amazon_S3" id="link-id0x2aab77c11b18">S3</a> or similar is necessary; re-generating every time is not an option.</p> </li> <li> <p> <b>Query mix —</b> Are the relative frequencies of the operations reasonable? What bias does this introduce?</p> </li> <li> <p> <b>Uniformity of parameters —</b> Due to non-uniform data distributions in the dataset, there is easily a 100x difference between "fast" and "slow" cases of a single query template. 
How long does one need to run to balance these fluctuations?</p> </li> <li> <p> <b>Working set —</b> Experience shows that there is a large difference between almost-warm and steady-state of working set. This can be a factor of 1.5 in throughput.</p> </li> <li> <p> <b>Reasonability of latency constraints —</b> In the present case, a qualifying run must have no more than 5% of all query executions starting over 1 second late. Each execution is scheduled beforehand and done at the intended time. If the SUT does not keep up, it will have all available threads busy and must finish some work before accepting new work, so some queries will start late. Is this a good criterion for measuring consistency of response time? There are some obvious possibilities for abuse.</p> </li> <li> <p> <b>Ease of benchmark implementation/execution —</b> Perfection is open-ended and optimization possibilities infinite, albeit with diminishing returns. Still, getting started should not be too hard. Since systems will be highly diverse, testing that these in fact do the same thing is important. The SNB validation suite is good for this and, given publicly available reference implementations, the effort of getting started is not unreasonable.</p> </li> <li> <p> <b>Ease of adjustment —</b> Since a qualifying run must meet latency constraints while going as fast as possible, setting the performance target involves trial and error. Does the tooling make this easy?</p> </li> <li> <p> <b>Reasonability of durability rule —</b> Right now, one is not required to do checkpoints but must report the time to roll forward from the last checkpoint or initial state. Inspiring vendors to build faster recovery is certainly good, but we are not through with all the implications. 
What about redundant clusters?</p> </li> </ul> <p>The following posts will look at the above in light of actual experience.</p> <h3> SNB Interactive Series</h3> <ul> <li> SNB Interactive, Part 1: What is SNB Interactive Really About?<i> (this post)</i> </li> <li> <a href="http://www.openlinksw.com/weblog/oerling/?id=1836" id="link-id0x2aab6357ab98"> SNB Interactive, Part 2: Modeling Choices</a> </li> <li> <a href="http://www.openlinksw.com/weblog/oerling/?id=1841" id="link-id0x2aab6357ada8"> SNB Interactive, Part 3: Choke Points and Initial Run on Virtuoso</a> </li> </ul>
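The latency criterion in the list above (a qualifying run may have no more than 5% of query executions starting over 1 second late relative to their scheduled times) is simple to state as code. A minimal sketch, assuming two parallel lists of scheduled and actual start timestamps; the function name and thresholds are illustrative of the rule as described here, not taken from the SNB driver:

```python
def qualifies(scheduled, actual, max_late_s=1.0, max_late_frac=0.05):
    """Check the qualifying-run latency rule on one benchmark run.

    scheduled, actual: parallel lists of start timestamps in seconds.
    A run qualifies if at most max_late_frac of executions started
    more than max_late_s after their scheduled time.
    """
    late = sum(1 for s, a in zip(scheduled, actual) if a - s > max_late_s)
    return late / len(scheduled) <= max_late_frac


# 1 of 20 executions starts 1.5 s late: exactly 5%, still qualifies.
sched = [float(i) for i in range(20)]
act = list(sched)
act[3] += 1.5
assert qualifies(sched, act)

act[7] += 2.0  # a second late start makes 10% late: disqualified
assert not qualifies(sched, act)
```

Because every execution is pre-scheduled, a saturated SUT fails this check naturally: once all threads are busy, new executions queue and their actual start times drift past the 1-second allowance.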
Orri Erling
oerling@openlinksw.com
2015-06-09T11:30:09.916724-04:00
DBpedia Usage Report, January 2015
http://www.openlinksw.com/weblog/oerling/?id=1832
2015-01-07T20:12:37Z
<p>We've just published the latest <a href="http://bit.ly/1DymR8p" id="link-id0x2aab663027d8">DBpedia Usage Report</a>, covering v3.3 (released July, 2009) to v3.9 (released September, 2013); v3.10 (sometimes called "DBpedia 2014"; released September, 2014) will be included in the next report.</p> <p>We think you'll find some interesting details in the statistics. There are also some important notes about Virtuoso configuration options and other sneaky technical issues that can surprise you (as they did us!) when exposing an ad-hoc query server to the world.</p>
Orri Erling
oerling@openlinksw.com
2015-08-11T12:59:03.643766-04:00