@prefix foaf:	<http://xmlns.com/foaf/0.1/> .
@prefix ns1:	<http://www.openlinksw.com/dataspace/person/oerling#> .
ns1:this	foaf:made	<http://www.openlinksw.com/dataspace/oerling/weblog/Orri%20Erling%27s%20Blog/1755> .
@prefix atom:	<http://atomowl.org/ontologies/atomrdf#> .
<http://www.openlinksw.com/dataspace/oerling/weblog/Orri%20Erling%27s%20Blog>	atom:contains	<http://www.openlinksw.com/dataspace/oerling/weblog/Orri%20Erling%27s%20Blog/1755> ;
	atom:entry	<http://www.openlinksw.com/dataspace/oerling/weblog/Orri%20Erling%27s%20Blog/1755> .
@prefix sioc:	<http://rdfs.org/sioc/ns#> .
<http://www.openlinksw.com/dataspace/oerling/weblog/Orri%20Erling%27s%20Blog>	sioc:container_of	<http://www.openlinksw.com/dataspace/oerling/weblog/Orri%20Erling%27s%20Blog/1755> .
@prefix ns4:	<http://rdfs.org/sioc/services#> .
@prefix ns5:	<http://www.openlinksw.com/dataspace/services/weblog/> .
ns5:item	ns4:services_of	<http://www.openlinksw.com/dataspace/oerling/weblog/Orri%20Erling%27s%20Blog/1755> .
<http://www.openlinksw.com/dataspace/oerling/weblog/Orri%20Erling%27s%20Blog/1755>	sioc:has_container	<http://www.openlinksw.com/dataspace/oerling/weblog/Orri%20Erling%27s%20Blog> .
@prefix dt:	<http://www.w3.org/2001/XMLSchema#> .
@prefix dcterms:	<http://purl.org/dc/terms/> .
<http://www.openlinksw.com/dataspace/oerling/weblog/Orri%20Erling%27s%20Blog/1755>	dcterms:created	"2013-11-25T11:58:10-05:00"^^dt:dateTime ;
	foaf:maker	ns1:this .
@prefix rdfs:	<http://www.w3.org/2000/01/rdf-schema#> .
<http://www.openlinksw.com/dataspace/oerling/weblog/Orri%20Erling%27s%20Blog/1755>	rdfs:seeAlso	<http://www.openlinksw.com/dataspace/oerling/weblog/Orri%20Erling%27s%20Blog/1755/page/1> ;
	dcterms:modified	"2015-06-10T12:07:36.781653-04:00"^^dt:dateTime ;
	sioc:link	<http://www.openlinksw.com/dataspace/oerling/weblog/Orri%20Erling%27s%20Blog/1755> ;
	sioc:id	"086ac2311ac8ee92c938bd2140f984ed" ;
	sioc:content	"<p>Q13 is one of the longest running of the 22 queries.  The <a href=\"http://www.tpc.org/tpch/\" id=\"link-id0x23e81828\">TPC-H</a> metric is a geometric mean of two scores, power and throughput, where the throughput score is the elapsed time of the multiuser part of the test divided by the number of queries executed.  In this part of the score, Q13 can be up to 1/5 of the total.  The power score on the other hand is a geometric mean of the run times of all the queries, scaled into queries per hour.  There all queries have equal importance.  A bad Q13 will sink a whole result.</p>\n\n<p>Q13 counts the <code>orders</code> of each <code>customer</code> and then shows, for each distinct <code>count</code> of <code>orders</code>, how many <code>customers</code> have this number of <code>orders</code>.  1/3 of the <code>customers</code> have no <code>orders</code>; hence this is an outer join between <code>customers</code> and <code>orders</code>, as follows:</p>\n\n<blockquote>\n <code><pre>\n  SELECT              c_count,\n          COUNT(*) AS custdist\n    FROM  (    SELECT                      c_custkey,\n                      COUNT(o_orderkey) AS c_count\n                FROM  ( SELECT * \n                          FROM customer\n                               LEFT OUTER JOIN orders \n                                 ON\n                                    c_custkey = o_custkey \n                                    AND\n                                    o_comment NOT LIKE &#39;%special%requests%&#39;\n                      ) c_customer\n             GROUP BY  c_custkey\n          ) c_orders\nGROUP BY  c_count\nORDER BY  custdist DESC,\n          c_count DESC\n;\n</pre>\n </code>\n</blockquote>\n\n\n\n<p>The only parameter of the query is the pattern in the <code>NOT LIKE</code> condition.  
The <code>NOT LIKE</code> is very unselective, so almost all <code>orders</code> will be considered.</p>\n\n<p>The Virtuoso run time for Q13 is 6.7s, which we can consider a good result.  Running 5 of these at the same time has the fastest execution finishing in 23.7s and the slowest in 35.3s.  Doing 5x the work takes 5.2x the time.  This is not bad, considering that the query has a high transient memory consumption.  A second execution of 5 concurrent Q13s has the fastest finishing in 22s and the slowest in 29.8s.  The difference comes from already having the needed memory blocks cached, so there are no calls to the OS for mapping more memory.</p>\n\n<p>To measure the peak memory consumption, which is a factor with this query, there is the <code>mp_max_large_in_use</code> counter.  To reset: </p>\n\n<blockquote>\n<code>__dbf_set (&#39;mp_max_large_in_use&#39;, 0);</code>\n</blockquote>  \n<p>To read:</p>  \n<blockquote>\n<code>SELECT sys_stat (&#39;mp_max_large_in_use&#39;);</code>\n</blockquote>   \n\n<p>For the 5 concurrent executions of Q13, the counter goes to 10GB.  This is easily accommodated at the 100 GB scale; but at ten times the scale, this will be a significant quantity, even in a scale-out setting. The memory allocation time is recorded in the counter <code>mp_mmap_clocks</code>, read with <code>sys_stat</code>.  This is a count of cycles spent waiting for <code>mmap</code> or <code>munmap</code> and allows tracking whether the process is being slowed down by transient memory allocation.</p>\n\n<p>Let us consider how this works. The plan is as follows:</p>\n\n\n<blockquote>\n <code><pre>\n{ \n{ hash filler\nCUSTOMER   1.5e+07 rows(t3.C_CUSTKEY)\n</pre>\n </code>\n</blockquote>\n\n\n<p>\n<i>-- Make a hash table of the 15M customers.  The stage 2 operator below means that the <code>customers</code> are partitioned in a number of distinct partitions based on the <code>c_custkey</code>, which is the key in the hash table.  
This means that a number of disjoint hash tables are built, as many as there are concurrent threads.  This corresponds to the <code>ThreadsPerQuery</code> ini file setting or to the <code>enable_qp</code> setting accessed with <code>__dbf_set</code> and <code>sys_stat</code>.</i>\n</p>\n \n<blockquote>\n <code><pre>\nStage 2\nSort hf 34 (q_t3.C_CUSTKEY)\n}\n{ fork\n{ fork\n{ fork\nEND Node\nouter {\n</pre>\n </code>\n</blockquote>\n\n\n<p>\n<i>-- Here we start a <code>RIGHT OUTER JOIN</code> block.  The below operator scans the <code>orders</code> table and picks out the <code>orders</code> which do not contain the mentioned <code>LIKE</code> pattern.</i>\n</p>\n\n<blockquote>\n <code>\n  <pre>\nORDERS   1.5e+08 rows(t4.O_CUSTKEY, t4.O_ORDERKEY)\n O_COMMENT LIKE &lt;c %special%requests%&gt;\nhash partition+bloom by 80 ()\n  </pre>\n </code>\n</blockquote>\n\n\n<p>\n<i>-- Below is a partitioning operator, also known as an exchange operator, which will divide the stream of <code>o_custkeys</code> from the previous scan into different partitions, each served by a different thread.</i>\n</p>\n\n<blockquote>\n <code><pre>\nStage 2\n</pre>\n </code>\n</blockquote>\n\n\n\n<p>\n<i>-- Below is a lookup in the <code>customer</code> hash table.  
The lookup takes place in the partition determined by the <code>o_custkey</code> being looked up.</i>\n</p>\n\n<blockquote>\n <code><pre>\nHash source 34  not partitionable         1 rows(q_t4.O_CUSTKEY) -&gt; ()\n right oj, key out ssls: (t3.C_CUSTKEY)\n \nAfter code:\n      0: t3.C_CUSTKEY :=  := artm t4.O_CUSTKEY\n      4: BReturn 0\n</pre>\n </code>\n</blockquote>\n\n<p>\n<i>-- The below is a <code>RIGHT OUTER JOIN</code> end operator; see below for further description </i>\n</p>\n\n\n<blockquote>\n <code><pre>\n end of outer}\nset_ctr\n out: (t4.O_ORDERKEY, t4.O_CUSTKEY)\n shadow: (t4.O_ORDERKEY, t4.O_CUSTKEY)\n \nPrecode:\n      0: isnotnull := Call isnotnull (t4.O_ORDERKEY)\n      5: BReturn 0\n</pre>\n </code>\n</blockquote>\n\n\n<p>\n<i>-- The below sort is the innermost <code>GROUP BY</code>.  The <code>ISNOTNULL</code> above makes a <code>0</code> or a <code>1</code>, depending on whether there was a found <code>o_custkey</code> for the <code>c_custkey</code> of the <code>customer</code>.</i>\n</p>\n\n\n<blockquote>\n <code><pre>\nSort (t3.C_CUSTKEY) -&gt; (isnotnull)\n \n}\n</pre>\n </code>\n</blockquote>\n\n<p>\n<i>-- The below operators start after the above have executed to completion on every partition.  We read the first aggregation, containing for each <code>customer</code> the <code>COUNT</code> of <code>orders</code>. 
</i>\n</p>\n\n\n<blockquote>\n <code><pre>\ngroup by read node  \n(t3.C_CUSTKEY, aggregate) in each partition slice\n \nAfter code:\n      0: c_custkey :=  := artm t3.C_CUSTKEY\n      4: c_count :=  := artm aggregate\n      8: BReturn 0\nSubquery Select(c_custkey, c_count)\n</pre>\n </code>\n</blockquote>\n\n<p>\n<i>-- Below is the second <code>GROUP BY</code>; for each <code>COUNT</code>, we count how many <code>customers</code> have this many <code>orders</code>.</i>\n</p>\n\n<blockquote>\n <code><pre>\nSort (c_count) -&gt; (inc)\n \n}\ngroup by read node  \n(c_count, custdist)\n</pre>\n </code>\n</blockquote>\n\n\n<p>\n<i>-- Below is the final <code>ORDER BY</code>.</i>\n</p>\n\n<blockquote>\n <code><pre>\nSort (custdist, c_count)\n}\nKey from temp (c_count, custdist)\n \nSelect (c_count, custdist)\n}\n</pre>\n </code>\n</blockquote>\n\n\n\n<p>The CPU profile starts as follows:</p>\n\n<blockquote>\n <code><pre>\n971537   31.8329           setp_chash_run\n494300   16.1960           hash_source_chash_input_1i_n\n262218    8.5917           clrg_partition_dc\n162773    5.3333           strstr_sse42\n68049     2.2297           memcpy_16\n65883     2.1587           cha_insert_1i_n\n57515     1.8845           hs_send_output\n56093     1.8379           cmp_like_const\n53752     1.7612           gb_aggregate\n51274     1.6800           cha_rehash_ents\n...\n</pre>\n </code>\n</blockquote>\n\n\n<p>The <code>GROUP BY</code> is on top, with 31%.  This is the first <code>GROUP BY</code>, which has one group per customer, for a total of 15M groups.  Below the <code>GROUP BY</code> is the hash lookup of the hash join from <code>orders</code> to <code>customer</code>. The third item is partitioning of a data column (<code>dc</code>, or vectored query variable).  The partitioning refers to the operator labeled <b>stage 2</b> above.  From one column of values, it makes several.  In the 4th place, we have the <code>NOT LIKE</code> predicate on <code>o_comment</code>.  
This is a substring search implemented using SSE 4.2 instructions.  Finally, in the last place, there is a function for resizing a hash table; in the present case, the hash table for the innermost <code>GROUP BY</code>.</p>\n\n<p>At this point, we have to explain the <code>RIGHT OUTER JOIN</code>: Generally when making a hash join, the larger table is on the probe side and the smaller on the build side.  This means that the rows on the build side get put in a hash table and then for each row on the probe side there is a lookup to see if there is a match in the hash table.</p>\n\n<p>However, here the bigger table is on the right side of <code>LEFT OUTER JOIN</code>. Normally, one would have to make the hash table from the <code>orders</code> table and then probe it with <code>customer</code>, so that one would find no match for the <code>customers</code> with no <code>orders</code> and several matches for <code>customers</code> with many <code>orders</code>.  However, this would be much slower.  So there is a trick for reversing the process: You still build the hash from the smaller set in the <code>JOIN</code>, but now for each key that does get probed, you set a bit in a bit mask, in addition to sending the match as output.  After all outputs have been generated, you look in the hash table for the entries where the bit is not set.  These correspond to the <code>customers</code> with no <code>orders</code>.  For these, you send the <code>c_custkey</code> with a null <code>o_orderkey</code> to the next operator in the pipeline, which is the <code>GROUP BY</code> on <code>c_custkey</code> with the count of non-null <code>o_orderkeys</code>.</p>\n\n<p>One might at first think that such a backwards way of doing an outer join is good for nothing but this benchmark and should be considered a benchmark special.  
This is not so, though, as there are accepted implementations that do this very thing.</p>\n\n<p>Furthermore, getting a competitive score in any other way is impossible, as we shall see below.</p>\n\n<p>We further note that the grouping key in the innermost <code>GROUP BY</code> is the same as the hash key in the last hash join, i.e., <code>o_custkey</code>.  This means that the <code>GROUP BY</code> and the hash join could be combined in a single operator called <code>GROUPJOIN</code>.  If this were done, the hash would be built from <code>customer</code> with extra space left for the counters.  This would in fact remove the hash join from the profile as well as the rehash of the group by hash table, for a gain of about 20%.  The outer join behavior is not a problem here since untouched buckets, i.e., <code>customers</code> without <code>orders</code>, would be initialized with a <code>COUNT</code> of <code>0</code>.  For an inner join behavior, one would simply leave out the zero counts when reading the <code>GROUP BY</code>.  At the end of the series, we will see what the DBT3 score will be.  We remember that there is a 1.5s savings to be had here for the throughput score if the score is not high enough otherwise.  The effect on the power score will be less because that only cares about relative speedup, not absolute time.</p>\n\n<p>Next, we disable the <code>RIGHT OUTER JOIN</code> optimization and force the <code>JOIN</code> to build a hash on <code>orders</code> and to probe it with <code>customer</code>.  The execution time is 25s.  Most of the time goes into building the hash table of <code>orders</code>.  The memory consumption also goes up to around 8G. Then we try the <code>JOIN</code> by index with a scan of <code>customer</code>, and, for each <code>customer</code>, an index lookup of <code>orders</code> based on an index on <code>o_custkey</code>.  
Here we note that there is a condition on a dependent part of the primary key, namely <code>o_comment</code>, which requires joining to the main row from the <code>o_ck</code> index.  There is a gain however because the <code>GROUP BY</code> becomes ordered; i.e., there is no need to keep groups around for <code>customers</code> that have already been seen since we know they will not come again, the outer scan being in order of <code>c_custkey</code>.  For this reason, the memory consumption for the <code>GROUP BY</code> goes away.  However, the index-based plan is extremely sensitive to vector size: The execution takes 29.4s if vector size is allowed to grow to 1MB, but 413s if it stays at the default of 10KB.  The difference is in the 1MB vector hitting 1/150 (1 million lookups for a 150 million row table), whereas the 10KB vector hits 1/15000.  Thus, benefits from vectoring lookups are largely lost, since there are hardly ever hits in the same segment; in this case, within 2000 rows.  But this is not the main problem: The condition on the main row is a <code>LIKE</code> on a long column.  Thus, the whole column for the segment in question must be accessed for read, meaning 2000 or so <code>o_comments</code>, of which one will be checked.  If instead of a condition on <code>o_comment</code>, we have one on <code>o_totalprice &gt; 0</code>, we get 93s with 10KB vector size and 15s with dynamic up to 1MB.</p>\n\n<p>If we now remove the condition on dependent columns of <code>orders</code>, the index plan becomes faster, since the whole condition is resolved within the <code>o_custkey</code> index -- 2.5s with 10KB vector size, 2.6s with dynamic vector size up to 1MB.  The point here is that the access from <code>customer</code> to <code>orders</code> on the <code>o_custkey</code> index is ordered, like a merge join.</p>\n\n\n<h3>Q13 Conclusions</h3>\n\n<p>Q13 is a combo of many choke points in the <i>TPC-H Analyzed</i> paper.  
The most important is special <code>JOIN</code> types, i.e., <code>RIGHT OUTER JOIN</code> and <code>GROUPJOIN</code>.  Then there is string operation performance for the substring matching with <code>LIKE</code>.  This needs to be implemented with the SSE 4.2 string instructions; otherwise there is a hit of about 0.5s on query speed.</p>\n\n<p>The <i>TPC-H Analyzed</i> paper was written against the background of analytical DB tradition where the dominant <code>JOIN</code> type is hash, except when there is a merge between two sets that are ordered or at least clustered on the same key.  Clustered here means physical order but without the need to be strictly in key order.</p>\n\n<p>Here I have added some index based variants to show that hash join indeed wins and to point out the sensitivity of random access to vector size.  As column stores go, Virtuoso is especially good at random access.  This must be so since it was optimized to do RDF well, which entails a lot of lookup.  Also note how a big string column goes with great ease in a sequential scan, but kills in a non-local random access pattern.</p>\n\n<h3>\n<i>In Hoc Signo Vinces</i> Series</h3>\n<ul>\n<li>\n  <a href=\"http://www.openlinksw.com/weblog/oerling/?id=1739\" id=\"link-id0x233f6158\"> In Hoc Signo Vinces (part 1): Virtuoso meets TPC-H</a>\n</li>\n<li>\n  <a href=\"http://www.openlinksw.com/weblog/oerling/?id=1741\" id=\"link-id0x2aac1c7b8ca8\"> In Hoc Signo Vinces (part 2): TPC-H Schema Choices</a>\n</li>\n<li>\n  <a href=\"http://www.openlinksw.com/weblog/oerling/?id=1742\" id=\"link-id0x2aac1c79fb58\"> In Hoc Signo Vinces (part 3): Benchmark Configuration Settings</a>\n</li>\n<li>\n  <a href=\"http://www.openlinksw.com/weblog/oerling/?id=1744\" id=\"link-id0x2aac1e3a3c58\"> In Hoc Signo Vinces (part 4): Bulk Load and Refresh</a>\n</li>\n<li>\n  <a href=\"http://www.openlinksw.com/weblog/oerling/?id=1747\" id=\"link-id0x2aac1cd4e8a8\"> In Hoc Signo Vinces (part 5): The Return of SQL 
Federation</a>\n</li>\n<li>\n  <a href=\"http://www.openlinksw.com/weblog/oerling/?id=1753\" id=\"link-id0x2aac0eb77df8\"> In Hoc Signo Vinces (part 6): TPC-H Q1 and Q3: An Introduction to Query Plans</a>\n</li>\n<li>\nIn Hoc Signo Vinces (part 7): TPC-H Q13: The Good and the Bad Plans<i> (this post)</i>\n</li>\n<li>\n  <a href=\"http://www.openlinksw.com/weblog/oerling/?id=1756\" id=\"link-id0x2aac1e4e0988\"> In Hoc Signo Vinces (part 8): TPC-H: INs, Expressions, ORs </a>\n</li>\n<li>\n  <a href=\"http://www.openlinksw.com/weblog/oerling/?id=1779\" id=\"link-id0x1fa4f7b8\"> In Hoc Signo Vinces (part 9): TPC-H: TPC-H Q18, Ordered Aggregation, and Top K </a>\n</li>\n<li>\n  <a href=\"http://www.openlinksw.com/weblog/oerling/?id=1789\" id=\"link-id0x2aab6d0057c8\"> In Hoc Signo Vinces (part 10): TPC-H: TPC-H Q9, Q17, Q20 - Predicate Games</a>\n</li>\n<li>\n  <a href=\"http://www.openlinksw.com/weblog/oerling/?id=1793\" id=\"link-id0x2aac1eefbb78\"> In Hoc Signo Vinces (part 11): TPC-H Q2, Q10 - Late Projection </a>\n</li>\n<li>\n  <a href=\"http://www.openlinksw.com/weblog/oerling/?id=1796\" id=\"link-id0x2aac29fa7b28\">  In Hoc Signo Vinces (part 12): TPC-H:  Result Preview </a>\n</li>\n<li>\n  <a href=\"http://www.openlinksw.com/weblog/oerling/?id=1798\" id=\"link-id0x2aac7aa7ae58\"> In Hoc Signo Vinces (part 13): Virtuoso TPC-H Kit Now on V7 Fast Track </a>\n</li>\n<li>\n  <a href=\"http://www.openlinksw.com/weblog/oerling/?id=1800\" id=\"link-id0x2aac345c6278\"> In Hoc Signo Vinces (part 14): Virtuoso TPC-H Implementation Analysis </a>\n</li>\n<li>\n  <a href=\"http://www.openlinksw.com/weblog/oerling/?id=1802\" id=\"link-id0x2aac344da368\"> In Hoc Signo Vinces (part 15): TPC-H and the Science of Hash </a>\n</li>\n<li>\n  <a href=\"http://www.openlinksw.com/weblog/oerling/?id=1816\" id=\"link-id0x17b3e448\"> In Hoc Signo Vinces (part 16): Introduction to Scale-Out </a>\n</li>\n<li>\n  <a href=\"http://www.openlinksw.com/weblog/oerling/?id=1818\" 
id=\"link-id0x2aabebab0578\"> In Hoc Signo Vinces (part 17): 100G and 300G Runs on Dual Xeon E5 2650v2 </a>\n</li>\n<li>\n  <a href=\"http://www.openlinksw.com/weblog/oerling/?id=1819\" id=\"link-id0x2aabe95c4c88\"> In Hoc Signo Vinces (part 18): Cluster Dynamics </a>\n</li>\n<li>\n  <a href=\"http://www.openlinksw.com/weblog/oerling/?id=1822\" id=\"link-id0xb88bcb8\"> In Hoc Signo Vinces (part 19): Scalability, 1000G, and 3000G </a>\n</li>\n<li>\n  <a href=\"http://www.openlinksw.com/weblog/oerling/?id=1824\" id=\"link-id0x2aab517a6398\"> In Hoc Signo Vinces (part 20): 100G and 1000G With Cluster; When is Cluster Worthwhile; Effects of I/O </a>\n</li>\n<li>\n  <a href=\"http://www.openlinksw.com/weblog/oerling/?id=1845\"> In Hoc Signo Vinces (part 21): Running TPC-H on Virtuoso Cluster on Amazon EC2 </a>\n</li>\n</ul>\n" .
@prefix dc:	<http://purl.org/dc/elements/1.1/> .
<http://www.openlinksw.com/dataspace/oerling/weblog/Orri%20Erling%27s%20Blog/1755>	dc:title	"In Hoc Signo Vinces (part 7 of n) -- TPC-H Q13: The Good and the Bad Plans" .
@prefix opl:	<http://www.openlinksw.com/schema/attribution#> .
<http://www.openlinksw.com/dataspace/oerling/weblog/Orri%20Erling%27s%20Blog/1755>	opl:isDescribedUsing	<http://www.openlinksw.com/dataspace/oerling/weblog/Orri%20Erling%27s%20Blog/1755/sioc.rdf> ;
	atom:source	<http://www.openlinksw.com/dataspace/oerling/weblog/Orri%20Erling%27s%20Blog> ;
	atom:updated	"2015-06-10T16:07:36Z" ;
	atom:title	"In Hoc Signo Vinces (part 7 of n) -- TPC-H Q13: The Good and the Bad Plans" ;
	sioc:links_to	<http://www.openlinksw.com/weblog/oerling/?id=1816> ,
		<http://www.openlinksw.com/weblog/oerling/?id=1802> ,
		<http://www.openlinksw.com/weblog/oerling/?id=1822> ,
		<http://www.openlinksw.com/weblog/oerling/?id=1824> ,
		<http://www.openlinksw.com/weblog/oerling/?id=1798> ,
		<http://www.openlinksw.com/weblog/oerling/?id=1753> ,
		<http://www.openlinksw.com/weblog/oerling/?id=1819> ,
		<http://www.openlinksw.com/weblog/oerling/?id=1739> ,
		<http://www.openlinksw.com/weblog/oerling/?id=1796> ,
		<http://www.openlinksw.com/weblog/oerling/?id=1741> ,
		<http://www.openlinksw.com/weblog/oerling/?id=1845> ,
		<http://www.openlinksw.com/weblog/oerling/?id=1756> ,
		<http://www.openlinksw.com/weblog/oerling/?id=1747> ,
		<http://www.openlinksw.com/weblog/oerling/?id=1742> ,
		<http://www.openlinksw.com/weblog/oerling/?id=1793> ,
		<http://www.openlinksw.com/weblog/oerling/?id=1818> ,
		<http://www.openlinksw.com/weblog/oerling/?id=1800> ,
		<http://www.tpc.org/tpch/> ,
		<http://www.openlinksw.com/weblog/oerling/?id=1744> ,
		<http://www.openlinksw.com/weblog/oerling/?id=1779> ,
		<http://www.openlinksw.com/weblog/oerling/?id=1789> ;
	atom:author	ns1:this ;
	rdfs:label	"In Hoc Signo Vinces (part 7 of n) -- TPC-H Q13: The Good and the Bad Plans" ;
	atom:published	"2013-11-25T16:58:10Z" ;
	ns4:has_services	ns5:item .
@prefix rdf:	<http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix schema:	<http://schema.org/> .
<http://www.openlinksw.com/dataspace/oerling/weblog/Orri%20Erling%27s%20Blog/1755>	rdf:type	schema:BlogPosting .
@prefix sioct:	<http://rdfs.org/sioc/types#> .
<http://www.openlinksw.com/dataspace/oerling/weblog/Orri%20Erling%27s%20Blog/1755>	rdf:type	sioct:BlogPost ,
		atom:Entry .
<http://www.openlinksw.com/dataspace/oerling/weblog/Orri%20Erling%27s%20Blog/1755/page/1>	rdfs:label	"page 1" .