What?

A simple guide for any Python developer seeking to exploit SPARQL without hassle.

Why?

SPARQL is a powerful query language, results serialization format, and HTTP-based data access protocol from the W3C. It provides a mechanism for accessing and integrating data across Deductive Database Systems (colloquially referred to as triple or quad stores in Semantic Web and Linked Data circles) -- database systems (or data spaces) that manage proposition-oriented records in 3-tuple (triple) or 4-tuple (quad) form.
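
As a quick illustration (using the same DBpedia statement that appears in the output further down), such a proposition can be pictured as a plain Python tuple; the quad form simply adds a fourth element naming the graph the statement lives in:

# Subject, predicate, object: a triple
triple = ("http://dbpedia.org/resource/DBpedia",
          "http://www.w3.org/1999/02/22-rdf-syntax-ns#type",
          "http://www.w3.org/2002/07/owl#Thing")

# The same proposition plus the graph (data space) that holds it: a quad
quad = triple + ("http://dbpedia.org/resource/DBpedia",)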

How?

A SPARQL query is typically delivered as an HTTP payload. Using a RESTful client-server interaction pattern, you can dispatch a query to any SPARQL-compliant data server and receive a result payload for local processing, e.g. binding the results to Python objects.
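
To make that concrete, here is a minimal sketch (the endpoint URL and query are illustrative) of how a query text becomes an ordinary HTTP request; the parameter name "query" is defined by the SPARQL protocol:

import urllib

query = "SELECT * WHERE {?s ?p ?o} LIMIT 10"

# The query travels as a URL-encoded form parameter named "query";
# it can ride in the query string (GET) or the request body (POST).
payload = urllib.urlencode({"query": query})

print "GET:  http://localhost:8890/sparql?" + payload
print "POST: body =", payload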

Steps:

  1. From your command line, execute: aptitude search '^python26' to verify that Python is in place.
  2. Determine which SPARQL endpoint you want to access, e.g. DBpedia or a local Virtuoso instance (typically: http://localhost:8890/sparql); a quick connectivity check is sketched just after this list.
  3. If you are using Virtuoso and want to populate its quad store via SPARQL, assign the "SPARQL_SPONGE" privilege to the user "SPARQL" (this is basic access control; more sophisticated WebID-based ACLs are available for controlling SPARQL access).
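
Before running the full script, it is worth confirming that the endpoint chosen in step 2 actually answers. A minimal sketch, assuming the public DBpedia endpoint (swap in your local Virtuoso URL as needed); the format parameter is a Virtuoso-specific convenience, the standards-based alternative being an HTTP Accept header:

import urllib, json

# A trivial ASK query: does the store contain at least one triple?
params = urllib.urlencode({
        "query": "ASK {?s ?p ?o}",
        "format": "application/sparql-results+json"
})
response = urllib.urlopen("http://dbpedia.org/sparql?" + params).read()

# Expect a small JSON document containing a "boolean" entry
print json.loads(response)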

Script:

#!/usr/bin/env python
#
# Demonstrating use of a single query to populate a Virtuoso Quad Store via Python.
#

import urllib, json

# HTTP URL is constructed accordingly with JSON query results format in mind.

def sparqlQuery(query, baseURL, format="application/json"):
        params={
                "default-graph": "",
                "should-sponge": "soft",
                "query": query,
                "debug": "on",
                "timeout": "",
                "format": format,
                "save": "display",
                "fname": ""
        }
        querypart=urllib.urlencode(params)
        response = urllib.urlopen(baseURL,querypart).read()
        return json.loads(response)

# Setting Data Source Name (DSN)
dsn="http://dbpedia.org/resource/DBpedia"

# Virtuoso pragmas for instructing SPARQL engine to perform an HTTP GET
# using the IRI in FROM clause as Data Source URL

query="""DEFINE get:soft "replace"
SELECT DISTINCT * FROM <%s> WHERE {?s ?p ?o}""" % dsn 

data=sparqlQuery(query, "http://localhost:8890/sparql/")

print "Retrieved data:\n" + json.dumps(data, sort_keys=True, indent=4)

#
# End

Output

Retrieved data:
{
    "head": {
        "link": [], 
        "vars": [
            "s", 
            "p", 
            "o"
        ]
    }, 
    "results": {
        "bindings": [
            {
                "o": {
                    "type": "uri", 
                    "value": "http://www.w3.org/2002/07/owl#Thing"
                }, 
                "p": {
                    "type": "uri", 
                    "value": "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"
                }, 
                "s": {
                    "type": "uri", 
                    "value": "http://dbpedia.org/resource/DBpedia"
                }
            }, 
...
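
Once deserialized, the result set is just nested Python dictionaries and lists, so the "local object binding" mentioned earlier amounts to walking data["results"]["bindings"]. A minimal sketch, reusing the data variable returned by sparqlQuery in the script above:

# Each binding is a dict keyed by variable name ("s", "p", "o");
# each entry carries a "type" and a "value" per the SPARQL JSON results format.
for binding in data["results"]["bindings"]:
    print binding["s"]["value"], binding["p"]["value"], binding["o"]["value"]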

Conclusion

JSON was chosen over XML as the output format because this is meant to be a "no-brainer installation and utilization" guide for a Python developer who already knows how to use Python for HTTP-based data access. SPARQL simply adds a bonus to URL dexterity (delivered via URI abstraction) with regard to constructing Data Source Names or Addresses.

Related