Details
Kingsley Uyi Idehen
Lexington, United States
Rough draft poem: Document, what art thou?
I am the Data Container, Disseminator, and Canvas.
I came to be when the cognitive skills of mankind deemed oral history inadequate.
I am transcendent, I take many forms, but my core purpose is constant - Container, Disseminator, and Canvas.
I am dexterous, so I can be blank, partitioned horizontally, horizontally and vertically, and if you get moi excited, I'll show you fractals.
I am accessible in a number of ways, across a plethora of media.
I am loose, so you can access my content too.
I am loose in a cool way, so you can refer to moi independent of my content.
I am cool in a loose way, so you can refer to my content independent of moi.
I am even cool and loose enough to let you figure out stuff from my content, including how it's totally distinct from moi.
But...
I am possessive about my coolness, so all Containment, Dissemination, and Canvas requirements must first call upon moi, wherever I might be.
So...
If you postulate about my demise or irrelevance, across any medium, I will punish you with confusion!
Remember...
I just told you who I am.
Lesson to be learned..
When something tells you what it is, and it is as powerful as I, best you believe it.
BTW -- I am Okay with HTTP response code 200 OK :-)
11/11/2010 13:44 GMT-0500 | Modified: 11/12/2010 18:08 GMT-0500
One Technology That Will Rock 2010 (Update 1)
Thanks to the TechCrunch post titled: Ten Technologies That Will Rock 2010, I've been able to quickly construct a derivative post that condenses the ten-item list down to a Single Technology That Will Rock 2010 :-)
Sticking with the TechCrunch layout, here is why all roads simply lead to Linked Data come 2010 and beyond:
- The Tablet: a new form factor addition re. Internet and Web application hosts, which is just another way of saying: Linked Data will be accessible from Tablet applications.
- Geo: GPS chips are now standard features of mobile phones, so geolocation is increasingly becoming a necessary feature for any killer app. Thus, GeoSpatial Linked Data and GeoSpatial Queries are going to be a critical success factor for any endeavor that seeks to engage mobile application developers and, ultimately, their end-users. Basically, you want to be able to perform Esoteric Search from these devices, of the form: find Vendors of a Camcorder (e.g., with a Zoom Factor:Weight Ratio of X) within a 2km radius of my current location, or find how many items from my WishList are available from a Vendor within a 2km radius of my current location. Conversely, provide Vendors with the ability to spot potential Customers within a 2km radius of a given "clicks & mortar" location (e.g., a BestBuy store). (A minimal query sketch along these lines follows after this list.)
- Realtime Search: Rich Structured Profiles that leverage standards such as FOAF and FOAF+SSL will enable Highly Personalized Realtime Search (HPRS) without compromising privacy. Technically, this is about WebIDs securely bound to X.509 Certificates, providing access to verifiable and highly navigable Personal Profile Data Spaces that also double as personal search index entry points.
- Chrome OS: Just another operating system for exploiting the burgeoning Web of Linked Data.
- HTML5: Courtesy of RDFa, just another mechanism for exposing Linked Data by making HTML+RDFa a bona fide markup for metadata (i.e., a format for describing real-world objects via their attribute-value graphs).
- Mobile Video: Simplifies the production and sharing of Video annotations (comments, reviews, etc.) en route to creating rich Linked Discourse Data Spaces.
- Augmented Reality: Ditto.
- Mobile Transactions: As per points 1 & 2 above, Vendor Discovery and Transaction Consummation will increasingly be driven by high-SDQ applications. The "Funnel Effect" (more choices based on individual preferences) will be a critical success factor for anyone operating in the Mobile Transaction realm. Note: the combined requirements of SDQ, the "Funnel Effect", and the Mobile Device form factor simply magnify the importance of Web accessible Linked Data; without Linked Data you cannot deliver scalable solutions that handle them.
- Android: An additional platform for items 1-8; basically, 2010 isn't going to be an iPhone-only zone. Personally, this reminds me of a battle from the past, i.e., Microsoft vs. Apple re. desktop computing dominance. Google has studied history very well :-)
- Social CRM: this is simply about applying points 1-9 alongside the construction of Linked Data from eCRM Data Spaces.
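To make the geospatial query idea from the Geo item above a bit more concrete, here is a minimal sketch in Python (using the SPARQLWrapper library) of a bounding-box style "vendors near me" lookup. The endpoint URL and the ex: vendor vocabulary are hypothetical placeholders; only the W3C WGS84 geo vocabulary is a real, published vocabulary, and a production query would use proper geospatial functions rather than a crude lat/long box.
```python
# Minimal sketch (not a definitive implementation): find vendors within a
# rough bounding box around the user's current location, in the spirit of
# the "Camcorder vendor within a 2km radius" example above.
# The endpoint URL and the ex: vocabulary are hypothetical; geo: is the
# real W3C WGS84 vocabulary.
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "http://example.org/sparql"   # hypothetical Linked Data endpoint
LAT, LONG = 42.44, -71.23                # user's current location
DELTA = 0.02                             # crude approximation of a ~2km box

query = f"""
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX geo:  <http://www.w3.org/2003/01/geo/wgs84_pos#>
PREFIX ex:   <http://example.org/schema#>

SELECT ?vendor ?label ?lat ?long
WHERE {{
  ?vendor a ex:CamcorderVendor ;
          rdfs:label ?label ;
          geo:lat    ?lat ;
          geo:long   ?long .
  FILTER (?lat  > {LAT - DELTA} && ?lat  < {LAT + DELTA} &&
          ?long > {LONG - DELTA} && ?long < {LONG + DELTA})
}}
"""

sparql = SPARQLWrapper(ENDPOINT)
sparql.setQuery(query)
sparql.setReturnFormat(JSON)

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["label"]["value"], row["lat"]["value"], row["long"]["value"])
```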
As I've stated in the past (across a variety of media), you cannot build applications that have long-term value without addressing the following issues:
- Data Item or Object Identity
- Data Structure -- Data Models
- Data Representation -- Data Model Entity & Relationships Representation mechanism (as delivered by metadata oriented markup)
- Data Storage -- Database Management Systems
- Data Access -- Data Access Protocols
- Data Presentation -- How you present Views and Reports from Structured Data Sources
- Data Security -- Data Access Policies
The items above basically showcase the very essence of the HTTP URI abstraction that drives HTTP-based Linked Data; that same abstraction is also the basic payload unit that underlies REST.
Conclusion
I simply hope that the next decade marks a period of broad appreciation and comprehension of Data Access, Integration, and Management issues on the part of application developers, integrators, analysts, end-users, and decision makers. Remember, without structured Data we cannot produce or share Information, and without Information, we cannot produce or share Knowledge.
01/02/2010 12:30 GMT-0500 | Modified: 02/01/2010 09:02 GMT-0500
The URI, URL, and Linked Data Meme's Generic HTTP URI (Updated)
Situation Analysis
As the "Linked Data" meme has gained momentum you've more than likely been on the receiving end of dialog with Linked Open Data community members (myself included) that goes something like this:
"Do you have a URI", "Get yourself a URI", "Give me a de-referencable URI" etc..
And each time, you respond with a URL -- which to the best of your Web knowledge is a bona fide URI. But to your utter confusion you are told: Nah! You gave me a Document URI instead of the URI of a real-world thing or object etc..
What's up with that?
Well, our everyday use of the Web involves an unfortunate conflation of two distinct things that each have Identity: Real World Objects (RWOs) and the Addresses/Locations of Documents (Information bearing Resources).
The "Linked Data" meme is about enhancing the Web by unobtrusively reintroducing its core essence: the generic HTTP URI, a vital piece of Web Architecture DNA. Basically, its about so realizing the full capabilities of the Web as a platform for Open Data Identification, Definition, Access, Storage, Representation, Presentation, and Integration.
What is a Real World Object?
People, Places, Music, Books, Cars, Ideas, Emotions etc..
What is a URI?
A Uniform Resource Identifier. A global identifier mechanism for network addressable data items. Its sole function is Name oriented Identification.
URI Generic Syntax
The constituent parts of a URI (per the URI Generic Syntax RFC) are the scheme, authority, path, query, and fragment, i.e., scheme://authority/path?query#fragment.
What is a URL?
A location oriented HTTP scheme based URI. The HTTP scheme introduces a powerful and inherent duality that delivers:
- Resource Address/Location Identifier
- Data Access mechanism for an Information-bearing Resource (Document, File, etc.)
So far so good!
What is an HTTP based URI?
The kind of URI Linked Data aficionados mean when they use the term: URI.
An HTTP URI is an HTTP scheme based URI. Unlike a URL, this kind of HTTP scheme URI is devoid of any Web Location orientation or specificity, so its inherent duality provides a more powerful level of abstraction. Hence, you can use this form of URI to assign Names/Identifiers to Real World Objects (RWOs). Even better, courtesy of the Identity/Address duality of the HTTP scheme, a single URI can deliver the following (a short code sketch follows the list):
- RWO Identifier/Name
- RWO Metadata document Locator (courtesy of the URL aspect)
- Negotiable Representation of the Located Document (courtesy of HTTP's content negotiation feature).
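As a hedged illustration of the three-in-one capability listed above, the following Python sketch (using the requests library) de-references an HTTP URI that names a real-world object and negotiates a Turtle representation of its metadata; the DBpedia URI is used purely as a familiar example, not as the only way to do this.
```python
# Minimal sketch: de-reference an HTTP URI that names a real-world object
# (a person), follow the redirect to the metadata document it locates, and
# negotiate a structured (Turtle) representation of that document.
# The DBpedia URI below is illustrative; any Linked Data URI works the same way.
import requests

rwo_uri = "http://dbpedia.org/resource/Tim_Berners-Lee"   # Identifier/Name aspect

resp = requests.get(
    rwo_uri,
    headers={"Accept": "text/turtle"},   # ask for data, not an HTML page
    allow_redirects=True,                # follow the redirect to the located document
    timeout=30,
)

print("Located document:", resp.url)                          # Locator aspect
print("Negotiated type: ", resp.headers.get("Content-Type"))  # Representation aspect
print(resp.text[:500])                                        # a slice of the RDF description
```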
What is Metadata?
Data about Data. Put differently, data that describes other data in a structured manner.
How Do we Model Metadata?
The predominant model for metadata is the Entity-Attribute-Value + Classes & Relationships model (EAV/CR). A model that's been with us since the inception of modern computing (long before the Web).
What about RDF?
The Resource Description Framework (RDF) is a framework for describing Web addressable resources. In a nutshell, it's a framework for adding Metadata bearing Information Resources to the current Web. It's comprised of the following (a small worked example follows the list):
- An Entity-Attribute-Value (aka Subject-Predicate-Object) plus Classes & Relationships (Data Dictionaries, e.g., OWL) metadata model
- A plethora of instance data representation formats that include: RDFa (when doing so within (X)HTML docs), Turtle, N3, TriX, RDF/XML, etc.
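To make the Subject-Predicate-Object model and one of the listed representation formats (Turtle) tangible, here is a small sketch using the rdflib Python library; the example.org subject URIs are placeholders, while the foaf: terms come from the real FOAF vocabulary.
```python
# Minimal sketch: a few Entity-Attribute-Value (Subject-Predicate-Object)
# statements expressed in Turtle and loaded into an in-memory graph.
# The example.org URIs are placeholders; foaf: is the real FOAF vocabulary.
from rdflib import Graph

turtle_doc = """
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix ex:   <http://example.org/id/> .

ex:kidehen a foaf:Person ;
    foaf:name  "Kingsley Idehen" ;
    foaf:knows ex:timbl .
"""

g = Graph()
g.parse(data=turtle_doc, format="turtle")

# Each triple pairs an Entity (subject) with an Attribute (predicate) and a
# Value (object); subjects and predicates -- and optionally objects -- are HTTP URIs.
for s, p, o in g:
    print(s, p, o)
```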
What's the Problem Today?
The ubiquitous use of the Web is primarily focused on a Linked Mesh of Information bearing Documents. URLs rather than generic HTTP URIs are the prime mechanism for Web tapestry; basically, we use URLs to conduct Information -- which is inherently subjective -- instead of using HTTP URIs to conduct "Raw Data" -- which is inherently objective.
Note: Information is "data in context", it isn't the same thing as "Raw Data". Thus, if we can link to Information via the Web, why shouldn't we be able to do the same for "Raw Data"?
How Does the Linked Data meme solve the problem?
The meme simply provides a set of guidelines (best practices) for producing Web architecture friendly metadata. Meaning: when producing EAV/CR model based metadata, endow Subjects, their Attributes, and (optionally) Attribute Values with HTTP URIs. By doing so, a new level of Link Abstraction on the Web becomes possible, i.e., "Data Item to Data Item" level links (aka hyperdata links). Even better, when you de-reference an RWO hyperdata link you end up with a negotiated representation of its metadata.
Conclusion
Linked Data is ultimately about an HTTP URI for each item in the Data Organization Hierarchy :-)
Related
- History of how "Resource" became part of URI - historic account by TimBL
- Linked Data Design Issues Document - TimBL's initial Linked Data Guide
- Linked Data Rules Simplified - My attempt at simplifying the Linked Data Meme without SPARQL & RDF distraction
- Linked Data & Identity - another related post
- The Linked Data Meme's Value Proposition
- So What Does "HREF" stand for anyway?
- My Del.icio.us hosted Bookmark Data Space for Identity Schemes
- TimBL's Ted Talk re. "Raw Linked Data"
- Resource Oriented Architecture
- More Famous Than Simon Cowell
08/07/2009 14:34 GMT-0500 | Modified: 03/28/2010 12:19 GMT-0500
YODA & the Data FORCE
The original design document (by TimBL) that led to the WWW (*an important read*) was very clear about the need to create an "information space" that connects heterogeneous data sources. Unfortunately, in trying to create a moniker to distinguish one aspect of the Web (the Linked Document Web) from the part that was overlooked (the Linked Data Web), we ended up with a project code name that's fundamentally a misnomer in the form of: "The Semantic Web".
If we could just take "The Semantic Web" moniker for what it was -- a code name for an aspect of the Web -- and move on, things will get much clearer, fast!
Basically, what is/was the "Semantic Web" should really have been code-named YODA ("You" Oriented Data Access), as a play on Yoda's appreciation of the FORCE (Fact ORiented Connected Entities) -- the power of intergalactic, interlinked, structured data, fashioned by the World Wide Web courtesy of the HTTP protocol.
As stated in an earlier post, the next phase of the Web is all about the magic of the entity "You". The single most important item of reference for every Web user will be the Person Entity ID (URI). Just by remembering your Entity ID, you will have intelligent pathways across, and into, the FORCE that the Linked Data Web delivers. The quality of the pathways and the increased density of the FORCE are the keys to high SDQ (tomorrow's SEO). Thus, the SDQ of URIs will ultimately be the unit determinant of value to Web users, along personal lines, hence the critical platform questions:
- Does your platform give me Identity (a URI) with high SDQ?
- Do the Data Source Names (URIs) in your Data Spaces deliver high SDQ?
While most industry commentators continue to ponder and pontificate about what "The Semantic Web" is (unfortunately), the real thing (the "FORCE") is already here, and self-enhancing rapidly.
Assuming we now accept that the FORCE is simply an RDF based Linked Data moniker, and that RDF Linked Data is all about the Web as a structured database, we should start to move our attention over to practical exploitation of this burgeoning global database. In doing so, we should not discard knowledge from the past, such as the many great examples available gratis from the Relational Database realm. For instance, we should start paying attention to the discovery, development, and deployment of high-level tools such as query builders, report writers, and intelligence-oriented analytic tools, none of which should -- at first point of interaction -- expose raw RDF or the SPARQL query language. Along similar lines of thinking, we also need development environments and frameworks that are counterparts to Visual Studio, Access, FileMaker, and the like.
11/03/2008 17:32 GMT-0500 | Modified: 07/20/2010 13:53 GMT-0500
The Trouble with Labels (Contd.): Data Integration & SOA
I just stumbled across a post from IT Business Edge titled: How Semantic Technology Can Help Companies with Integration. While reading the post I encountered the term: Master Data Manager (MDM), and wondered to myself, "what's that?", only to realize it's the very same thing I described as a Data Virtualization or Virtual Database technology (circa 1998). Now, if re-labeling can confuse me when applied to a realm I've been intimately involved with for eons (internet time), I don't want to imagine what it does for others who aren't as intimately involved with the important data access and data integration realms.
On the more refreshing side, the article does shed some light on the potency of RDF and OWL when applied to the construction of conceptual views of heterogeneous data sources.
"How do you know that data coming from one place calculates net revenue the same way that data coming from another place does? You’ve got people using the same term for different things and different terms for the same things. How do you reconcile all of that? That’s really what semantic integration is about."
BTW - I discovered this article via another titled: Understanding Integration And How It Can Help with SOA, that covers SOA and Integration matters. Again, in this piece I feel the gradual realization of the virtues that RDF, OWL, and RDF Linked Data bring to bear in the vital realm of data integration across heterogeneous data silos.
Conclusion
A number of events, at the micro and macro economic levels, are forcing attention back to the issue of productive use of existing IT resources. The trouble with the aforementioned quest is that it ultimately unveils the global IT affliction known as heterogeneous data silos, along with the challenges of pain alleviation that have forever been ignored or approached inadequately, as clearly shown by the rapid build-up of SOA horror stories in the data integration realm.
Data Integration via conceptualization of heterogeneous data sources, resulting in concrete conceptual-layer data access and management, remains the greatest and most potent application of technologies associated with the "Semantic Web" and/or "Linked Data" monikers.
10/12/2008 18:53 GMT-0500 | Modified: 10/12/2008 18:54 GMT-0500
Crunchbase & Semantic Web Interview (Remix - Update 1)
After reading Bengee's interview with CrunchBase, I decided to knock up a quick interview remix as part of my usual attempt to add to the developing discourse.
CrunchBase: When we released the CrunchBase API, you were one of the first developers to step up and quickly released a CrunchBase Sponger Cartridge. Can you explain what a CrunchBase Sponger Cartridge is?
Me: A Sponger Cartridge is a data access driver for Web Resources that plugs into our Virtuoso Universal Server (DBMS and Linked Data Web Server combo amongst other things). It uses the internal structure of a resource and/or a web service associated with a resource, to materialize an RDF based Linked Data graph that essentially describes the resource via its properties (Attributes & Relationships).
CrunchBase: And what inspired you to create it?
Me: Bengee built a new space with your data, and we've built a space on the fly from your data which still resides in your domain. Either solution extols the virtues of Linked Data, i.e., the ability to explore relationships across data items with high degrees of serendipity (also colloquially known as the follow-your-nose pattern in Semantic Web circles).
Bengee posted a notice to the Linking Open Data Community's public mailing list announcing his effort. Bearing in mind the fact that we've been using middleware to mesh the realms of Web 2.0 and the Linked Data Web for a while, it was a no-brainer to knock something up based on the conceptual similarities between Wikicompany and CrunchBase. In a sense, a quadrant of orthogonality is what immediately came to mind re. Wikicompany, CrunchBase, Bengee's RDFization efforts, and ours.
Bengee created an RDF based Linked Data warehouse based on the data exposed by your API, which is exposed via the Semantic CrunchBase data space. In our case we've taken the "RDFization on the fly" approach, which produces a transient Linked Data View of the CrunchBase data exposed by your APIs. Our approach is in line with our world view: all resources on the Web are data sources, and the Linked Data Web is about incorporating HTTP into the naming scheme of these data sources so that the conventional URL based hyperlinking mechanism can be used to access a structured description of a resource, which is then transmitted using a range of negotiable representation formats. In addition, because we house and publish a lot of Linked Data on the Web (e.g., DBpedia, PingTheSemanticWeb, and others), we've also automatically meshed CrunchBase data with related data in DBpedia and Wikicompany.
CrunchBase: Do you know of any apps that are using CrunchBase Cartridge to enhance their functionality?
Me: Yes, the OpenLink Data Explorer which provides CrunchBase site visitors with the option to explore the Linked Data in the CrunchBase data space. It also allows them to "Mesh" (rather than "Mash") CrunchBase data with other Linked Data sources on the Web without writing a single line of code.
CrunchBase: You have been immersed in the Semantic Web movement for a while now. How did you first get interested in the Semantic Web?
Me: We saw the Semantic Web as a vehicle for standardizing conceptual views of heterogeneous data sources via context lenses (URIs). In 1998, as part of our strategy to expand our business beyond the development and deployment of ODBC, JDBC, and OLE-DB data providers, we decided to build a Virtual Database Engine (see: Virtuoso History), and in doing so we sought a standards-based mechanism for the conceptual output of the data virtualization effort. At the time of the seminal unveiling of the Semantic Web in 1998, we were clear about two things in relation to the effects of the Web and Internet data management infrastructure inflections: 1) existing DBMS technology had reached its limits; 2) Web Servers would ultimately hit their functional limits. These fundamental realities compelled us to develop Virtuoso with an eye to leveraging the Semantic Web as a vehicle for completing its technical roadmap.
CrunchBase: Can you put into layman’s terms exactly what RDF and SPARQL are and why they are important? Do they only matter for developers or will they extend past developers at some point and be used by website visitors as well?
Me: RDF (Resource Description Framework) is a Graph based Data Model that facilitates resource description using the Subject, Predicate, and Object principle. Associated with the core data model, as part of the overall framework, are a number of markup languages for expressing your descriptions (just as you express presentation markup semantics in HTML or document structure semantics in XML) that include: RDFa (simple extension of HTML markup for embedding descriptions of things in a page), N3 (a human friendly markup for describing resources), RDF/XML (a machine friendly markup for describing resources).
SPARQL is the query language associated with the RDF Data Model, just as SQL is a query language associated with the Relational Database Model. Thus, when you have RDF based structured and linked data on the Web, you can query against the Web using SPARQL just as you would against an Oracle/SQL Server/DB2/Informix/Ingres/MySQL/etc. DBMS using SQL. That's it in a nutshell.
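To ground the SQL analogy above, here is a minimal sketch (Python plus the SPARQLWrapper library) that runs a simple SPARQL SELECT against the public DBpedia endpoint; the specific endpoint, classes, and query shape are used purely as a familiar illustration rather than a prescription.
```python
# Minimal sketch: querying RDF-based Linked Data on the Web with SPARQL,
# much as you would query a relational DBMS with SQL.
# DBpedia's public endpoint is used here as a familiar example.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX foaf: <http://xmlns.com/foaf/0.1/>
    PREFIX dbo:  <http://dbpedia.org/ontology/>

    SELECT ?company ?name
    WHERE {
      ?company a dbo:Company ;
               foaf:name ?name .
    }
    LIMIT 10
""")
sparql.setReturnFormat(JSON)

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["name"]["value"], "->", row["company"]["value"])
```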
CrunchBase: On your website you wrote that “RDF and SPARQL as productivity boosters in everyday web development”. Can you elaborate on why you believe that to be true?
Me: I think the ability to discern a formal description of anything via its discrete properties is of immense value re. productivity, especially when the capability in question results in a graph of Linked Data that isn't confined to a specific host operating system, database engine, application or service, programming language, or development framework. RDF Linked Data is about infrastructure for the true materialization of the "Information at Your Fingertips" vision of yore. Even though it's taken the emergence of RDF Linked Data to make the aforementioned vision tractable, the comprehension of the vision's intrinsic value has been clear for a very long time. Most organizations and/or individuals are quite familiar with the adage: Knowledge is Power; well, there isn't any Knowledge without accessible Information, and there isn't any accessible Information without accessible Data. The Web has always been grounded in accessibility to data (albeit via compound container documents called Web Pages). Bottom line, RDF based Linked Data is about Open Data access by reference using URIs (HTTP based Entity IDs / Data Object IDs / Data Source Names), and as I said earlier, the intrinsic value is pretty obvious bearing in mind the costs associated with integrating disparate and heterogeneous data sources -- across intranets, extranets, and the Internet.
CrunchBase: In his definition of Web 3.0, Nova Spivack proposes that the Semantic Web, or Semantic Web technologies, will be the force behind much of the innovation that will occur during Web 3.0. Do you agree with Nova Spivack? What role, if any, do you feel the Semantic Web will play in Web 3.0?
Me: I agree with Nova. But I see Web 3.0 as a phase within the Semantic Web innovation continuum. Web 3.0 exists because Web 2.0 exists. Both of these Web versions express usage and technology focus patterns. Web 2.0 is about the use of Open Source technologies to fashion Web Services that are ultimately used to drive proprietary Software as Service (SaaS) style solutions. Web 3.0 is about the use of "Smart Data Access" to fashion a new generation of Linked Data aware Web Services and solutions that exploit the federated nature of the Web to maximum effect; proprietary branding will simply be conveyed via quality of data (cleanliness, context fidelity, and comprehension of privacy) exposed by URIs.
Here are some examples of the CrunchBase Linked Data Space, as projected via our CrunchBase Sponger Cartridge:
- Amazon.com
- Microsoft
- Google
- Apple
08/27/2008 18:16 GMT-0500 | Modified: 08/27/2008 20:35 GMT-0500
Comments about recent Semantic Gang Podcast
After listening to the latest Semantic Web Gang podcast, I found myself agreeing with some of the points made by Alex Iskold, specifically:
-- Business exploitation of Linked Data on the Web will certainly be driven by the correlation of opportunity costs (which is more than likely what Alex meant by "use cases") associated with the lack of URIs originating from the domain of a given business (Tom Heath also effectively alluded to this via his BBC and URI land-grab anecdotes; the same applies to Georgi's examples)
-- History is a great tutor; answers to many of today's problems lie somewhere in plain sight of the past.
Of course, I also believe that Linked Data serves Web Data Integration across the Internet very well too, and that it will benefit businesses in a big way. No individual or organization is an island; I think the Internet and Web have done a good job of demonstrating that thus far :-) We're all data nodes in a Giant Global Graph.
Daniel Lewis did shed light on the read-write aspects of the Linked Data Web, which is actually very close to the callout for a Wikipedia for Data. TimBL has been working on this via Tabulator (see the Tabulator Editing Screencast), Benjamin Nowack also added similar functionality to ARC, and of course we support the same SPARQL UPDATE into an RDF information resource via the RDF Sink feature of our WebDAV and ODS-Briefcase implementations.
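For readers curious what the read-write (SPARQL UPDATE) side of the Linked Data Web can look like at the protocol level, here is a hedged sketch that POSTs an INSERT DATA request over the SPARQL 1.1 Update protocol using Python's requests library; the endpoint URL and graph name are hypothetical placeholders, and real deployments (Virtuoso, ARC, Tabulator-backed stores, etc.) each add their own authentication and access-control requirements.
```python
# Hedged sketch: writing triples to a store over the SPARQL 1.1 Update protocol.
# The endpoint URL and graph URI are hypothetical; real endpoints typically
# also require authentication before accepting updates.
import requests

UPDATE_ENDPOINT = "http://example.org/sparql-update"   # hypothetical endpoint

update = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>

INSERT DATA {
  GRAPH <http://example.org/graph/people> {
    <http://example.org/id/kidehen> foaf:nick "kidehen" .
  }
}
"""

resp = requests.post(
    UPDATE_ENDPOINT,
    data=update.encode("utf-8"),
    headers={"Content-Type": "application/sparql-update"},
    timeout=30,
)
resp.raise_for_status()   # a 2xx status means the triples were accepted
```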
05/02/2008 21:44 GMT-0500 | Modified: 05/05/2008 20:06 GMT-0500
Linked Data Illustrated and a Virtuoso Functionality Reminder
Daniel Lewis has put together a nice collection of Linked Data related posts that illustrate the fundamentals of the Linked Data Web and the vital role that Virtuoso plays as a deployment platform.
Remember, Virtuoso was architected in 1998 (see Virtuoso History) in anticipation of the eventual Internet, Intranet, and Extranet level requirements for a different kind of Server. At the time of Virtuoso's inception, many thought our desire to build a multi-protocol, multi-model, and multi-purpose virtual and native data server was sheer craziness, but we pressed on (courtesy of our vision and technical capabilities).
Today, we have a very sophisticated Universal Server Platform (in Open Source and Commercial forms) that is naturally equipped to do the following via very simple interfaces:
- Provide highly scalable RDF Data Management via a Quad Store (DBpedia is an example of a live demonstration)
- Powerful WebDAV innovations that simplify read-write mode interaction with Linked Data
04/28/2008 17:32 GMT-0500 | Modified: 04/28/2008 14:47 GMT-0500