What is Linked Data, really?
[Kingsley Uyi Idehen]
Linked Data is simply hypermedia-based structured data.
Linked Data offers everyone a Web-scale, Enterprise-grade mechanism for platform-independent creation, curation, access, and integration of data.
The fundamental steps to creating Linked Data are as follows:
- Choose a Name Reference Mechanism — i.e., URIs.
- Choose a Data Model with which to Structure your Data — minimally, you need a model which clearly distinguishes:
  - Subjects (also known as Entities),
  - Subject Attributes (also known as Entity Attributes), and
  - Attribute Values (also known as Subject Attribute Values or Entity Attribute Values).
- Choose one or more Data Representation Syntaxes (also called Markup Languages or Data Formats) to use when creating Resources with Content based on your chosen Data Model. Some Syntaxes in common use today are HTML+RDFa, N3, Turtle, RDF/XML, TriX, XRDS, GData, OData, OpenGraph, and many others.
- Choose a URI Scheme that facilitates binding Referenced Names to the Resources which will carry your Content -- your Structured Data.
- Create Structured Data by using your chosen Name Reference Mechanism, your chosen Data Model, and your chosen Data Representation Syntax, as follows (see the sketch after this list):
  - Identify Subject(s) using Resolvable URI(s).
  - Identify Subject Attribute(s) using Resolvable URI(s).
  - Assign Attribute Values to Subject Attributes. These Values may be either Literals (e.g., STRINGs, BLOBs) or Resolvable URIs.
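As a concrete illustration of these steps (a minimal sketch, not part of the original post; the example.com namespace and the attribute choices are hypothetical), here is the triple-creation flow in Python's rdflib, serialized to Turtle:

```python
# Minimal sketch: Subject and Attributes named by resolvable URIs,
# Attribute Values as Literals or URIs, serialized to Turtle.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF

EX = Namespace("http://example.com/data/")  # hypothetical URI scheme/namespace

g = Graph()
subject = EX["kidehen#this"]  # Subject identified by a resolvable URI

g.add((subject, FOAF.name, Literal("Kingsley Uyi Idehen")))  # Literal value
g.add((subject, FOAF.homepage,
       URIRef("http://www.openlinksw.com/blog/~kidehen/")))  # URI value

print(g.serialize(format="turtle"))
```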
You can create Linked Data (hypermedia-based data representations) Resources from or for many things. Examples include: personal profiles, calendars, address books, blogs, photo albums; there are many, many more.
Related
- Linked Data: An Introduction -- simple introduction to Linked Data and its virtues
- How Data Makes Corporations Dumb -- Jeff Jonas (IBM) interview
- Hypermedia Types -- evolving information portal covering different aspects of Hypermedia resource types
- URIBurner -- service that generates Linked Data from a plethora of heterogeneous data sources
- Linked Data Meme -- TimBL design issues note about Linked Data
- Data 3.0 Manifesto -- note about format-agnostic Linked Data
- DBpedia -- large Linked Data Hub
- Linked Open Data Cloud -- collection of Linked Data Spaces
- Linked Open Commerce Cloud -- commerce (clicks & mortar and/or clicks & clicks) oriented Linked Data Space
- LOD Cloud Cache -- massive Linked Data Space hosting most of the LOD Cloud Datasets
- LOD2 Initiative -- EU co-funded project to develop a global knowledge space from LOD
10/14/2010 19:10 GMT | Modified: 11/09/2010 13:53 GMT
Data 3.0 (a Manifesto for Platform Agnostic Structured Data) Update 5
[Kingsley Uyi Idehen]
After a long period of trying to demystify and unravel the wonders of standards-compliant structured data access, combined with protocols (e.g., HTTP) that separate:
- Identity,
- Access,
- Storage,
- Representation, and
- Presentation,
I ended up with what I can best describe as the Data 3.0 Manifesto: a manifesto for standards-compliant access to structured data object (or entity) descriptors.
Some Related Work
Alex James (Program Manager, Entity Framework at Microsoft) put together something quite similar to this via his Base4 blog (around the Web 2.0 bootstrap time); sadly -- quoting Alex -- that post has gone where discontinued blogs and their host platforms go (deep, deep irony here).
It's also important to note that this manifesto is a variant of TimBL's Linked Data Design Issues meme re. Linked Data, but totally decoupled from RDF (the data representation formats aspect) and SPARQL, which -- in my world view -- remain implementation details.
Data 3.0 Manifesto
- An "Entity" is the "Referent" of an "Identifier."
- An "Identifier" SHOULD provide a global, unambiguous, and unchanging (though it MAY be opaque!) "Name" for its "Referent".
- A "Referent" MAY have many "Identifiers" (Names), but each "Identifier" MUST have only one "Referent".
- Structured Entity Descriptions SHOULD be based on the Entity-Attribute-Value (EAV) Data Model, and SHOULD therefore take the form of one or more 3-tuples (triples; see the sketch after this list), each comprised of:
- an "Identifier" that names an "Entity" (i.e., Entity Name),
- an "Identifier" that names an "Attribute" (i.e., Attribute Name), and
- an "Attribute Value", which may be an "Identifier" or a "Literal".
- Structured Descriptions SHOULD be CARRIED by "Descriptor Documents" (i.e., purpose-specific documents where Entity Identifiers, Attribute Identifiers, and Attribute Values are clearly discernible by the document's intended consumers, e.g., humans or machines).
- Structured Descriptor Documents can contain (carry) several Structured Entity Descriptions.
- Structured Descriptor Documents SHOULD be network accessible via network addresses (e.g., HTTP URLs when dealing with HTTP-based Networks).
- An Identifier SHOULD resolve (de-reference) to a Structured Representation of the Referent's Structured Description.
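Here is a minimal sketch of the manifesto's EAV shape in Python (my illustration, not part of the manifesto). The example.com identifiers are hypothetical placeholders; the point is that each triple's value slot holds either an Identifier or a Literal:

```python
# EAV 3-tuples (triples): (Entity Identifier, Attribute Identifier, Attribute Value),
# where the Value is either another Identifier or a Literal.
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Identifier:
    name: str  # a global, unambiguous Name, e.g., an HTTP URI

Literal = Union[str, int, float, bytes]
Value = Union[Identifier, Literal]
Triple = tuple[Identifier, Identifier, Value]

# A Structured Entity Description for one Referent (hypothetical identifiers)
entity = Identifier("http://example.com/entity/kidehen#this")
description: list[Triple] = [
    (entity, Identifier("http://example.com/attr/name"),
     "Kingsley Uyi Idehen"),                                  # Literal value
    (entity, Identifier("http://example.com/attr/employer"),
     Identifier("http://example.com/entity/openlink#this")),  # Identifier value
]

for e, a, v in description:
    print(e.name, a.name, v)
```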
04/16/2010 17:09 GMT | Modified: 05/25/2010 17:10 GMT
Take N: Yet Another OpenLink Data Spaces Introduction
[Kingsley Uyi Idehen]
Problem: Your Life, Profession, Web, and Internet do not need to become mutually exclusive due to "information overload".
Solution: A platform or service that delivers a point of online presence that embodies the fundamental separation of Identity, Data Access, Data Representation, and Data Presentation, by adhering to Web and Internet protocols.
How: A typical post-installation (Local or Cloud) task sequence (see the sketch after this list):
- Identify myself (happens automatically by way of registration)
- If in an LDAP environment, import accounts or associate the system with LDAP for account lookup and authentication
- Identify Online Accounts (by fleshing out my profile), which also connects the system to online accounts and their data
- Use my Profile for granular description (Biography, Interests, WishList, OfferList, etc.)
- Optionally upstream or downstream data to and from my online accounts
- Create content Tagging Rules
- Create rules for associating Tags with formal URIs
- Create automatic Hyperlinking Rules for reuse when new content is created (e.g., Blog posts)
- Exploit the Data Portability virtues of RSS, Atom, OPML, RDFa, RDF/XML, and other formats for imports and exports
- Automatically tag imported content
- Use function-specific helper application UIs for domain-specific data generation, e.g., AddressBook (optionally use vCard import), Calendar (optionally use iCalendar import), Email, File Storage (use WebDAV mount with copy and paste or HTTP GET), Feed Subscriptions (optionally import RSS/Atom/OPML feeds), Bookmarking (optionally import bookmark.html or XBEL), etc.
- Optionally enable the "Conversation" feature (today: Social Media feature) across the relevant application domains (manage conversations under the covers using NNTP, the standard for this functionality realm)
- Generate HTTP-based Entity IDs (URIs) for every piece of data in this burgeoning data space
- Use REST-based APIs to perform CRUD tasks against my data, local and remote (SPARQL, GData, Ubiquity Commands, Atom Publishing)
- Use OpenID, OAuth, FOAF+SSL, or FOAF+SSL+OpenID for accessing data elsewhere
- Use OpenID, OAuth, FOAF+SSL, or FOAF+SSL+OpenID for controlling access to my data (Self-Signed Certificate Generation, Browser Import of said Certificate & associated Private Key, plus persistence of the Certificate to a FOAF-based profile data space in "one click")
- Have a simple UI for arbitrary Entity-Attribute-Value or Subject-Predicate-Object data annotation and creation, since you can't pre-model an "Open World" where the only constant is data flow
- Have my Personal URI (Web ID) as the single entry point for controlled access to my HTTP-accessible data space
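As a sketch of that last point: once everything in the data space has an HTTP URI, a client can fetch a structured representation of it with ordinary content negotiation. This assumes Python with the requests and rdflib libraries, and a hypothetical Web ID:

```python
# Minimal sketch: dereference a personal URI (Web ID) and parse the
# structured description returned via HTTP content negotiation.
# The URI below is a hypothetical placeholder.
import requests
from rdflib import Graph

web_id = "http://example.com/dataspace/person/kidehen#this"

# Ask for Turtle rather than the default HTML representation
resp = requests.get(web_id, headers={"Accept": "text/turtle"}, timeout=30)
resp.raise_for_status()

g = Graph()
g.parse(data=resp.text, format="turtle")
print(f"{len(g)} triples describing {web_id}")
```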
I've just outlined a snippet of the capabilities of the OpenLink Data Spaces platform: a platform built using OpenLink Virtuoso, architected to deliver open, platform-independent, multi-model data access and data management across heterogeneous data sources.
All you need to remember is your URI when seeking to interact with your data space.
Related
- Get Yourself a URI (Web ID) in 5 Minutes or Less!
- Various posts over the years about Data Spaces
- Future of Desktop Post
- Simplify My Life Post by Bengee Nowack
04/22/2009 14:46 GMT | Modified: 04/22/2009 15:32 GMT
Reminder: Why We Need Linked Data!
[Kingsley Uyi Idehen]
"The phrase Open Social implies portability of personal and social data. That would be exciting but there are entirely different protocols underway to deal with those ideas. As some people have told me tonight, it may have been more accurate to call this "OpenWidget" - though the press wouldn't have been as good. We've been waiting for data and identity portability - is this all we get?"
[Source: Read/Write Web's Commentary & Analysis of Google's OpenSocial API]
...Perhaps the world will read the terms of use of the API, and realize this is not an open API; this is a free API, owned and controlled by one company only: Google. Hopefully, the world will remember another time when Google offered a free API and then pulled it. Maybe the world will also take a deeper look and realize that the functionality is dependent on Google hosted technology, which has its own terms of service (including adding ads at the discretion of Google), and that building an OpenSocial application ties Google into your application, and Google into every social networking site that buys into the Dream. Hopefully the world will remember. Unlikely, though, as such memories are typically filtered in the Great Noise.... [Source: Poignant commentary excerpt from Shelley Powers' blog (as always)]
The "Semantic Data Web" vision has always been about "Data & Identity" portability across the Web. Its been that and more from day one.
In a nutshell, we continue to exhibit varying degrees of Cognitive Dissonance re the following realities:
- The Network is the Computer (Internet/Intranet/Extranet, depending on your TCP/IP usage scenarios)
- The Web is the OS (ditto), and it provides a communications subsystem (Information BUS) comprised of:
  - URIs (a pointer system for identifying, accessing, and manipulating data)
  - HTTP-based interprocess communication (i.e., Web Apps are processes when you discard the HTML UI and interact with the application logic containers called "Web Services" behind the pages) that ultimately hits data
- Web Data is best modeled as a Graph (RDF; Containers/Items/Item Types; Property & Value Pairs associated with something; and other labels)
- Networks are Graphs, and vice versa (see the sketch after this list)
- Social Networks are graphs where nodes are connected via social connectors ( [x]--knows-->[y] )
- The Web is a Graph that exposes a People and Data Network (to the degree we regard humans as more than data containers, i.e., more than just nodes in a network; otherwise we are talking about a Data Network)
- Data access and manipulation depend inherently on canonical Data Access mechanisms such as Data Source Identifiers / Names (a time-tested practice in various DBMS realms)
- Data is forever; it is the basis of Information, and it is increasing exponentially due to the proliferation of Web Services-induced user activities (User Generated Content)
- Survival, Vitality, Longevity, Efficiency, Productivity, etc., all depend on our ability to process data effectively in a shrinking time continuum, where data and/or information overload is the alternative.
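A tiny sketch of the graph points above, using Python's rdflib (my illustration; the people and namespace are hypothetical). Each [x]--knows-->[y] connector is just a triple, and the social network is the resulting graph:

```python
# Social connectors as triples: [x]--knows-->[y]
from rdflib import Graph, Namespace
from rdflib.namespace import FOAF

EX = Namespace("http://example.com/people/")  # hypothetical people namespace
g = Graph()

g.add((EX.alice, FOAF.knows, EX.bob))
g.add((EX.bob, FOAF.knows, EX.carol))
g.add((EX.alice, FOAF.knows, EX.carol))

# The "social network" is simply the graph of knows-edges
for x, y in g.subject_objects(FOAF.knows):
    print(f"{x} --knows--> {y}")
```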
The Data Web is about Presence over Eyeballs due to the following realities:
- Eyeballs are input devices for a DNA-based processing system (Humans). The aforementioned processing system can reason very well, but simply cannot effectively process masses of data or information
- Widgets offer little long-term value re. the imminent data and information overload dilemma; ditto Web pages (however pretty) and any other Eyeballs-only-centric Web Apps
- Computers (machines) are equipped with inorganic (non-DNA-based) processing power; they are equipped to process huge volumes of data and/or information, but they cannot reason
- To be effective in the emerging frontier comprised of a Network Computer and a Web OS, we need an effective mechanism that makes best use of the capabilities possessed by humans and machines, by shifting the focus to creation of, and interaction with, points of "Data Web Presence" that openly expose "Structured Linked Data".
This is why we need to inject a mesh of Linked Data into the existing Web. This is what the often misunderstood vision of the "Semantic Data Web" or "Web of Data" or "Web of Structured Data" is all about.
As stated earlier ("Data is forever" above), there is only more of it to come! Sociality and associated Social Networking-oriented solutions are at best a speck in the Web's ocean of data once you comprehend this reality.
Note: I am writing this post as an early implementor of GData, an implementor of RDF Linked Data technology, and a "Web Purist".
OpenSocial implementation and support across our relevant product families -- Virtuoso (i.e., the Sponger Middleware for RDF component), OpenLink Data Spaces (Data Space Controller / Services), and the OpenLink Ajax Toolkit (i.e., OAT Widgets and Libraries) -- is a triviality now that the OpenSocial APIs are public.
The concern I have, and the problem that remains mangled in the vast realms of Web Architecture incomprehension, is the fact that GData and GData-based APIs cannot deliver Structured Linked Data in line with the essence of the Web without introducing "lock-in" that ultimately compromises the "Open Purity" of the Web. Facebook, and Google's OpenSocial response to the Facebook juggernaut (i.e., an open variant of the Facebook Activity Dashboard and Social Network functionality realms, primarily), are at best icebergs in the ocean we know as the "World Wide Web". The nice and predictable thing about icebergs is that they ultimately melt into the larger ocean :-)
On a related note, I had the pleasure of attending the W3C's RDF and DBMS Integration Workshop last week. The event was well attended by organizations with knowledge, experience, and a vested interest in addressing the issues associated with exposing non-RDF data (e.g., SQL) as RDF, and the imminence of data and/or information overload was covered in different ways via the following presentations:
11/02/2007 18:50 GMT | Modified: 11/02/2007 18:52 GMT
Fourth Platform: Data Spaces in The Cloud (Update)
[Kingsley Uyi Idehen]
I've written extensively on the subject of Data Spaces in relation to the Data Web for a while. I've also written sparingly about OpenLink Data Spaces (a Data Web Platform built using Virtuoso). On the other hand, I haven't shed much light on installation and deployment of OpenLink Data Spaces. Jon Udell recently penned a post titled: The Fourth Platform. The post arrives at a spookily coincidental time (this happens quite often between Jon and me, as demonstrated last year during our podcast, the "Fourth" in his Innovators Podcast series). The platform that Jon describes is "Cloud Based" and comprised of Storage and Computation. I would like to add Data Access and Management (native and virtual) under the fourth platform banner, with the end product called: "Cloud based Data Spaces".
As I write, we are releasing a Virtuoso AMI (Amazon Image) labeled: virtuoso-dataspace-server. This edition of Virtuoso includes the OpenLink Data Spaces layer and all of the OAT applications we've been developing for a while.
What benefits does this offer?
- Personal Data Spaces in the Cloud - a place where you can control and consolidate data across your Blogs, Wikis, RSS/Atom Feed Subscriptions, Shared Bookmarks, Shared Calendars, Discussion Threads, Photo Galleries, etc.
- All the data in your Data Space is SPARQL- or GData-accessible
- All of the data in your Personal Data Space is Linked Data from the get-go; each item of data is URI-addressable
- SIOC support - your Blogs, Wikis, Bookmarks, etc., are based on the SIOC ontology for Semantically Interlinking Online Communities (think: Open social-graph++)
- FOAF support - your FOAF Profile page provides a URI that is an in-road to all data in your Data Space
- OpenID support - your Personal Data Space ID is usable wherever OpenID is supported; OpenID and FOAF are integrated as per the latest FOAF specs
- Two-way Integration with Facebook - you can access your Data Space from Facebook, or access Facebook from your Data Space
- Unified Storage - the WebDAV-based filesystem provides Cloud Storage that's integrated with Amazon S3; it also exposes all of your Data Space data via a traditional filesystem UI (think virtual Spotlight); you can also mount this drive to your local filesystem via your native operating system's WebDAV support
- SyncML - you can sync calendar and contact details with your Data Space in the cloud from your mobile phone
- A practical Semantic Data Web solution - based on Web Infrastructure, and doesn't require you to do anything beyond exposing URIs for data in your Data Spaces
EC2-AMI Details:
Manifest file: virtuoso-images/virtuoso-dataspace-server.manifest.xml
Installation Guide (see the query sketch after these steps):
- Get an Amazon Web Services (AWS) account
- Sign up for the S3 and EC2 services
- Install the EC2 plugin for Firefox
- Start the EC2 plugin
- Locate the row containing ami-7c31d515 Manifest virtuoso-test/virtuoso-cloud-beta-9-i386.manifest.xml (sort using the AMI ID or Manifest columns, or search on the pattern: virtuoso, due to name flux)
- Start the Virtuoso Data Space Server AMI
- Wait 4-5 minutes (it takes a few minutes to create the pre-configured Linux image)
- Connect to http://your-ec2-instance-cname:8890/ and log in with user/password dba/dba
- Go to the Admin UI (Virtuoso Conductor) and change the passwords for the 'dba' and 'dav' accounts (important!)
- Give the "SPARQL" user "SPARQL_UPDATE" privileges (required if you want to exploit the in-built Sponger Middleware)
- Click on the ODS (OpenLink Data Spaces) link to start a Personal Edition of OpenLink Data Spaces (or go to: http://your-ec2-instance-cname/dataspace/ods/index.html)
- Log in using the username and password credentials for the 'dav' account (or register a new user; note: OpenID is an option here also)
- Create a Data Space Application Instance by clicking on a Data Space App tab
- Import data from your existing Web 2.0-style applications into OpenLink Data Spaces, e.g., subscribe to a few RSS/Atom feeds via the "Feeds Manager" application, or import some Bookmarks using the "Bookmarks" application
- Then look at the imported data in Linked Data form via your ODS-generated URIs, based on the patterns: http://your-ec2-instance-cname/dataspace/person/your-ods-id#this (URI for You, the Person), http://your-ec2-instance-cname/dataspace/person/your-ods-id (FOAF File URI), http://your-ec2-instance-cname/dataspace/your-ods-id (SIOC File URI)
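As a sketch of what SPARQL access to the instance looks like, here is a query sent over the SPARQL Protocol (a minimal illustration, assuming the default Virtuoso endpoint path /sparql and the placeholder hostname from the steps above):

```python
# Minimal sketch: query the Data Space's SPARQL endpoint over HTTP
# (SPARQL Protocol). The hostname is the same hypothetical placeholder
# used in the installation steps.
import requests

endpoint = "http://your-ec2-instance-cname:8890/sparql"
query = "SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10"

resp = requests.get(
    endpoint,
    params={"query": query},
    headers={"Accept": "application/sparql-results+json"},
    timeout=30,
)
resp.raise_for_status()

for row in resp.json()["results"]["bindings"]:
    print(row["s"]["value"], row["p"]["value"], row["o"]["value"])
```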
To use the OpenLink Ajax Toolkit (OAT) from your Data Space instance, install the OAT VAD package via the Admin UI and then apply the URI patterns below within your browser:
- http://your-ec2-instance-cname:8890/oatdemo - Entire OAT Demo Collection
- http://your-ec2-instance-cname:8890/rdfbrowser - RDF Browser
- http://your-ec2-instance-cname:8890/isparql - SPARQL Query Builder (iSPARQL)
- http://your-ec2-instance-cname:8890/qbe - SQL Query Builder (iSQL)
- http://your-ec2-instance-cname:8890/formdesigner - Forms Builder (for building Meshups based on RDF, SQL, or Web Services Data Sources)
- http://your-ec2-instance-cname:8890/dbdesigner - SQL DB Schema Designer (note: a Visual SQL-RDF Mapper is also on its way)
- http://your-ec2-instance-cname:8890/DAV/JS/ - To view the OAT Tree (there are some experimental demos that are missing from the main demo app, etc.)
There's more to come!
09/22/2007 19:43 GMT | Modified: 10/26/2008 17:59 GMT
Semantic Web Data Spaces
[Kingsley Uyi Idehen]
Web Data Spaces
Now that broader understanding of the Semantic Data Web is emerging, I would like to revisit the issue of "Data Spaces".
A Data Space is a place where Data resides. It isn't inherently bound to a specific Data Model (Concept-Oriented, Relational, Hierarchical, etc.). Neither is it implicitly an access point to Data, Information, or Knowledge (the perception is purely determined through the experiences of the user agents interacting with the Data Space).
A Web Data Space is a Web-accessible Data Space.
Real-world example:
Today we increasingly perform one or more of the following tasks as part of our professional and personal interactions on the Web:
- Blog via many service providers or personally managed weblog platforms
- Create Event Calendars via Upcoming.com and Eventful
- Maintain and participate in Social Networks (e.g., Facebook, Orkut, MySpace)
- Create and participate in Discussions (note: when you comment on blogs or wikis, for instance, you are participating in, or creating, a conversation)
- Track news by subscribing to RSS 1.0, RSS 2.0, or Atom feeds
- Share Bookmarks & Tags via Del.icio.us and other services
- Share Photos via Flickr
- Buy, review, or search for books via Amazon
- Participate in auctions via eBay
- Search for data via Google (of course!)
John Breslin has a nice animation depicting the creation of Web Data Spaces that drives home the point.
Web Data Space Silos
Unfortunately, what isn't as obvious to many netizens is the fact that each of the activities above results in the creation of data that is put into some context by you, the user. Even worse, you eventually realize that the service providers aren't particularly willing, or capable of, giving you unfettered access to your own data. Of course, this isn't always by design, as the infrastructure behind the service can make this a nightmare from security and/or load balancing perspectives. Irrespective of cause, we end up creating our own "Data Spaces" all over the Web without a coherent mechanism for accessing and meshing these "Data Spaces".
What are Semantic Web Data Spaces?
Data Spaces on the Web that provide granular access to RDF Data.
What's OpenLink Data Spaces (ODS) About?
Short History
In anticipation of the "Web Data Silo" challenge (an issue that we had tackled within internal enterprise networks for years), we commenced development (circa 2001) of a distributed collaborative application suite called OpenLink Data Spaces (ODS). The project was never released to the public, since the problems associated with the deliberate or inadvertent creation of Web Data silos hadn't really materialized (silos only emerged in concrete form after the emergence of the Blogosphere and Web 2.0). In addition, there wasn't a clear standard Query Language for the RDF-based Web Data Model (i.e., the SPARQL Query Language didn't exist).
Today, ODS is delivered as a packaged solution (in Open Source and Commercial flavors) that alleviates the pain associated with Data Space Silos that exist on the Web and/or behind corporate firewalls. In either scenario, ODS simply allows you to create Open and Secure Data Spaces (via its suite of applications) that expose data via SQL-, RDF-, and XML-oriented data access and data management technologies. Of course, it also enables you to integrate transparently with existing 3rd-party data space generators (Blog, Wiki, Shared Bookmark, Discussion, etc., services) by supporting industry standards that cover (see the feed sketch after this list):
- Content Publishing - Atom, Moveable Type, MetaWeblog, and Blogger protocols
- Content Syndication Formats - RSS 1.0, RSS 2.0, Atom, OPML, etc.
- Data Management - SQL, RDF, XML, Free Text
- Data Access - SQL, SPARQL, GData, Web Services (SOAP or REST styles), WebDAV/HTTP
- Semantic Data Web Middleware - GRDDL, XSLT, SPARQL, XPath/XQuery, HTTP (Content Negotiation) for producing RDF from non-RDF Data ((X)HTML, Microformats, XML, Web Services response data, etc.)
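As a small illustration of the syndication side of this list (my sketch, not ODS code; the feed URL is a hypothetical placeholder), a client can consume RSS and Atom uniformly with Python's feedparser library:

```python
# Minimal sketch: consume an RSS/Atom feed the way a data-space importer would.
# feedparser normalizes RSS 1.0/2.0 and Atom into one entry model.
import feedparser

feed = feedparser.parse("http://example.com/blog/feed.xml")  # hypothetical feed URL

print(feed.feed.get("title", "(untitled feed)"))
for entry in feed.entries[:5]:
    # Each entry carries a link (a URI) and a title, regardless of source format
    print("-", entry.get("title"), "=>", entry.get("link"))
```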
Thus, by installing ODS on your Desktop, Workgroup, Enterprise, or public Web Server, you end up with a very powerful solution for creating an Open Data access-oriented presence on the "Semantic Data Web", without incurring any of the typically assumed "RDF Tax".
Naturally, ODS is built atop Virtuoso and of course it exploits Virtuoso's feature-set to the max. It's also beginning to exploit functionality offered by the OpenLink Ajax Toolkit (OAT).
04/13/2007 21:15 GMT | Modified: 04/13/2007 18:19 GMT
Web 3.0: When Web Sites Become Web Services
[Kingsley Uyi Idehen]
(Via Read/Write Web.)
Web 3.0: When Web Sites Become Web Services: "
...
Conclusion
As more and more of the Web is becoming remixable, the entire system is turning into both a platform and the database. Yet, such transformations are never smooth. For one, scalability is a big issue. And of course legal aspects are never simple.
But it is not a question of if web sites become web services, but when and how. APIs are a more controlled, cleaner and altogether preferred way of becoming a web service. However, when APIs are not available or sufficient, scraping is bound to continue and expand. As always, time will be best judge; but in the meanwhile we turn to you for feedback and stories about how your businesses are preparing for 'web 3.0'."
We are hitting a little problem re. Web 3.0 and Web 2.0, naturally :-)
Web 2.0 is one of several (present and future) Dimensions of Web Interaction that turn Web Sites into Web Services Endpoints; a point I've made repeatedly [1] [2] [3] [4] across the blogosphere, in addition to my early, futile attempts to make Wikipedia's Web 2.0 article meaningful (circa 2005), as per the Wikipedia Web 2.0 Talk Page excerpt below:
Web 2.0 is a web of executable endpoints and well formed content. The executable endpoints and well formed content are accessible via URIs. Put differently, Web 2.0 is a web defined by URIs for invoking Web Services and/or consuming or syndicating well formed content.
Hopefully, someone with more time on their hands will expand on this (I am kinda busy).
BTW - Web 2.0 being a platform doesn't distinguish it in any way from Web 1.0. They are both platforms; the difference comes down to platform focus and mode of experience.
Web 3.0 is about Data Spaces: Points of Semantic Web Presence that provide granular access to Data, Information, and Knowledge via Conceptual Data Model oriented Query Languages and/or APIs.
The common denominator across all the current and future Web Interaction Dimensions is HTTP, while their differences are as follows:
- Web 1.0 - Browser (HTTP + (X)HTML)
- Web 2.0 - Presence (Web Service Endpoints for REST or SOAP over HTTP)
- Web 3.0 - Presence (Query Languages, Data Models, and HTTP-based Query-Oriented Web Service Endpoints)
Examples of Web 3.0 Infrastructure:
- Query Languages: SPARQL, Googlebase Query Language, Facebook Query Language (FQL), and many others to come
- Query Language aligned Web Services (Query Services): SPARQL Protocol, GData, or REST style Web services such as Facebook's service for FQL.
- Data Models: Concrete Conceptual Data Model (which RDF happens to deliver for Web Data)
Web 3.0 is not purely about Web Sites becoming Web Services endpoints. It is about the "M" (Data Model) taking its place in the MVC pattern as applied to the Web Platform.
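To ground that point, here is a minimal sketch (my illustration, with hypothetical URIs) of the "M" being addressed directly: a SPARQL query running against a small RDF graph via Python's rdflib:

```python
# Minimal sketch: a Query Language (SPARQL) addressing a Data Model (RDF graph)
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import FOAF

EX = Namespace("http://example.com/people/")  # hypothetical namespace
g = Graph()
g.add((EX.alice, FOAF.name, Literal("Alice")))
g.add((EX.alice, FOAF.knows, EX.bob))
g.add((EX.bob, FOAF.name, Literal("Bob")))

# Query the model directly -- no screen scraping of a rendered page
results = g.query("""
    PREFIX foaf: <http://xmlns.com/foaf/0.1/>
    SELECT ?name WHERE { ?x foaf:knows ?y . ?y foaf:name ?name }
""")
for row in results:
    print(row.name)
```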
I will repeat myself yet again:
The Devil is in the Details of the Data Model. Data Models make or break everything. You ignore data at your own peril. No amount of money in the bank will protect you from Data Ignorance! A bad Data Model will bring down any venture or enterprise; the only variable is time (where time is directly related to your increasing need to obtain, analyze, and then act on data, over repetitive operational cycles that have ever-decreasing intervals).
This applies to the Real-time enterprise of Information and/or knowledge workers and Real-time Web Users alike.
BTW - Data Makes Shifts Happen (spotter: Sam Sethi).
03/19/2007 21:44 GMT | Modified: 03/20/2007 08:27 GMT