What is Linked Data, really? [ Kingsley Uyi Idehen ]

Linked Data is simply hypermedia-based structured data.

Linked Data offers everyone a Web-scale, Enterprise-grade mechanism for platform-independent creation, curation, access, and integration of data.

The fundamental steps to creating Linked Data are as follows (a minimal worked sketch follows the list):

  1. Choose a Name Reference Mechanism — i.e., URIs.

  2. Choose a Data Model with which to Structure your Data — minimally, you need a model which clearly distinguishes

    1. Subjects (also known as Entities)
    2. Subject Attributes (also known as Entity Attributes), and
    3. Attribute Values (also known as Subject Attribute Values or Entity Attribute Values).
  3. Choose one or more Data Representation Syntaxes (also called Markup Languages or Data Formats) to use when creating Resources with Content based on your chosen Data Model. Some Syntaxes in common use today are HTML+RDFa, N3, Turtle, RDF/XML, TriX, XRDS, GData, OData, OpenGraph, and many others.

  4. Choose a URI Scheme that facilitates binding Referenced Names to the Resources which will carry your Content -- your Structured Data.

  5. Create Structured Data by using your chosen Name Reference Mechanism, your chosen Data Model, and your chosen Data Representation Syntax, as follows:

    1. Identify Subject(s) using Resolvable URI(s).
    2. Identify Subject Attribute(s) using Resolvable URI(s).
    3. Assign Attribute Values to Subject Attributes. These Values may be either Literals (e.g., STRINGs, BLOBs) or Resolvable URIs.
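
As a minimal worked sketch of steps 1 through 5, here is the creation of a small piece of Linked Data using Python with the rdflib library (my toolkit choice purely for illustration; the post prescribes none, and the example.com URIs are hypothetical):

    # Steps 1 & 4: URIs as the Name Reference Mechanism, under an http: URI scheme.
    # Step 2: the graph (Subject / Attribute / Value) data model.
    # Step 3: Turtle as the chosen Data Representation Syntax.
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import FOAF, RDF

    EX = Namespace("http://example.com/id/")

    g = Graph()
    me = URIRef(EX["kidehen"])                       # Step 5.1: Subject named by a resolvable URI

    g.add((me, RDF.type, FOAF.Person))               # Step 5.2: Attributes named by resolvable URIs
    g.add((me, FOAF.name, Literal("Kingsley Uyi Idehen")))            # Step 5.3: Literal Value
    g.add((me, FOAF.homepage, URIRef("http://www.openlinksw.com/")))  # Step 5.3: URI Value

    print(g.serialize(format="turtle"))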

You can create Linked Data (hypermedia-based data representations) Resources from or for many things. Examples include: personal profiles, calendars, address books, blogs, photo albums; there are many, many more.

Related

  1. Linked Data an Introduction -- simple introduction to Linked Data and its virtues
  2. How Data Makes Corporations Dumb -- Jeff Jonas (IBM) interview
  3. Hypermedia Types -- evolving information portal covering different aspects of Hypermedia resource types
  4. URIBurner -- service that generates Linked Data from a plethora of heterogeneous data sources
  5. Linked Data Meme -- TimBL design issues note about Linked Data
  6. Data 3.0 Manifesto -- note about format agnostic Linked Data
  7. DBpedia -- large Linked Data Hub
  8. Linked Open Data Cloud -- collection of Linked Data Spaces
  9. Linked Open Commerce Cloud -- commerce (clicks & mortar and/or clicks & clicks) oriented Linked Data Space
  10. LOD Cloud Cache -- massive Linked Data Space hosting most of the LOD Cloud Datasets
  11. LOD2 Initiative -- EU Co-Funded Project to develop global knowledge space from LOD
# PermaLink Comments [0]
10/14/2010 19:10 GMT Modified: 11/09/2010 13:53 GMT
Data 3.0 (a Manifesto for Platform Agnostic Structured Data) Update 5 [ Kingsley Uyi Idehen ]

After a long period of trying to demystify and unravel the wonders of standards compliant structured data access, combined with protocols (e.g., HTTP) that separate:

  1. Identity,
  2. Access,
  3. Storage,
  4. Representation, and
  5. Presentation.

I ended up with what I can best describe as the Data 3.0 Manifesto: a manifesto for standards compliant access to structured data object (or entity) descriptors.

Some Related Work

Alex James (Program Manager for the Entity Framework at Microsoft) put together something quite similar to this via his Base4 blog (around the Web 2.0 bootstrap time); sadly -- quoting Alex -- that post has gone where discontinued blogs and their host platforms go (deep, deep irony here).

It's also important to note that this manifesto is a variant of TimBL's Linked Data Design Issues meme re. Linked Data, but totally decoupled from RDF (the data representation formats aspect) and SPARQL, which -- in my world view -- remain implementation details.

Data 3.0 manifesto

  • An "Entity" is the "Referent" of an "Identifier."
  • An "Identifier" SHOULD provide a global, unambiguous, and unchanging (though it MAY be opaque!) "Name" for its "Referent".
  • A "Referent" MAY have many "Identifiers" (Names), but each "Identifier" MUST have only one "Referent".
  • Structured Entity Descriptions SHOULD be based on the Entity-Attribute-Value (EAV) Data Model, and SHOULD therefore take the form of one or more 3-tuples (triples) -- see the sketch after this list -- each comprised of:
    • an "Identifier" that names an "Entity" (i.e., Entity Name),
    • an "Identifier" that names an "Attribute" (i.e., Attribute Name), and
    • an "Attribute Value", which may be an "Identifier" or a "Literal".
  • Structured Descriptions SHOULD be CARRIED by "Descriptor Documents" (i.e., purpose-specific documents where Entity Identifiers, Attribute Identifiers, and Attribute Values are clearly discernible by the document's intended consumers, e.g., humans or machines).
  • Structured Descriptor Documents can contain (carry) several Structured Entity Descriptions.
  • Structured Descriptor Documents SHOULD be network accessible via network addresses (e.g., HTTP URLs when dealing with HTTP-based Networks).
  • An Identifier SHOULD resolve (de-reference) to a Structured Representation of the Referent's Structured Description.
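
To make the triple structure concrete, here is a minimal sketch in Python (my choice purely for illustration; the manifesto prescribes no language, and the example.com Identifiers below are hypothetical):

    # One Structured Entity Description as a list of 3-tuples:
    # (Entity Name, Attribute Name, Attribute Value), where a value
    # may be another Identifier or a Literal.
    from dataclasses import dataclass
    from typing import Union

    @dataclass(frozen=True)
    class Identifier:
        """A global, unambiguous Name for a Referent (possibly opaque)."""
        name: str

    @dataclass(frozen=True)
    class Literal:
        """A literal Attribute Value, e.g. a string."""
        value: str

    Triple = tuple[Identifier, Identifier, Union[Identifier, Literal]]

    description: list[Triple] = [
        (Identifier("http://example.com/id/alice"),       # Entity Name
         Identifier("http://example.com/attr/name"),      # Attribute Name
         Literal("Alice")),                               # Attribute Value (Literal)
        (Identifier("http://example.com/id/alice"),
         Identifier("http://example.com/attr/employer"),
         Identifier("http://example.com/id/acme")),       # Attribute Value (Identifier)
    ]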

# PermaLink Comments [6]
04/16/2010 17:09 GMT Modified: 05/25/2010 17:10 GMT
Take N: Yet Another OpenLink Data Spaces Introduction [ Kingsley Uyi Idehen ]

Problem:

Your Life, Profession, Web, and Internet do not need to become mutually exclusive due to "information overload".

Solution:

A platform or service that delivers a point of online presence that embodies the fundamental separation of Identity, Data Access, Data Representation, and Data Presentation, by adhering to Web and Internet protocols.

How:

Typical post installation (Local or Cloud) task sequence:

  1. Identify myself (happens automatically by way of registration)
  2. If in an LDAP environment, import accounts or associate system with LDAP for account lookup and authentication
  3. Identify Online Accounts (by fleshing out profile) which also connects system to online accounts and their data
  4. Use Profile for granular description (Biography, Interests, WishList, OfferList, etc.)
  5. Optionally upstream or downstream data to and from my online accounts
  6. Create content Tagging Rules
  7. Create rules for associating Tags with formal URIs
  8. Create automatic Hyperlinking Rules for reuse when new content is created (e.g. Blog posts)
  9. Exploit Data Portability virtues of RSS, Atom, OPML, RDFa, RDF/XML, and other formats for imports and exports
  10. Automatically tag imported content
  11. Use function-specific helper application UIs for domain-specific data generation, e.g. AddressBook (optionally use vCard import), Calendar (optionally use iCalendar import), Email, File Storage (use WebDAV mount with copy and paste or HTTP GET), Feed Subscriptions (optionally import RSS/Atom/OPML feeds), Bookmarking (optionally import bookmark.html or XBEL), etc.
  12. Optionally enable the "Conversation" feature (today: Social Media feature) across the relevant application domains (manage conversations under the covers using NNTP, the standard for this functionality realm)
  13. Generate HTTP based Entity IDs (URIs) for every piece of data in this burgeoning data space
  14. Use REST based APIs to perform CRUD tasks against my data, local and remote (SPARQL, GData, Ubiquity Commands, Atom Publishing) -- see the sketch after this list
  15. Use OpenID, OAuth, FOAF+SSL, FOAF+SSL+OpenID for accessing data elsewhere
  16. Use OpenID, OAuth, FOAF+SSL, FOAF+SSL+OpenID for controlling access to my data (Self-Signed Certificate Generation, Browser Import of said Certificate & associated Private Key, plus persistence of the Certificate to a FOAF based profile data space, in "one click")
  17. Have a simple UI for arbitrary Entity-Attribute-Value or Subject-Predicate-Object data annotation and creation, since you can't pre-model an "Open World" where the only constant is data flow
  18. Have my Personal URI (Web ID) as the single entry point for controlled access to my HTTP accessible data space
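
As a sketch of item 14, here is a CRUD-style read and write via the standard SPARQL 1.1 Protocol, using Python's requests library (the endpoint URL and credentials are hypothetical placeholders for an ODS instance):

    import requests

    ENDPOINT = "http://example.com/sparql"   # placeholder for your data space's endpoint

    # Read: a SPARQL SELECT over the data space
    r = requests.get(
        ENDPOINT,
        params={"query": "SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10"},
        headers={"Accept": "application/sparql-results+json"},
    )
    print(r.json()["results"]["bindings"])

    # Write: a SPARQL Update INSERT (requires update privileges on the endpoint)
    requests.post(
        ENDPOINT,
        data='INSERT DATA { <http://example.com/id/me> '
             '<http://xmlns.com/foaf/0.1/nick> "kidehen" }',
        headers={"Content-Type": "application/sparql-update"},
        auth=("user", "password"),           # placeholder credentials
    )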

I've just outlined a snippet of the capabilities of the OpenLink Data Spaces platform -- a platform built using OpenLink Virtuoso, architected to deliver open, platform-independent, multi-model data access and data management across heterogeneous data sources.

All you need to remember is your URI when seeking to interact with your data space.

Related

  1. Get Yourself a URI (Web ID) in 5 Minutes or Less!
  2. Various posts over the years about Data Spaces
  3. Future of Desktop Post
  4. Simplify My Life Post by Bengee Nowack
# PermaLink Comments [0]
04/22/2009 14:46 GMT Modified: 04/22/2009 15:32 GMT
Reminder: Why We Need Linked Data! [ Kingsley Uyi Idehen ]

"The phrase Open Social implies portability of personal and social data. That would be exciting but there are entirely different protocols underway to deal with those ideas. As some people have told me tonight, it may have been more accurate to call this "OpenWidget" - though the press wouldn't have been as good. We've been waiting for data and identity portability - is this all we get?"
[Source: Read/Write Web's Commentary & Analysis of Google's OpenSocial API]

..Perhaps the world will read the terms of use of the API, and realize this is not an open API; this is a free API, owned and controlled by one company only: Google. Hopefully, the world will remember another time when Google offered a free API and then pulled it. Maybe the world will also take a deeper look and realize that the functionality is dependent on Google hosted technology, which has its own terms of service (including adding ads at the discretion of Google), and that building an OpenSocial application ties Google into your application, and Google into every social networking site that buys into the Dream. Hopefully the world will remember. Unlikely, though, as such memories are typically filtered in the Great Noise....

[Source: Poignant commentary excerpt from Shelly Power's Blog (as always)]

The "Semantic Data Web" vision has always been about "Data & Identity" portability across the Web. Its been that and more from day one.

In a nutshell, we continue to exhibit varying degrees of Cognitive Dissonance re the following realities:

  1. The Network is the Computer (Internet/Intranet/Extranet depending on your TCP/IP usage scenarios)
  2. The Web is the OS (ditto), and it provides a communications subsystem (Information BUS) comprised of
    • URIs (a pointer system for identifying, accessing, and manipulating data)
  3. HTTP-based Interprocess Communication (i.e., Web Apps are processes when you discard the HTML UI and interact with the application logic containers called "Web Services" behind the pages) ultimately hits data
  4. Web Data is best Modeled as a Graph (RDF, Containers/Items/Item Types, Property & Value Pairs associated with something, and other labels)
  5. Networks are Graphs and vice versa
  6. Social Networks are graphs where nodes are connected via social connectors ( [x]--knows-->[y] ) -- see the sketch after this list
  7. The Web is a Graph that exposes a People and Data Network (to the degree we allude to humans not being data containers, i.e., just nodes in a network; otherwise we are talking about a Data Network)
  8. Data access and manipulation depend inherently on canonical Data Access mechanisms such as Data Source Identifiers / Names (a time-tested practice in various DBMS realms)
  9. Data is forever; it is the basis of Information, and it is increasing exponentially due to the proliferation of Web Services induced user activities (User Generated Content)
  10. Survival, Vitality, Longevity, Efficiency, Productivity, etc., all depend on our ability to process data effectively in a shrinking time continuum where data and/or information overload is the alternative.
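
Point 6 is worth making concrete: a single social connector is just one triple. A minimal sketch using Python with rdflib (an illustrative choice; the WebIDs are hypothetical):

    # [x]--knows-->[y] expressed as an RDF triple.
    from rdflib import Graph, URIRef
    from rdflib.namespace import FOAF

    g = Graph()
    x = URIRef("http://example.com/id/x#this")
    y = URIRef("http://example.com/id/y#this")
    g.add((x, FOAF.knows, y))     # the social connector

    print(g.serialize(format="turtle"))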

The Data Web is about Presence over Eyeballs due to the following realities:

  1. Eyeballs are input devices for a DNA based processing system (Humans). The aforementioned processing system can reason very well, but simply cannot effectively process masses of data or information
  2. Widgets offer little value long term re. the imminent data and information overload dilemma, ditto Web pages (however pretty), and any other Eyeballs-only centric Web Apps
  3. Computers (machines) are equipped with inorganic (non DNA) based processing power, they are equipped to process huge volumes of data and/or information, but they cannot reason
  4. To be effective in the emerging frontier comprised of a Network Computer and a Web OS, we need an effective mechanism that makes best use of the capabilities possessed by humans and machines, by shifting the focus to creation and interaction with points of "Data Web Presence" that openly expose "Structured Linked Data".

This is why we need to inject a mesh of Linked Data into the existing Web. This is what the often misunderstood vision of the "Semantic Data Web" or "Web of Data" or "Web of Structured Data" is all about.

As stated earlier (point 10 above), "Data is forever" and there is only more of it to come! Sociality and associated Social Networking oriented solutions are at best a speck in the Web's ocean of data once you comprehend this reality.

Note: I am writing this post as an early implementor of GData and an implementor of RDF Linked Data technology and a "Web Purist".

OpenSocial implementation and support across our relevant product families: Virtuoso (i.e., the Sponger Middleware for RDF component), OpenLink Data Spaces (Data Space Controller / Services), and the OpenLink Ajax Toolkit (i.e., OAT Widgets and Libraries), is a triviality now that the OpenSocial APIs are public.

The concern I have, and the problem that remains mangled in the vast realms of Web Architecture incomprehension, is the fact that GData and GData based APIs cannot deliver Structured Linked Data in line with the essence of the Web without introducing "lock-in" that ultimately compromises the "Open Purity" of the Web. Facebook and Google's OpenSocial response to the Facebook juggernaut (i.e. open variant of the Facebook Activity Dashboard and Social Network functionality realms, primarily), are at best icebergs in the ocean we know as the "World Wide Web". The nice and predictable thing about icebergs is that they ultimately melt into the larger ocean :-)

On a related note, I had the pleasure of attending the W3C's RDF and DBMS Integration Workshop last week. The event was well attended by organizations with knowledge, experience, and a vested interest in addressing the issues associated with exposing non-RDF data (e.g. SQL) as RDF, and the imminence of data and/or information overload, covered in different ways by the workshop presentations.
# PermaLink Comments [0]
11/02/2007 18:50 GMT Modified: 11/02/2007 18:52 GMT
Fourth Platform: Data Spaces in The Cloud (Update) [ Kingsley Uyi Idehen ]

I've written extensively on the subject of Data Spaces in relation to the Data Web for a while. I've also written sparingly about OpenLink Data Spaces (a Data Web Platform built using Virtuoso). On the other hand, I haven't shed much light on installation and deployment of OpenLink Data Spaces.

Jon Udell recently penned a post titled: The Fourth Platform. The post arrives at a spookily coincidental time (this happens quite often between Jon and me, as demonstrated last year during our podcast; the "Fourth" in his Innovators Podcast series).

The platform that Jon describes is "Cloud Based" and comprised of Storage and Computation. I would like to add Data Access and Management (native and virtual) under the fourth platform banner with the end product called: "Cloud based Data Spaces".

As I write, we are releasing a Virtuoso AMI (Amazon Machine Image) labeled: virtuoso-dataspace-server. This edition of Virtuoso includes the OpenLink Data Spaces Layer and all of the OAT applications we've been developing for a while.

What Benefits Does this offer?

  1. Personal Data Spaces in the Cloud - a place where you can control and consolidate data across your Blogs, Wikis, RSS/Atom Feed Subscriptions, Shared Bookmarks, Shared Calendars, Discussion Threads, Photo Galleries, etc.
  2. All the data in your Data Space is SPARQL or GData accessible.
  3. All of the data in your Personal Data Space is Linked Data from the get go. Each item of data is URI addressable.
  4. SIOC support - your Blogs, Wikis, Bookmarks, etc. are based on the SIOC ontology for Semantically Interlinking Online Communities (think: Open social-graph++)
  5. FOAF support - your FOAF Profile page provides a URI that is an in-road to all Data in your Data Space.
  6. OpenID support - your Personal Data Space ID is usable wherever OpenID is supported. OpenID and FOAF are integrated as per the latest FOAF specs.
  7. Two-way Integration with Facebook - you can access your Data Space from Facebook, or access Facebook from your Data Space
  8. Unified Storage - the WebDAV based filesystem provides Cloud Storage that's integrated with Amazon S3; it also exposes all of your Data Space data via a traditional filesystem UI (think virtual Spotlight); you can also mount this drive to your local filesystem via your native operating system's WebDAV support
  9. SyncML - you can sync calendar and contact details with your Data Space in the cloud from your Mobile phone.
  10. A practical Semantic Data Web solution - based on Web Infrastructure, it doesn't require you to do anything beyond exposing URIs for data in your Data Spaces.

EC2-AMI Details:

    AMI ID: ami-e2ca2f8b
    Manifest file: virtuoso-images/virtuoso-dataspace-server.manifest.xml

Installation Guide:

  1. Get an Amazon Web Services (AWS) account
  2. Sign up for the S3 and EC2 services
  3. Install the EC2 plugin for Firefox
  4. Start the EC2 plugin
  5. Locate the row containing AMI ID ami-7c31d515 and Manifest virtuoso-test/virtuoso-cloud-beta-9-i386.manifest.xml (sort using the AMI ID or Manifest columns, or search on the pattern "virtuoso", due to name flux)
  6. Start the Virtuoso Data Space Server AMI
  7. Wait 4-5 minutes (it takes a few minutes to create the pre-configured Linux image)
  8. Connect to http://your-ec2-instance-cname:8890/ and log in with user/password dba/dba
  9. Go to the Admin UI (Virtuoso Conductor) and change the passwords for the 'dba' and 'dav' accounts (*Important!*)
  10. Give the "SPARQL" user "SPARQL_UPDATE" privileges (required if you want to exploit the in-built Sponger Middleware)
  11. Click on the ODS (OpenLink Data Spaces) link to start a Personal Edition of OpenLink Data Spaces (or go to: http://your-ec2-instance-cname/dataspace/ods/index.html)
  12. Log in using the username and password credentials for the 'dav' account (or register a new user; note: OpenID is an option here also), then create a Data Space Application Instance by clicking on a Data Space App. Tab
  13. Import data from your existing Web 2.0 style applications into OpenLink Data Spaces, e.g. subscribe to a few RSS/Atom feeds via the "Feeds Manager" application, or import some Bookmarks using the "Bookmarks" application
  14. Then look at the imported data in Linked Data form via your ODS-generated URIs, based on the patterns: http://your-ec2-instance-cname/dataspace/person/your-ods-id#this (URI for You, the Person), http://your-ec2-instance-cname/dataspace/person/your-ods-id (FOAF File URI), http://your-ec2-instance-cname/dataspace/your-ods-id (SIOC File URI) -- see the content negotiation sketch after this list
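
Because each of those URIs identifies data rather than a fixed page, you can ask for the representation you want via HTTP content negotiation. A minimal sketch using Python's requests library (hostname and ODS ID are placeholders, as above):

    import requests

    uri = "http://your-ec2-instance-cname/dataspace/person/your-ods-id"

    html = requests.get(uri, headers={"Accept": "text/html"})            # human-oriented view
    rdf = requests.get(uri, headers={"Accept": "application/rdf+xml"})   # FOAF-based Linked Data

    print(rdf.headers.get("Content-Type"))
    print(rdf.text[:500])   # first few hundred characters of the description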

Using the OpenLink Ajax Toolkit (OAT) from your Data Space instance

Install the OAT VAD package via the Admin UI and then apply the URI patterns below within your browser:
  1. http://your-ec2-instance-cname:8890/oatdemo - Entire OAT Demo Collection
  2. http://your-ec2-instance-cname:8890/rdfbrowser - RDF Browser
  3. http://your-ec2-instance-cname:8890/isparql - SPARQL Query Builder (iSPARQL)
  4. http://your-ec2-instance-cname:8890/qbe - SQL Query Builder (iSQL)
  5. http://your-ec2-instance-cname:8890/formdesigner - Forms Builder (for building Meshups based on RDF, SQL, or Web Services Data Sources)
  6. http://your-ec2-instance-cname:8890/dbdesigner - SQL DB Schema Designer (note: a Visual SQL-RDF Mapper is also on its way)
  7. http://your-ec2-instance-cname:8890/DAV/JS/ - To view the OAT Tree (there are some experimental demos that are missing from the main demo app, etc.)

There's more to come!

# PermaLink Comments [0]
09/22/2007 19:43 GMT Modified: 10/26/2008 17:59 GMT
Semantic Web Data Spaces [ Kingsley Uyi Idehen ]
Web Data Spaces

Now that broader understanding of the Semantic Data Web is emerging, I would like to revisit the issue of "Data Spaces".

A Data Space is a place where Data resides. It isn't inherently bound to a specific Data Model (Concept Oriented, Relational, Hierarchical, etc.). Neither is it implicitly an access point to Data, Information, or Knowledge (the perception is purely determined through the experiences of the user agents interacting with the Data Space).

A Web Data Space is a Web accessible Data Space.

Real world example:

Today we increasingly perform one or more of the following tasks as part of our professional and personal interactions on the Web:

  1. Blog via many service providers or personally managed weblog platforms
  2. Create Event Calendars via Upcoming.com and Eventful
  3. Maintain and participate in Social Networks (e.g. Facebook, Orkut, MySpace)
  4. Create and Participate in Discussions (note: when you comment on blogs or wikis for instance, you are participating in, or creating, a conversation)
  5. Track news by subscribing to RSS 1.0, RSS 2.0, or Atom Feeds
  6. Share Bookmarks & Tags via Del.icio.us and other Services
  7. Share Photos via Flickr
  8. Buy, Review, or Search for books via Amazon
  9. Participate in auctions via eBay
  10. Search for data via Google (of course!)

John Breslin has a nice animation depicting the creation of Web Data Spaces that drives home the point.

Web Data Space Silos

Unfortunately, what isn't as obvious to many netizens is the fact that each of the activities above results in the creation of data that is put into some context by you, the user. Even worse, you eventually realize that the service providers aren't particularly willing, or capable of, giving you unfettered access to your own data. Of course, this isn't always by design, as the infrastructure behind the service can make this a nightmare from security and/or load balancing perspectives. Irrespective of cause, we end up creating our own "Data Spaces" all over the Web without a coherent mechanism for accessing and meshing these "Data Spaces".

What are Semantic Web Data Spaces?

Data Spaces on the Web that provide granular access to RDF Data.
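
As a hedged sketch of what "granular access" means in practice: a user agent can dereference one entity's URI and query just the RDF it needs. Using Python with rdflib, and a hypothetical profile URI:

    from rdflib import Graph

    g = Graph()
    # Fetch the entity's RDF description (rdflib negotiates for an RDF representation).
    g.parse("http://example.com/dataspace/person/kidehen")

    # Granular access: ask only for the names of people this person knows.
    results = g.query("""
        PREFIX foaf: <http://xmlns.com/foaf/0.1/>
        SELECT ?name WHERE {
            ?someone foaf:knows ?friend .
            ?friend foaf:name ?name .
        }
    """)
    for row in results:
        print(row.name)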

What's OpenLink Data Spaces (ODS) About?

Short History

In anticipation of the "Web Data Silo" challenge (an issue that we tackled within internal enterprise networks for years), we commenced development (circa 2001) of a distributed collaborative application suite called OpenLink Data Spaces (ODS). The project was never released to the public, since the problems associated with the deliberate or inadvertent creation of Web Data silos hadn't really materialized (silos only emerged in concrete form after the emergence of the Blogosphere and Web 2.0). In addition, there wasn't a clear standard Query Language for the RDF based Web Data Model (i.e., the SPARQL Query Language didn't exist).

Today, ODS is delivered as a packaged solution (in Open Source and Commercial flavors) that alleviates the pain associated with Data Space Silos that exist on the Web and/or behind corporate firewalls. In either scenario, ODS simply allows you to create Open and Secure Data Spaces (via its suite of applications) that expose data via SQL, RDF, and XML oriented data access and data management technologies. Of course, it also enables you to integrate transparently with existing 3rd party data space generators (Blog, Wiki, Shared Bookmark, and Discussion services, etc.) by supporting industry standards that cover:

  1. Content Publishing - Atom, Movable Type, MetaWeblog, and Blogger protocols
  2. Content Syndication Formats - RSS 1.0, RSS 2.0, Atom, OPML etc.
  3. Data Management - SQL, RDF, XML, Free Text
  4. Data Access - SQL, SPARQL, GData, Web Services (SOAP or REST styles), WebDAV/HTTP
  5. Semantic Data Web Middleware - GRDDL, XSLT, SPARQL, XPath/XQuery, HTTP (Content Negotiation) for producing RDF from non-RDF Data ((X)HTML, Microformats, XML, Web Services Response Data, etc.)

Thus, by installing ODS on your Desktop, Workgroup, Enterprise, or public Web Server, you end up with a very powerful solution for creating an Open Data access oriented presence on the "Semantic Data Web" without incurring any of the typically assumed "RDF Tax".

Naturally, ODS is built atop Virtuoso and of course it exploits Virtuoso's feature-set to the max. It's also beginning to exploit functionality offered by the OpenLink Ajax Toolkit (OAT).

# PermaLink Comments [0]
04/13/2007 21:15 GMT Modified: 04/13/2007 18:19 GMT
Web 3.0: When Web Sites Become Web Services [ Kingsley Uyi Idehen ]

(Via Read/Write Web.)

Web 3.0: When Web Sites Become Web Services: "

.....

Conclusion

As more and more of the Web is becoming remixable, the entire system is turning into both a platform and the database. Yet, such transformations are never smooth. For one, scalability is a big issue. And of course legal aspects are never simple.

But it is not a question of if web sites become web services, but when and how. APIs are a more controlled, cleaner and altogether preferred way of becoming a web service. However, when APIs are not available or sufficient, scraping is bound to continue and expand. As always, time will be the best judge; but in the meanwhile we turn to you for feedback and stories about how your businesses are preparing for 'web 3.0'."

We are hitting a little problem re. Web 3.0 and Web 2.0, naturally :-) Web 2.0 is one of several (present and future) Dimensions of Web Interaction that turn Web Sites into Web Services Endpoints; a point I've made repeatedly [1] [2] [3] [4] across the blogosphere, in addition to my early futile attempts to make Wikipedia's Web 2.0 article meaningful (circa 2005), as per the Wikipedia Web 2.0 Talk Page excerpt below:

Web 2.0 is a web of executable endpoints and well formed content. The executable endpoints and well formed content are accessible via URIs. Put differently, Web 2.0 is a web defined by URIs for invoking Web Services and/or consuming or syndicating well formed content.

Hopefully, someone with more time on their hands will expand on this (I am kinda busy).

BTW - Web 2.0 being a platform doesn't distinguish it in any way from Web 1.0. They are both platforms; the difference comes down to platform focus and mode of experience.

Web 3.0 is about Data Spaces: Points of Semantic Web Presence that provide granular access to Data, Information, and Knowledge via Conceptual Data Model oriented Query Languages and/or APIs.

The common denominator across all the current and future Web Interaction Dimensions is HTTP, while their differences are as follows:

    Web 1.0 - Browser (HTTP + (X)HTML)
    Web 2.0 - Presence (Web Service Endpoints for REST or SOAP over HTTP)
    Web 3.0 - Presence (Query Languages, Data Models, and HTTP based Query Oriented Web Service Endpoints)

Examples of Web 3.0 Infrastructure:

  1. Query Languages: SPARQL, Googlebase Query Language, Facebook Query Language (FQL), and many others to come
  2. Query Language aligned Web Services (Query Services): SPARQL Protocol, GData, or REST style Web services such as Facebook's service for FQL (see the sketch after this list)
  3. Data Models: Concrete Conceptual Data Model (which RDF happens to deliver for Web Data)
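
As a sketch of a Query Language paired with a Query Service, here is a SPARQL query posed to a SPARQL Protocol endpoint via Python's requests library (assuming DBpedia's public endpoint; any SPARQL Protocol service works the same way):

    import requests

    query = """
    SELECT ?label WHERE {
        <http://dbpedia.org/resource/Semantic_Web>
            <http://www.w3.org/2000/01/rdf-schema#label> ?label .
    } LIMIT 5
    """

    r = requests.get(
        "http://dbpedia.org/sparql",
        params={"query": query, "format": "application/sparql-results+json"},
    )
    for binding in r.json()["results"]["bindings"]:
        print(binding["label"]["value"])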

Web 3.0 is not purely about Web Sites becoming Web Services endpoints. It is about the "M" (Data Model) taking its place in the MVC pattern as applied to the Web Platform.

I will repeat myself yet again:

The Devil is in the Details of the Data Model. Data Models make or break everything. You ignore data at your own peril. No amount of money in the bank will protect you from Data Ignorance! A bad Data Model will bring down any venture or enterprise, the only variable is time (where time is directly related to your increasing need to obtain, analyze, and then act on data, over repetitive operational cycles, that have ever decreasing intervals).

This applies to the Real-time enterprise of Information and/or knowledge workers and Real-time Web Users alike.

BTW - Data Makes Shifts Happen (spotter: Sam Sethi).

# PermaLink Comments [0]
03/19/2007 21:44 GMT Modified: 03/20/2007 08:27 GMT