Installation and configuration take only minutes by following the documentation, which remains available at any time, specifically for this driver on Windows.
Release 7.0 licenses are also available for immediate purchase.
Release 7.0 installers are available for immediate download for Windows. Builds for Mac, Linux, and other Unix-like OSes will be available soon; please contact us if you have urgent need.
Release 7.0 supports all 32-bit and 64-bit ODBC client tools and applications, both GUI and command-line, on —
- Windows and Windows Server on x86 and x86_64
The Release 7.0 Lite Edition ODBC Driver supports virtually every version of Oracle in current use, including —
- Support for Oracle 12c
- Enhanced support for Oracle 11g
Installation and configuration take only minutes by following the documentation, which remains available at any time, specifically for this driver on OS X and Windows.
Release 7.0 licenses are also available for immediate purchase.
Release 7.0 installers are available for immediate download for Mac and Windows. (Express Edition is not typically produced for Linux and other Unix-like OSes; please contact us if you have specific need.)
Release 7.0 supports all 32-bit and 64-bit ODBC client tools and applications, both GUI and command-line, on —
- OS X and OS X Server on x86 and x86_64
- Windows and Windows Server on x86 and x86_64
The Release 7.0 Express Edition ODBC Driver supports virtually every version of Oracle in current use, including —
- Support for Oracle 12c
- Enhanced support for Oracle 11g
Today, we have the Lite Edition ODBC Drivers for Sybase and Microsoft SQL Server.
Installation and configuration take only minutes by following the documentation, which remains available at any time, specifically for this driver on OS X and Windows.
Release 7.0 licenses are also available for immediate purchase.
Release 7.0 installers are available for immediate download for Mac and Windows. Builds for Linux and other Unix-like OSes will be available soon; please contact us if you have urgent need.
Release 7.0 supports all 32-bit and 64-bit ODBC client tools and applications, both GUI and command-line, on —
- OS X and OS X Server on x86 and x86_64
- Windows and Windows Server on x86 and x86_64
The Release 7.0 Lite Edition ODBC Driver supports virtually every version of Microsoft SQL Server and Sybase Adaptive Server in current use, including —
- added support for SPARSE columns in SQLColumns() call
- added DSN options and Multi-Tier connect option (-X)
Details, based on this test table:

CREATE TABLE tbl_sparse_test (col1 INT SPARSE, col2 INT, col3 XML COLUMN_SET FOR ALL_SPARSE_COLUMNS)

A wildcard query will return only col2 and col3; it will not include SPARSE columns. This is standard SQL Server behavior, and it cannot be changed.

SELECT * FROM tbl_sparse_test;

To include SPARSE columns in results, they must be explicitly SELECTed:

SELECT col1, col2, col3 FROM tbl_sparse_test;
By default, calls to SQLColumns() don't return Sparse Columns. To receive the full columns list:

- via our Lite Edition ODBC driver — open the connection with SHOWSPARSECOLS in the DSN connection string, e.g.,
- via the Microsoft ODBC driver —
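As a rough illustration of passing such a DSN attribute from application code, here is a short Java sketch. The helper name `buildConnectString` and the `SHOWSPARSECOLS=Y` attribute/value form are assumptions of ours for illustration, not documented driver syntax; consult the driver documentation for the exact form.

```java
public class ConnectStringExample {
    // Hypothetical helper: appends the SHOWSPARSECOLS attribute to a
    // DSN-based ODBC connect string. The "=Y" value form is an assumption.
    static String buildConnectString(String dsn, boolean showSparseCols) {
        StringBuilder sb = new StringBuilder("DSN=").append(dsn);
        if (showSparseCols) {
            sb.append(";SHOWSPARSECOLS=Y");
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // e.g., "DSN=sqlserver_lite;SHOWSPARSECOLS=Y"
        System.out.println(buildConnectString("sqlserver_lite", true));
    }
}
```

The resulting string would then be handed to SQLDriverConnect() (or a JDBC-to-ODBC bridge) in the usual way.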
- added support for new SQL Server datatypes such as datetime2
- added support for NBCROW token
- added support for Sybase 15
- added support for BIGDATETIME and BIGTIME
- added support for UNITEXT
- added support for UNSIGNED BIGINT
- fixed issue with SQL Server BIT datatype
- fixed memory overwrite error when a DB procedure is called with a parameter of CHAR/VARCHAR/LONGVARCHAR
- fixed issue with VARBINARY datatype and DB procedures
- fixed issue with converting TIMESTAMP to CHAR/WCHAR
- fixed datatype info -- new Sybase and MSSQL datatypes were added
- fixed database catalog and query metadata info for Sybase 15's UNSIGNED INT, UNSIGNED SMALLINT, BIGINT, SYSNAME, LONGSYSNAME
It's the year 2015, and the fundamental issues associated with the utility of data access drivers remain confusing. Basically, we remain uncertain about the value-to-compensation alignment of ODBC (Open Database Connectivity), JDBC (Java Database Connectivity), and ADO.NET drivers/providers.
- ODBC
- JDBC
- ADO.NET
After allowing for consumer irrationality [1], the basis of any payment is fundamentally tied to the monetization of opportunity costs. Essentially, we pay for one thing to alleviate the (usually higher) costs of something else.
The rest of this post focuses on highlighting the real pains associated with the $0.00 value misconception around Data Access Drivers: ODBC, JDBC, ADO.NET, OLE-DB, etc.
In the most basic sense, there are some fundamental aspects of data access that are complex to implement and rarely implemented (if at all) by free drivers. The list includes:
Beyond actual driver sophistication with regard to key feature implementations, let's up the ante by veering into the area of data security. At the most basic level, it's extremely important to understand that all data access drivers provide read-write access to your databases; thus, it's imperative that data access drivers address the following:
Once you're done with security, you then have the thorny issue of data access and data flow management. In a nutshell, your driver needs to be able to handle:
Once you've dealt with Security and Data Flow, you then have to address the enforcement of these settings across a myriad of ODBC-compliant hosts, which is where Zeroconfig and centralized data access administration come into play, i.e., configure once (locally) and enforce globally.
When OpenLink Software entered the ODBC Driver Market segment (circa 1992), the issues above were the fundamental basis of our Multi-Tier Drivers. Although the marketplace highlighted our drivers for high performance, stability, and specification adherence -- to all of which we remain committed -- our fundamental engineering focus has always been skewed towards configurable data security, platform independence, and scalability.
Every item of concern outlined in the section above is addressed by security features built into our Multi-Tier Drivers [2][3][4]. These features all leverage the fact that our multi-tier drivers include a sophisticated DB session rules book that enables construction and enforcement of user attribute (user name, application, client operating system, IP address, target database etc.) based rules which are applied to all database sessions (single or pooled).
Today, in the year 2015, the security issues that pervade Data Access, whether via Native SQL RDBMS Drivers, or ODBC, JDBC, and ADO.NET Drivers/Providers, have only increased, courtesy of ubiquitous computing -- facilitated by the Internet & Web, across desktop and mobile device dimensions. Paradoxically, there remains a fundamental illusion that all Data Access Drivers are made the same; i.e., they simply provide you with the ability to connect to SQL RDBMS back-ends, for the industry standard price of $0.00, without consequence -- thereby skewing the very nature of SQL RDBMS data access and its security and privacy implications.
I hope that this post brings some clarity to the very serious security and general configuration-management issues associated with Data Access Drivers. Free ODBC Drivers offer nothing; that's why they cost $0.00. When dealing with the real issues associated with Open Data Access, you must have a handle on the inevitable issues of data security and privacy.
When Sun originally released Java 1.0, there were no JDBC drivers -- there wasn't even a JDBC.
Data access came with JDK 1.1, as JDBC 1.0, but there were very few JDBC drivers from any source, as would be expected with any new technology -- but the ODBC ecosystem (itself then at only v2.0) was going strong.
Sun recognized that Java wouldn't have as much uptake without a functional data access solution -- so they produced and bundled the original Type 1 JDBC-ODBC Bridge Driver, sun.jdbc.odbc.JdbcOdbcDriver, but from the very beginning, they warned that users "should use the JDBC-ODBC Bridge only for experimental prototyping or when you have no other driver available."
That bundled JDBC-ODBC Bridge was (and always remained) single-threaded, and though it received some other updates along the way, it only ever supported a subset of JDBC 2.0 and later. Sun (and later Oracle) recommended that users employ "a pure Java JDBC technology-enabled driver, type 3 or 4, in order to get all of the benefits of the Java programming language and the JDBC API."
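A quick way to see whether a given JRE still bundles that Bridge is simply to try loading the driver class. This minimal sketch (the class and method names here are our own illustration) returns false on Java 8 and later, where the class was removed:

```java
public class BridgeCheck {
    // Returns true if the old Sun JDBC-ODBC Bridge class can be loaded.
    // On Java 8 and later this returns false, since the class was removed.
    static boolean bridgePresent() {
        try {
            Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(bridgePresent()
                ? "JDBC-ODBC Bridge class is present"
                : "JDBC-ODBC Bridge class is not present (expected on Java 8 and later)");
    }
}
```

Applications that guarded their ODBC fallback path with a check like this degraded gracefully when the bridge disappeared, rather than dying with the ClassNotFoundException shown below.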
Even in the early days of JDBC, we saw that there would not always be an available JDBC driver for a given target data source -- but the numbers of ODBC drivers were rapidly increasing, supporting every major and many minor DBMS and other data sources. We saw a need for an enterprise-grade, non-experimental Bridge solution, with full support for the JDBC API.
We delivered this first as our Type 3 Multi-Tier solution, bridging from JDBC in one environment (typically a UNIX-like OS) to ODBC in another (most often, Microsoft Windows).
Type 3 Enterprise Edition (Multi-Tier) Architecture Diagram
Type 1 Lite Edition (Single-Tier) Architecture Diagram
Sun long warned that the JRE-bundled Bridge was transitional, and Oracle confirmed immediately upon acquisition that it would "be removed in JDK 8. In addition, Oracle does not support the JDBC-ODBC Bridge." Java 8 is now in full release, and indeed, the venerable sun.jdbc.odbc.JdbcOdbcDriver
is no longer present, as evidenced by the scary-looking error --
java.lang.ClassNotFoundException: sun.jdbc.odbc.JdbcOdbcDriver
at java.net.URLClassLoader$1.run(URLClassLoader.java:372)
at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:360)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:30
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:259)
Any Java users or applications relying on ODBC connections and also needing the security and other improvements found in Java 8 are left high and dry... Or would be, but for OpenLink Software.
Our JDBC-to-ODBC Bridge, in both Type 1 and Type 3 forms, has been available and regularly updated since its original release for JDBC 1. Fully multi-threaded since Java Runtime Environments (JREs) could handle such, we have also kept pace with the JDBC API -- now at JDBC 4.2, in 2015's Java 8 a/k/a JDK/JVM/JRE 1.8 -- and maintained compatibility with the also-evolving ODBC API, now at 3.8.
Especially important for the modern world, our solutions support both 64-bit and 32-bit environments, including both 64-bit JVMs and ODBC drivers, and our Type 3 solutions can even bridge between these, whether you have a 64-bit JVM and need to connect to a 32-bit ODBC driver, or you have a 32-bit JVM and need to connect to a 64-bit ODBC driver.
As always, our solutions are available for immediate download, with a free two-week trial license provided alongside. We encourage pre-purchase installation, configuration, and testing, with support provided through our web-based Support Forums and even free up-and-running Support Cases. Once you've confirmed the driver works for you, entry level and special offer licenses may be purchased online or through our Sales Team; these as well as custom license configurations or partnership (IBP, ISV, VAR, OEM, etc.) arrangements are always available by direct contact.
This post is about highlighting the real pains associated with the $0.00 misconception around Data Access Drivers: ODBC, JDBC, ADO.NET, OLE-DB, etc.
In the most basic sense, there are some fundamental aspects of data access that are complex to implement and rarely implemented (if at all) by free drivers. The list includes:
Okay, so we're done with actual driver sophistication re. implementation of critical features. Let's up the ante by veering into the area of security. At the most basic level, it's extremely important to understand that all data access driver types provide read-write access to your databases; thus, it's imperative that data access drivers address the following:
Once you're done with security, you then have the thorny issue of data access and data flow management. In a nutshell, your driver needs to be able to handle:
Once you've dealt with Security and Data Flow, you then have to address the enforcement of these settings across a myriad of ODBC-compliant hosts, which is where Zeroconfig and centralized data access administration come into play, i.e., configure once (locally) and enforce globally.
When OpenLink Software entered the ODBC Driver Market segment in 1992, the issues above were the fundamental basis of our Multi-Tier Drivers. Thus, although we distinguished ourselves via performance, stability, and specification adherence, our fundamental engineering focus has always been skewed towards security and configurability, alongside high performance and scalability.
As we close 2009, the security issues that pervade Native DBMS, ODBC, JDBC, ADO.NET, OLE-DB, etc., Drivers have only increased, courtesy of ubiquitous computing. Sadly, though, there remains a fundamental illusion that Data Access Drivers simply connect you to DBMS back-ends, and since you can get these drivers at $0.00 from most DBMS vendors, they can't be that important.
I hope that this post brings some clarity to the very serious security and general configuration-management issues associated with Data Access Drivers. Free ODBC Drivers offer nothing when it comes to the real issues of Open Data Access. If they did, they wouldn't be worth $0.00!
Note: wondering if this has anything to do with Linked Data (my current data access focal point)? Well, remember, the Linked Data meme is fundamentally about REST based Open Data Access & Integration via HTTP; thus, what applies to Relational Model databases naturally applies to their more granular Graph Model relatives. Basically, data access security never goes away, it just gets more granular, complex, and ultimately, mercurial.
This month's DataSpaces contains material of interest to the Virtuoso developer and UDA user community alike —
New ODBC, JDBC, ADO.NET, and OLE DB Drivers for Major Databases
Burlington, MA. Tuesday, January 15, 2008 - OpenLink Software, Inc., technology leader in the development and deployment of secure,
high-performance universal data access middleware, announces the commercial availability of Release 6.1 of its high-performance and
secure Universal Data Access Drivers.
The updated components support new and older releases of Oracle, Microsoft SQL Server, Sybase, IBM DB2, IBM Informix, Ingres, Progress
Open Edge, MySQL, PostgreSQL, and Firebird, across Windows, Mac OS X, Linux, Solaris, HP-UX, and AIX.
New features across the entire suite include:
- XA-based two-phase commit across ODBC, JDBC, and ADO.NET
- Microsoft SQL Linked Server compatible ODBC provider for OLE DB (32 & 64 Bit)
- ODBC Bridge for JDBC accessible Databases (32 & 64 Bit)
- Ruby on Rails Adapter for ODBC- and JDBC-accessible databases
- Support for 64-bit Windows running on x86_64 (e.g., Opteron, Xeon) and IA64 (e.g., Itanium2) Processors across all Data Access APIs --
ODBC, JDBC, OLEDB, and ADO.NET
- Support in Mac OS X Universal binaries for PPC and Intel 32-bit mode on Tiger (10.4) and Leopard (10.5), plus Intel 64-bit mode on Leopard
- ADO.NET 2.0 Support (and an ADO.NET 3.0 Beta Provider on request)
- ADO.NET integration with Visual Studio 2005
"The new product release builds on our legacy as a leading provider of quality, secure, and high-performance data access drivers for all major DBMS engines," said Kingsley Idehen, President & CEO.
"We are at a critical juncture within the enterprise and across the Web, where data access, portability, and unobtrusive integration require
the technological prowess and leadership qualities we've consistently demonstrated over the last 15 years. Standardized data access
middleware that enables the development and deployment of database and operating system independent applications remains a critical priority
for organizations worldwide," he added.
About OpenLink Software
=======================
OpenLink Software is a privately held software company with offices in the U.S.A., United Kingdom, Russia, and Bulgaria. It has been the
leading provider and technology innovator in the universal data access middleware market since 1993, with over 10,000 companies currently
using its products worldwide.
Additional information on OpenLink Software can be obtained from the web site: http://www.openlinksw.com/.
Contact:
Helen Heward-Mills,
OpenLink Software, Inc.
Tel: 781 273 0900
Email: hmills@openlinksw.com
OpenLink Software are pleased to announce release 1.1 of the ODBC Adapter for Ruby on Rails (ActiveRecord).
This unifies data-access from a plethora of individual adapters to one common configuration in Rails; rather than having a multitude of DBMS-specific Rails Adaptors with inconsistent functionality and behaviour, you can now focus on a single data adapter with consistent behaviour across ODBC-accessible databases on all Ruby-supported platforms. This release adds support for DB2, MySQL, Sybase and SQL Server. The supported DBMSes now include: Oracle, Informix, Ingres, OpenLink Virtuoso, SQL Server, Sybase, MySQL and DB2.
The adapter can be downloaded from rubyforge:
http://rubyforge.org/projects/odbc-rails/
Technorati Tags: odbc, rails, ruby, webdevelopment
Why Web 2.0 clones are not innovative:
Richard MacManus at ZDNet writes his view on Web 2.0 clone applications. He observed that every country has its set of Web 2.0 clones — bookmarking sites that look like del.icio.us, photo-sharing sites like Flickr, social networking sites like MySpace, community news sites like digg, etc. He criticizes those Web 2.0 clones as being non-innovative.
It’s true that most of the clone apps don’t come with innovative ideas, but it would be unwise to think that they have no value at all. Contrary to Richard’s point of view, I think clone apps are essential ingredients in helping the IT business in developing countries to become innovative.
Innovative ideas aren’t usually born out of thin air. They require extensive testing and experimentation. The mature IT business in the US has extensive knowledge and experience in developing innovative ideas. People here have a general idea about what works and what doesn’t. In many developing countries, however, the settings are completely different.
Take China for an example. Its IT market is still in an infant stage compared to that of the US. Chinese businesses that recently entered the market are still trying to figure out how to make profits and establish a sustainable business model. The need to be innovative now, perhaps, is not on the to-do lists of the business executives.
Furthermore, the past generation of Chinese engineers and developers were not exactly trained to be innovative and think outside the box. They were trained to have impressive memorization skills and to obey orders from superiors. It’s unfair to expect this generation of Chinese IT workers to live and breathe innovation as their US counterparts do.
Given this type of harsh environment in many developing countries, it’s quite natural to act as copycats and repeat business ideas that have good track records. In fact, it’s a good business if being a copycat can bring profits.
We don’t criticize Yahoo! Maps being a copycat of Google Maps. We don’t criticize Google Notebook being a copycat of del.icio.us. Why should we criticize foreign Web 2.0 clones when their intention is to learn how to enter a global IT market and to become prosperous? Maybe in the cloning process, copycats will discover innovative ideas by accident.
What Problem Does Natural Language Search Solve?:
Matt Marshall recently posted a story about a new search engine looking to raise a lot of money at a very high valuation, which has created quite a bit of buzz as people argue over whether or not the company has a chance, or deserves such a high valuation. Matt followed up with more details on the company, though he still expresses some reasonable skepticism. Like many people, my first reaction on hearing about it was that I can't remember a year that's gone by without someone claiming to have come out with a revolution in natural language search. However, when it comes to search engine news, no one can go through the history and explain why something is a bad idea quite like Danny Sullivan can. He lists out all the attempts at natural language search, and shows how each one failed (in some cases, miserably). He also points out that the problem with natural language search is that it requires everyone to change their behavior.

As with any startup, when you're looking at their chances, the big question to ask is pretty simple: what problem does it solve? Plenty of people have figured out how to search with keywords. In fact, many of us find it more natural and faster than trying to construct a natural language query. So, while all the natural language search engines that come along insist that searches suck because they can't understand the searcher, it's not clear that's the real problem. When people want to use a search engine, they want to find what they want. That means being able to search quickly. Dumping two or three keywords into a box is always going to be a lot faster than figuring out the natural language equivalent.

So, perhaps someone can enlighten us. What is the problem natural language search solves?
(via Techdirt)
Technorati Tags: natural-language, search
Enjoy!
The trouble with "Standards Appreciation" is that vendors see standards primarily from the following perspectives:
Koranteng Ofosu-Amaah provides an insightful perspective on the issues above, in a recent "must read" blog post about how this dysfunctionality plays out today in the realm of HTML Buttons and Forms. Here are some notable excerpts:
"Instead my discourse devolved into a case of I told you so, a kind of Old Testament view of things instead of the softer New Age stylings that are in vogue these days. Sure there was a little concern for the users that had been hurt by lost data, but there was almost no empathy for the developers who had to lose their weekends furiously reworking their applications to do the right thing especially because it appeared that they would rather persist in trying to do the wrong thing.
The sentiment behind that mini tempest-in-a-teapot however was a recognition of the fact that those who have been quietly evangelizing the web style were talking about the wrong thing and to the wrong people."
..."As application developers we should ask for better forms, we should be demanding of browser makers things like XForms or Web Forms 2.0 to make sure that we can go beyond the kind of stilted usability that we currently have. Our users would appreciate our efforts in that vein but for now, they know what to expect. Until then application developers should push back when we are told to "do the wrong thing".
There is an unfortunate mindset trend at the current time that espouses: "Sloppiness" is good, and "Simple" justifies inadequacy at all times. Today, the real focus of most development endeavours is popularity first and coherence (backward compatibility, standards compliance, security, scalability, etc.) a distant second; if you can simply make things popular, then that justifies the sloppiness (acquisition, VC money, Blogosphere Juice, etc.), especially as someone else will ultimately have to deal with the predictable ramifications of the sloppiness.
Standards are critical to the success of IT investment within any enterprise, but standards are difficult to design, write, implement, and then comprehend; due to the inherent requirement for abstraction - it's a top down, as opposed to bottom up, process.
Vendors will never genuinely embrace standards, until IT decision makers demand standards compliance of them, by demonstrating a penchant for smelling out "leaky abstractions" embedded within product implementations. Naturally, this requires a fundamental change of mindset for most decision makers. It means moving away from the "this analyst said...", "I heard that company X is going to deliver....", "I read that .....", "I saw that demo..." approach to product evaluation, to a more knowledgeable evaluation process that seeks out the What, Why, and How of any prospective IT solution.
Knowledge empowers all of the time. It's a gift that stands the test of time once you invest some time in its acquisition (unfortunately this gift isn't free!). Ignorance with all its superficial seduction (free and widely available!), is temporary bliss at best, and nothing but heartache over time.
Here are a few links that resolve any confusion about this matter:
Or simply Google "PHP and ODBC" or "PHP and iODBC" ...
On the surface, Graham’s piece seems like a nice pat on the back to the Mac platform. But there’s an implication in his piece that the world’s most prodigiously talented programmers are only now switching (or switching back) to the Mac, when in fact some of them have been here all along. GUI programming is hard, and for GUI programmers, the Mac has always been, in Brent Simmons’s words, “The Show”.
I.e. the idea that by the mid-’90s the Mac user base had been whittled down to “graphic designers and grandmas” is demonstrably false — someone must have been writing the software the designers and grandmas were using, no? — but I don’t think it’s worth pressing the point, because I suspect it wasn’t really what Graham meant to imply. And the main thrust of his point is true: there is a certain class of hackers — your prototypical Unix nerds — who not only weren’t using Macs a decade ago, but whose antipathy toward Macs was downright hostile. And it is remarkable that these hackers are now among Mac OS X’s strongest adherents.
It’s another sign of Mac OS X’s dual nature: from the perspective of your typical user (and particularly long-time Mac users), it is the Mac OS with a modern Unix architecture encapsulated under the hood; from the perspective of the hackers Graham writes of, it is Unix with a vastly superior GUI.
Ajax, Hard Facts, Brass Tacks ... and Bad Slacks
There are a whopping 44,000 SAP customers running on Oracle databases, and IBM wants them. To get them, for the first time ever, it's optimized its enterprise database for a specific vendor's applications. The new version of DB2, 8.2.2, will include a slew of SAP-optimized features, including self-tuning, self-configuration, silent install, dynamic storage allocation, and more.
Wouldn't SAP be better served by simply making their application database-independent via ODBC? This process really could have commenced years ago and prevented today's dilemma: Your Partner has become Your most aggressive Competitor!
SAP tuned specifically for DB2, or SAP tuned likewise for Microsoft SQL Server, simply reeks of "Same Sh*t, Different Pile". Microsoft and IBM will emulate Oracle in due course regarding their assault on SAP's market if DBMS specificity remains the SAP data access API strategy (this is a simple fact).
SAP should be using its quest for DBMS independence to stimulate or contribute ODBC enhancements (should ODBC be lacking in areas critical to its application needs; it is available in Open Source form and across all major platforms). Should the ODBC API not be the problem, then it can push ODBC Driver vendors (DBMS vendors such as IBM included) to get their Drivers in shape (should they be lacking, I know our ODBC Drivers are absolutely fine for this kind of task).
Database specificity gets application vendors nowhere. You can only control your business development destiny by being database independent. When applications are database independent, the intellectual capital that drives your applications is preserved. This is akin to building physical and logical firewalls around the ecosystem created by your products. This is much better than being a pseudo DBMS engine reseller for a future competitor.
Advertising in RSS is just starting now, for all practical purposes. If we wanted to, as an industry, reject the idea, we could.
Here goes:
Blog Editing
I can use any editor that supports the following Blog Post APIs:
- Moveable Type
- Meta Weblog
- Blogger
Typically I use Virtuoso (which has an unreleased WYSIWYG blog post editor), Newzcrawler, ecto, Zempt, or w.bloggar for my posts. If a post is of interest to me, or relevant to our company or customers, I tend to perform one of the following tasks:
- Generate a post using the "Blog This" feature of my blog editor
- Write a new post that was triggered by a previously read post etc.
Either way, the posts end up in our company-wide blog server, which is Virtuoso based (more about this below). The internal blog server automatically categorizes my blog posts, and automagically determines which posts to upstream to other public blogs that I author (e.g., http://kidehen.typepad.com ) or co-author (e.g., http://www.openlinksw.com/weblogs/uda and http://www.openlinksw.com/weblogs/virtuoso ). I write once, and my posts are dispatched conditionally to multiple outlets.
RSS/Atom/RDF Aggregation & Reading
I discover, subscribe to, and view blog feeds using Newzcrawler (primarily), and from time to time for experimentation and evaluation purposes I use RSS Bandit, FeedDemon, and Bloglines. I am in the process of moving this activity over to Virtuoso completely due to the large number of feeds that I consume on a daily basis (scalability is a bit of a problem with current aggregators).
Blog Publishing
When you visit my blog you are experiencing the soon to be released Virtuoso Blog Publishing engine first hand, which is how WebDAV, SQLX, XQuery/XPath, and Free Text etc. come into the mix.
Each time I create a post internally, or subscribe to an external feed, the data ends up in Virtuoso's SQL Engine (this is how we handle some of the obvious scalability challenges associated with large subscription counts). This engine is SQL2000N based, which implies that it can transform SQL to XML on the fly using recent extensions to SQL in the form of SQLX (prior to the emergence of this standard we used the FOR XML SQL syntax extensions for the same result). It also has its own in-built XSLT processor (DB Engine resident), and validating XML parser (with support for XML Schema). Thus, my RSS/RDF/Atom archives, FOAF, BlogRoll, OPML, and OCS blog syndication gems are all live examples of SQLX documents that leverage Virtuoso's WebDAV engine for exposure to Blog Clients.
Blog Search
When you search for blog posts using the basic or advanced search features of my blog, you end up interacting with one of the following methods of querying data hosted in Virtuoso: Free Text Search, XPath, or XQuery. The result sets produced by the search feature use SQLX to produce subscription gems (RSS/Atom/RDF). My blog home page exists as a result of Virtuoso's Virtual Domain / Multi-Homing Web Server functionality. The entire site resides in an Object-Relational DBMS, and I can take my DB file across Windows, Solaris, Linux, Mac OS X, FreeBSD, AIX, HP-UX, IRIX, and SCO UnixWare without missing a single beat! All I have to do is instantiate my Virtuoso server and my weblog is live.
I also hope that Oracle will support Mono off the bat, rather than taking the typical "we will port to Mono sometime in the future..." type message, which will not be acceptable, especially as we pulled this off the first time around in 2002 (atop Mono then). Thus, I am sure they can do it in 2005 :-)
Hopefully we should be able to add Oracle 10g Release 2 and DB2 to our SQL CLR hosting features comparison document that currently only covers SQL Server 2005 and Virtuoso.
Exhibit A: From The Submarine by Paul Graham
PR people fear bloggers for the same reason readers like them. And that means there may be a struggle ahead. As this new kind of writing draws readers away from traditional media, we should be prepared for whatever PR mutates into to compensate. When I think how hard PR firms work to score press hits in the traditional media, I can't imagine they'll work any less hard to feed stories to bloggers, if they can figure out how.
Exhibit B: From My Dinner With Microsoft's Jim Allchin in Thomas Hawk's weblog
Last night I had a unique opportunity to sit down with Jim Allchin, Microsoft's Group Vice President for Platforms, for dinner along with a group of other bloggers and technologists and discuss the future development of Longhorn as well as see an early demo of the Longhorn technology firsthand.
Exhibit C: From A comment on Slashdot by Thomas Hawk about the dinner
I do feel that there is room in the world of journalism for hard news, op/ed and yes, openly biased writing where the blogger places himself or herself as a participant in the news itself.
Was I thrilled to be having dinner with Allchin? Of course. I'm a huge Microsoft enthusiast. I have been an advocate of the digital home for many years and I think that Microsoft may represent our best chance possible of making the digital home of the future a reality.
Was I really enthused about Longhorn? Absolutely. From what I saw it really was amazing. I spend hundreds of hours every year organizing digital media in front of all five of my Windows PCs. The technology that I saw will save me hundreds of hours of work going forward. This is really exciting to me at a personal level.
I think this marketing message for the next release of Windows is broken, especially for someone who's been using what appears to be a not "just working" operating system since Windows 2.0 :-(
The Shakespearean tale of Macbeth also comes to mind, as depicted in the excerpt below:
".... Macbeth goes to visit the witches in their cavern. There, they show him a sequence of demons and spirits who present him with further prophecies: he must beware of Macduff, a Scottish nobleman who opposed Macbeth's accession to the throne; he is incapable of being harmed by any man born of woman; and he will be safe until Birnam Wood comes to Dunsinane Castle. "
Having used all the major operating systems on a serious basis for a number of years, in a variety of modes (user, developer, and administrator), I have always felt that a RISC-based UNIX operating system (of BSD genealogical extraction), if somehow combined with a user interface superior to Windows, would ultimately unravel the Windows Desktop Monopoly. That operating system exists today in the form of Mac OS X (its latest Tiger release simply kicks the differential up a notch).
Back to the Macbeth correlation:
"Birnam Woods coming to Dunsinane" is the metaphoric equivalent of desktop users and first-time computer users being forced (by the scourge of viruses and spyware) to re-evaluate Windows as the only choice for productive desktop computing. What would you recommend to "Aunt Milly" when she tells you she wants to get on the Internet? Especially if "Aunt Milly" isn't living with you?
"Man not born of a woman" is no different from saying: UNIX with a superior user interface to Windows!
I don't think you need me to tell you who plays the characters of Macbeth and Macduff in this drama :-)
The Windows security vulnerabilities quagmire (google juice on this phrase is currently 6,620 pages) has basically created an inflection of monumental proportions adversely affecting Windows and creating great visibility and evaluation building opportunities for Mac OS X ("once users experience a Mac they don't come back to Windows!").
Paul Murphy of cio-today.com has also written a great article that sheds light on the often overlooked hardware aspect of the security problem for Windows. Here is a poignant excerpt:
Software and Hardware Vulnerabilities
At present, attacks on Microsoft's Windows products are generally drawn from a different population of possible attacks than those on Unix variants such as BSD, Linux and Solaris. From a practical perspective, the key difference is that attacks on Wintel tend to have two parts: A software vulnerability is exploited to give a remote attacker access to the x86 hardware and that access is then used to gain control of the machine.
In contrast, attacks on Unix generally require some form of initial legal access to the machine and focus on finding software ways to upgrade privileges illegally.
Consider, for example, CAN-2004-1134 in the NIST vulnerabilities database:
Summary: Buffer overflow in the Microsoft W3Who ISAPI (w3who.dll) allows remote attackers to cause a denial of service and possibly execute arbitrary code via a long query string.
Published Before: 1/10/2005
Severity: High
The vulnerability exists in Microsoft's code, but the exploit depends on the rigid stack-order execution and limited page protection inherent in the x86 architecture. If Windows ran on RISC, that vulnerability would still exist, but it would be a non-issue because the exploit opportunity would be more theoretical than practical.
Linux and open-source applications are thought to have far fewer software vulnerabilities than Microsoft's products, but Linux on Intel is susceptible to the same kind of attacks as those now predominantly affecting Wintel users. For real long-term security improvements, therefore, the right answer is to look at Linux, or any other Unix, on non-x86 hardware.
One such option is provided by Apple's BSD-based products on the PowerPC-derived G4 and G5 CPUs. Linus Torvalds, for example, apparently now runs Linux on a Mac G5 and there are several Linux distributions for this hardware -- all of which are immune to the typical x86-oriented exploit.
This may even be the nullifier of that age-old argument about porting Mac OS X to x86 in order to broaden its adoption potential?
Mac OS X is certainly a breath of fresh air for anyone who needs to simply get stuff done with their desktops and notebooks.
Why Is Every Information Leak Worse Than Originally Thought? While there have been an incredible number of stories about data leaks over the past couple of months, one interesting thing is that in so many cases, the companies involved later come out and admit that the problem was much worse than they first admitted. That happened with ChoicePoint and LexisNexis, who both had to come out a second time and admit that the original data breach they discussed wasn't as limited as they had believed. The latest is that the DSW Shoe Warehouse database that was stolen included information (including credit cards) on many, many more people than originally stated. So rather than 100,000 credit cards out there, we're talking 1.4 million. What's unclear, however, is why this is happening. Is it that these companies are so clueless and unable to manage their own data that they don't realize how badly they've leaked data until they do further investigations? Or is it that the companies are still trying to hide the nature of the losses until later (maybe spreading them out a bit)? Either way, you'll notice that no one ever seems to correct the damages in the other direction...
BTW - I took the time to update my public blog-he-roll and new blog-her-roll; both being tiny snapshots of my actual blog subscription collection, which by the way, is actually so large and diverse that it's part of an internal project covering distributed XQuery and scalability :-)
The Skype Economy Do you have a product or a platform? More and more companies are recognizing that the real route to success is not to offer a product, but a platform on which other products are offered. With that in mind, we're seeing more and more products that are building up strong and active development communities that make their initial offering more useful and valuable to buyers. Recently there have been articles about the ecosystem of companies who provide enhancements for the iPod, and now some are recognizing that Skype is moving into similar territory. Of course, the risk for companies or developers who build on these newer platforms is that they're totally beholden to the provider -- and that puts them at risk. They have no control over the environment they're working in. Skype could decide to build the same functionality themselves. Or, other products could become more popular than Skype. Sometimes it works... but many companies don't realize the danger of putting all their eggs in one basket. If they pick the right platform, it can be lucrative for a while, but it's not always easy to know who's going to win. [via Techdirt]
The Internet Archive initiative is building up an amazing collection of content that includes this "must watch" movie about the somewhat forgotten HyperCard development environment.
As I watched the HyperCard movie I obtained clear reassurance that my vision of Web 2.0 as critical infrastructure for a future Semantic Web isn't unfounded. The solution-building methodology espoused by HyperCard is exactly how Semantic Web applications will be built, and this will be done by orchestrating the componentry of Web 2.0.
When watching this clip make the following mental adjustments:
Web 2.0 is a reflection of the web taking its first major step out of the technology stone age (certainly the case relative to the HyperCard movie and "pre-web" application development in general).
I absolutely understand the frustration expressed in Dare's post. An additional comment from my perspective is that this devolution has been in motion for a while and it is an integral part of the Misinformation and Disinformation based marketing strategies of many companies.
Misinformation and Disinformation only work when the target audience is apathetic (unfortunately the sad reality to date!). The bad news for marketing strategies that assume perpetuation of the aforementioned apathy is that the Internet is fundamentally reducing the cost of knowledge acquisition; by implication today's naive customer is tomorrow's knowledgeable decision maker. Vendors have a choice: build valuable products, and then market these products by disseminating knowledge. If a competitor's product is better than yours, get back to the labs (developers are actually stimulated and motivated by constructive challenges; especially as any developer worth his or her salt intrinsically believes they are the best at their craft deep down; and so they should!).
In the imminent future (Internet time) I expect to see the Wikisphere, Blogosphere, and other Web 2.0 (and beyond) realms bring clarity to the futility of Misinformation and Disinformation based marketing and PR (see my post about the Wikipedia induced inflection on Marketing and PR ).
BTW -- Does anyone know what's the difference between an ESB and a Universal Server? Likewise, the difference between a Virtual Database and an EII solution?
What You'll Wish You'd Known: Paul's advice to high school students.
It finally dawned on me what OpenSearch does. Basically you tell it about different search engines by showing it how to query something in each, and get back an RSS return. Then when you search for some term, say foo+bar, it performs the search in all the engines you have configured it for. So it's a way to group a bunch of search engines together and command them all to look for the same thing. It is clever. It is something that hasn't been done before, to my knowledge. That's the good news. The bad news is that Amazon is a leading patent abuser. So as good as this idea is, it's bad for all the rest of us, unless they tell us that they're granting us some kind of license to use the idea. [via Scripting News]
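The aggregation idea can be sketched as follows; the engine names and URL templates are hypothetical, but the {searchTerms} placeholder mirrors the OpenSearch description format:

```python
from urllib.parse import quote_plus

# Hypothetical engine registry: each entry is an OpenSearch-style URL
# template with a {searchTerms} placeholder (the engines are invented).
ENGINES = {
    "engineA": "http://a.example.com/search?q={searchTerms}&format=rss",
    "engineB": "http://b.example.com/rss?query={searchTerms}",
}

def build_queries(term):
    """Expand one search term into a query URL for every registered engine."""
    encoded = quote_plus(term)
    return {name: template.replace("{searchTerms}", encoded)
            for name, template in ENGINES.items()}

urls = build_queries("foo bar")
```

An aggregator would then fetch each URL, parse the RSS responses, and merge the result items into a single list, which is exactly the "command them all to look for the same thing" behavior described above.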
Over at BB, Cory posts on Mark Pilgrim's hack "Butler" which strips out most Google ads, removes copying restrictions in Google Print, adds alternative search results to nearly every Google service, and generally does things which I can only imagine will give big G fits. It is still in geek stage - it requires "Greasemonkey" and Firefox - but man, it sure sounds like fun.
When putting together a post yesterday about "Virtualization", I instinctively looked to Gurunet's "answers.com" service for information on the subject: Enterprise Information Integration (EII). Lo and behold! Here is what I found at the tail end of the answers.com article on this subject:
Now, I knew this was Wikipedia content repurposed by "answers.com", and I proceeded to clean up the article. The wikified article took a while to complete, because, true to the "Wikipedia" ethos, I had to contribute knowledge as opposed to the original weenie marketing gunk. It's naturally easier to cut and paste marketing fluff for a misguided quick-win attempt than it is to embed links, add knowledge, and discern Wiki Markup (but "Wiki" don't play that!).
This little exercise has broader implications for marketing as a whole, especially for the IT sector. The end of days for "Misinformation based Marketing" is nigh! Wikis, Blogs, Search Engines, Web Services, and Social Networking are rapidly destroying the historically prohibitive costs associated with customer pursuit of facts.
I am very confident that product quality will soon overshadow market share as the key determinant of product selection on the part of customers (this is no longer a pipe dream!). I also have increased hope that IT product development and associated product marketing by technology vendors will veer in the same direction.
The article discusses most of the key issues, but it should also have included and discussed the following question: "should Microsoft benefit from the mess that we let them create?". By "we" I mean the extensive pool of Microsoft product consumers, developers, partners, etc.
I have worked with Microsoft products (as a developer and user) for more years than I would like to remember; I have personally experienced the journey from Windows 2.0 to Windows XP (and played around with Longhorn).
I added my question to this dialog because without its resultant perspective, history will simply repeat itself. If IT technology decision makers don't change their product selection and acquisition habits, then why should Microsoft or any other vendor change their ways? Especially when a perpetual promise, under-deliver, re-promise cycle works absolutely fine. This isn't rocket science; it's basic common sense (but we know that common sense ain't that common).
Microsoft, like most software companies, seeks significant portions of its revenue growth from product upgrades. In a sense, this inherently implies that these products will always be millions of miles away from the "silver bullet" promises espoused in the pre-release marketing and PR hype. Sadly, there was a time when Marketing and PR hype used to be about new features; a time when there was a clear line between a new feature and a fundamental product bug.
Buying products from any company simply because they have the largest market share is dumb! All it does is encourage other vendors to focus on product market share rather than product quality, which ultimately results in the following:
Microsoft isn't a unique source of this problem, but hey! They are the largest software company (the one with the vital market share), their products are on some 80-90% of desktops on this planet, the planet isn't at its most productive at the current time, and no matter how you look at it, this loss of productivity has something to do with the increased nuisance of desktop computing.
If Microsoft could just focus on its core competence (BTW - I can't quite pinpoint this anymore since they are in every software market that exists today), it would at least have an iota of a chance in hell of cleaning up this mess.
Speaking of the Mac A little humor for the day, from one of my fav sites.
The Information Machine Check out this charming movie from the late '50s, developed for the IBM Pavilion at the 1958 World's Fair in Brussels.
It's been a while since I've seen punched cards (which reminds me, I still have the first program I ever wrote, on punched cards, for the IBM 1130).
Google Pollutes Links Stream With Evil Precedent For Market Censorship
AMD set to detail multi-OS plan Will its "Pacifica" virtualization technology be compatible with Intel's? If not, that's a potential headache for some software makers.
Udell to event promoters on leveraging folksonomy: 'Pick a tag' I'm now trying to figure out why InfoWorld's Jon Udell is a journalist and not a millionaire technologist (or maybe he is). Udell keeps coming up with one brilliant idea after another. The first of these -- which I thought was just plain obvious -- was Udell's idea for vendors ...
I do know Jon (albeit primarily via emails and phone interviews), he even put me forward for an innovators award in 2003 re. Virtuoso etc.
Great Business Strategy or Dumb Luck Interesting read here today at ZDNet -- Open Solaris and strategic consequences. Here's a bit of the conclusion:
Friendster befriends blogs--and fees Two Web trends converge as the social networking site prepares to launch blogs through partnership with Six Apart.
The coming crackdown on blogging Federal Election Commissioner Bradley Smith says that the freewheeling days of political expression on the Internet may be about to end.
Today is one of those days where one topic appears to be on the mind of many across cyberspace. You guessed right! It's that Web 2.0 thing again.
Paul Bausch brings Yahoo!'s most recent Web 2.0 contribution to our broader attention in this excerpt from his O'Reilly Network article:
I browse news, check stock prices, and get movie times with Yahoo! Even though I interact with Yahoo! technology on a regular basis, I've never thought of Yahoo! as a technology company. Now that Yahoo! has released a Web Services interface, my perception of them is changing. Suddenly having programmatic access to a good portion of their data has me seeing Yahoo! through the eyes of a developer rather than a user.
The great thing about this move by Yahoo! is twofold (IMHO):
The great thing about the platform-oriented Web 2.0 is the ability to syndicate your value proposition (aka products and services) instead of pursuing fallible email campaigns. It enables the auto-discovery of products and services by user agents (the content aspect). Web 2.0 also provides an infrastructure for user agents to enter into consumptive interactions with discrete or composite Web Services via published endpoints exposed by a platform (the execution aspect).
A scenario example:
You can obtain RSS feeds (electronic product catalogs) from Amazon today, although you have to explicitly locate these catalog-feeds since Amazon doesn't exploit feed auto-discovery within their domain.
If you use Firefox or another auto-discovery-supporting RSS/Atom/RDF user agent, visit this URL; Firefox users should simply click on the little orange icon at the bottom right of the browser's window to see its RSS feed auto-discovery in action.
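Feed auto-discovery itself is simple: the user agent scans a page's head for link elements with rel="alternate" and a feed MIME type. A minimal sketch in Python (the sample page is invented):

```python
from html.parser import HTMLParser

class FeedLinkFinder(HTMLParser):
    """Collect feed URLs advertised via <link rel="alternate">, the
    mechanism behind the browser's orange feed icon."""
    FEED_TYPES = {"application/rss+xml", "application/atom+xml"}

    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if (tag == "link"
                and a.get("rel", "").lower() == "alternate"
                and a.get("type") in self.FEED_TYPES):
            self.feeds.append(a.get("href"))

# Invented catalog page standing in for an Amazon-style page.
PAGE = """<html><head>
<link rel="alternate" type="application/rss+xml"
      href="http://example.com/rss.xml">
</head><body>catalog</body></html>"""

finder = FeedLinkFinder()
finder.feed(PAGE)
```

Publishing such a link element in a domain's pages is all it would take for feeds to be auto-discovered rather than explicitly located.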
Anyway, once you have the feeds, the next step is execution-endpoint discovery within the Amazon domain (the conduits to Amazon's order processing system in this example). At the current time there isn't broad standardization of Web Services auto-discovery, but it's certainly coming; WSIL is a potential front runner for small-scale discovery, while UDDI provides a heavier-duty equivalent for larger-scale tasks that include discovery and other related functionality realms.
Back to the example trail: by having the RSS/Atom/RDF feed data within the confines of a user agent (an Internet Application to be precise), nothing stops the extraction of key purchasing data from these feeds, plus your consumer data, en route to assembling an execution message (as prescribed by the schema of the service in question) for Amazon's order processing / shopping cart service. All of this happens without ever seeing/eye-balling the Amazon site (a prerequisite of Web 1.0, hence the dated term: Web Site).
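A rough sketch of that extraction-and-assembly step in Python; the price element, order schema, and customer ID are all hypothetical, since a real service's schema would prescribe its own message shape:

```python
import xml.etree.ElementTree as ET

# Invented catalog feed item; a real Amazon feed would carry its own
# element names, and <price> here is purely hypothetical.
ITEM = """<item>
<title>Some Book</title>
<link>http://catalog.example.com/item/B000123</link>
<price>19.99</price>
</item>"""

def build_order(item_xml, customer_id, qty):
    """Pull purchasing data out of a feed item and combine it with
    consumer data into a (hypothetical) order-processing message."""
    item = ET.fromstring(item_xml)
    order = ET.Element("order")
    ET.SubElement(order, "customer").text = customer_id
    ET.SubElement(order, "product").text = item.findtext("link")
    ET.SubElement(order, "quantity").text = str(qty)
    return ET.tostring(order, encoding="unicode")

msg = build_order(ITEM, "cust-42", 2)
```

The resulting message would then be posted to the discovered execution endpoint, completing a purchase without a human ever rendering the merchant's pages.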
To summarize: Web 2.0 enables you to syndicate your value proposition and then have it consumed via Web Services, leveraging computer, as opposed to human, interaction cycles. This is how I believe Web 2.0 will ultimately impact the growth rates (in most cases exponentially) of those companies that comprehend its potential.
]]>Payroll hole exposes dozens of companies Flaw in PayMaxx Web site exposed the financial information of customers' workers, the payroll-services firm acknowledges.
It is clear that in comparison to the Web of the last century, the nature of data on the Web later in this decade will be very different in the following aspects:
- Volume of data is growing by orders of magnitude every year.
- Multimedia and sensor data are becoming more and more common.
- Spatio-temporal attributes of data are important.
- Different data sources provide information to form the holistic picture.
- Users are not concerned with the location of a data source, as long as its quality and credibility are assured. They want to know the result of the data assimilation (the big picture of the event).
- Real-time data processing is the only way to extract meaningful information.
- Exploration, not querying, is the predominant mode of interaction, which makes context and state critical.
- The user is interested in experience and information, independent of the medium and the source.
Effectively, the nature of the knowledge on the Web is changing very fast. It used to be mostly static text documents; now it will be a combination of live and static multimedia, including text, data and documents with spatio-temporal attributes. Considering these changes, can the search engines developed for static text documents deal with the needs of the Web? [via E M E R G I C . o r g]
No, but this doesn't render them useless since we wouldn't be at this point without the likes of Google, Yahoo! et al. But building upon the data substrate that web data oriented search engines provide is where the next batch of Information access and Knowledge discovery solutions will carve out their space. The symbiotic relationship between Google (data) and Gurunet's Answers.com (Information and Knowledge) is one interesting example.
The Web is a distributed collection of databases that implement a variety of data storage models but are commonly accessible via protocols that rely on HTTP for transport (in-bound and out-bound message) services. These databases are increasingly using well-formed XML for query-result (data contextualization) persistence and URIs for permanent reference. "What database?" you might ask. "What you once called your Web Site, Blog, Wiki, etc.," my timeless reply.
When you have the database that I describe above, and a collection of entry points from which discrete or composite Web Services can be invoked available from one or more internet domains, you end up with what I prefer to call "Web 2.0" presence, or what Richard McManus describes as: "The Web as a Platform".
Here is a collection of posts I have made in the past relating to Web 2.0, note that this list is dynamic since this blog is Virtuoso based (predictably):
Free Text Search with XHTML results page (with Virtuoso generated URIs for RSS, Atom, and RDF): http://www.openlinksw.com/blog/search.vspx?blogid=127&q=web+2.0&type=text&output=html
It's also no secret that I believe that Virtuoso is a bleeding-edge Web 2.0 technology platform (and more...). The URIs that I am exposing provide the foundation layer for other complementary Web initiatives such as the Semantic Web (Web 2.0 provides infrastructure for the Semantic Web, as time will show). They are also completely usable outside the realm of this blog.
BTW - Jon Udell is writing, experimenting with, and demonstrating similar concepts across feeds within his Web 2.0 domain.
These are indeed fun times!
Fred Wilson writes:
I was talking to an entrepreneur today and advised him not to surrender to "analysis paralysis". It's tempting to want to analyze every option and figure out exactly the best approach before jumping in.
But it's the wrong way to go in most cases.
As a contrast, I attended a board meeting today where the CEO presented the board with a post-mortem on some decisions he made that turned out to be suboptimal. That was a stand up thing to do and the board appreciated it. But I am not sure that the CEO in question did the wrong thing.
Because I believe that Teddy Roosevelt (one of my favorite Presidents) had it right when he said: "In any moment of decision the best thing you can do is the right thing, the next best thing is the wrong thing, and the worst thing you can do is nothing."
I think action and risk taking is what separates great entrepreneurs from the pack. I am not advocating blind risk taking, but I am advocating making a decision based on less than perfect information and going for it. More often than not, you will be rewarded for doing that.
Have RSS feeds killed the email star? silicon.com Feb 28 2005 12:58PM GMT
DB2 users of PeopleSoft and IBM (the DB2 developer and vendor) suspect that Oracle will obviously try to use its ownership of PeopleSoft to covertly coerce DB2 users into becoming Oracle DBMS users. This strategy would take the form of new features and fixes discrimination as somewhat echoed in these excerpts:
"..In the crescendo surrounding the Oracle-PeopleSoft merger, one question has been repeatedly drowned out: What happens to users of PeopleSoft's DB2 database? Oracle chief Larry Ellison has repeatedly assured DB2 users--and IBM--that Oracle will continue to support DB2 and PeopleSoft's interfaces to IBM's WebSphere platform. But IBM isn't taking any chances, announcing an initiative to alter DB2 to work with products from Oracle rival SAP."
"..IBM has good reason to be concerned. Oracle vies with SAP as the leading vendor for enterprise applications, but it's under pressure to show concrete benefits from the merger by combining assets and pumping up revenue. One obvious tactic will be to use the PeopleSoft applications to steer enterprise customers toward the Oracle database by optimizing performance and features toward the Oracle back end."
If PeopleSoft's application core was ODBC-based, the vulnerability to this predictable competitive tactic would at the very least be significantly alleviated. DB2 end-users and IBM, the product vendor, would have a much stronger basis for countering Oracle by taking them to task about their claimed inability to implement new application functionality enhancements against DB2, especially as this would have morphed into a generic database issue as opposed to a DB2-specific issue -- by virtue of the application and data access layer separation provided by ODBC's architecture.
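The separation being argued for can be illustrated with Python's DB-API, which plays a role analogous to ODBC: application logic is written against a generic interface, and the concrete driver is supplied from outside. The sqlite3 driver below is just a stand-in; the same function could be handed a connection factory for any compliant driver:

```python
import sqlite3

def count_employees(connect):
    """Application logic written purely against the generic DB-API
    interface; the concrete driver is injected from outside, mirroring
    the application / data-access separation that ODBC provides."""
    conn = connect()
    try:
        cur = conn.cursor()
        cur.execute("CREATE TABLE employees (name TEXT)")
        cur.execute("INSERT INTO employees VALUES ('Ada')")
        cur.execute("SELECT COUNT(*) FROM employees")
        return cur.fetchone()[0]
    finally:
        conn.close()

# sqlite3 is a stand-in here; any DB-API connection factory would do.
n = count_employees(lambda: sqlite3.connect(":memory:"))
```

Because the application layer never names the backend, claims that a feature "can't be implemented against database X" become harder to sustain, which is the point being made about PeopleSoft and DB2.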
Anyway, back to cognitive dissonance. Could this be the reason for the following?
And more...
As indicated in an earlier post: IBM is clearly validating what we have done with Virtuoso (as was the case initially with their Virtual / Federated DBMS initiative ala DB2 Integrator). Here is an excerpt from today's eWeek article supporting this position:
To achieve maximum XML performance, bolstered indexing attributes in the technology will enable advanced search functions and a higher degree of filtering. IBM is also adding support for XPath and XQuery data models. This will allow users to create views that involve SQL and XQuery by sending the protocol through DB2's query optimizer for a unified query plan.
Virtuoso has been doing this since 2000; unfortunately a lot of
Amazon's Invisible Innovations Fortune Nov 11 2004 9:42PM GMT
The other day I was
Today was a very good day Busy, busy, busy. To start things off, the SEC filing for my purchase of shares in Mamma.com hit the tape.
I think mamma.com has that potential. It's not Google or Yahoo, nor will it be a top 5 search engine anytime soon. But it is a good metasearch tool that I use and have used. Google and Yahoo have become carbon copies of each other, and for me, other than usenet and news searches, it's too big. I like the way Mamma.com organizes websearches, and I use it for picture searches. I'm not going to make a big investment in a company just because I use its product. I invested in the company because it generates cash. I'm not into PE ratios, Price to Sales, etc., etc. I'm into good ole fashioned cash.
The company has a simple business proposition: sell its web traffic and keep expenses very low. As long as it can continue to grow its traffic and keep costs down, it will do what I expect of it -- put money in the bank at a rate of 15 pct or more of sales.
Hopefully, I will be able to help it along by cross-promoting it with other businesses I have, and providing technical and marketing support for their management team. Nothing in the business world is a sure thing, and please don't invest in this company because I did, but I obviously like the company's prospects.
[via Blog Maverick]
The search engine war between Google and MSN is generating some nasty tactics reminiscent of the Microsoft vs. Netscape battle of the mid-'90s. Those who remember that battle will recall the almost surgical methods used by Microsoft to all but destroy Netscape. Today, Netscape is a shell of its former self, kept in a dull corner of the Time Warner empire and denied the attention or funding it needs to reemerge as a viable entity in the browser market. Many will also remember that the tactics used by Microsoft to destroy Netscape generated years of anti-trust litigation and almost led to the break-up of the world's richest corporation and largest software maker. At the end of the day of course, Microsoft got off with a wrist slap and the knowledge that the US Government will not kill a goose that lays golden eggs (and whose products run much of the national infrastructure). Microsoft is obviously feeling free to resort to some of its old tricks, and the search engine wars are about to go mainstream, possibly becoming public entertainment. Remember the film, Pirates of Silicon Valley? This script promises to be even more interesting.
Search is the fastest growing sector of the Internet and the advertising industry. Currently considered a $2-2.5 billion industry, industry experts expect search and search technology to generate over $8 billion per annum by 2007. As a yardstick to measure by, the logging industry in British Columbia is valued at approximately $5 billion per year. Search, in other words, is a serious global business that is projected to generate staggering revenues and growth over the next half-decade. That much money tends to generate a great deal of motivation.
According to yesterday's New York Times, Microsoft has officially turned its great eye on Google and is specifically targeting Google and its employees. Microsoft recruiters are said to be calling Google staff at home, telling them that MSN's new search tool will bury Google and that they had better defect north to Redmond, Washington as soon as possible before their jobs and soon-to-be stock options are worthless. Executives from both companies were seen watching each other like hawks at last week's World Economic Forum in Davos, Switzerland. Wherever a Google representative went, an MSN exec was steps behind, and vice versa. Meanwhile, back in the United States, Microsoft employees are examining Google patents looking for potential weaknesses to exploit. Microsoft is obviously playing for keeps and appears to be preparing to head off the inevitable legal battles that will stem from the introduction of Microsoft's new operating system, Longhorn, currently in development and scheduled for release early next year.
Okay, it turns out that I was less wrong than I thought a little while ago. I'd like to quote an article on Instant Messaging Planet here:
"Since 1999, when AOL served 100 percent of IM users, AOL confronted two major new IM entrants, Yahoo! and Microsoft, as well as numerous smaller entrants," the application continues, citing figures from industry researcher Media Metrix, now part of comScore Networks. "As a result, AOL has experienced a substantial decline in its IM share. Its share of unduplicated, all-location users has fallen from 100 percent to 58.5 percent in just three and one-half years."
There we have it. AOL is a bit over half the IM market. That means Yahoo and Microsoft probably have something close to 25% each. Those numbers are from April 2003, so it's anybody's guess as to which direction they've gone since then.
Thanks to Jim for the pointer to newer stats.
Update: He also IM'd me a CNet article from August which says:
Although AOL's AIM and ICQ together make up the largest IM network, MSN and Yahoo are making strides. In March 2003, AIM had 31.9 million unique users while ICQ had 28.3 million, according to ComScore Media Metrix. MSN Messenger reached 23.1 million unique users while Yahoo Messenger reached 19 million. Both Microsoft and Yahoo launched IM clients with virtually zero market share.
So there we go. It's really a four horse race.
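As a rough sanity check, the March 2003 unique-user counts quoted above can be turned into approximate shares. This is a back-of-the-envelope sketch only: overlap between networks is ignored, which is exactly the "unduplicated" caveat the Media Metrix quote raises.

```python
# Unique users in millions, from the ComScore Media Metrix figures
# quoted above. Users on more than one network are counted twice here,
# so these are rough shares, not the "unduplicated" numbers.
users = {"AIM": 31.9, "ICQ": 28.3, "MSN": 23.1, "Yahoo": 19.0}
total = sum(users.values())  # 102.3 million

# Percentage share of each network, to one decimal place.
shares = {name: round(100 * n / total, 1) for name, n in users.items()}

# AOL's two networks combined.
aol_combined = round(100 * (users["AIM"] + users["ICQ"]) / total, 1)
```

Interestingly, `aol_combined` comes out at about 58.8%, right in line with the 58.5% AOL share quoted earlier, while MSN and Yahoo land closer to 20% each than 25%.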
Another Update: Based on the international feedback rolling in, it would seem that the "A" in "AOL" really does mean America. The Microsoft Monopoly is indeed strong overseas. Interesting.
Planet RDF is an aggregate of the weblogs of software developers in and around the semantic web community. We hope both to take advantage of the community that exists, and also to foster more collaboration between independent developers.
Although by nature not always 100% focused on semantic web content, it provides a great snapshot of the work being done and new web sites of interest to those working on the semantic web.
The participant weblogs are sourced from Dave Beckett's Semantic Web bloggers list, http://journal.dajobe.org/journal/2003/07/semblogs/ , with a bit of additional editorial control to keep the web site focused loosely on topic. Send mail to Dave, dave.beckett@bristol.ac.uk, if you think you have a blog (with a valid RSS 1.0 feed, naturally) that we'd be interested in, and we'll check it out.
For the technically curious: web standards are used as much as possible, and the usual eclectically invalid HTML input from weblogs has been cleaned up to be as near XHTML-valid as we could muster, both in the web page and the aggregated RDF, http://planetrdf.com/index.rdf
Planet RDF was developed by Matt Biddulph, Dave Beckett and Phil McCarthy.
XForms Freebie: First Eric van der Vlist made his RELAX NG book freely available, and now Micah Dubinko has done the same with XForms.
RELAX NG is a book in progress written by Eric van der Vlist for O'Reilly and submitted to an open review process. The result of this work will be freely available on the World Wide Web under a Free Documentation Licence (FDL).
The subject of this book, RELAX NG (http://relaxng.org), is an XML schema language developed by the OASIS RELAX NG Technical Committee and recently accepted as Draft International Standard 19757-2 by the Document Description and Processing Languages subcommittee (DSDL) of the ISO/IEC Joint Technical Committee 1 (ISO/IEC JTC 1/SC 34/WG 1).
[via Lost Boy]
I've been talking a lot about Mono.Security but until today I didn't realize that it was never officially introduced - at least in my blog.
The only existing introduction is Mono's Crypto status page - which, BTW, is a great place to learn what is and isn't in Mono's cryptography support.
<lazy-geek:copy-n-paste>
Rationale: This assembly provides the missing pieces of .NET security. On Windows, CryptoAPI is often used to provide much-needed functionality (like some cryptographic algorithms, code signing, and X.509 certificates). Mono, for platform independence, implements this functionality in 100% managed code.
</lazy-geek:copy-n-paste>
The most important piece of information is 100% managed code. This means that Mono.Security isn't tied to the Mono runtime and/or specific class library - you're free (really it's MIT X11 licensed) to use it on any runtime you choose.
Structures: System.Security.Cryptography.Pkcs (in .NET 1.2)
02 Dec 2003: Mono 0.29 has been released
This release took a long time to get out, but it is pretty exciting, with PPC support. The best Mono release ever! [via Monologue]
This time last year, Mono enabled us to deliver a release of Virtuoso that unveiled the power of .NET integration as a database extension mechanism on Windows and Linux along the following lines: User-Defined Types, User-Defined Functions, and Stored Procedures written in any .NET-bound language. It also enabled the deployment of ASP.NET applications on Linux, and on Windows without IIS. One item missing from my checklist at the time was a Virtuoso release for Mac OS X with identical functionality.
This announcement implies we are within striking distance of a Virtuoso 3.2 release that enables .NET classes and frameworks utilization (along the lines described above) on Mac OS X.
I hope other diagrams will be as clear as this, especially the ones relating to actual storage :-)
This further illuminates the content of my earlier post on this subject.
The Mono Roadmap and Mono Hackers Roadmap have been released.
Every year, as new hard disks get bigger and faster, applications catch up by producing more data. Hard disks are commonly used to store personal information: correspondence, personal contacts, and work documents. These items are currently treated as separate entities, yet they are interrelated on some level; and it's no surprise that e-mail comes from your personal contacts list, influences the work that you should be doing, and hence determines the documents that you'll create. When you have a large number of items, it is important to have a flexible and efficient mechanism to search for particular items based on their properties and content. Up until now, storage mechanisms like Outlook
There is a new HOWTO document that addresses an area of frequent confusion on Mac OS X: how to build PHP with an ODBC data access layer binding (iODBC variant) using Mac OS X Frameworks as opposed to Darwin shared libraries.
NETWORK WORLD NEWSLETTER: MARK GIBBS ON WEB APPLICATIONS
Today's focus: A Virtuoso of a server
By Mark Gibbs
One of the bigger drags of Web applications development is that building a system of even modest complexity is a lot like herding cats - you need a database, an applications server, an XML engine, etc., etc. And as they all come from different vendors you are faced with solving the constellation of integration issues that inevitably arise.
If you are lucky, your integration results in a smoothly functioning system. If not, you have a lot of spare parts flying in loose formation with the risk of a crash and burn at any moment.
An alternative is to look for all of these features and services in a single package but you'll find few choices in this arena.
One that is available and looks very promising is OpenLink's Virtuoso (see links below).
Virtuoso is described as a cross-platform (runs on Windows, all Unix flavors, Linux, and Mac OS X) universal server that provides databases, XML services, a Web application server, and supporting services, all in a single package.
OpenLink's list of supported standards is impressive and includes .Net, Mono, J2EE, XML Web Services (Simple Object Access Protocol, Web Services Description Language, WS-Security, Universal Description, Discovery and Integration), XML, XPath, XQuery, XSL-T, WebDAV, HTTP, SMTP, LDAP, POP3, SQL-92, ODBC, JDBC and OLE-DB.
Virtuoso provides an HTTP-compliant Web Server; native XML document creation, storage and management; a Web services platform for creation, hosting and consumption of Web services; content replication and synchronization services; free text index server, mail delivery and storage and an NNTP server.
Another interesting feature is that with Virtuoso you can create Web services from existing SQL Stored Procedures, Java classes, C++ classes, and 'C' functions, as well as create dynamic XML documents from ODBC and JDBC data sources.
This is an enormous product and implies a serious commitment on the part of adopters due to its scope and range of services.
Virtuoso is enormous by virtue of its architectural ambitions, but actual disk requirements are
| Feed | Description |
|---|---|
| Virtuoso Documentation | Product documentation, available as a collection of RSS feeds (one per chapter) with a feed catalog in an OPML file. |
| Data Access Driver Suite Documentation (Multi-Tier and Single-Tier) | RSS feeds and OPML-based feed catalogs for both the Multi-Tier and Single-Tier drivers. |
| Virtuoso Tutorials & Online Demos | Online tutorials and live demos cataloged in an OPML file, with an RSS feed for each tutorial/demo. |
| Animated HOWTOs | RSS feeds for viewable feature and functionality walk-throughs covering UDA and Virtuoso. |
By Bryce Curtis and Jim Hsu, IBM developerWorks
Many portable devices let mobile users send and receive e-mail over a wireless network. These portable devices include Short Message Service (SMS)-enabled devices, two-way pagers, cellular phones with e-mail service, and portable networked laptops or Personal Digital Assistants (PDAs) with e-mail.
Although these devices can send and receive e-mail messages, they cannot yet access and run Web applications and Web services. The predominant Web application client is the browser. However, as these portable devices become increasingly popular, using their e-mail capabilities to access the growing number of Web services and Web applications becomes increasingly beneficial. In this article, we detail an e-mail user interface that can interact with a Web application in a manner similar to that of a Web browser. In the architecture we propose, the HTML model combines with e-mail technology by routing incoming e-mails to a Web application server.
http://www-106.ibm.com/developerworks/webservices/library/wi-email/
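The routing idea in the article above can be sketched in a few lines: an incoming e-mail is parsed, and its subject and body are mapped onto an HTTP request for the Web application server. This is a minimal illustration only; the gateway address, subject-as-endpoint convention, and field names are invented here, not taken from the IBM article.

```python
# Sketch of an e-mail-to-web-application gateway: the Subject selects
# the application endpoint, and "key=value" lines in the body become
# form fields of an HTTP POST. All names below are illustrative.
from email import message_from_string
from urllib.parse import urlencode

raw = """From: user@example.com
To: webapp-gateway@example.com
Subject: order/submit

item=widget
quantity=3
"""

msg = message_from_string(raw)

# The Subject line names the Web application endpoint to invoke.
path = "/" + msg["Subject"].strip()

# Each "key=value" body line becomes one form field.
fields = dict(
    line.split("=", 1)
    for line in msg.get_payload().splitlines()
    if "=" in line
)

# Form-encode the fields, ready to be POSTed to the application server.
post_body = urlencode(fields)
```

A reply e-mail would then be generated from the HTML the application returns, closing the loop the article describes.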
The thing that most surprised me today in the SoftEdge panel on Social Software was the reaction to RSS. I should be clear that I am an RSS true believer. It seems to me that metadata as a byproduct of social software engines (be it blogging or social networking or whatever) is not only enviable, it is inevitable. RSS and FOAF and other yet-to-be-determined social software data protocols will become standards because it simply makes good sense for them to be standardized. Anyone paying attention to the unbelievable development and adoption curve of wireless can appreciate the immense value driven by standards -- and, in particular, standards that are truly standard. So it came as a bit of a shock to me that when I questioned the panelists on the implications of RSS and the Semantic Web, they were less sold on the inevitability of it all.
When asked whether the proliferation of RSS and FOAF might make it possible for reader technology to be the next killer application in knowledge management, I got very strong reactions from both Reid Hoffman and Meg Hourihan. Reid stated that he did not believe that RSS was sufficiently robust to provide significant value at any level. Meg followed up with a general indictment of the semantic web, which she views merely as a geek utopia. I will admit that I'm a fan of Candide (particularly at the hands of Bernstein), but I hardly view myself as Pangloss. One need look no further than, for example, the tools that Oddpost has incorporated into its web email client to allow an integrated email and blog experience. Better yet, through a relatively simple web service, Oddpost can deliver an RSS feed of a particular Google News search so that you can keep track of keywords that are of interest to you without having to visit Google repeatedly to find out if your company or candidate or favorite band has been mentioned in today's news. The same is true of watch lists on Technorati. Rather than periodically check to see if someone has linked to your blog, Technorati will do the work for you and deliver the info to your inbox only when there is information to be delivered. These examples are just the tip of the iceberg, but they demonstrate the nascent power of RSS and related standards. I'll have to wait for another panel to have that argument with Reid and Meg.
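The watch-list idea described above is mechanically simple, which is part of why RSS is hard to bet against: a reader just scans feed items for a keyword. A toy sketch, with an inline, invented feed standing in for what a real reader would fetch over HTTP:

```python
# Minimal "watch list" over an RSS 2.0 feed: return the titles of
# items that mention a keyword. The feed content here is invented
# for illustration; a real aggregator would fetch the XML by URL.
import xml.etree.ElementTree as ET

FEED = """<rss version="2.0"><channel>
  <title>News search</title>
  <item><title>Virtuoso ships new release</title></item>
  <item><title>Weather today</title></item>
  <item><title>Virtuoso adds XQuery support</title></item>
</channel></rss>"""

def watch(feed_xml, keyword):
    """Case-insensitive keyword match over item titles."""
    root = ET.fromstring(feed_xml)
    return [
        item.findtext("title")
        for item in root.iter("item")
        if keyword.lower() in item.findtext("title", "").lower()
    ]

hits = watch(FEED, "virtuoso")
```

Run periodically against a Google News or Technorati feed URL, this is essentially the notify-only-on-new-information service the post describes.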
The MySQL-ODBC SDK enables you to make MySQL specific applications database independent via ODBC without wholesale re-writes of your MySQL specific application code. Thus, applications that are written directly to the MySQL Call Level Interface now end up being database independent via ODBC, and usable against any ODBC accessible database (including MySQL).
Why Is It Important?
The Open-Source community is rapidly producing innovative applications and in many cases these applications sit atop relational database management systems. Traditionally and historically, the tendency has been to look to MySQL as the default relational database service for Open Source Applications (the "M" in LAMP) which is unfortunately retrogressive since the concept of database independence has long been addressed industry wide via APIs such as ODBC, JDBC, OLE DB, and more recently ADO.NET.
In some cases the existence of these APIs was unknown to Open Source developers prior to application development, and in other cases the complexity of a port from the MySQL API to ODBC ends up being too difficult. There are numerous reasons why you can't mandate MySQL, or any other database engine for that matter, for every potential user of an Open Source database-centric application:
ODBC as a concept has always been designed to be database-independent; iODBC as an Open Source project was devised to ensure platform neutrality for ODBC (just as Mono is pursuing the same goal for .NET). When you write an application using the ODBC API, database interchangeability becomes a reality (the worst thing that can happen to you is a dysfunctional driver, which is replaceable). Read on...
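The interchangeability argument can be made concrete with a small sketch. Python's DB-API plays the role ODBC plays in the text, and an in-memory SQLite database stands in for MySQL or any other backend: code written against the common call-level interface works unchanged when the connection factory is swapped. This is an analogy, not ODBC itself; the table and data are invented for illustration.

```python
# Database independence via a common call-level interface: the query
# code below knows nothing about which engine it talks to. Swapping
# databases means swapping only the connect() callable passed in,
# exactly the portability ODBC provides at the C level.
import sqlite3

def fetch_names(connect):
    """Runs against ANY DB-API-style connection factory."""
    conn = connect()
    try:
        cur = conn.cursor()
        cur.execute("SELECT name FROM users ORDER BY name")
        return [row[0] for row in cur.fetchall()]
    finally:
        conn.close()

def sqlite_demo_connection():
    # Stand-in backend with a little sample data.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    conn.executemany("INSERT INTO users VALUES (?)", [("ada",), ("linus",)])
    return conn

names = fetch_names(sqlite_demo_connection)
```

Code written directly to the MySQL call-level interface, by contrast, bakes the engine choice into every call site, which is exactly the lock-in the SDK described above is meant to undo.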
This cannot be more true than in the case of universal data access (ODBC, JDBC, ADO.NET, and OLE-DB) and security. There is a recently published article on our web site that sheds light on how we have engineered our data access technology to enable our customers to enjoy secure and high-performance database connectivity when utilizing any of our Multi-Tier Database Connectivity drivers.
It is no secret that technologies such as ODBC, and to a fair degree JDBC, have generated a good share of undeserved criticism over the years in relation to their fundamental value propositions (providing transparent access from compliant applications to backend databases via separation of application and database connectivity APIs), and that one of the unfortunate offshoots of this negative press is the contradictory perception that these components are valueless (i.e., they are worth $0.00). Thus the emergence of the "free is good enough" syndrome, which is predicated on the misconception that data access drivers (data source connectivity API implementations) simply provide connectivity and that's it.
If you want to open up your organization (whatever your variation: internal, external, Internet, extranet, intranet, etc.) to the worst of all worlds (deliberate or inadvertent attacks on your data), then FREE is GOOD. Otherwise, when dealing with data access drivers you have to bear the following in mind (covered in detail in the data access security article):