Semantic Web: Difficulties with the Classic Approach

Alex Iskold has stimulated an interesting discussion at Read/WriteWeb on the somewhat disappointing progress over the past decade toward realizing the semantic web vision. At the heart of the problem, he argues, is the "bottom-up" approach of converting information on the web into RDF and OWL. He doesn't present an alternative in this post, but he promises a future article outlining a "top-down" model. I, for one, am on the edge of my seat.
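For readers unfamiliar with what that "bottom-up" conversion entails: RDF reduces a page's facts to subject-predicate-object triples, which an agent can then query. A minimal sketch of the idea in plain Python (all names, URIs, and values here are invented for illustration; real deployments would use an RDF library and SPARQL):

```python
# Toy illustration of "bottom-up" semantic data: facts extracted from
# web pages expressed as subject-predicate-object triples (RDF's core
# model). All identifiers and values are invented for this sketch.
triples = [
    ("hotel:seaside-inn", "rdf:type", "ex:Hotel"),
    ("hotel:seaside-inn", "ex:nightlyRate", 79),
    ("hotel:seaside-inn", "ex:childFriendly", True),
    ("hotel:grand-tower", "rdf:type", "ex:Hotel"),
    ("hotel:grand-tower", "ex:nightlyRate", 250),
    ("hotel:grand-tower", "ex:childFriendly", False),
]

def matching(triples, predicate, test):
    """Subjects whose value for `predicate` satisfies `test`."""
    return {s for s, p, o in triples if p == predicate and test(o)}

# A semantic agent's query: child-friendly hotels under $100/night.
cheap = matching(triples, "ex:nightlyRate", lambda rate: rate < 100)
kid_ok = matching(triples, "ex:childFriendly", lambda v: v is True)
print(sorted(cheap & kid_ok))  # → ['hotel:seaside-inn']
```

The hard part Iskold points to is not the querying, which is easy once the triples exist, but producing those triples from billions of unstructured pages in the first place.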

Dave said...

I take issue with the notion of the semantic web as a whole. Why turn to a solution built on difficult, unreliable automated high-level reasoning when the web community is already acting collectively to process this information?

Love it or hate it, "Web 2.0" is upon us, and there are enough people out there tagging pages and data in a *useful* way (del.icio.us, for example) that a move to some ridiculous metafile is simply too... well... meta!
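The collective-tagging idea Dave is gesturing at can be sketched in a few lines: aggregate the tags many users attach to a URL, and the most frequent ones serve as emergent, human-made metadata (the data below is invented for illustration):

```python
# Sketch of del.icio.us-style collective tagging: each bookmark is a
# (url, tags) pair from a different user; counting tag frequency
# surfaces the community's consensus description of the page.
from collections import Counter

bookmarks = [
    ("http://example.com/kids-travel", ["travel", "kids", "budget"]),
    ("http://example.com/kids-travel", ["travel", "family"]),
    ("http://example.com/kids-travel", ["kids", "travel"]),
]

tag_counts = Counter(tag for _, tags in bookmarks for tag in tags)
print(tag_counts.most_common(2))  # → [('travel', 3), ('kids', 2)]
```

No ontology or reasoning engine is involved; the "semantics" are whatever the crowd converges on, which is exactly the trade-off at issue in this thread.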

Consider the motivating example from the article, where one would "ask a computer to find you a low budget vacation, keeping in mind that you have a 3 year old child." I may wish to tailor the results in ways I know ("I want a room with a balcony"), or -- and here's the difficult part -- in ways I don't yet know. Only after I start looking around, digging through a few pages (googling "traveling with children" is surely a good start!), will I start to even see what my options are, or which sources of information I agree with. I do not see how a semantic web agent would help with this, and if I already knew these things, then I probably wouldn't have needed the semantic agent in the first place.

Long story short: isn't collective action enough??

Ken said...

Good point, Dave. And I suspect for most public "web" information, you're likely right that collective tagging is more than sufficient.

But in terms of the larger issue of information glut, there are still significant areas where content has not been made widely available, such as legal support and law enforcement (e.g., evidentiary corpora), or national intelligence.

The solution space for these areas is usually not called a "semantic web," but the underlying problem is the same: to develop (semi-)automated tools capable of helping humans (experts and non-experts alike) filter "meaning" from an ocean of content.

There have been proposals to use "Web 2.0" tools within the intelligence community, but I suspect that community's small size, relative to the vast data it monitors, will diminish the effectiveness of such tools.