Tuesday, March 18, 2008

The Semantic Web and the evolution of man and machine



Nova Spivack, www.radarnetworks.com



The Semantic Web is the next evolution of the Web and it’s important. Whoever you are, it will affect you. It is no less than the evolution of humankind in tandem with technology, moving towards the hybridisation of man and machine. If you’re not interested, or excited, or scared, you’re not listening. The Web is becoming part of who you are.

The Web has a dual role of connecting information and connecting people and, as Nova Spivack's chart above shows, the Semantic Web represents a higher order of both information connectivity and social connectivity.

There are a number of definitions of the Semantic Web – ironically, having 'Semantic' in the title does little to aid understanding of the term. Even the title is a work in progress: it's also known as Web 3.0 or Web 3G.

But don't let that put you off. It's a thorny, difficult, contentious - and wildly exciting - topic. And it's already happening, as the Web evolves into a more conscious, intelligent entity, organising information and helping people understand things more easily.

Early examples of Semantic Web applications and services, already available, include Twine, which learns about you as you use it and automatically tags content that interests you, letting you organise, share and discover relevant material more effectively. TripIt, the personal travel organiser, automatically generates a customised travel guide for you when you send it your itinerary.

On Friday, I attended an interesting seminar, run by AIMIA, on the Semantic Web. No forum on the Semantic Web seems complete without a quote from Tim Berners-Lee, creator of the World Wide Web and Yoda of the World Wide Web Consortium (W3C):

“I have a dream for the Web [in which computers] become capable of analysing all the data on the Web – the content, links, and transactions between people and computers. A ‘Semantic Web’, which should make this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines. The ‘intelligent agents’ people have touted for ages will finally materialize.” (1999)

Speaking at the seminar, Jennifer Wilson, Principal of Lean Forward, described the Semantic Web as a 'Context Consciousness’, which builds on the Communication of Web 1.0 and the Conversation of Web 2.0. This suggests an interpretative intelligence, which links information together.

To illustrate this, Jennifer cited Tim Berners-Lee's example of the Semantic Web helping people interpret their credit card statements by automatically overlaying their calendar data, so they know where they were when the transactions took place. It could also overlay photos from Flickr, so people can place themselves visually. This means they can easily spot any fraudulent transactions. (Unfortunately, they're also reminded of shopping sprees and various other illicit activities.)
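
As a rough illustration of that overlay idea (not how any real bank or calendar service works), here is a minimal Python sketch with made-up transaction and calendar records: each transaction is matched to whatever calendar event was in progress at the time.

    from datetime import datetime

    # Hypothetical data: in a Semantic Web scenario these would come from
    # linked sources (bank, calendar, Flickr) rather than hard-coded lists.
    transactions = [
        {"merchant": "Cafe Sydney", "amount": 42.50, "time": datetime(2008, 3, 14, 13, 5)},
        {"merchant": "Unknown Store", "amount": 310.00, "time": datetime(2008, 3, 15, 2, 40)},
    ]

    calendar = [
        {"event": "Lunch with Jennifer", "start": datetime(2008, 3, 14, 12, 30),
         "end": datetime(2008, 3, 14, 14, 0), "location": "Circular Quay"},
    ]

    def overlay(transactions, calendar):
        """Attach the calendar event (if any) in progress at each transaction."""
        for tx in transactions:
            match = next((ev for ev in calendar
                          if ev["start"] <= tx["time"] <= ev["end"]), None)
            yield tx, match

    for tx, event in overlay(transactions, calendar):
        where = event["location"] if event else "no matching calendar entry"
        print(f"{tx['time']}  {tx['merchant']:<15} ${tx['amount']:>7.2f}  ({where})")

The second transaction, with no matching entry, is exactly the kind of item a user would want flagged.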

Dr. Kerry Taylor of CSIRO and the W3C highlighted the fact that a substantial amount of intricate Web architecture, such as the W3C's 'Double Bus Architecture', underlies any version of the Semantic Web. Tools like OWL build rich ontologies from pre-existing data. Just as there are knotty issues in defining what the Semantic Web should be, there are different interpretations of the optimal architecture.
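
To give a flavour of what building an ontology looks like in practice, here is a small sketch using the Python rdflib library; the classes, property and instances are purely illustrative and not part of any W3C specification.

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import OWL, RDF, RDFS

    EX = Namespace("http://example.org/travel#")

    g = Graph()
    g.bind("ex", EX)
    g.bind("owl", OWL)

    # A tiny ontology: two classes and a property relating them.
    g.add((EX.Trip, RDF.type, OWL.Class))
    g.add((EX.City, RDF.type, OWL.Class))
    g.add((EX.hasDestination, RDF.type, OWL.ObjectProperty))
    g.add((EX.hasDestination, RDFS.domain, EX.Trip))
    g.add((EX.hasDestination, RDFS.range, EX.City))

    # An instance described with the ontology's vocabulary.
    g.add((EX.trip42, RDF.type, EX.Trip))
    g.add((EX.Sydney, RDF.type, EX.City))
    g.add((EX.trip42, EX.hasDestination, EX.Sydney))
    g.add((EX.Sydney, RDFS.label, Literal("Sydney")))

    print(g.serialize(format="turtle"))

The point is only that an ontology is a set of machine-readable statements about classes, properties and instances, which software can then reason over.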

Kerry presented three interpretations of Web 3.0:
  • The Semantic Web
  • The Mobile Web
  • The Sensor Web
In addition, Ian S. Burnett of the University of Wollongong highlighted the importance of:
  • Video on the Next Generation Web

The Semantic Web is to do with context and meaning. People talk in terms of ontologies and taxonomies. In other words, it's about ordering information to represent the world intelligently and usefully. But, as I’ve observed in my blog, we live in a relativistic world in which people create culture. Not only are there many different interpretations of events in the world, there is no consensus even on physical reality.

AIMIA speaker Darren Sharpe of Swinburne University of Technology highlighted that there are issues with ontologies that presuppose an existing order. The Semantic Web has its critics. Among the most vocal is Cory Doctorow, author of the essay 'Metacrap', who points out that there’s more than one way to describe something, that metrics influence results, and that people are stupid, lazy liars who can fundamentally never know themselves or the world. He sounds kind of cranky, like House, so he's worth a listen.

Semantic Web sceptic Clay Shirky points out that, in today's user-defined digital world, ontologies need to be flexible, not rigid. Instead of being like a library with fixed, pre-determined file cards, we need an evolving system that can accommodate user classification (such as the ‘tags’ people use to label their photos, videos and information).
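
To make the contrast concrete, here is a hypothetical sketch of a folksonomy-style tag index in Python: nothing is pre-defined, the vocabulary grows as users add tags, and retrieval is just set arithmetic.

    from collections import defaultdict

    # A folksonomy as a simple inverted index: tag -> set of items.
    # No fixed hierarchy; the vocabulary grows as users tag things.
    tag_index = defaultdict(set)

    def tag(item, *tags):
        for t in tags:
            tag_index[t.lower()].add(item)

    tag("holiday.jpg", "Sydney", "beach", "2008")
    tag("harbour.jpg", "Sydney", "bridge")
    tag("seminar-notes.txt", "semantic-web", "AIMIA")

    # Everything tagged 'sydney':
    print(tag_index["sydney"])                       # {'holiday.jpg', 'harbour.jpg'}
    # Items tagged both 'sydney' and 'beach':
    print(tag_index["sydney"] & tag_index["beach"])  # {'holiday.jpg'}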

The Mobile Web is concerned with evolving the Web so that it's optimally delivered through mobile devices. As wireless networks have become pervasive, making the Web portable has become viable and desirable. And it’s not simply a matter of plonking the World Wide Web, designed decades ago for large screens, on to mobile devices. The Mobile Web has elements of the Semantic Web and overlaps with the Sensor Web, but is more concerned with delivery.

Through the Sensor Web, digital devices will sense the environment and help people respond optimally to it. These may be physical monitoring systems, e.g. traffic warning systems, or intelligent building sensors that regulate living environments. They may be human monitors, such as personal digital healthcare assistants that know people's medical histories and their current situation, and hence can help patients continuously regulate their health.

The importance of video on the Web is clear from the popularity of sites like YouTube. As Ian Burnett pointed out, video content also needs an indexing system so people can access relevant, meaningful content, but accessing content is more difficult with film. Users need to be able to reference points in both space and time in video footage - they need metadata that gets inside the video. Temporal references are most difficult.
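
As a rough sketch of what 'metadata that gets inside the video' could look like (the data model here is hypothetical, not any standard), each annotation below carries a time range and, optionally, a region of the frame, so both temporal and spatial references are possible.

    from dataclasses import dataclass, field
    from typing import List, Optional, Tuple

    @dataclass
    class Annotation:
        label: str
        start_s: float                     # seconds from the start of the clip
        end_s: float
        region: Optional[Tuple[int, int, int, int]] = None  # x, y, width, height in pixels

    @dataclass
    class VideoMetadata:
        title: str
        annotations: List[Annotation] = field(default_factory=list)

        def at(self, t: float) -> List[Annotation]:
            """Return the annotations active at time t, i.e. a temporal reference."""
            return [a for a in self.annotations if a.start_s <= t <= a.end_s]

    clip = VideoMetadata("harbour_bridge.mp4")
    clip.annotations.append(Annotation("Sydney Harbour Bridge", 12.0, 45.5, region=(220, 80, 400, 160)))
    clip.annotations.append(Annotation("ferry passing", 30.0, 38.0))

    print([a.label for a in clip.at(33.0)])  # ['Sydney Harbour Bridge', 'ferry passing']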

As I pointed out earlier in my Time Merge blog entry, Recreating Movement is a system that allows users to extract frames from a film sequence, letting them reference points in time. It is a programme created by Martin Hilpoltsteiner at the University of Applied Sciences Wuerzburg, Germany (Communication Arts).



Recreating Movement


The informal system of tagging, already employed by those who upload content, is part of the process of indexing video. But the information is unreliable and in need of verification, either by professionals or groups of amateurs who perform 'checks' on each other.

Google is currently attempting to tag its visual content with the help of amateurs. A verification system used by Google Image Labeler pairs up taggers and awards them points when they assign the same label to a particular image, effectively turning checking up on each other into a game.
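
The details of Image Labeler's scoring aren't described here, so the following Python sketch is only a guess at the basic mechanic: two players label the same image independently, and points are awarded only for labels they both supplied.

    def agreement_score(labels_a, labels_b, points_per_match=100):
        """Award points only for labels both players assigned independently,
        so each player's tags are effectively verified by the other's."""
        matches = {l.lower() for l in labels_a} & {l.lower() for l in labels_b}
        return matches, len(matches) * points_per_match

    # Two players label the same image without seeing each other's answers.
    player_a = ["bridge", "Sydney", "harbour", "sunset"]
    player_b = ["Harbour", "Bridge", "water"]

    matches, score = agreement_score(player_a, player_b)
    print(matches, score)  # e.g. {'bridge', 'harbour'} 200 (set order may vary)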

The world is changing. We are becoming more intelligent. The machine is becoming more intelligent. We are not separate.

Welcome to the Metaverse!
