The Semantic Web has been under development for a few years now. It is the next stage of the World Wide Web: an extension that (hopefully) provides sophisticated means of successfully scanning through resources. Tightly coupled with the concept of the Semantic Web is the concept of annotations. Also see my recent weblog Annotations: First Approach, which covers this subject. Nowadays, the Semantic Web is taking concrete shape. Here’s what I can tell about it.
Main goal of the Semantic Web
The main goal of the Semantic Web is to make resources on networks – like the Intranet and the Internet – processable by machines. That is, to allow a computer program to scan for and identify resources by means of attached information. Consider Aristotle, who is credited with the mechanism of formal implication (also called the syllogism): with the following statements given
- Humans are mortal.
- Greeks are human.
we can infer: “Therefore, Greeks are mortal” (see The Semantic Web, Syllogism, and Worldview for details on this example).
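To make the mechanism concrete, here is a tiny toy sketch (my own construction, not actual Semantic Web tooling) that treats the two statements as subclass facts and draws the conclusion by computing the transitive closure of the relation:

```python
# Toy syllogism engine: facts are (subclass, superclass) pairs,
# and implication is the transitive closure of that relation.
facts = [
    ("greek", "human"),    # Greeks are human.
    ("human", "mortal"),   # Humans are mortal.
]

def implies(sub, sup, facts):
    """Return True if 'sub' is transitively a 'sup'."""
    if (sub, sup) in facts:
        return True
    return any(mid == sub and implies(nxt, sup, facts)
               for mid, nxt in facts)

print(implies("greek", "mortal", facts))  # True: therefore, Greeks are mortal
```

The point is only that once the statements are formal, the conclusion is mechanical; the hard part, as discussed below, is getting the formal statements in the first place.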
The World Wide Web lacks this formality, as almost no page found on it follows a grammar or any sort of formal rules.
Therefore, the Semantic Web effort is developing a language for expressing information in a machine-readable form. The whole bunch of technologies used is provided as a sort of framework.
What follows is a list of the major technologies contained in the concept of the Semantic Web.
RDF and XML
The Resource Description Framework (RDF) is a main technology of the Semantic Web and uses XML. It represents a container for semantic information. I hardly have the guts to talk about REST or SOAP anymore because of some misunderstandings of my own about them (nevertheless, I have not yet seen a real-world example, though one surely exists, showing me the benefits of the REST approach). But I think one could say RDF is related to REST, as there are subjects, predicates, and objects. And I remember DJ Adams talking about these three entities in Walldorf at the SDN Weblogger Meet. At any rate, XML is under heavy use nowadays, for example in sitemaps, which help search engines crawl websites more effectively, as underlined in an expert article.
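To illustrate the subject/predicate/object idea, here is a minimal in-memory triple store in Python. All names and URIs are made up for this sketch; real RDF libraries offer far more (namespaces, serialization, SPARQL-style querying):

```python
# A toy triple store: a list of (subject, predicate, object) tuples.
triples = [
    ("http://example.org/weblog", "dc:creator", "someAuthor"),
    ("http://example.org/weblog", "dc:subject", "Semantic Web"),
]

def query(s=None, p=None, o=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return [t for t in triples
            if s in (None, t[0]) and p in (None, t[1]) and o in (None, t[2])]

# Ask: what is the weblog about?
print(query(p="dc:subject"))
```

Even this toy version shows why the triple shape is attractive for machines: every statement has the same three-slot structure, so one generic query function covers everything.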
The DARPA Agent Markup Language (DAML) extends XML in the sense that the disadvantages of XML that are significant for the Semantic Web are reduced or abolished. DARPA is the Defense Advanced Research Projects Agency.
XML introduces formalisms that make it parseable by programs. This is in contrast to HTML, which allows the display of information but offers only weak capabilities for automatically scanning the information contained in an HTML document. DAML introduces an ontology. With this, the goal of describing relationships between entities can be accomplished. DAML plugs into RDF.
As the term “Agent Markup Language” indicates, the DAML specification focuses on agents. Agents are semi-intelligent services fulfilling a specific task, mainly in a distributed environment. With DAML it is possible to build up a fact base and then draw conclusions from it, answering questions posed to the system. That reminds me a bit of Prolog.
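The Prolog flavour can be sketched as naive forward chaining over a fact base. This is a hypothetical mini example of the general technique, not actual DAML machinery:

```python
# Facts are (relation, arg1, arg2) tuples; one hard-coded rule derives
# grandparent(X, Z) from parent(X, Y) and parent(Y, Z).
facts = {("parent", "tom", "bob"), ("parent", "bob", "ann")}

def forward_chain(facts):
    """Repeatedly apply the rule until no new facts appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for (r1, x, y) in list(derived):
            for (r2, y2, z) in list(derived):
                if r1 == r2 == "parent" and y == y2:
                    new = ("grandparent", x, z)
                    if new not in derived:
                        derived.add(new)
                        changed = True
    return derived

print(("grandparent", "tom", "ann") in forward_chain(facts))  # True
```

Asking the system a question then amounts to checking (or searching for) a fact in the derived set, much like a Prolog query.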
The Semantic Web Rule Language (SWRL) is related to DAML. There is not much information available about how exactly they relate, but SWRL seems to be a part of the DAML project. Its specification was submitted to the World Wide Web Consortium (W3C) just a few months ago. BTW: to us Germans, the acronym of the Open Mobile Alliance (OMA) sounds a little old-fashioned… Next parenthesis: there are too many TLAs (Three Letter Acronyms) out there. It is not necessary to say MoU for Memorandum of Understanding. Sorry for leaving the road, now back again:
One essential concept of SWRL is the dependency between the head and the body of a rule. If the conditions specified in the body hold, then the conditions specified in the head must also hold. Both sections, head and body, consist of zero or more atoms. An empty body is treated as trivially true; an empty head as trivially false. The latter implies that the body must not hold for any interpretation. Atoms can contain rules. Here, I’ll cut the story short, as SWRL is too complex to outline in detail.
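The head/body reading above can be expressed in a few lines of Python. This is a simplified sketch in which atoms are plain ground facts and variables are ignored:

```python
def body_holds(body, facts):
    """An empty body is trivially true: all() over nothing is True."""
    return all(atom in facts for atom in body)

def rule_satisfied(body, head, facts):
    """A rule holds unless its body holds while its head fails."""
    if not body_holds(body, facts):
        return True   # the rule imposes nothing when the body fails
    if not head:
        return False  # empty head is trivially false: the body must never hold
    return all(atom in facts for atom in head)

facts = {"hasParent(x,y)", "hasBrother(y,z)", "hasUncle(x,z)"}
print(rule_satisfied({"hasParent(x,y)", "hasBrother(y,z)"},
                     {"hasUncle(x,z)"}, facts))  # True
```

Note how the two edge cases fall out naturally: an empty body always fires the head, and an empty head turns the rule into a constraint forbidding the body.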
Who provides semantic information?
Well, that sounds dull to me: is it supposed to be the human being who provides the semantic information used to scan resources on the Web? Does that mean, what the heck, that when I create a webpage or a web resource, I have to concentrate on formally describing what’s going on with my resource? I mean, isn’t there enough work to do putting together the resources I offer? Even if the person trying to find information creates a mapping or a well-formed semantic description of his problem, I don’t find it realistic. Have we all turned into mathematicians?
So in my understanding, it is mainly the human being who has to provide those semantics. I don’t like this at all; therefore I agree to some extent with Metacrap: Putting the torch to seven straw-men of the meta-utopia.
I cannot imagine this becoming a commonly accepted solution. Let’s wait and see how many people enjoy doing the formal stuff they have successfully avoided their whole life.
Please tell me that I have misunderstood something here! Perhaps the concept of the Semantic Web describes using generators to derive metadata, but I have not found anything about this.
Implications are not logical
I don’t believe in the general possibility of formalizing all the data concerned so as to allow logically correct implications from it. Firstly, because of the formalisms that have to be implemented manually. Secondly, because many pieces of information are contradictory. Just think of a simple example modeled on Aristotle’s:
- Superman can fly.
- Clark is Superman.
- Lois knows Clark.
Does Lois know that Clark can fly? Is it really true that Clark is Superman? We all know the answer, but the Semantic Web does not (to be as polemic as The Semantic Web, Syllogism, and Worldview).
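A literal-minded reasoner illustrates the problem. Given an equality like “Clark is Superman”, naive substitution of equals for equals happily pushes the identity into every context, including what Lois knows (toy code, my own construction):

```python
# Facts as tuples; same_as records an identity the reasoner takes at face value.
facts = {("can_fly", "Superman"), ("knows", "Lois", "Clark")}
same_as = {("Clark", "Superman")}

def naive_close(facts, same_as):
    """Substitute equals for equals in every position of every fact."""
    closed = set(facts)
    for a, b in same_as:
        for f in list(closed):
            closed.add(tuple(b if t == a else a if t == b else t for t in f))
    return closed

closed = naive_close(facts, same_as)
print(("can_fly", "Clark") in closed)          # True: the machine says Clark can fly
print(("knows", "Lois", "Superman") in closed)  # True: and Lois "knows" Superman
```

Both derived facts are exactly the ones a human would reject: knowledge contexts are opaque, and a substitution that is valid in ordinary logic is not valid inside them.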
Implications about the Semantic Web
With what I think I know about the Semantic Web, it seems to me like the wrong solution for the wrong problem. Metadata for massive amounts of distributed resources provided by unknown sources cannot be a clever concept. The Semantic Web thus reduces its applicability to a very few resources. But for these very few resources, we could invent something more domain-specific. Take Amazon or eBay as popular examples. As providers of buyable resources, they have found an easy-to-follow mechanism for describing the goods offered on their platforms. They don’t need the Semantic Web. Smaller institutions could use it. But as I said, I tend to prefer domain-specific implementations over the Semantic Web.
Trust no human being. The compiler rules!
Metadata itself is not bad. I even like it in several cases (like annotations). But don’t let every arbitrary human being apply these significant pieces of information. That would be the same as letting me try to develop a mathematical proof of the Goldbach conjecture without being an expert in this field. Don’t misunderstand me, but to make a general statement: not everyone is suited to do everything. And applying metadata should be left to people who are able to think somewhat formally! With annotations, we can assume that a developer is in most cases able to formally follow the program on his screen, and with that he is able to annotate it to a certain extent.
The quality of metadata depends on the human editor applying it. With the concept of annotations this works out, because there is a compiler for the annotated data (the program code) and an interpreter of manageable complexity for the annotations themselves, so we can at least be sure of some consistency. With metadata on a bakery’s webpage, I can only hope they hired an expert able to implement everything as needed.
For me, the Semantic Web approach is OK if put into the correct context. If it is claimed that the Semantic Web will lead us to a solution for producing and finding information where there has only been data before, I reply that this is not practical, IMHO. The Semantic Web should be regarded as a very first attempt to formalize things to make them machine-usable. It is all right to claim that it can handle near-trivial cases like mappings between different data structures. See SAP Exchange Infrastructure (XI), for example.
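A near-trivial structure mapping of that kind might look like this. The field names are invented, loosely in the spirit of an XI message mapping, not taken from any real interface:

```python
# Map fields of a source record to a differently named target structure.
source = {"CUSTOMER_NO": "4711", "NAME1": "Bakery Mueller"}
mapping = {"CUSTOMER_NO": "customerId", "NAME1": "companyName"}

target = {mapping[k]: v for k, v in source.items() if k in mapping}
print(target)  # {'customerId': '4711', 'companyName': 'Bakery Mueller'}
```

This is exactly the kind of formal, bounded problem where machine processing of described data works well, because both structures are fully known in advance.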
The vision of the Semantic Web, AFAIK, does not include producing information out of data automagically as a first-class citizen. Perhaps this is an underground or unofficial vision. But it is not part of the current conception of the Semantic Web.