
IT experts are knowledge workers. We need to make the right decisions in a short time. In 2003 SAP Mentor DJ Adams wrote an amazing blog, Improving the OSS note experience, about well-designed URLs. He gave an example of how to look up OSS notes in a very simple way by providing meaningful URL schemes, so that URLs like:

http://service.sap.com/oss/notes/12345 

DJ gave some examples of how web infrastructure can help you get information quickly and efficiently – think of an RSS feed that informs you about changes to an OSS note or about new OSS notes for a certain application component. This shows the power and potential of a proper use of web infrastructure and web standards.

But searching for a certain piece of information is only one aspect. To make the right decisions I need an overall view of many different sources: OSS notes, master guides, upgrade information, the SAP Library and of course non-normative information like whitepapers and blogs. And this is what Linked Data principles are all about: navigation through information landscapes.

A Linked Data Prototype

Every ABAPer knows the ABAP inline documentation: the F1 and F4 help texts of ABAP development objects like reports, transparent tables, data elements, classes and so on. The ABAP package SDOC provides an infrastructure for this kind of inline documentation – think of the report RDOCFINDER, which performs a full table scan of the inline documentation.

So I decided to expose the information in an ICF web service – a web service of the Internet Communication Framework of an AS ABAP. I decided to expose the data as RDF – an XML-based standard of the Semantic Web. I have done similar things before and blogged about it in Semantic Web Technologies Part 3 – Looking Into AS ABAP. In fact, my last XML book explains those techniques in detail. So I created a REST web service that takes as input an SAP application component – in this case the Error and Conflict Handler (CA-FS-ECH). This is an SOA framework belonging to the PI infrastructure that SAP Mentors Michal Krawczyk and I blogged about in Forward Error Handling – Part 1: Outline. So the ICF service http://nsp:8920/zdocu/CA-FS-ECH of my prototype gives out the following (a sketch of how such RDF output could be assembled follows the list below):

  • the name of the application component,
  • additional meta information,
  • links to the SCN wiki and the SAP Library, and
  • the names of ABAP reports having inline documentation
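
A minimal sketch of how such RDF output could be assembled – here in Python with rdflib standing in for the ABAP implementation, and with made-up placeholder namespaces and property names, not the ones my prototype really uses:

    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDFS

    # Placeholder vocabulary - the real prototype uses its own small ontology.
    SAPDOC = Namespace("http://example.org/sapdoc#")

    g = Graph()
    comp = URIRef("http://nsp:8920/zdocu/CA-FS-ECH")
    g.add((comp, RDFS.label, Literal("Error and Conflict Handler")))
    # Static links to external documentation (placeholder URLs):
    g.add((comp, SAPDOC.scnWiki, URIRef("http://example.org/scn-wiki/CA-FS-ECH")))
    g.add((comp, SAPDOC.sapLibrary, URIRef("http://example.org/sap-library/CA-FS-ECH")))
    # Links to reports with inline documentation, calculated on the fly:
    for report in ["ECH_R_CUSTOMISING_HDS_SYS"]:
        g.add((comp, SAPDOC.documentedReport,
               URIRef("http://nsp:8920/sap/bc/docu?name=" + report)))

    print(g.serialize(format="xml"))  # RDF/XML, the format a Linked Data browser consumes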

This information is given out as RDF and can be displayed with any Linked Data browser like Tabulator.

With a double click on any link I navigate to the corresponding resource. When I double-click a report name like ECH_R_CUSTOMISING_HDS_SYS, I get its inline documentation in the web browser.

 

To make this possible I created a new web service from /default_host/sap/bc/docu. You can test the latter by activating the service and calling it: http://nsp:8920/sap/bc/docu?name=ABAPTRY – of course you have to alter the URL a little bit according to your AS ABAP.
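
A minimal sketch of such a test call in Python – hostname, port and the object name ABAPTRY are just the values from my test system, and your system will probably ask for logon credentials:

    import requests

    # Call the standard ICF documentation service for one ABAP object.
    # Host, port and credentials are placeholders for your own AS ABAP.
    resp = requests.get(
        "http://nsp:8920/sap/bc/docu",
        params={"name": "ABAPTRY"},
        auth=("user", "password"),  # placeholder logon data
    )
    resp.raise_for_status()
    print(resp.text)  # the documentation page the service returns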

So what is Linked Data all about?

In this prototype I created a REST web service that gives out information according to Semantic Web standards. It contains links to other resources like ABAP inline documentation (which are calculated on the fly) and static information like links to the SAP Library and the SCN wiki. If SAP provided its information using meaningful URLs like DJ Adams suggested, I could generate them on the fly and provide a link such as:

http://help.sap.com/library/application/CA-FS-ESH/NW701

This is the only drawback: only if SAP exposed their documentation using meaningful URL schemes would this prototype become a powerful navigator between internal and external documentation. So Linked Data means the following:

  • Think of everything as a resource. A piece of documentation is a resource, for example.
  • Provide meaningful URLs referring to resources. Don’t hide them so that you need search engines to find them.
  • Use techniques like content negotiation to expose metadata and – most importantly – links to other resources (see the sketch after this list).
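
A minimal sketch of the content negotiation idea from the last bullet – plain Python with no SAP specifics; a real server would also evaluate q-values and wildcards:

    # Pick the representation of a resource based on the HTTP Accept header.
    def negotiate(accept_header: str) -> str:
        if "application/rdf+xml" in accept_header:
            # Machine-readable RDF for Linked Data clients like Tabulator.
            return "application/rdf+xml"
        # Human-readable page for ordinary web browsers.
        return "text/html"

    print(negotiate("application/rdf+xml,text/html;q=0.9"))  # -> application/rdf+xml
    print(negotiate("text/html,application/xhtml+xml"))      # -> text/html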

The inventor of the web, Tim Berners-Lee, explained this in the following talk:

http://www.youtube.com/watch?v=OM6XIICm_qo&rel=0

 

Summary

In this weblog entry I explained Linked Data principles and showed how they can be used to link different kinds of documentation: online documentation as well as normative and non-normative documentation.

I thank DJ Adams for his inspiring weblog, which was the basis for the prototype. To summarize:

  • Thinking of documentation as a resource having a meaningful URL leads to the idea of providing metadata.
  • Metadata can inform us more easily about the resource and its changes – think of RSS.
  • But metadata can also be used to link to other resources. Using browser add-ons we can evaluate and visualize these links.

In the next part I’ll explain how Semantic Technologies help the Linked Data approach. In fact, to create the output above I used a small ontology for SAP-specific terms. So the first use case of Semantic Technologies is to provide meta information about links as well as labels: readable names.
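
To make that concrete, here is a minimal sketch of such labelled vocabulary terms – again Python with rdflib and a made-up placeholder namespace, not the ontology my prototype really uses:

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, RDFS

    SAPDOC = Namespace("http://example.org/sapdoc#")  # placeholder, not SAP's

    g = Graph()
    # Declare the vocabulary terms themselves and give them readable names -
    # this is what lets a Linked Data browser show labels instead of raw URIs.
    g.add((SAPDOC.ApplicationComponent, RDF.type, RDFS.Class))
    g.add((SAPDOC.ApplicationComponent, RDFS.label, Literal("SAP application component")))
    g.add((SAPDOC.documentedReport, RDF.type, RDF.Property))
    g.add((SAPDOC.documentedReport, RDFS.label, Literal("report with inline documentation")))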


34 Comments


  1. Tammy Powlas
    I have no experience with linked data.

    I didn’t understand what you meant by linked data until this blog.  I do agree DJ’s idea is great.

    I also enjoyed the YouTube video and we should all chant “raw data now” – and “no excuses”.

    I think linked data would simplify SAP’s world for everyone – from development, to support, to implementation.

    Excellent work.
    Tammy

    1. Tobias Trapp Post author
      Hi Tammy,

      glad you liked it.

      Tim Berners-Lee’s approach is very radical – but I think for the purpose of SAP documentation it would suffice that people from SAP start to think about documentation not as silos of artefacts like KM docs, PDF documents and so on that have to be presented in portals and made accessible using search. All those documents (think of OSS notes, best practices, upgrade guides) refer to existing objects like software components, reports, report documentation and so on. It is the strength of SAP systems that they are realized within the system in an integrated environment. And in a highly integrated environment you can implement integrated processes and so a (linked) overall view.

      In fact it took me a long time until I understood Linked Data principles. I think the problem is that some evangelists “preach” it in a very theoretical and high-level way. I had to install a Firefox add-on before I learned that working with RDF is not painful, because there are cool AJAX apps that make life easier. So web 2.0 is an enabler for web 3.0.

      Cheers,
      Tobias

  2. Tom Cenens
    Hello Tobias

    I enjoyed reading your blog and watching the youtube video so thanks for posting such interesting content.

    Raw data now! It would make sense to use all the available data and incorporate that into the different areas you mentioned.

    Kind regards

    Tom

    1. Tobias Trapp Post author
      Hi Tom,

      yes, “raw data now” would help a lot – think of historical research, to mention a non-SAP area. Researchers work on many interesting things but we’ll never find them because they are locked in libraries and perhaps databases. In my opinion lots of data gets lost, and I learned that it takes a hell of a lot of time to search for it. The Linked Data approach could help to find it. The same is true in the area of eGovernment – there are very promising approaches in this area, and some already exist.

      Best Regards,
      Tobias

      1. Tom Cenens
        Hello Tobias

        Indeed. Even consolidating all the separate content that was created by companies in internal silos (wikis, knowledge sharing platforms) would create a huge amount of valuable data and content.

        HTTP was invented to make everything accessible to everyone. While it succeeded in doing that, the opposite still takes place as well.

        Information is kept for reasons such as job protection, which is not a good reason. Sharing knowledge can provide added value. The information on its own doesn’t necessarily bring added value, but what you do with that information is of importance.

        I do believe this will change with the next generations that are stepping in, who can quantum-task instead of multitask and who have a lot of trust in placing content online.

        We will move towards what I call the information age, where knowledge is accessible to everyone and where other skills will become more important.

        Wolfgang Grulke even talks about how a lot of jobs will disappear because information will be so easily accessible in the future. It could be literally in the palm of your hand in the form of a chip.

        Kind regards

        Tom

  3. DJ Adams
    Kudos to you yet again, Tobias. You’ve provided concrete examples of the next steps towards linked data in the SAP docu world, and have embraced key components and tech in (IMHO) exactly the right way.

    I take my hat off to you.

    cheers
    dj

    1. Tobias Trapp Post author
      Hi DJ,

      I’m very glad and honored that you like this approach. I think I will work on the prototype to make it even better. At the moment I’m thinking about using an alternative Linked Data browser like ODE (see http://ode.openlinksw.com/index.vsp ), although Tabulator is very neat and I like its cool AJAX UI stuff. Maybe I’ll try to use the SCN wiki as a central information hub and link from the SCN wiki to SAP resources. What do you think?
      Last but not least I still have to learn about URL design. Can you recommend something for me to read?

      Best Regards,
      Tobias

  4. Tobias Hofmann
    Can’t wait to see your next example using your ontology. Linked Data is nice to link data, but without a semantic notation and an ontology it’s just data. It would be nice if SAP adopted ontology-driven documentation (product -> problems -> solutions -> who, where, how, etc., including official SAP resources like Notes and “external” data like the wiki, blogs, forums) where your installation and daily work is an instance.

    br,
    Tobias

    1. Tobias Trapp Post author
      Hi Tobias,

      I think in my next weblog I will go into more detail about the prototype and I will discuss some pesky RDF issues – and I hope no one will get bored.

      Best Regards,
      Tobias

  5. Chris Paine
    Hello,

    I very much like some of the ideas here, but I see one that I am not at all keen on.

    The idea that SAP (or anyone else for that matter) should provide URLs in a fixed format to allow the construction of a link to related data scares me.

    This sort of linking starts challenging the ability of the resource provider to make any changes to how they organise their resources. It introduces an inherent fragility into the links. The client should very, very rarely be required to construct a resource link. Instead it should query some sort of factory/main resource for the current link URL – a resource that is likely not to change.
    Instead of “http://help.sap.com/library/application/CA-FS-ESH/NW701”, perhaps “http://help.sap.com/library/?application=CA-FS-ESH&version=NW701” would be better. The resource provider could then also provide useful information in the reply as well as the URI of the resource you are interested in – for example, whether there is a newer version of the information, or whether this area was deprecated in a later release, etc. The main resource could even provide a form for you – so that your client could know how to query it and what the valid options were.

    I would be interested in whether you agree or disagree with this – and if you disagree, why 🙂

    Thanks for a very informative blog.

    1. Tobias Trapp Post author
      Hi Chris,

      can you tell me why http://service.sap.com/oss/notes/12345
      should ever change? Even if there were a new kind of note with another naming scheme, the old objects would still exist and be addressable, because cool URIs don’t change – people change URLs, and in most cases this is avoidable.

      It seems to me that you’re attacking the heart of REST principles, and in fact I don’t see any reason for that. HTTP is a very powerful API and it has built-in mechanisms to deal with that – think of redirects. And we have an HTTP response code if a document doesn’t exist anymore. Of course there are other possibilities to resolve URLs on a higher architectural level, too.

      I had this fear some years ago when I thought about the differences between URLs and URNs in an XML context. Of course it makes sense not to download an XML schema every time you want to validate, but it is neat if its URL provides some basic information about it; in fact I appreciate standards like GRDDL. But back to URNs: I thought of them as a kind of GUID that would allow XML to be persisted, but they are not. If you want to use URNs you have to think about URN schemas as well – but you don’t have any canonical place for metadata, and I have seen many failed XML-based standards that got more and more complex until no one understood them anymore. URLs would help a lot!
      In the context of hypertext documents you have to use proper design principles. In my opinion Tim Berners-Lee had brilliant ideas about how to deal with this topic: http://www.w3.org/DesignIssues/ One of his most important design papers is http://www.w3.org/Provider/Style/URI with the brilliant name ‘Cool URIs don’t change’, written in 1998. I think you should give it a try.
      In my opinion modern KM tools like wikis supporting hierarchies have the possibility to provide meaningful (=composed) URIs, sometimes using redirects. But of course a proper URL design is necessary, and the basics are explained in the above-mentioned paper.

      But for that an information architect is needed. Today we have many tools that support an information architect: companies already have taxonomies of their documents, and ontologies help to make them understandable.
      This is necessary in a world of URLs because I don’t want to wake up every morning asking myself: what is the URL of my favourite newspaper, of Fukushima’s wiki page, or of SCN? Tim Berners-Lee is right: cool URIs don’t change – maybe they redirect.

      And now we come to semantics: semantics is not about first-order logic, graphs and other things only mathematicians can understand – it is about things in the real world like books and sales orders. The ideas of the Semantic Web are necessary to address things within an internet of things – and to address things we need URLs.
      Last but not least I’m not an expert in the area of URI design. Of course I could use forms as you suggested – maybe a REST expert like DJ could give me advice.

      Best Regards,
      Tobias

      1. Chris Paine
        Hi Tobias,

        Thanks for such a full and valuable response – some great references in there too!

        >can you tell me why http://service.sap.com/oss/notes/12345 should ever change?

        Answer: it shouldn’t. It is clearly one of those cool URIs – the concept of an SAP note being referenced by a number is pretty deeply ingrained – doing anything else is almost unthinkable.

        You have a simple REST service with a unique resource locator – excellent!

        But I also think that REST is very much about discoverable services – I should be able to navigate from one to the other, and I should be very careful to design my well-known entry points so that they don’t change. To your example – I navigate to the International News headlines page of my favourite online news service and then browse from there. I don’t construct the URL which takes me to International News / Japan / Fukushima – I browse there, or potentially I use a service of the main page to return the URL on how to get there (via a search).
        This is why I worry about “http://help.sap.com/library/application/CA-FS-ESH/NW701”: this is clearly not a well-known URL – how am I supposed to know, as a client, what all the possible application areas could be, or which versions are supported for each application area?
        A wiki is a good example (I’ve seen this happen in SCN several times) of where composed URLs don’t work. A wiki is by definition open to change – someone changes the top-level folder and the linked references below all fall over. Yes, there are ways and means as a site owner of ensuring you honour all existing URIs, and as TBL says there really isn’t an excuse for breaking them, but it happens all the time. If you build your client to traverse instead of construct, it’s going to be a lot more successful out there in the real world.
        As a developer of the client that gets the link, I don’t want to have to know about the structure of the site I’m navigating. Perhaps once I’ve got the resource I’d like it to stay (as a site owner I can put an expiry date on the resource if I like, but it would be nicer for me to just honour that URI), but I very much doubt I’d ever like to construct it.
        Here’s some links for you – not quite the TBL pedigree but Joe Gregorio is pretty well known too..
        http://www.xml.com/lpt/a/1561
        http://bitworking.org/news/141/REST-Tips-Prefer-following-links-over-URI-construction

        When you realise that http://www.sun.com doesn’t exist anymore (OK, so it redirects – but heh!): cool URIs don’t change – but the world does.

        Thanks again for the very full and interesting response!

        Cheers,

        Chris

        1. Tobias Trapp Post author
          Hi Chris,

          thanks for your links regarding proper URL design and your point about URL changes.

          If a domain like http://www.cogehead.com or http://www.sap.com changes, other things will change, too: URIs will stay but URLs won’t. I don’t fear that, because documentation will either vanish or be reorganized. In such a case a consistent and coherent set of URIs will be changed and replaced by another set. So my navigator would have to be adapted, but the Linked Data approach will still be valid in the future.

          Your advice about wikis is really useful, but I don’t see this as a problem because it can be solved using a governance process. Not everyone is allowed to rename or delete wiki pages – and we can even define a set of pages with special properties like proper naming conventions. These overview pages could work as a central information hub – so there would be a very small “normative” part of the wiki with a special governance process. Please note that this doesn’t mean the whole wiki will need a governance process – there are only rules for some parts, which have to be governed by special users. Within the wiki we would still search & navigate as usual. Linked Data principles within a wiki would be possible, too, but would certainly need a semantic wiki.

          Cheers,
          Tobias

          BTW: I don’t fear broken links because they can be checked automatically. Don’t forget we’re discussing an enterprise scenario that already possesses governance processes for URL schemes.

          1. Chris Paine
            Hi Tobias,

            This really is an interesting discussion – and a very interesting area – giving data the semantic information that is needed to make it machine discoverable/processable I think we both agree(?) is the way to allow linked “knowledge”.

            I don’t see that providing this semantic layer with the data/resources that we expose to the network is going to mean that we get any better at governance. Indeed I’d see it as the best excuse to no longer need it (as much). Linked data will be much easier to find (it doesn’t need to be grouped together in a particular area), and relationships between data become much more obvious.
            Why waste our time in an attempt to provide governance for a small and limited section of data (small and limited when compared to the relevant data on that subject that might be available elsewhere) when the data in the semantic layer over that section can probably do a better (and more flexible) job of it anyway?
            Sorry, I think I’m being deliberately provocative now 😉
            Thanks for the discussion – and please do tell me if I’m traversing links too unrelated to your original blog!

            1. Tobias Trapp Post author
              Hi Chris,

              I think we agree on many things, but there is a misunderstanding: I don’t want to use semantics for doing governance. I only considered it possible that a little bit of governance is needed to link normative and non-normative documentation. But this is nothing new: SCN blogs have to be categorized, for example. And if SAP approved non-normative SCN documentation like whitepapers and linked to them, more governance would be necessary, because a link to an approved SCN wiki page mustn’t break.

              And you are right – Linked Data principles are possible even without “meaningful” URLs – but they would help me with my prototype. And I’m convinced that the REST-like approach of thinking of documents as resources and using HTTP as the API is the road to success. But a daily change of URLs would be severe. In fact I’m very conservative: I don’t want to google the URL of SCN, Twitter… every day – I’m glad that they stay the same 😉 But it would be interesting to learn from Linked Data experts about Linked Data best practices.

              Cheers,
              Tobias

              1. Thorsten Franz
                @Chris: I have to tell you to be careful here. I know Tobias very well and in my experience, Tobias saying “of course you’re right” might mean that he’s beginning to feel that you’re beyond the reach of a rational argument. 😉
                When he starts agreeing to me without taking the idea further, I know I should retrace my steps and try to arrive at a new understanding. But of course Tobias would never do that to anyone else.
                Cheers,
                Thorsten
                1. Chris Paine
                  Thanks for the tip Thorsten!
                  I am being a bit deliberately provocative – and I apologise. It’s just so good to get such detailed and well-argued information from people like Tobias. So often you see people fronting up with an opinion on a topic without having any reasoning behind their thoughts. Not so here! I have really enjoyed this exchange and have certainly learnt some things – and perhaps changed my opinions on a few points too 🙂
              2. Chris Paine
                Thanks Tobias,
                I think you are right – governance and searching/organising data are two quite different things. I’m a big fan of using a tag cloud to find data that I’m interested in – but that has its pros and cons too. I cede this point gladly!

                And again – you are right – having to use Google to find SCN every day would be a nightmare, I wouldn’t ever consider that linked data should work in a similar way.

                I’ll go back and perhaps re-phrase my original point now that I’ve listened to your thoughts.

                I worry that, although it does mean a more meaningful URI can be generated, sites which try to implement such schemes will cause themselves problems in the future if they ever want to reorganize. Given how often SAP change the names of their products I can see this being a problem for us! The UUID approach has some merit here. Secondly, I fear that such “nice” URIs lead to clients attempting to construct them – and there are limited ways (redirects etc.) for sites to inform the clients that the schema used has changed or has been enhanced. I think that rather than sites spending effort on trying to maintain “nice” URIs and a structure that allows for URI construction, if they instead spend that effort on building machine-readable link-traversing data/resources then in the end we are all better off.

                Hmm – perhaps I haven’t backtracked enough 😉
                Thanks hugely for the discussion – has been a learning experience for me!

                1. Tobias Trapp Post author
                  Hi Chris,

                  you analyzed the weaknesses of meaningful URLs very well. Perhaps this is a question that can’t be answered generally in terms of yes/no, but in terms of guidelines which recommend one solution or the other under certain circumstances. I think this is a challenging topic that we can’t solve in comments to this blog entry – it would deserve its own blog, or perhaps even more.

                  I know people who work in the area of semantic web and sell solutions for smart sensor data and they are very successful combining REST principles (and in fact meaningful URLs) and semantic content. But you are right – this is something different than documentation.

                  I think I will interview them and ask them about their experience – and of course I’ll blog about it.

                  Cheers,
                  Tobias

    2. Ethan Jewett
      Hi Chris,

      Just a quick note – nothing to compete with the discussion you and Tobias are having 🙂

      I don’t understand your example:

      http://help.sap.com/library/application/CA-FS-ESH/NW701

      vs.

      http://help.sap.com/library/?application=CA-FS-ESH&version=NW701

      How is the second any less fragile than the first or than http://help.sap.com/saphelp_nw70ehp2/helpdata/en/48/ba1bacca960611e10000000a42189b/frameset.htm for that matter? It just looks like it is loading more information into the query string instead of the path, but the URL contains the same information here.

      Cheers,
      Ethan

      1. Chris Paine
        Hi Ethan,
        perhaps I should have been more specific…
        http://help.sap.com/library/?application=CA-FS-ESH&version=NW701 should return exactly the same as
        http://help.sap.com/library/?version=NW701&application=CA-FS-ESH
        no implicit structure needed. But as I mentioned, a better solution would be for the resource to provide a form that the client could recognise and POST to in order to get a response.
        I’d never get a client to generate http://help.sap.com/saphelp_nw70ehp2/helpdata/en/48/ba1bacca960611e10000000a42189b/frameset.htm – that’s got to be discovered by traversing from one point to another – linking data rather than constructing URIs. If your client is traversing from a known URI and getting the fragile results that’s fine – it could even expire/time limit the URI returned on me – I’ll traverse from the known entry point again the next time I need the data. As long as I have a simple discoverable machine readable path to the data that I want, what the actual URI looks like is (to me) irrelevant.

        A client doesn’t (or is unlikely to) have the smarts to recognise the constituent parts of a “nicely” formatted URI anyway. Holding that metadata against the resources that make up the path to the data I want to get to is important.

        If I have a URL http://www.wombling.com/fumanchu/honkytonky/wibble
        and calling http://www.wombling.com tells me I can get data about plant growing tips from http://www.wombling.com/fumanchu, and that tells me I can get data about edible plants from http://www.wombling.com/fumanchu/honkytonky, and that tells me I can get details about lettuce from http://www.wombling.com/fumanchu/honkytonky/wibble, then I really don’t care what that URL looks like.
        What matters is that the metadata that allowed me to get there exists and existed in a manner that a client could read easily. Would it matter if the URIs were 1, 2, 3? Yes, it would be less obvious to a human user that there was a hierarchy – but does that matter to my client, which is just trying to link together bits of information from as many sources as it can discover?
        By moving the load of understanding structures and hierarchies from the data gathering client and placing that onto the provider of the data we should be able to link to more data faster.
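
        A minimal sketch of that idea in Python – every URL and field name here is hypothetical; the point is only that the client hardcodes nothing but the entry point:

            import requests

            # The only URL the client ever hardcodes: a stable, well-known entry point.
            url = "http://www.wombling.com"

            # Hypothetical convention: each resource replies with JSON whose "links"
            # map leads from a topic name to the URI of the next resource.
            for topic in ["plant growing tips", "edible plants", "lettuce"]:
                url = requests.get(url).json()["links"][topic]

            print(url)  # the client discovered this URI, it never constructed it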
        Anyway – enough of my drivel!
        Am I alone here? Where is Alisdair Templeton when I need him?
        Great discussion guys!

        1. Ethan Jewett
          Ah, I see what you are getting at. Thanks for clarifying 🙂

          I agree that the hierarchy is not implicit in this case, though ontologies are rarely implicit in data. On the other hand, I think there are a lot of real-world advantages to having a URL scheme that is understandable by actual human beings. Having a scheme like Tobias recommends is memorable for people (a big win), makes URLs relatively guessable (a dubious win), and it also doesn’t get in the way of a linked-data approach to make it discoverable by machines (neutral).

          So, I support the URL-path approach over the query parameter approach. But both are infinitely better than what we have now!

          Cheers,
          Ethan

        2. Chris Paine
          And when I said return the same…
          what I meant was return a reference to a resource which provides the detail (along with any other relevant relationships, like a reference to a resource that shows all the items that match just version=NW701, and the other versions of CA-FS-ESH).

          Linked data (as I understand it) doesn’t have to mean human-readable URIs – it means getting easy machine-readable relationships.

            1. Tobias Hofmann
              Hi,

              that’s easy when your data is already stored in an ontology or semantic way. It’s been a long time since I did something with Ontopia, but I believe it offers you a way to extract the stored information in human and machine readable ways (not sure if it does so out of the box or if you need to write an application that uses the API).

              br, Tobias

            2. Daniel Koller
              There are different discovery mechanisms for URIs in the Linked Data area, but the summary is that any of them (as of now) needs some helpers to retrieve the URIs:

              First some hints on getting resources without a specific API at hand. I’ll take the concept/location of Palmyra as an example:

              – You can look up the resource via a search engine, and the first result you will get is http://en.wikipedia.org/wiki/Palmyra (for humans mainly). A machine can do the same thing: retrieve the first result and consume it. (This is valid for a limited number of (semi-)authoritative data sources like Wikipedia.)
              – A more useful resource in terms of machine-readability is http://dbpedia.org/resource/Palmyra. The DBpedia project extracts the content from Wikipedia (mainly by evaluating Wikipedia’s infoboxes), presents it in a machine-readable way and provides unique URIs for Wikipedia resources. More ways to access DBpedia are listed at http://wiki.dbpedia.org/Applications . A nice representation of the resource content is visible at http://dbpedia.org/describe/?url=http%3A%2F%2Fdbpedia.org%2Fresource%2FPalmyra&sid=2853&urilookup=1 .

              –> This approach would now allow you to query for similar resources (such as all World Heritage Sites in this country): visible at http://dbpedia.org/describe/?url=http%3A%2F%2Fdbpedia.org%2Fresource%2FCategory%3AWorld_Heritage_Sites_in_Syria&sid=2853
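
              A minimal sketch of such a query against DBpedia’s public SPARQL endpoint – Python with the SPARQLWrapper library; the category and predicate names follow DBpedia’s conventions:

                  from SPARQLWrapper import SPARQLWrapper, JSON

                  sparql = SPARQLWrapper("http://dbpedia.org/sparql")
                  # Ask for every resource filed under the World Heritage Sites
                  # in Syria category, Palmyra among them.
                  sparql.setQuery("""
                      SELECT ?site WHERE {
                        ?site <http://purl.org/dc/terms/subject>
                              <http://dbpedia.org/resource/Category:World_Heritage_Sites_in_Syria> .
                      }
                  """)
                  sparql.setReturnFormat(JSON)
                  for row in sparql.query().convert()["results"]["bindings"]:
                      print(row["site"]["value"])  # e.g. http://dbpedia.org/resource/Palmyra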

              If you have a REST-enabled data source for a given subject, then resource discovery (and e.g. the specific URL format) is up to the documentation provided by the API provider.

              In terms of “information/application design” this means that resource discovery across more than one REST API is not possible (afaik): you have to know (or at least guess) how resources are described in order to retrieve them.

              Using Semantic Web technology, “resource browsing/navigation” is possible without the need for the software developer to know beforehand in detail which resources will be consumed.

              1. Chris Paine
                Hi Daniel,
                was interested in your point:
                >In terms of “information/application design” this means that resource discovery across more than one REST API is not possible (afaik): you have to know/at least to guess) how resources are described to retrieve them.
                Could not one resource return references to another resource that implemented an alternate API? Hopefully most people are starting to use OWL2 – but if a client were capable, it could still understand/recognize an alternate representation? Or perhaps this is not quite what you meant (I’m a bit over my head here, and trying very hard to swim hard enough to keep my head out of the water!).
                Thanks!
                1. Daniel Koller
                  …fast feedback on the last point:

                  (when I wrote this I thought of e.g. hypothetical APIs such as service.sap.com/notes/12345 and kb.microsoft.com/345624 ..)

                  You can implement – more or less – linear navigation scenarios, where one retrieved resource contains the link to the next one (even between APIs): but you need to know/persist the resources retrieved on the way.

                  Using e.g. SPARQL allows you to get directly to the point where you want to start (think of e.g. combining #ms and #sap specific knowledge bases with one SPARQL interface, even if they are in fact not connected).

              2. Tobias Trapp Post author
                Hi Daniel,

                thanks for giving those examples from DBpedia. I learned two things:
                1.) Those OpenLink tools are really great, and it was good to follow the advice you gave me on Twitter to use them.
                2.) If we are already working in a Semantic Web context we should try to take advantage of semantic query tools like SPARQL. I already had it in mind to evaluate it but unfortunately didn’t have the time last weekend…

                In fact I had the same question as Chris when reading your comment about REST APIs, and like Chris I would appreciate hearing more about it. Another possibility would be for you to blog about it 😉

                Cheers,
                Tobias

  6. Daniel Koller
    Hi Tobias,

    thanks for this overview about what you are doing to make use of semantic technologies in the context of SAP software… good to read, inspiring, and it brings up the important questions.

    I want to comment on some of your statements and some from the other comments:

    – Resource Discovery:
    In another answer here I provided some examples of how resource discovery can be done using semantic technologies: the focus there was “find a resource/URI for a given topic”.
    An extended approach is querying RDF stores for content with SPARQL as the query language, which carries the price of needing an RDF store that can be accessed locally or remotely by an application.

    – URI construction is a valid approach, as long as you know how the content at e.g. service.sap.com/notes is structured. (The downside is that inclusion of these data sources requires per-data-source integration tasks.)

    – Fragility of URIs: My position is that the best URIs are structured in a way that the user (or a machine) can understand and interpret the schema behind them. This holds true even when the URI contains (as mentioned in the comments) a long unreadable identifier. The example talks about pages from help.sap.com, which are obviously created from a help-authoring content management system. This is good/OK when it means that a help concept keeps its unique ID after every revision in the CMS (then the URI is stable and can be linked to).

    – The approach Tobias took, i.e. taking some URIs from help.sap.com as “fixed” values, is the best you can do when you either don’t see the logic behind the identifiers and/or cannot influence it (so URI construction would likely not work and/or bring unpredictable results – due to e.g. CMS reorganization 😉).

    – HTTP can be the API: Discuss! (quote from @qmacro on Twitter): yes, it can be the API (but it also requires that the content provider thought so). So using clean URIs on a page that was not designed with this in mind requires hard work on mappings, which are structurally unreliable, as the site design can change at any time. The point where @qmacro comes in again is that content providers should have good reasons for not using cool-URI-compliant/REST-like URI structures (e.g. half the work of offering a separate API is already done then).

    – “Cool URIs do not change – but the world does”:
    Yes, the world evolves and there are events which cause even company names & domains to change (in terms of IT transformation this has to be considered in every IT transition planning – there is consultancy available for that).
    The specific Semantic Web note here is that handling large-scale dynamics (=changes in datasets) is a weak point in today’s Semantic Web tooling and therefore needs thorough thought when being approached. Standards/vocabularies are still emerging.

    (Btw. I would not like to extract wiki contents (in a wiki which I do not own) in this way: I consider it quite difficult to monitor and reflect changes instantly in the form of RDF-ized descriptions. NLP technology is not developed far enough at the moment to do this reliably.)

    – Governance for own datasets: Chris Paine remarked that one could look at larger external content stores when managing one’s own content creates more effort: but you likely have a different trust level in your own maintained content than in anything listed on e.g. the 250th page of a Google search result.

    So far for now,

    1. Tobias Trapp Post author
      Hi Daniel,

      I’m glad that you joined the discussion, and I like your clarifications and your pragmatic point of view. And I liked that you addressed weak points and proposed solutions – whether standards or facades can help. IMHO this is so interesting that it would deserve one or more blog entries. I think more and more people within SAP and the SAP ecosystem are getting interested in semantics – so it’s the right time to put some more fuel into the fire 🙂 I would really appreciate a blog about this topic from a real expert like you rather than from a Linked Data beginner like me.

      Resource discovery: I think this aspect could be worth a blog entry of its own. I guess a SPARQL query can be executed directly (my link would be a SPARQL query against an existing SPARQL endpoint) or it could be hidden within a web service which exposes itself as a typical REST query interface like Chris proposed. Both are technically possible, but what is the semantics in the case of a result set consisting of more than one element? From the standpoint of RDF properties I’m linking to a result set but not to the elements of the set. Am I right? What are the consequences?

      Cheers,
      Tobias

  7. Gregory Misiorek
    i couldn’t resist posting the link to the first web page, which is still up and running. the NeXT computer on which it was created is now immortalized by the NS prefix in C(ocoa), which in turn makes iDevices run the world over. i’m all for a world wide (knowledge) web. let’s leave silos in the unlinked world.
    i understand that XBRL is somehow related to XML, so we are closing yet another loop, this time in the finance and business world.
    1. Tobias Trapp Post author
      Yes, XBRL is an XML-based markup language for business reports, and SAP’s financial solution does support it to my knowledge. Some people even believe that usage of XBRL could lead to transparency in business (of financial risks, for example) and could even prevent a financial crisis like the subprime crisis.

      To be honest, I’m no expert in this area but I know that there are even approaches to add more semantics (think of classifications) to XBRL. It would be very interesting to discuss this topic with standards architects from SAP.

      Best Regards,
      Tobias

      1. Daniel Koller
        …first: loved the remark “some people even believe”.

        For today’s casual web user XBRL does not provide the required additional transparency: it is XML, but with very specific semantics, and – on top of that – the detailed meaning can be different from country to country (see http://de.wikipedia.org/wiki/XBRL#Zusammenfassung , in German, which lists the issue of missing comparability due to the differing availability of the taxonomies which CAN be used).

        (I don’t know whether even financial experts use specific XBRL software to analyze/compare XBRL reports)

        There is a link between XBRL and the semantic web:
        – The people from OpenLink created RDF representations of XBRL reports (via XSLT sheets)
        – They made it available to the RDF world via e.g. SPARQL: details at http://ode.openlinksw.com/example.html#ExampleXBRLfromSEC
        –> but I don’t know the latest status of this activity on OpenLink’s side.

        For sure: it is a nice vision to compare company balance KPIs with one mouse click.

