
Enter Semantics: Reasoning Using Pellet – Part 5

In the second part of this blog series I explained what ontologies are. They are formal descriptions of knowledge consisting of two parts: a set of facts and a rule base that is able to classify those facts. In our showcase the rule base consists of logical conditions on messages of the Business Application Log (BAL), an ABAP application that exposes protocols. The set of facts describes an error protocol that has to be analyzed.

The knowledge base is extracted from a wiki, and the BAL protocol can be consumed using a REST web service implemented in AS ABAP. Facts and knowledge are mashed up into an ontology. You can do this using an XML transformation or with an API I will introduce later in this blog. The ontology contains a set of subclasses of an error class, where each error class corresponds to an error situation that is defined on a dedicated wiki page. Now we have to classify the error protocol; as the result we get a set of possible error classes, and for each one a wiki page exists that contains additional information the user of the expert system can check to solve the problem. In the showcase I presented here the result would be:

- #Error_in_Transfer_of_FI-CA_Totals_Records_to_General_Ledger

The first bullet in the list contains the default classification: everything is a Thing. The second entry is the generic error class. All further entries are error classifications that can be assigned to a wiki page. Now we show how to calculate such classifications using a reasoner.
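To make the class structure concrete, here is a minimal sketch of what such an ontology fragment could look like in Turtle syntax. All names, the namespace, and the message class are illustrative assumptions of mine, not the actual showcase ontology:

```turtle
@prefix :     <http://example.org/bal#> .                  # hypothetical namespace
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# the generic error class
:Error a owl:Class .

# a specific error situation backed by a wiki page; the reasoner infers it
# for any protocol that contains a message of the given (hypothetical) kind
:Error_in_Transfer_of_FI-CA_Totals_Records_to_General_Ledger
    a owl:Class ;
    rdfs:subClassOf :Error ;
    owl:equivalentClass [
        a owl:Restriction ;
        owl:onProperty :containsMessage ;
        owl:someValuesFrom :TotalsRecordTransferMessage
    ] .
```

Because the specific error class is defined via an equivalent class expression, a protocol individual that contains a matching message is automatically classified into it by the reasoner.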

Reasoning with Pellet and OWLAPI

In the following I show how to do the reasoning. For that you need a reasoner like Pellet and the OWLAPI. In the following Groovy code I load an ontology (for the sake of simplicity from a file), classify an error message #ToCheck, and ask for the classes it belongs to. As the result I get the list above.

import org.semanticweb.owlapi.apibinding.OWLManager
import org.semanticweb.owlapi.model.IRI
import org.semanticweb.owlapi.model.OWLClass
import org.semanticweb.owlapi.model.OWLDataFactory
import org.semanticweb.owlapi.model.OWLNamedIndividual
import org.semanticweb.owlapi.model.OWLOntology
import org.semanticweb.owlapi.model.OWLOntologyManager
import com.clarkparsia.pellet.owlapiv3.PelletReasoner
import com.clarkparsia.pellet.owlapiv3.PelletReasonerFactory
// ontology IRI prefix (empty here; set it to your ontology's IRI)
String ont = ""
// create an ontology manager
OWLOntologyManager manager = OWLManager.createOWLOntologyManager()
OWLDataFactory factory = manager.getOWLDataFactory()
// read the ontology from a file
File file = new File("1.owl")
OWLOntology ontology = manager.loadOntologyFromOntologyDocument(file)
// load the ontology into the reasoner
PelletReasoner reasoner = PelletReasonerFactory.getInstance().createReasoner(ontology)
// look up the individual to classify and ask for all classes it belongs to
OWLNamedIndividual toCheck = factory.getOWLNamedIndividual(IRI.create(ont + "#ToCheck"))
Set<OWLClass> types = reasoner.getTypes(toCheck, false).getFlattened()
System.out.println("Result: " + types.toString())
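The inferred set also contains owl:Thing and the generic error class, so the client has to filter those out before it can link the remaining entries to wiki pages. A minimal sketch of that post-processing on the IRI fragments, in plain Java (which Groovy can call directly); the class and method names are my own, not from the showcase:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class TypeFilter {
    // keep only the specific error classes, dropping owl:Thing
    // and the generic error class #Error
    static List<String> specificErrors(List<String> inferredFragments) {
        return inferredFragments.stream()
                .filter(f -> !f.equals("Thing") && !f.equals("Error"))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> inferred = Arrays.asList(
                "Thing", "Error",
                "Error_in_Transfer_of_FI-CA_Totals_Records_to_General_Ledger");
        System.out.println(specificErrors(inferred));
    }
}
```

Each fragment that survives the filter corresponds to exactly one wiki page describing the error situation.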

Now it’s easy to put it all together: extract the wiki, read the BAL messages, build an ontology and perform the classification. If we find a result we open the corresponding wiki page in a browser. A simple Swing client that embeds a browser, as you can see in the picture below, can be programmed in fewer than 30 lines of Groovy, which proves that this language is great for prototyping.
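The last step, mapping an inferred error class to its wiki page and opening it, needs only a few lines. A sketch in plain Java; the wiki base URL is a made-up placeholder, and in a desktop client java.awt.Desktop can open the resulting address:

```java
import java.net.URI;

public class WikiOpener {
    // hypothetical wiki base URL; replace with your wiki's address
    static final String WIKI_BASE = "http://wiki.example.com/wiki/";

    // map an inferred OWL class fragment to its wiki page URI
    static URI wikiPageFor(String classFragment) {
        return URI.create(WIKI_BASE + classFragment);
    }

    public static void main(String[] args) {
        URI page = wikiPageFor(
                "Error_in_Transfer_of_FI-CA_Totals_Records_to_General_Ledger");
        System.out.println(page);
        // in the Swing client, something like this opens the page:
        // java.awt.Desktop.getDesktop().browse(page);
    }
}
```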



Creation of semantic apps is really simple because all the necessary software modules and APIs are very mature. Groovy is well integrated with Java; as a consequence it is very easy to integrate reasoners like Pellet. With these tools you can perform classification, which is the most powerful reasoning technique.

If you want to create your first semantic app I suggest installing Eclipse Helios and then the Groovy Eclipse plugin, or adding this plugin directly to your NWDS. Download a reasoner, add the libraries to your Eclipse project and start coding.

But there are drawbacks: for huge datasets OWL reasoning is quite slow, so we should try other techniques like SPARQL for querying the knowledge base. Another option would be splitting the ontology up into smaller ones, since some queries can be performed on a single ontology. In my opinion SAP’s in-memory strategy has huge potential if they develop a solution for unstructured data.
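To illustrate the SPARQL alternative, a query like the following could fetch error classifications directly; note that plain SPARQL only sees asserted (or pre-materialized) triples, not inferences, and the namespace here is an illustrative assumption:

```sparql
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX :     <http://example.org/bal#>

# all classes below the generic error class that the protocol individual belongs to
SELECT ?errorClass WHERE {
    :ToCheck a ?errorClass .
    ?errorClass rdfs:subClassOf :Error .
}
```

Such a query is much cheaper than full OWL classification, at the price of losing the inferences a reasoner would draw from class expressions.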

In my opinion automated classification using ontologies has much potential: it can be used in expert systems for lots of business processes, like compliance checking or cross- and upselling processes. And perhaps it will be used for business process automation, like goal-driven process management.
