Why Logic Programming Matters
Logic Programming (LP) is a programming paradigm that generates useful information by applying transformation rules to data within a logical framework. In deductive logic – which is the main topic of this article – rules are very simple functions that convert an input (the premise) into an output (the conclusion), and they fire whenever the condition stated in the premise is true. Conclusions follow from premises, and it is the responsibility of the designer/programmer to pose valid rules. To be valid, a rule must satisfy one binding constraint: whenever the premise is true, the conclusion must always be true as well.
How can LP be beneficial for my business?
In very general terms, LP is particularly suitable for:
- Combinatorial problems.
- Decision making.
- Configuration validation.
- Reasoning and argumentation.
- Resource allocation.
LP warmly embraces logical reasoning – a field of philosophical speculation since ancient Greece – because human reasoning carries some kind of logical structure, and LP aims to formalize that way of thinking by transposing thoughts into computable programs. Looking more closely at what reasoning is about, we can say that it is a generative process for assembling new assertions that aim to fulfil goals. If someone tells you:
Peter listens to the radio before he reads the newspaper. He reads the newspaper before he does his washing. He eats his breakfast while he listens to the radio. He plans his day while he does his washing.
When drawing the temporal sequence of events, it becomes clear that some of them may be contemporaneous:
Novel conclusions might emerge that do not correspond to any of those we already know:
Peter eats his breakfast before he plans his day.
Since an infinite number of conclusions may follow logically from a set of premises, one difficulty is selecting which of them (or at least a few) will be helpful for reaching the goal we need to achieve. The ultimate objective of LP is to generate new assertions, test them against the constraints of the expected goals, and filter out the invalid ones from the potentially useful candidates.
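Peter's schedule can be simulated with a tiny forward-chaining loop. The sketch below is my own Python illustration (not the framework mentioned later), and it assumes that "while" means the two activities overlap in time, so an activity inherits the temporal ordering of the one it accompanies:

```python
# Facts from the text: before(a, b) = "a happens before b",
# during(a, b) = "a happens while b happens".
before = {("radio", "newspaper"), ("newspaper", "washing")}
during = {("breakfast", "radio"), ("planning", "washing")}

changed = True
while changed:
    changed = False
    new = set()
    # Rule 1: 'before' is transitive.
    for (a, b) in before:
        for (c, d) in before:
            if b == c and (a, d) not in before:
                new.add((a, d))
    # Rule 2 (assumption): if x happens during y,
    # x inherits y's ordering with respect to other events.
    for (x, y) in during:
        for (a, b) in before:
            if a == y and (x, b) not in before:
                new.add((x, b))
            if b == y and (a, x) not in before:
                new.add((a, x))
    if new:
        before |= new
        changed = True

# The novel conclusion from the text emerges at the fixpoint.
print(("breakfast", "planning") in before)  # True
```

The loop keeps applying both rules until no new fact appears, which is exactly the "generate, then test" cycle described above.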
What differentiates LP from other software paradigms?
The main discriminator between LP and most other programming languages is that LP is declarative. Imperative languages sit much lower on the ladder of abstraction: writing programs in C, Java or Python means expressing how the computer should process the input, by executing a given algorithm. Conversely, in LP the problem is expressed in terms of what the LP solver is expected to accomplish, leaving the question of how to do it to the solver itself. The advantages of a declarative language are:
- Abstraction. It leaves the final implementation choices and optimization policies to the specifics of the platform and problem domain.
- Graphical Editing. The conciseness of LP allows us to use domain-specific graphical tools for flattening the learning curve.
- Automatic program synthesis. The required logic grammar is tiny compared to that of common programming languages, which makes automatic synthesis of LP code much easier.
What is the recipe for a logic program?
- Knowledge base of facts.
- Set of arguments, rules and constraints.
- Solver engine.
A set of simple rules (named arguments) is applied mechanically by the LP solver over a knowledge base, and this operation is repeated recursively according to the type of logic inference. The solver's engine might emphasize deductive rather than inductive logic, or integrate modal or deontic logic, depending on how sophisticated it is and the kind of problems it is meant to solve.
From here on, I will include some language syntax conforming to Datalog, since I'm building a framework inspired by this language. In deductive mode the arguments are applied in the forward direction, from premise to conclusion. The written form is:
<conclusion> :- <premise>
The symbol ':-' stands for "implies", read right to left: you can literally translate the rule as "<premise> implies <conclusion>". Applying such a rule is the simplest inference operation, called 'modus ponens'.
mortal(Socrates) :- man(Socrates)
If Socrates is a man, then Socrates is mortal.
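The recipe above (facts, rules, solver) can be condensed into a tiny bottom-up evaluator. This is a sketch of my own in Python, not the author's framework: facts are tuples, rules are (head, body) pairs, and variables are strings starting with an uppercase letter, as in Datalog:

```python
def is_var(t):
    """A term is a variable if it starts with an uppercase letter."""
    return isinstance(t, str) and t[:1].isupper()

def match(pattern, fact, env):
    """Try to unify a body atom with a ground fact under env."""
    if len(pattern) != len(fact):
        return None
    env = dict(env)
    for p, f in zip(pattern, fact):
        if is_var(p):
            if env.setdefault(p, f) != f:
                return None
        elif p != f:
            return None
    return env

def solve(facts, rules):
    """Apply every rule over the knowledge base until a fixpoint."""
    facts = set(facts)
    while True:
        new = set()
        for head, body in rules:
            envs = [{}]
            for atom in body:
                envs = [e2 for e in envs for f in facts
                        if (e2 := match(atom, f, e)) is not None]
            for env in envs:
                fact = tuple(env.get(t, t) for t in head)
                if fact not in facts:
                    new.add(fact)
        if not new:
            return facts
        facts |= new

# Knowledge base of facts.
facts = [("man", "socrates")]
# One argument: mortal(X) :- man(X).
rules = [(("mortal", "X"), [("man", "X")])]
derived = solve(facts, rules)
print(("mortal", "socrates") in derived)  # True
```

This is naive evaluation: real Datalog engines use smarter strategies (e.g. semi-naive evaluation), but the mechanics of modus ponens are the same.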
The inductive mode is the exact reverse operation, applied when we need to estimate the premises that lead to a given piece of evidence (strictly speaking, this reverse inference is usually called abduction). Taking the example above, if we assume that Socrates is actually mortal, it turns out that he is possibly a man. I say 'possibly' because it is just an interpretation: it is possible, but we cannot be sure that it is actually the case. It is the job of the LP system to take care of meta-rules and edge cases during execution.
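The reverse reading can be sketched as a hypothesis generator: given an observed conclusion, list every premise whose rule could have produced it. This is my own illustration; the second rule (animals are mortal too) is a made-up addition to show why the answer is only 'possible':

```python
# Rules as (premise predicate, conclusion predicate) pairs.
rules = [("man", "mortal"), ("animal", "mortal")]

def abduce(observation, rules):
    """Return candidate premises whose rule would yield the observation."""
    pred, entity = observation
    return [(premise, entity) for premise, conclusion in rules
            if conclusion == pred]

# Two competing explanations: the evidence alone cannot decide.
print(abduce(("mortal", "socrates"), rules))
# [('man', 'socrates'), ('animal', 'socrates')]
```

Unlike deduction, nothing here is guaranteed: the output is a set of candidates to be tested against further constraints.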
How is the data represented?
Data is represented in the form of tuples, or 'atoms': the smallest independent units of information.
If we need to associate an attribute with an entity, we specify it as <attribute>(<entity>):
color(grey)
Here we attribute the property of color to the entity grey. If we want to express a relation between entities, we write <relation>(<subject>, <object>, <relation attributes>…). Example:
hasColor(pen, blue)
The pen has the color blue. The advantages of this representation are:
- Abstract. It is extremely generic and can be integrated virtually with any kind of database. Though it is particularly suitable for Knowledge Graphs, it can also generalize the access to relational DBs.
- Concise and easy to manipulate in LP arguments.
What is Relational Algebra?
If you have a basic familiarity with SQL you already understand the concept: it is about joining different relations on one or more shared attributes. Before introducing the topic, let me step back and write about variables in LP.
We can generalize the Socrates example by using variables, so that "for all men" turns into:
mortal(X) :- man(X)
where X is substituted with every entity asserted to be a 'man'. Variables can then be used for composing joins:
isLocatedAt(X, Z) :- isLocatedAt(X, Y), isLocatedAt(Y, Z)
The argument above states that if I am located in Munich, and Munich is in Germany, then I am in Germany as well. If the LP solver runs this argument recursively, it can connect nodes that are many degrees apart from each other, and very complex data structures can be generated.
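Running the recursive rule to a fixpoint can be sketched in a few lines of Python. The location chain below is my own illustration (I added the "bavaria" hop, which is not in the article):

```python
# isLocatedAt as a set of (X, Y) pairs.
located = {("me", "munich"), ("munich", "bavaria"), ("bavaria", "germany")}

# isLocatedAt(X, Z) :- isLocatedAt(X, Y), isLocatedAt(Y, Z)
changed = True
while changed:
    changed = False
    for (x, y) in list(located):
        for (y2, z) in list(located):
            if y == y2 and (x, z) not in located:
                located.add((x, z))
                changed = True

# Two recursive steps away from the starting fact.
print(("me", "germany") in located)  # True
```

Each pass of the loop joins the relation with itself, so after enough passes the store contains the full transitive closure of the location graph.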
This example follows the transitivity property, which is a common pattern in Knowledge Graphs. Can we make it even more generic and apply it to all similar properties? Of course! Let's declare that 'isLocatedAt' is transitive and capture the pattern in a single rule:
X(Y, W) :- transitive(X), X(Y, Z), X(Z, W)
In this way we have a meta-argument: an argument that governs other arguments through the transitivity property.
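The meta-argument can be sketched as a loop that closes every relation tagged as transitive, whatever its name. This is my own illustration; the "europe" hop and the breakfast ordering are made-up facts added to show the rule working on two different predicates at once:

```python
facts = {
    ("transitive", "isLocatedAt"),
    ("isLocatedAt", "munich", "germany"),
    ("isLocatedAt", "germany", "europe"),
    ("transitive", "before"),
    ("before", "breakfast", "lunch"),
    ("before", "lunch", "dinner"),
}

# X(Y, W) :- transitive(X), X(Y, Z), X(Z, W)
changed = True
while changed:
    changed = False
    transitive = {f[1] for f in facts if f[0] == "transitive"}
    for f1 in list(facts):
        for f2 in list(facts):
            if (len(f1) == 3 and len(f2) == 3
                    and f1[0] in transitive and f1[0] == f2[0]
                    and f1[2] == f2[1]):
                derived = (f1[0], f1[1], f2[2])
                if derived not in facts:
                    facts.add(derived)
                    changed = True

print(("isLocatedAt", "munich", "europe") in facts)  # True
print(("before", "breakfast", "dinner") in facts)    # True
```

Note that a rule with a variable in predicate position goes beyond plain Datalog; this is exactly the kind of extension a custom framework can provide.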
Giancarlo Frison is Technology Strategist at SAP Customer Experience Labs