When can one say that a certain type of consensus is worth being called a standard? Let us exclude – for the moment; I might come back to this – the question of formality and ownership. So I am not going to discuss whether the term standard depends on the body owning or maintaining it, or what the difference between a standard and a de facto standard is.
Let us start by looking at a simple example: the size of a sheet of paper. There are DIN and corresponding ISO standards (norms) defining very precisely the size of paper. In Europe A4, or in the US Letter, has a clearly defined set of attributes. And importantly, one can easily measure whether the standard is met… The norm, however, does not define the material or structure (including the typical weight attributes). This leads to the situation that one A4 sheet might differ quite a bit from another piece of paper following the same standard. Very often the definition of a standard incorporates some freedom regarding the interpretation of certain ‘dimensions’ (like the weight or material in the paper example). Such freedom should, of course, be a wise and deliberate decision.
In IT there are certainly also a lot of standards – even if we leave the physical world with plugs, pins, voltages and the like aside. Just think about the proposals of the IETF (the whole body of RFCs), W3C standards, again ISO (e.g. ISO 20022, with recognized importance and acceptance in Financial Services) and the like…
If you have ever dealt with a standard like WSDL or Web Services, you know that there is most often some room for interpretation (like the weight of paper in the A4 example) left to the implementer. This partially leads to the situation that two standard-compliant products do not fit together and have proprietary aspects. Again, the possibility of interpretation should be an active design decision and not the result of missing precision.
On the other hand, there are certainly a lot of reasons why total precision (no room for interpretation) is not achievable or even desirable. But let me focus now on my basic question:
Which ratio of precise definition to flexible interpretation (or extension of a standard) is reasonable?
There is certainly a span for this ratio (the usual answer: it depends…), and it should be clear that a higher approximation to 1 (meaning no deviation is possible) increases out-of-the-box matching of different products. But does that mean that a standard with a ratio of 0.2 (only 20% of a ‘domain’ is precisely defined, the rest is left open) is not worth going for or does not help?
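To make this ratio concrete, here is a toy sketch in Python. It models a standard as a set of ‘dimensions’, each either precisely defined or left open to the implementer, and computes the fraction that is pinned down. The dimensions and values mirror the A4 paper example above and are purely illustrative, not taken from any actual norm.

```python
# Toy model: a standard is a set of 'dimensions', each either precisely
# defined (True) or left open to interpretation (False). The dimension
# names are hypothetical, loosely following the A4 paper example.
a4_standard = {
    "width_mm": True,     # precisely defined by the norm (210 mm)
    "height_mm": True,    # precisely defined by the norm (297 mm)
    "weight_gsm": False,  # left open to the implementer
    "material": False,    # left open to the implementer
    "color": False,       # left open to the implementer
}

def precision_ratio(standard: dict) -> float:
    """Fraction of dimensions that the standard pins down exactly."""
    return sum(standard.values()) / len(standard)

print(precision_ratio(a4_standard))  # 2 of 5 dimensions fixed -> 0.4
```

A ratio of 1.0 would mean no deviation is possible at all; the open question of the post is how far below 1.0 a standard can sit and still be useful.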
Let me transfer this question now to banking IT (the banking-specific part)… If we view ‘banking IT’ as one domain for a standard (which is of course an oversimplification), what would be a meaningful ratio to go for? I am pretty sure that in many ‘domains’ (this is quite a tricky word) one will not find much desire from the industry players (banks, vendors, service providers… except maybe the regulators) to have a totally precise definition at hand. I have discussed this partially in an older blog post, Standardization harms my unique selling proposition, so the battleground for the question should be prepared.
In BIAN (the Banking Industry Architecture Network), an association I am engaged and involved in via my day-to-day business at SAP, we have defined (and will continue to define) a set of standard definitions at different levels of detail, which build upon each other and gradually increase in precision.
The ‘BIAN high level standard’ is the joint understanding of the coarse-grained capabilities of a bank (terminology-wise I am referring to the definition of capabilities in the OASIS SOA Reference Model). One could call it a domain model; from a BIAN perspective we call it the Service Landscape, as it is a hierarchical view of banking-specific services. (Please check out the blog post by David Frankel about the BIAN Service Landscape.)
Wouldn’t it be helpful to standardize banking on this abstract level already? Imagine that banks would take the same model (some would say it is only a picture) to discuss the organization or processes of their banks for certain purposes (even if it is ‘only’ about purchasing decisions or in- and outsourcing…).
The next, more detailed level is a finer description of these capabilities, including a clear assignment of identified IT services.
A further level captures the semantic part of the service definition, including a business description, a clear definition of pre- and postconditions, a high-level message structure, plus some more details…
A final level is comparable to a WSDL-type definition (regarding precision) but would of course also include the agreed and consistently defined semantics of the upper levels.
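To give a feel for what the ‘semantic level’ adds beyond a purely syntactic interface, here is a minimal Python sketch: a message structure plus explicit pre- and postconditions for a single service operation. The operation name, fields, and rules are entirely hypothetical and are not actual BIAN definitions.

```python
from dataclasses import dataclass

# Hypothetical message structures for one banking service operation.
@dataclass
class OpenAccountRequest:
    customer_id: str
    initial_deposit: float

@dataclass
class OpenAccountResponse:
    account_id: str
    balance: float

def open_account(request: OpenAccountRequest) -> OpenAccountResponse:
    # Precondition (semantic level): a known customer, non-negative deposit.
    assert request.customer_id, "customer must be identified"
    assert request.initial_deposit >= 0, "deposit must be non-negative"

    # How the account is created is left open to the implementer;
    # the account-id scheme here is purely illustrative.
    response = OpenAccountResponse(
        account_id=f"ACC-{request.customer_id}",
        balance=request.initial_deposit,
    )

    # Postcondition (semantic level): the new account carries
    # exactly the deposited amount.
    assert response.balance == request.initial_deposit
    return response
```

A WSDL-type definition would fix the message structures syntactically; the semantic level additionally pins down what must hold before and after the call, while still leaving the implementation itself open.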
Would you call all of these levels a standard? In other words, are all of them worth being labeled a standard (depending on the excluded properties – see above – of course)?
If I look at today’s situation in banks: the existing landscapes are enormously complex and a mix of grown, self-developed, and bought pieces. Even going back to the highest layer only, a standard would be an increasing benefit for the industry!
This would of course not remove integration issues completely, but it would be a great starting point, and I am sure it would prove to save an immense amount of money on integration already and would enable new ideas and business models immediately.
So I would say that this is worth being called a standard already – knowing that achieving consensus even on this level is a major undertaking. But it is quite simple: if we don’t move, we will never get there.