
Gathering Feedback for SAP Java Product Maturity

   As part of a research project I am gathering feedback on the maturity of SAP Java products and technologies (applications, tools, UIs, IDEs, etc.). The feedback survey will run through the end of TechED 2008. I will be attending TechED and would like to conduct a few interviews there. If you are interested and willing to participate (and will be at TechED anyway), please let me know by providing your contact details in the survey. I can only conduct a few in-person interviews, so I cannot guarantee you will be selected.

   You will find the survey link just a bit farther down the page, after some explanation and background. Please use the blog comments to ask any questions or provide your comments.

The Short Version

   Why are we interested in product maturity? A mature product is one that is simple to use and has a minimum of faults. In this survey we focus on SAP products based on Java and ask how well they meet your expectations. At a minimum, you should expect mature software to be simple to use and largely free of faults. Beyond that, you should expect it to satisfy your intended purpose and to provide significant value in doing so. You will be asked to assess how well your expectations have been met when using SAP software to perform various tasks and activities that are part of the software lifecycle. From the results we hope to gain insight into how well various aspects or characteristics of the software meet your expectations. This insight should feed into our strategy for product planning and development, and should provide some perspective on how we measure quality versus how customers perceive the finished product. In the end, the goal is to have a clear view of the customer’s realization of value from SAP software. Your expectations are a useful reference point, as we believe it is valid to assume that realization of value is a critical part of your expectations and, ultimately, your satisfaction. The hypothesis is that such a feedback mechanism will provide a useful point of reference for evaluating how well our development and quality processes deliver software that actually meets customer expectations. The detailed version below says more about quality and maturity and how the two are different but complementary.

If you would like to jump right into the survey you can find it here. If you want to read about the background and basis for the study of maturity, keep reading the detailed version below – and then please take the survey.

The Detailed Version – Software Product Maturity

   Software product maturity is a derived measurement of product capability, stability, maintainability, and pedigree. Software product maturity is NOT the same as software process maturity (as assessed by the well-known Capability Maturity Model, or CMM[1]). Software product maturity models have been described primarily by Jan Bosch[2], Richard Turner[3], and John Nastro[4]. Further work published in the Journal of Object Technology[5] and materials published by Osman Balci et al.[6] of Virginia Tech provide additional background, though they discuss the topic in terms of software quality. The CMM defines software process maturity and deals with the development of the software; it does not address the finished quality of the software product. Unlike software process maturity, product maturity is concerned solely with the final product, the executable software, and not necessarily with how the software was developed. However, process and product maturity are not mutually exclusive; the product maturity model can work in conjunction with or independently of process maturity.[7] Software product maturity is derived from further study of the characteristics of the finished software, and the assessment of those characteristics draws largely on the field of software quality and testing. However, quality testing methodologies are not complete enough to fully evaluate maturity, as they typically do not consider aspects such as user satisfaction (which cannot be measured prior to the release of the software product) or simplicity (which depends significantly on the perception of the end user). And because the product may ultimately be used in ways its designers never conceived, it is impossible to develop tests covering every use case or integrated scenario. Further, characteristics such as simplicity are always judged subjectively by end users, based on their own expectations and experience. There is no quantifiable scale that defines simple, hard, or the degrees of simplicity between the extremes. Maturity by our definition is based on subjective aspects, and thus maturity is assessed rather than tested.

   In contrast, product quality testing requires well-defined metrics that can be considered in finite terms – you must be able to answer yes/no or pass/fail with regard to the characteristic you are testing. ISO 9126 provides a framework for quality testing based on six characteristics:

  • Functionality (a set of attributes that bear on the existence of a set of functions and their specified properties. The functions are those that satisfy stated or implied needs)
  • Reliability (a set of attributes that bear on the capability of software to maintain its level of performance under stated conditions for a stated period of time)
  • Usability (a set of attributes that bear on the effort needed for use, and on the individual assessment of such use, by a stated or implied set of users)
  • Efficiency (a set of attributes that bear on the relationship between the level of performance of the software and the amount of resources used, under stated conditions)
  • Maintainability (a set of attributes that bear on the effort needed to make specified modifications)
  • Portability (a set of attributes that bear on the ability of software to be transferred from one environment to another)
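To make the contrast concrete, here is a small hypothetical sketch in Python. The checks, characteristic names in the data, and the 1-5 rating scale are invented for illustration, not taken from ISO 9126 or the survey itself; the point is only that quality testing yields a pass/fail verdict against the specification, while maturity can only be summarized from subjective ratings.

```python
# Hypothetical sketch: quality testing is binary (pass/fail against the
# specification), while maturity is a subjective rating against user
# expectations. All data below is invented for illustration.

def quality_report(test_results):
    """A characteristic passes only if every test against the spec passes."""
    return {char: all(results) for char, results in test_results.items()}

def maturity_report(survey_ratings):
    """Subjective 1-5 ratings ("does it meet your expectations?") can only
    be summarized, e.g. averaged; there is no pass/fail."""
    return {char: sum(r) / len(r) for char, r in survey_ratings.items()}

tests = {"functionality": [True, True], "reliability": [True, False]}
ratings = {"usability": [4, 2, 5], "reliability": [3, 3]}

print(quality_report(tests))     # {'functionality': True, 'reliability': False}
print(maturity_report(ratings))  # usability averages ~3.67, reliability 3.0
```

One failed test fails the whole characteristic, but no single survey answer can "fail" maturity – it only shifts the average.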

   The point of reference in quality testing is typically the design specification. But in the end, when you have a finished product, it is used by people who do not judge it by the specification but by how well it meets their needs or fulfills their expectations. Thus we use a simpler definition of maturity, with the point of reference being the user’s expectations of what the software should do and how easy it is to use. The listed references use the terms “maturity” and “quality” in different ways, but the fundamental concepts used to assess software products are consistent. For our purposes, the term maturity refers to software “that has been in use for long enough… that most of its initial faults and inherent problems have been removed or reduced by further development.”[8] And although this definition is given for mature technology, it applies quite well to software, as it notes that “one of the key indicators of a mature technology is the ease of use for both non-experts and professionals.”[9] Ease of use is an important characteristic of software, although “ease” is quite difficult to quantify in a design or specification. But certainly all the dimensions of maturity have ease of use as an underlying aspect of a positive measurement. As we stated at the outset, software product maturity is a derived measurement of product capability, stability, maintainability, and pedigree. The “measurement” is the answer to the question “does the software meet your expectations?” Ease of use and simplicity are key considerations in that answer.

   This research project explores the topic of product maturity and examines the use of surveys to measure it. Such a methodology for assessing maturity should emphasize subjectivity, and the primary data source should be the users, both non-experts and professionals, who test the software in unique, interesting, and insightful ways without ever explicitly running a test. For them the real test is whether the software delivers the value and benefits it was intended to deliver, and their different perspectives collectively can provide a full report on the software’s maturity. That full report can then serve as another input into the product planning and development process and can complement our software quality efforts.
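As a minimal sketch of how such subjective responses could be rolled up into a “full report”: the user groups, lifecycle tasks, and 1-5 scale below are assumptions made for illustration, not the actual survey design.

```python
from collections import defaultdict

# Hypothetical responses: (user_group, lifecycle_task, rating), where the
# 1-5 rating answers "how well were your expectations met for this task?"
responses = [
    ("non-expert",   "installation",  3),
    ("non-expert",   "configuration", 2),
    ("professional", "installation",  4),
    ("professional", "development",   5),
]

def maturity_by_task(responses):
    """Average expectation rating per lifecycle task across all user groups."""
    ratings = defaultdict(list)
    for _group, task, rating in responses:
        ratings[task].append(rating)
    return {task: sum(r) / len(r) for task, r in ratings.items()}

report = maturity_by_task(responses)
# Tasks scoring below the scale midpoint suggest unmet expectations
gaps = sorted(task for task, score in report.items() if score < 3)
print(report)                       # installation 3.5, configuration 2.0, development 5.0
print("below expectations:", gaps)  # ['configuration']
```

Aggregating across both non-expert and professional respondents is deliberate: the different perspectives together form the report, and the per-task gaps are what would feed back into planning and development.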

 Now, on to the survey – SAP Java Maturity


[1] http://en.wikipedia.org/wiki/Capability_Maturity_Model
[2] http://www.janbosch.com/
[3] http://www.stevens.edu/engineering/documents/fileadmin/documents/doc/Turner_full_vita_July_07.doc
[4] http://www.stsc.hill.af.mil/crosstalk/1997/08/product.asp
[5] Francisca Losavio et al.: “Quality Characteristics for Software Architecture”, Journal of Object Technology, vol. 2, no. 2, March–April 2003, pp. 133–150. http://www.jot.fm/issues/issue_2003_03/article2.pdf
[6] Osman Balci et al., Department of Computer Science, Virginia Tech: “Online Interactive Modules for Teaching Computer Science”, http://courses.cs.vt.edu/~csonline/SE/Lessons/index.html
[7] John Nastro, AIL Systems
[8] http://en.wikipedia.org/wiki/Mature_technology
[9] Ibid.


2 Comments


  1. Markus Doehr
    …is in my opinion the (almost) total separation between the “Installed base maintenance” and “The Development”.

    Once a product is GA, it’s already “past” for development. The IMS creates notes and workarounds, and thus recurring problems never reach development to actually fix the root cause instead of “fine tuning” the workaround note (at least that’s my customers’ impression – maybe I’m mistaken).

    During several OSS calls I have had the luck of speaking with developers directly and talking for a few minutes, off the actual OSS call, about our major issues, and I could literally hear their eyebrows lifting because they were not aware of certain problems – because there was already a workaround.

    Markus

    1. Nicholas Holshouser Post author
      Hi Markus,

      Your example really speaks to the difference between quality and maturity. If the preference is for making it work (what else could a work-around be?), then I have to agree that doesn’t seem to demonstrate maturity. My definition was “that most of its initial faults and inherent problems have been removed or reduced by further development”, with the key being “further development”. Given that faults are not corrected in the initial release, but rather delivered in a patch or support pack, a work-around is often necessary – but it isn’t sufficient to make a product mature. Fixing the root cause should always be the goal, as that reduces the need for long-lived work-arounds.

      regards, Nick

