
Recently at Sapphire I attended several micro-forums and sessions on data governance and data migration. It was interesting to listen to the discussions about data governance and how closely it relates to the key issues involved in designing new business processes. These issues are similar to what workflow and business process experts face today.

When talking about workflow and process design, several issues always come up for discussion.  Three common issues include:

  • Ownership – who owns the process? Workflow is a business-focused process that technologists help implement. The workflow drives a business process, is completely owned by the business, and will only succeed when there are clear business owners.
  • Understanding the results – defining what will be different after the workflow is in production is critical. Will POs get approved faster? Will repair response time be reduced?
  • User involvement – at the end of the day, the true end users determine the success of the workflow, and they will judge it by their experience with the inbox. User and stakeholder alignment is key to the success of any workflow project.

At Sapphire, when discussing data migration and data governance – from ensuring the data can support the business process to ongoing data governance – very similar issues came up. (Note: Ina Mutschelknaus did an awesome job of leading some of these sessions and discussions.)

  • Define what data governance means at your company – ensure you have executive sponsorship and a clear definition of the real meaning and impact of data on your core processes.
  • Who owns the data – this can be a bit more vague than who owns the process. Owning a process becomes challenging as the process crosses organizational boundaries. For example, approval of purchasing records is owned by someone in the purchasing department, but the moment you put this into the context of procure-to-pay, or involve the finance and sales departments, you need a process owner that crosses organizational boundaries. You get the same issues with data ownership. At a micro-forum I asked why the business process owner doesn’t own the data for the data migration. The question posed back to me was ‘who would own the material?’ – it is used in many processes by various departments. This becomes analogous to the process that crosses organizational boundaries. The core material may be owned by one data steward, each plant/sales view owned by others, and the whole record owned at a higher level by a global data steward (so now you need workflow to enforce data governance! – see the sketch after this list).
  • Choose a key initiative as a test case – when it comes to data governance, it is important to start with something that can really prove your case. Oftentimes the best place to start is a project around customer data, since it impacts so many areas of the business, and oftentimes you only have to look at postage and shipping costs to determine the dollar savings from correct customer data.
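
To make that layered-ownership point concrete, here is a minimal sketch of workflow enforcing data governance. All steward names and attribute groups are illustrative assumptions, not any SAP data model: a change touching a single view routes to its owning steward, while cross-view or unmapped changes escalate to the global data steward.

```python
# Minimal sketch: route a material master change to the right data steward.
# Steward names and attribute groups are made-up examples.
from dataclasses import dataclass

# Hypothetical mapping of attribute groups (views of the material) to owners.
STEWARDS = {
    "core": "core_data_steward",
    "plant_0001": "plant_0001_steward",
    "sales_us": "sales_us_steward",
}
GLOBAL_STEWARD = "global_data_steward"

@dataclass
class ChangeRequest:
    material_id: str
    attribute_groups: set  # which views of the material this change touches

def route_approval(change: ChangeRequest) -> str:
    """Return the steward whose inbox this change request lands in."""
    unknown = change.attribute_groups - STEWARDS.keys()
    if unknown or len(change.attribute_groups) != 1:
        # Cross-view, empty, or unmapped changes escalate to the global steward.
        return GLOBAL_STEWARD
    (group,) = change.attribute_groups
    return STEWARDS[group]

print(route_approval(ChangeRequest("MAT-100", {"plant_0001"})))        # plant_0001_steward
print(route_approval(ChangeRequest("MAT-100", {"core", "sales_us"})))  # global_data_steward
```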

There are several other issues that are similar between the two, but I am convinced workflow and process experts should align and collaborate closely with the data governance and data steward experts because:

  1. Both groups understand the importance of getting the right people on board.  Without the right organizational and user support, both types of projects will fail.
  2. Both groups know what it is like to have to ‘prove’ themselves and their results. Workflow experts and data governance experts know how to start small and build on success.
  3. Data drives the business process. Workflow folks understand the business process well and how it flows through the organization; that knowledge can be key to ensuring the data correctly drives the business process.

21 Comments


  1. Tammy Powlas
    Starting with the vendor master, material master – completely agree that a partnership between workflow and data governance is where to go.

    Great blog, Ginger!

  2. Vijay Vijayasankar
    I used to think that BPM/Workflow is the answer to the big problems in data governance. But a few projects later – I am convinced it is more nuanced than that.

    Master data, for example, has several attributes – all owned by different parts of the enterprise. It is not possible to predefine all combinations of how a process should work or get routed. My current thought is that what is practical is StreamWork-type integration, with the ability to change routing and invite people ad hoc to solve the problem at hand.

    The other part of the issue is showing the business impact of not doing any governance – not in technical terms like “23% of data has wrong ship-to zip codes” but more like “in the next 30 days, 345 shipments will get returned at a cost of $1.8M”. Most governance tools just are not capable of that today. (See the sketch below.)

    My 2 cents..
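
    By way of illustration, here is a back-of-the-envelope sketch of the translation Vijay describes. Every input figure is an assumption chosen only so the output matches his example numbers; no real governance tool or data is involved.

```python
# Translate a technical data-quality metric into a projected business impact.
# All inputs are illustrative assumptions, not real measurements.

def projected_return_cost(shipments_next_30_days, bad_zip_rate,
                          return_probability_if_bad, cost_per_return):
    """Estimate returned shipments and their cost over the next 30 days."""
    expected_returns = shipments_next_30_days * bad_zip_rate * return_probability_if_bad
    return expected_returns, expected_returns * cost_per_return

returns, cost = projected_return_cost(
    shipments_next_30_days=6_000,    # assumed shipment volume
    bad_zip_rate=0.23,               # the technical metric: 23% bad zip codes
    return_probability_if_bad=0.25,  # assumed share of bad zips that bounce
    cost_per_return=5_200.0,         # assumed fully loaded cost of one return
)
print(f"~{returns:.0f} shipments returned, ~${cost:,.0f} at risk in 30 days")
# -> ~345 shipments returned, ~$1,794,000 at risk in 30 days
```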

      1. Vijay Vijayasankar
        Hi Ginger

        No I do not have a blog – but now that you mention it, I think I should put my semi-coherent thoughts into a blog post.

        Governance is a real challenge all around in most projects – so I would like to hear what the community thinks.

        Cheers
        Vijay

    1. Community User
      Hi Vijay,

      Thanks for your comments. You have rightly pointed out that BPM is certainly a part of data governance but doesn’t complete the picture. There are other aspects to it, e.g. policies, quality, security/compliance, KPIs, etc. Also, you are correct that it is important to look at the operating model of the organization to understand who owns which pieces of data.

      I am not too sure, though, about your statement that data management processes are better addressed through ad hoc workflows. Maybe I have understood your point incorrectly, so please accept my apologies in advance.

      Customers typically have process owners whose job is to define repeatable, standard business processes, e.g. order to cash, procure to pay, master data management, etc., with all possible variants. In case of errors, you’d expect the organization to have a detailed plan to resolve these issues in the data. StreamWork can probably add value when the organization wishes to carry out some ad hoc processes, e.g. developing consensus on whether to use central vs. decentralized MDM, or whether the current OTC process needs an overhaul due to issues in customer profitability, etc. So I see SW as a tool which helps in decision making, and I have used it to get conversations going and arrive at a conclusion. But the outcome is generally a formal decision logged in an internal system, with detailed system and process designs and anything else required to present to the business. SW usually ends there for us; it doesn’t become part of standard operating procedures to support data-related issues.

      Could you please expand on the cases where you considered ad hoc workflows more suitable than standard business process workflows enabled via BPM, ABAP workflow, etc.?

      1. Vijay Vijayasankar
        I haven’t finished my first coffee this morning – so apologies if this does not come out very clear 🙂

        The expectation of having a defined owner for data is not always satisfied in an organization. For example – who owns the customer master? Does shipping own it? Billing? A/R? And so on. The answer actually is – all of them own parts of it, with some overlap. So when things go wrong – who fixes it? Except for some straightforward cases, most master data changes need coordination between multiple departments. The issue between predefined BPM and ad hoc processes arises because it is very difficult to determine all possible combinations and sequences of activities needed to fix data.

        E.g.: if you let a shipping clerk change the address of a customer without input from others in the larger organization, there might be tax implications, sales area assignment issues, and so on. So the person changing data needs some judgment in pulling in the right people beyond the “straightforward flow” defined in BPM.

        A large part of this challenge is not tooling – it is just that the business does not have enough people to be data stewards. And the business won’t have an interest in assigning stewards unless they know the tangible cost of not having them. The current steward dashboards from vendors I have seen do NOT solve this issue.

        Cheers
        Vijay

        1. Ginger Gatling Post author
          Hi Vijay
          What about the role of the data steward? Have you set this up such that each group is responsible for their part of the customer/material – but conflicting or broader issues go to a data steward?
          Best
          Ginger
          1. Vijay Vijayasankar
            The challenge I have faced the most is defining “their part” of the data for the customer master. It is so integrated that something as simple as a zip code cannot be pinpointed to one group or person in large companies.

            From a data steward point of view – it usually evolves very quickly into a scenario where they only get issues that need multiple parties to solve. At that point, email and messengers take over from BPM. And this is where I think StreamWork-type integration could be very useful.

            There is an additional challenge – where parts of master data are outsourced to an external agency to maintain. This brings all kinds of issues – security, routing of messages within the agency and between organizations, and so on.

            Not sure where you are based, Ginger – but if you are in Palo Alto some time, ping me and we can chat in person. This is a topic close to my heart, and I think we might both get something useful out of such a conversation.

            1. Community User
              Hi Vijay,

              I appreciate where you are coming from. But I feel the problem you are highlighting is about the maturity of the data governance model in an organization.

              I am working on a project with a pharma with multiple independent business units. Currently we are working with two of these BUs on central authoring of master data, expanding to a third BU in due course. We initially started with customer and are now expanding into material and vendor. Given the target operating model of the firm, a Central Data Management Organization (CDMO) has been formed. They own the data. For data maintenance, we have identified resources who need to approve changes before these can be committed to the receiving systems. The CDMO engages BU leads and global process leads to ensure we have the right governance process. We are also working on KPI reporting for the CDMO.

              Customer is a good example. The pharma and consumer BUs of the organization do not share the same customers, nor the data model. They don’t even share the same operating model, but they are committed to the CDMO concept and willing to standardize on the data maintenance process. In addition to the NW MDM system to store customers, the organization has a central CRM system which is non-SAP, and a SaaS-based CRM system for sales force automation. There is also a non-SAP MDM system, but that is not used for customer data maintenance. So it is a reasonably complex situation with multiple overlapping data repositories and non-standard processes. We won’t be able to fix a lot of the issues in the project timeframe, so we are working with the Enterprise Architecture function to help improve the situation over the longer term.

              Anyway, I agree that ad hoc activities around the data management process, e.g. decision making, could be addressed through SW. However, the actual data management process is a standard workflow, with the CDMO being the owner of the data in our case.

              My 2 cents.

              Regards,

              Shehryar

              1. Ginger Gatling Post author
                Hi Shehryar
                Thanks so much for sharing your 2 cents – it’s good to hear about the setup and the standard workflow. 
                Best
                Ginger
            2. Ginger Gatling Post author
              Hi Vijay
              I’m out of Dallas but would enjoy talking with you, and I’d like Ina – our data governance queen – to join the conversation as well… check out her blogs: /people/ina.mutschelknaus/blog

              I’m about to be out – but ping me in email and we can set up a time to talk when I return: ginger.gatling@sap.com

  3. Edward Diehl
    Good thoughts, Ginger.
    As I read this it occurred to me that in our current implementation we have four external systems feeding master data to us: personnel assignments, equipment specifications, material, and civilian contractor data.
    Each type can trigger a workflow to alert stakeholders of “significant” events. In each, some action needs to be taken by somebody.
    The point is that our system has to reconcile any changes with other data in our system, and we use workflow for that. The effect of changes to the material file on purchase reqs, for example, is a “biggie”, as well as changes to units of issue for the warehouses, etc.

    The fact that the data does not originate with us perhaps makes this more of an issue.

    Ed

    1. Ginger Gatling Post author
      Hi Ed
      Great to hear from you on SCN – I hope you are well! When triggering the changes between systems, do the other systems accept the change, or does someone have to approve it? I guess there are rules on who can make changes and when? It would be great to get more details as an example. Thanks for taking the time to respond with this example! I miss you!
      -ginger
  4. Pratik Talwar
    There are some issues I identified from the blog as well as the conversation around it.

    1. Do we need sequential or ad hoc processes for information governance?

    A process, by definition, is a collection of tasks that relate to each other logically and are bound by a flow of linear/parallel activities. Any process has a defined starting and ending state. I quote this because when we create an information governance process, what is also required are proper process performance metrics – turnaround-time measurements, SLAs, etc. Ad hoc situations are bound to fail, as ownership of process stages vanishes in that situation.

    This is required to build efficiency into the process so that the business, which owns the data, gets the data in optimal time and with the right quality. (A sketch of such stage-level measurement follows.)
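
    As a rough illustration of stage-level turnaround measurement against an SLA – stage names, timestamps, and thresholds below are all made up:

```python
# Minimal sketch: measure turnaround time per process stage against an SLA.
from datetime import datetime, timedelta

# Hypothetical SLA per stage of a master-data request, in hours.
STAGE_SLA_HOURS = {"request": 4, "enrich": 24, "approve": 8, "replicate": 2}

def sla_breaches(stage_timestamps):
    """stage_timestamps maps stage name -> (started, finished); return breaches."""
    breaches = []
    for stage, (started, finished) in stage_timestamps.items():
        limit = timedelta(hours=STAGE_SLA_HOURS[stage])
        if finished - started > limit:
            breaches.append((stage, finished - started, limit))
    return breaches

t0 = datetime(2011, 6, 1, 9, 0)
run = {
    "request":   (t0, t0 + timedelta(hours=1)),
    "enrich":    (t0 + timedelta(hours=1), t0 + timedelta(hours=30)),  # breach
    "approve":   (t0 + timedelta(hours=30), t0 + timedelta(hours=35)),
    "replicate": (t0 + timedelta(hours=35), t0 + timedelta(hours=36)),
}
for stage, actual, limit in sla_breaches(run):
    print(f"{stage}: took {actual}, SLA was {limit}")  # enrich breaches its 24h SLA
```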

    2. What do we do about troubleshooting / conflict resolution?

    Ginger mentioned that apart from the process flow, policies etc. also have to be considered. Correct. But how do we manage this integration over a period of time? The answer is correct application architecture, where these policies are integrated with the workflow engine via independently maintained components and are accessed via services. This ensures a robust and flexible process.

    Some other points that I would like to mention:

    -> The process design should be decoupled from the workflow engine and should be exposed as a separate independent component to reduce maintenance.

    -> The data model should be built in a way that the application is aware of data dependencies and validation requirements and thus proactively prevents the creation of bad data. (See the sketch below.)

    There are so many other small but important things that make information governance processes a success story.
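
    For the second arrow point, a minimal sketch of a validation-aware data model – field names, rules, and the single home-grown validation layer are illustrative assumptions, not any MDM product’s API:

```python
# Minimal sketch: the data model declares its own validations and
# dependencies, so bad records are rejected at creation time.
import re

# Each field declares its own validation rule.
FIELD_RULES = {
    "customer_name": lambda v: bool(v and v.strip()),
    "country":       lambda v: v in {"US", "DE", "IN"},
    "zip_code":      lambda v: bool(re.fullmatch(r"\d{5}", v or "")),
}

# Cross-field dependency: a US customer must carry a zip code.
def us_needs_zip(record):
    return record.get("country") != "US" or bool(record.get("zip_code"))

DEPENDENCIES = [("US customers need a zip code", us_needs_zip)]

def create_master_record(record):
    """Create the record only if every declared rule passes."""
    errors = [f for f, rule in FIELD_RULES.items() if not rule(record.get(f))]
    errors += [msg for msg, dep in DEPENDENCIES if not dep(record)]
    if errors:
        raise ValueError(f"rejected, bad data: {errors}")
    return record  # in a real system: persist and replicate

create_master_record({"customer_name": "ACME", "country": "US", "zip_code": "75201"})
```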

    1. Community User
      Thanks Pratik.

      1. Agreed.

      2. Agreed.

      Can you please shed some light on:

      1. “process design should be decoupled from the workflow engine…”.

      2. “data model should be built in a way that the application is aware of data dependencies & validation requirements and thus proactively prevents creation of bad data. “

      Assuming you are talking about master data creation: theoretically, yes. However, if your data resides in multiple systems, are you suggesting that data validation should happen across all of these systems before master data is created? Or even if only ERP is your main system, how much validation logic do you propose should be in the MDM application to ensure bad data doesn’t get created? How would you avoid replicating ERP validation logic in MDM if you carry out extensive checks in MDM?

      Regards

      1. Pratik Talwar
        I will first answer your questions regarding point 2 (validations) and then point 1.

        Validations & Process.

        Let’s assume the to-be process is for master data maintenance. There are around 100 attributes, and let’s assume there are three functions who own certain attribute groups out of the 100, e.g. marketing 10, finance 60, sales 30.

        For this master data, let’s assume that in the as-is process there is a transaction XXXX which is used for maintenance and creation of the data. This transaction uses a data model defined by a set of table relationships. There are also certain validations configured for this data model.

        When we design a to-be process for this, there will be at least three participants in the process, representing the three functions mentioned above. The to-be process will have its own UIs for participants, which can be forms, dynpro screens, etc. For these UIs there will also be an underlying model which contains both master data and process control data.

        One of the changes that comes with the introduction of such processes is that transaction XXXX will be closed for access, or will be open only for attributes that are out of scope of the to-be process.

        In this case the underlying validations are no longer exposed to the new UI, so we need a means of exposing or recreating these validations for the to-be process. The answer to your question is that 100% of the validations should be made available in the process for data creation, or else you are creating opportunities for bad data.

        I hope this clarifies why validations are very important to MDM applications.

        Coming back to point number one.

        What a workflow engine (Business Workflow / BPM) is technically capable of is creating work items for the user groups that are part of a process. When I said that the process design (which is business logic) should be kept out of the workflow engine, I meant there can be a component in the application which holds the process flow information and knows what task comes next in the flow and why. It also knows the nature of the task (decision / interactive, etc.) and supplies this information to another component in the application (the workflow engine), which uses this info to generate work items for users and runs the workflow instance until the next task is the process end. (I can elaborate more on this if you are interested – see the sketch after the list below.)

        The benefits of such an architecture:

        -> The same workflow component can be used for different business processes.

        -> Process-related changes happen outside of the workflow engine and thus become easier and, if properly implemented, non-technical.
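
        For illustration, a minimal sketch of this separation: a process-definition component that knows the flow, and a generic work-item engine that does not. All names are made up, not Business Workflow or BPM APIs.

```python
# Minimal sketch: the process definition lives outside the work-item engine.

# The process component owns the flow: ordered steps, each with a task
# kind and a responsible group (cf. the three functions above).
MATERIAL_MAINTENANCE_FLOW = [
    {"task": "enter_marketing_attrs", "kind": "interactive", "group": "marketing"},
    {"task": "enter_finance_attrs",   "kind": "interactive", "group": "finance"},
    {"task": "approve_record",        "kind": "decision",    "group": "sales"},
]

def next_step(flow, completed):
    """The process component answers 'what comes next?'; None means process end."""
    return flow[len(completed)] if len(completed) < len(flow) else None

class WorkItemEngine:
    """Generic engine: knows nothing about any particular business process."""
    def __init__(self):
        self.completed = []

    def run(self, flow):
        while (step := next_step(flow, self.completed)) is not None:
            # A real engine would create a work item in a user's inbox and
            # wait for completion; here we just record it.
            print(f"work item for {step['group']}: {step['task']} ({step['kind']})")
            self.completed.append(step["task"])

# The same engine can run any flow the process component hands it.
WorkItemEngine().run(MATERIAL_MAINTENANCE_FLOW)
```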

  5. Ina Felsheim
    Hey, all. To chime in on what I’ve heard from customers around information governance… Definitely different LOBs will have different perspectives on and requirements for pieces of the data. “Customer” is a great example. Finance has very different requirements for completeness, duplicates, and enrichment than, for example, Marketing. The key is to establish which attributes are GLOBAL and have a committee that can resolve policy differences. Local attributes can still be governed, but should be governed by local data stewards.

    (StreamWork has a great part to play in the upfront definition of policies, review/approval, and information checklists (ownership, access, rights, etc.).)

    If the organization cannot agree on how to treat the truly global attributes, then you have a larger problem. Perhaps the mission of information governance has not been clearly defined or information governance does not have an influential executive sponsor? Perhaps more attention needs to be paid to the change management nature of information governance?

    There is trouble when every organization feels like they have to organize the same way…it must be tailored to what your company can tolerate along multiple dimensions: culture, information management maturity, management sponsorship, and finally data domains and processes. For more on that topic, check out the recent SAPInsider article: http://sappro.com/article.cfm?id=5868.

    What do you think?

