Success Measurement: Scorecard, Dashboards and Reports
Source-to-Pay (S2P) solutions offer both quick wins and long-term wins. Typically, the Return on Investment (ROI) is calculated over a 3-5 year horizon.
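As a rough illustration of what such a multi-year ROI calculation looks like, here is a minimal sketch in Python. All figures and the cost split are hypothetical assumptions for illustration only; substitute the numbers from your own business case.

```python
# Hypothetical ROI sketch over a multi-year horizon; the figures below
# are made up for illustration -- substitute your own business case numbers.

def roi(annual_benefits, implementation_cost, annual_run_cost):
    """Cumulative ROI: (total benefits - total costs) / total costs."""
    total_benefits = sum(annual_benefits)
    total_costs = implementation_cost + annual_run_cost * len(annual_benefits)
    return (total_benefits - total_costs) / total_costs

# Quick wins in year 1, compounding value in later years (all hypothetical).
benefits = [400_000, 900_000, 1_200_000]  # 3-year horizon
print(f"3-year ROI: {roi(benefits, 800_000, 250_000):.0%}")
```

The same function can be run over a 5-year benefits list to compare horizons, which is why stating the horizon alongside the ROI figure matters.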
- The quick wins are essentially based on the following criteria:
  - Which solution is implemented first
  - The extent to which processes need to be re-engineered and policies rewritten
  - The human change management required in terms of reorganisation, learning, communication, etc.
  - Last but not least, how well the project is resourced with the different skills required to activate the quick wins.
- To give you some use cases and examples:
  - Implementing Sourcing first generally delivers quick wins, even if the customer only moves into the solution what was previously done through other means, because the solution allows for a faster process, more competition without additional workload, and aggregation of sourced spend. The value delivered can be expressed in better unit prices, faster turnaround and transparency in the sourcing process.
  - However, if the Sourcing solution is to deliver a breakthrough in terms of savings, the buyers have to be trained to use a variety of event types, e.g. different types of auctions, 3 bids & buy, and other techniques informed by the category, the marketplace and the sophistication of the vendors. This requires training, a sourcing strategy review, enhanced communication with potential bidders, IT administration and support, etc.
  - When a Contracts solution is implemented first, it requires uploading legacy active contracts into the tool and collaborating with Risk and Legal to prepare the templates/clauses for future contracts, in order to fully extract meaningful value from the solution. However, a quick win in this case is clearly provided by the search capability (provided there is content), the digital signature of contracts and the redlining of contracts. While the value this provides at this stage can be measured, its quantification is not really straightforward.
- This is why it is best practice to:
  - Focus on measuring content creation at the outset rather than measuring value per se. While the framework to measure value should be in place, the focus should be on content: without content, there will be no adoption. Content makes the solution attractive to use. If there is no content in the solution, individual users will have to spend time creating their own, whereas the main benefits of the solution are speed, intelligence, synergy, structure and collaboration with regard to processes and users.
  - A secondary focus is on measuring users and how they use the content provided, in order to understand whether there is enough content, whether the content is relevant and appropriate, how users work with it, and whether it delivers value to them and they make it their own. This means that user satisfaction is key to improving the value users derive from the solution.
  - A tertiary focus, roughly 3 months into the implementation, should be on adoption of the functionalities and adoption health, in particular the adoption of sophisticated, high-value-add functionalities. While basic functionalities will deliver value (as described in the first point above), the more tailored, relevant and sophisticated functionalities will deliver exponential value (see the second point above).
- In parallel to the above measurements, the value KPIs which will deliver the ROI or the business case will be tracked and, as the project progresses through these three phases, will show meaningful results.
- Scorecard, Dashboards and Reports
The scorecard is the “dashboard” of the project in that it reflects 1) what needs to happen in terms of operations, process adoption and usage, to deliver 2) the outcome expected in terms of goals and value drivers. To illustrate: if no Purchase Order is created from a catalog, it is unlikely that the PR2PO cycle time will be reduced; if the number of bidders per event is flat, it is unlikely that the savings rate will increase. In short, the scorecard monitors usage and adoption to allow identification of the source of the value, so the operational indicators measured have a direct correlation and causation relationship with the value KPIs. However, the scorecard does not provide alerts or triggers for immediate remedial action. My advice is that if a company has identified quick wins and calculated an ROI and/or a business case, the scorecard should reflect this: there should be a quick-win section of the scorecard, with the ROI or business case calculation incorporated into it or linked to it.
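To make the structure concrete, here is a minimal sketch of such a scorecard as a data structure, pairing operational indicators with the value KPIs they drive, plus a quick-win section. All indicator names, values and targets are hypothetical assumptions, not taken from any specific S2P product.

```python
# Minimal scorecard sketch: operational indicators, the value KPIs they
# drive, and a quick-win section. All names, values and targets are
# hypothetical, for illustration only.

scorecard = {
    "operational": {
        # Share of POs created from catalogs; drives PR2PO cycle time.
        "catalog_po_rate":   {"value": 0.35, "target": 0.60},
        # Average bidders per sourcing event; drives the savings rate.
        "bidders_per_event": {"value": 2.1,  "target": 4.0},
    },
    "value": {
        "pr2po_cycle_days": {"value": 6.5,  "target": 3.0,
                             "higher_is_better": False},
        "savings_rate":     {"value": 0.04, "target": 0.08},
    },
    "quick_wins": {
        "sourcing_cycle_days": {"value": 14, "target": 21,
                                "higher_is_better": False},
    },
}

def off_target(indicators):
    """Names of indicators that have not yet reached their target."""
    missed = []
    for name, kpi in indicators.items():
        if kpi.get("higher_is_better", True):
            on_track = kpi["value"] >= kpi["target"]
        else:
            on_track = kpi["value"] <= kpi["target"]
        if not on_track:
            missed.append(name)
    return missed

print(off_target(scorecard["value"]))
```

Because each value KPI sits next to the operational indicator that drives it, a flat `bidders_per_event` immediately explains a flat `savings_rate`, which is exactly the correlation-and-causation reading described above.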
Besides allowing the various stakeholders to monitor their tasks, events, requests and suppliers, dashboards should also allow stakeholders to do the following as part of the success measurement plan:
- Intervene in processes within their competency which might derail key KPIs, e.g. for a buyer an event which did not receive enough bids, or for procurement a blocked PR or a PO rejected by the supplier (remedy/mitigate)
- Keep an eye on the scorecard KPIs which their work impacts
- Access reports which are generated regularly as part of the monitoring of the success measurement plan
Therefore, there should be a dashboard template which is centrally created to ensure that stakeholders have access to those reports, and the various stakeholders should be provided guidance to include the first two capabilities in their dashboards.
Reports are usually after-the-fact information providing insights on exceptions, anomalies, trends and patterns, which allow teams to correct, mitigate, review and discontinue processes which deliver no value or block value generation. Ideally, reports would also generate the data needed to populate the scorecard and feed the reports section of the dashboards.
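As a small illustration of such an exception-oriented report, the sketch below flags purchase orders that need attention. The field names, statuses and threshold are hypothetical assumptions, not the schema of any real S2P system.

```python
# Illustrative after-the-fact exception report over a PO log, flagging
# items to remedy or mitigate. Field names and statuses are hypothetical.

purchase_orders = [
    {"po": "PO-1001", "status": "confirmed", "days_open": 2},
    {"po": "PO-1002", "status": "rejected",  "days_open": 1},   # rejected by supplier
    {"po": "PO-1003", "status": "open",      "days_open": 12},  # aging PO
]

def exception_report(orders, max_days_open=10):
    """POs needing attention: rejected by the supplier or open too long."""
    return [o["po"] for o in orders
            if o["status"] == "rejected" or o["days_open"] > max_days_open]

print(exception_report(purchase_orders))  # → ['PO-1002', 'PO-1003']
```

Run on a schedule, the same filter both alerts the responsible stakeholder and yields counts (e.g. rejected POs per month) that can populate the scorecard.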
In conclusion, there is a hierarchy in success measurement: you start by measuring content, then adoption and adoption health, and then value. There is also a hierarchy in what you measure: operational indicators first, then value. Finally, there is a hierarchy in how measurements are displayed: at the top, the overarching scorecard; then the dashboards at the operational level; and the reports as a regular means of providing insights for reviews.