Running a Proof of Concept – it’s really a good idea (CUUC Series Part 3)
I am a great believer in Proof of Concept (PoC) tests, as you can see here; they provide solid evidence for an idea and often bring additional benefits in areas not previously considered. In the case of the Upgrade and Unicode conversion from my previous post, there were a great many assumptions and things being attempted for the first time, so the need for a PoC was evident.
We defined a set of objectives for the PoC to ensure we got the data that we needed; these are detailed in the table below.
One thing I firmly believe is that asking for help is a strength, not a weakness. In this case, my team and I were about to run a Unicode conversion on multi-terabyte databases, and we needed this process to run quickly and smoothly. So from the beginning of the project, I ensured there was budget for assistance from Microsoft, HP and SAP if their consulting services were required. When beginning the PoC, I spoke to my friends in the Microsoft/SAP Competency Centre about how best to enhance the Unicode process; they were, as usual, very helpful and technically brilliant. Microsoft provided an ABAP tool called EasyMig, which enhances the table and package splitting abilities of the Unicode process by using more targeted WHERE statements within the table splitting routines.
Example screenshots of the EasyMig tool
Example of EasyMig statement
"POSNR" <= '0000000003' and "AWORG" = '2009' and "AWREF" = '4918931532' and "AWTYP" = 'MKPF' and "MANDT" = '100'
"AWORG" < '2009' and "AWREF" = '4918931532' and "AWTYP" = 'MKPF' and "MANDT" = '100'
"AWREF" < '4918931532' and "AWTYP" = 'MKPF' and "MANDT" = '100'
"AWTYP" < 'MKPF' and "MANDT" = '100'
"MANDT" < '100'
Example of standard R3ta split
Where ("AWREF" < '4918931532')
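The difference between the two approaches is worth spelling out: the standard R3ta split cuts on a single column, while the EasyMig-style split walks down a composite key, pinning the more significant columns to their boundary values and applying the inequality to one column at a time. The following Python sketch is purely illustrative (it is not part of EasyMig, whose internals I have not seen); it simply generates the same pattern of non-overlapping clauses shown above from a list of key columns, ordered major to minor, and a boundary value for each:

```python
def split_clauses(key_cols, boundary):
    """Generate non-overlapping WHERE clauses covering all rows at or below
    `boundary` along a composite key.

    key_cols: key column names, most significant first.
    boundary: the boundary key values, in the same order.
    Mimics the clause pattern of a composite-key split as shown above.
    """
    clauses = []
    n = len(key_cols)
    # Work from the least significant column outwards.
    for i in range(n - 1, -1, -1):
        # The innermost clause keeps the boundary row itself (<=);
        # the others strictly exclude it (<) to avoid overlap.
        op = "<=" if i == n - 1 else "<"
        parts = [f'"{key_cols[i]}" {op} \'{boundary[i]}\'']
        # Pin every more significant column to its boundary value.
        parts += [f'"{key_cols[j]}" = \'{boundary[j]}\''
                  for j in range(i - 1, -1, -1)]
        clauses.append(" and ".join(parts))
    return clauses


# Reproduces the example clauses above.
cols = ["MANDT", "AWTYP", "AWREF", "AWORG", "POSNR"]
vals = ["100", "MKPF", "4918931532", "2009", "0000000003"]
for clause in split_clauses(cols, vals):
    print(clause)
```

Because the clauses are mutually exclusive, each export package selects a disjoint slice of the table, which is what lets the packages run in parallel without duplicating rows.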
The certification status of this tool is in question, as it uses a different R3load from the standard SAP one due to restrictions on WHERE-statement length. For the purposes of the PoC, however, we decided it was too good an opportunity to miss, and we needed to see exactly how it functioned.
The PoC took a lot of effort: there was much work involved in providing the hardware and the right data set to accomplish our aims before we even got started on the Upgrade and Unicode process. Once we had the hardware and the systems in place, we then had to start work on the preparation of the Upgrade and Unicode conversion. It is important to read the notes associated with your upgrade and Unicode process, as these contain the fixes to the gotchas you will find. This preparation takes a few days, during which you can have the client standing over you expecting things to be moving; it is important to be firm and explain the process and the consequences of not following it.
Once we got the preparation out of the way and down to some proper work, we hit a number of challenges, which were interesting to say the least, as you can see from the table below.
The PoC was a great success and provided answers to each of the objectives: some were confirmed, others negated, but ultimately there were more positive outcomes than negative, which is always a good result.
From the PoC we moved into the DEV and QAS Upgrades and Unicode conversions, which were done on VMware environments. These posed their own challenges, but I will cover the first trial run in my next post instead.