
Copying Production data to DEV and TST

There has been a fair bit written recently in blogs and on Twitter about whether or not to copy Production data around the SAP landscape. So I decided to write about the personal experiences of a NetWeaver infrastructure guy who has been cloning SAP systems since the R/2 days. (If you remember SAPUNLU and SAPRELU, like me, you also have grey hair!)


Examples of where system copies pay for themselves:-

  1. Creating training systems. Although CBT courses are all the rage now, for my money, there is nothing better than getting a feel for real navigation, real transaction data and real response times.
  2. Creating Volume and Stress systems. Unless you have properly sized tables, with representative data, your Mercury scripts will not be representative of real life response times.
  3. Applying Enhancement Packs. According to all the guides, EHP4 is a straightforward exercise. True if you try on a 200gb Sandbox ERP system – painfully untrue if you try on an 8tb Production cloned system. We found out the size of the biggest table EHPI can handle.
  4. Messing with the switch framework. The new switch framework is great, you can activate new functionality by running a single transaction. Tick the box, and the batch jobs will be initiated in the background to compile programs and convert data. Takes 2 minutes in a 200gb Sandbox – takes 2 hours in an 8tb Production cloned system. Take care here; the switch process cannot be reversed if you change your mind.
  5. Activating New GL in an ERP system. The New GL offers several functional enhancements including document splitting and real time integration with FI-CO. The conversion process must be performed and tested on the most up to date financial postings. Testing on a TST system gives wildly different results from those on PRD. But you only get one chance on PRD (unless you have ultra-fast backup and restore processes).
  6. Basis testing of database improvement initiatives such as online reorganisations, Oracle advanced index compression, Unicode conversions, Oracle partitioning, Oracle Flashback, Archiving, etc. Trying on a 200gb Sandbox is fine, but this is far from representative on an 8tb Production system.


Responsibilities when copying Production data


  1. Production data is still Production data when you copy it. You must control access to the cloned system or the copied data.
  2. HR data should be scrambled. There are legal and privacy implications here.
  3. All printing should be closely controlled, unless you want a bogus payslip, remittance advice or cheque to be printed.
  4. All SM59 connections in and out of the cloned system must be controlled.
  5. Login screens must be changed to prevent the cloned system being mistaken for Production.
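The scrambling in point 2 does not have to be elaborate, but it should be deterministic so that referential integrity between tables survives. Here is a minimal sketch of keyed pseudonymization in Python — the field names, sample records and key handling are all invented for illustration, not taken from any SAP tool:

```python
import hashlib
import hmac

# Hypothetical per-refresh secret. In practice this would come from a
# key store outside the cloned system, never a literal in the script.
SCRAMBLE_KEY = b"per-refresh-secret"

def scramble(value: str) -> str:
    """Deterministically pseudonymize a sensitive field.

    The same input always maps to the same token, so a value used as a
    key in several tables still joins correctly after scrambling, but
    the original value cannot be read back from the token.
    """
    digest = hmac.new(SCRAMBLE_KEY, value.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:12].upper()

# Invented HR-style records, standing in for real payroll data
employees = [
    {"pernr": "00001234", "name": "Jane Citizen", "bank_acct": "063-000 123456"},
    {"pernr": "00005678", "name": "John Smith",   "bank_acct": "063-000 654321"},
]

for emp in employees:
    emp["name"] = scramble(emp["name"])
    emp["bank_acct"] = scramble(emp["bank_acct"])
```

The key point of the design is determinism: scramble the same personnel number everywhere and cross-table lookups still work, which is exactly what HR testing needs.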

Which tools should be used for copying a system?

I have used many, here is a list of my favourite tools:-

  1. Client export (or client copy). This is my old favourite; it works well up to around 200gb provided you have plenty of disk IOPS and enough disk space in /usr/sap/trans. There are several profiles you can use here depending on what you are trying to achieve.
  2. InfoShuttle. Works well for HR and Payroll data at low data volumes. Has a neat scrambling capability and allows selective copies between systems.
  3. SAP Test Data Migration Server (TDMS). The latest offering from SAP works on ERP, CRM and BI systems. It can do selective copies based on date, company code etc. and has some scrambling capability, though these are optional extras. Works well on systems up to 2tb as long as you have flexible infrastructure (virtualisation capability) and a powerful Solution Manager (or other central) system. We extracted 24 months of Production data from a 6tb system into our DEV system over 3-4 days.
  4. Tape-based homogeneous System Copy and Migration. Works well if you have 2 spare Basis days and a target system with as much disk space as Production. Post-copy tasks are labour intensive, but can be automated to some extent.
  5. Disk based system replication. We use FlashCopy SE to peel off a copy of our 8tb Production system in less than 4 hours (including system rename time). The target system is small (less than 100gb) because FlashCopy SE utilises a virtualised pointer based approach to cloning data. This weekend, we are performing the New GL conversion on the most current FlashCopy of Production data – without any disruption to Production at all. Next week, we will re-Flash and use the FlashCopy to test the application of EHP4.
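Why can an 8tb source yield a sub-100gb target? The pointer-based approach can be illustrated with a toy copy-on-write sketch — purely conceptual Python, with invented block counts, not how FlashCopy SE is actually implemented:

```python
class CowClone:
    """Toy copy-on-write clone of a block device.

    Reads fall through to the shared source until a block is written;
    only written blocks are stored locally. The clone therefore starts
    near-empty and grows only with the blocks it changes.
    """

    def __init__(self, source_blocks):
        self.source = source_blocks   # shared, read-only view of "Production"
        self.local = {}               # blocks this clone has overwritten

    def read(self, block_no):
        return self.local.get(block_no, self.source[block_no])

    def write(self, block_no, data):
        self.local[block_no] = data   # divert the write; source untouched

    def local_size(self):
        return len(self.local)

production = ["blk-%d" % i for i in range(1000)]  # pretend 1000-block database
clone = CowClone(production)

clone.write(7, "changed")             # e.g. the system-rename updates
```

After the write, `clone.read(7)` returns `"changed"` while `production[7]` is still `"blk-7"`, and the clone stores exactly one block locally: the target only ever pays for what the test system changes, which is why the rename plus a round of testing costs gigabytes, not terabytes.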


Overkill or minimising risk?

You might call it overkill, but I call it a cost effective approach to controlling the risk of Production changes. My ultimate goal is to deploy FlashCopy SE across the entire Production SAP fleet. This will provide a vehicle to consistently clone all SAP systems in one hit (CRM, ERP, BI, SCM, EP etc). Taking the concept one step further, this process will form the foundation of a consistent recovery point across the entire suite of systems.


Learn More

Learn more during the ‘meet the expert’ sessions at TechEd 2009 in Phoenix. I have 3 sessions booked where I will explain how Australia Post use FlashCopy SE as an integral part of our virtualisation roadmap. Bring your USB key for a copy of my favourite IBM-SAP Redbooks.


  • Tony:

      Any suggestions on copying Solution Manager from production to Dev/QA?  We need to try this as production is Unicode and development is not.


    • Jim,

      We haven’t attempted a SolMan system copy yet. I reckon it would be similar to your typical Dual Stack copy process then a bunch of Agent configuration work afterward. Note 1276022 briefly mentions it, so I presume it has been done before and would be supported.

      Cheers, Tony.

  • Hi Tony,

    It has been a long time since you wrote this blog.

    You mentioned that FlashCopy SE helped to get the clone in 4 hours including renaming. At the same time, though, the post steps of a DB copy are tagged as long tasks.

    I was wondering why those post steps would not be required in the case of FlashCopy (SE), e.g. BDLS, transports/configuration etc.

    Isn't it the same in both cases, or was it just to create a completely isolated system from the others and use it for specific purposes?


  • Does anyone have experience testing for upgrades with scrambled data?  My HR folks tell me there are issues testing HCM data if it’s scrambled.  What’s the Best Practice on that?

    Thanks for any ideas,