System Copy and Migration Observations
There are many blogs and documents available describing how to best migrate your SAP system to HANA. This isn’t one of those.
What this is, on the other hand, is a few observations, and some lessons learned, when migrating an ERP system to new hardware using the R3load, aka Export/Import, method of system copy. The overall process is well-described in the official System Copy Guide and in numerous documents available on SCN, so I won’t go into that detail here. What is not well-described, however, is how to go about choosing some of the parameters to be used during the export and import — specifically, the number of parallel processes. First, however, let’s address some background confusion prevalent among many customers.
- Export on Old Hardware
- Import on New Hardware
- Max Degree of Parallelism
- Minimal Logging During Import
- Adjusting Parallel Processes During Import
Homogeneous or Heterogeneous?
One point that seems to come up, time and time again, in questions posted to SCN is about whether a homogeneous system copy is allowed in the case of a database or operating system upgrade.
The answer is yes.
If you are upgrading your operating system, for instance from Windows Server 2003 to Windows Server 2012 R2, you are not changing your operating system platform. Therefore, this remains a homogeneous system copy. (And yes, you should be using a system copy as part of a Windows operating system upgrade, as an in-place upgrade of the OS is not supported by either Microsoft or SAP if any non-Microsoft application (i.e., your SAP system) is installed, except in special circumstances that generally do not include production systems.)
If you are upgrading your database platform, for instance from SQL Server 2005 to SQL Server 2012, you are not changing your database platform, and so, again, this is a homogeneous system copy. It is possible and acceptable to upgrade SQL Server in place, although you might consider following the same advice given for a Windows OS upgrade: export your SAP system (or take a backup of the database), then do a clean, fresh install of the OS and/or DBMS and use SWPM to re-import your database while reinstalling SAP.
You are only conducting a heterogeneous system copy if you are changing your operating system platform, your database platform, or both: for example, from Unix to Windows, or from Oracle to SQL Server. Or migrating to HANA.
- Homogeneous: source and target platforms are the same (although perhaps on different releases).
- Heterogeneous: source and target platforms are different.
Export/Import or Backup/Restore?
The next question that often arises is whether an Export/Import-based migration or Backup/Restore-based copy is preferred. These methods sometimes go by different names:
Export/Import is sometimes called R3load/Migration Monitor based or Database Independent (in the System Copy Guide). Because this method is not reliant on database-specific tools, it is the only method that can be used for heterogeneous copies. However, it can also be used for homogeneous copies.
Backup/Restore is sometimes called Detach/Attach, or Database Dependent (in the Guide), or even just Homogeneous System Copy (in the SWPM tool itself). This method relies heavily on database-specific tools and methods, and therefore it can only be used for homogeneous copies.
If you are performing a heterogeneous system copy, then you have no choice. You must use the Export/Import method. If you are performing a homogeneous system copy, you may choose either method, but there are some definite criteria you should consider in making that choice.
Generally speaking, for a homogeneous system copy, your life will be simpler (and the whole procedure may go faster) if you choose the Backup/Restore method. For a SQL Server-based ABAP system, for instance, you can make an online backup of your source database without having to shut down the SAP system, which means there is no downtime of the source system involved. Copy the backup file to your target system, restore it to a new database there, then run SWPM to complete the copy/install. This is great when cloning a system for test purposes. Of course, if the goal is to migrate the existing system to new hardware, then downtime is inevitable, and you certainly don’t want changes made to the source system after the backup.
The Detach/Attach variant of this method is probably the fastest overall, as there is no export, import, backup, or restore to be performed. However, downtime is involved. You shut down the source SAP system, then use database tools (SQL Server Management Studio, for instance), to detach the database. Then you simply copy the database files to your target system, use database tools again to attach the database, then run SWPM on the target to complete the copy/install.
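As a rough illustration of the database-level steps in the Detach/Attach variant on SQL Server, the commands look something like the following. The database name and file paths here are hypothetical, and SWPM still performs all the SAP-side work afterwards; this is only a sketch of the DBMS portion:

```sql
-- On the source, after stopping the SAP system: detach the database.
EXEC sp_detach_db @dbname = N'PRD';

-- Copy the .mdf/.ndf/.ldf files to the target host, then on the target:
CREATE DATABASE PRD
    ON (FILENAME = N'E:\PRDDATA1\PRDDATA1.mdf'),
       (FILENAME = N'F:\PRDLOG1\PRDLOG1.ldf')
    FOR ATTACH;
```

Once the database is attached on the target, you run SWPM there to complete the copy/install, as described above.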
By comparison, the Export/Import method involves shutting down the source SAP system, then using SWPM to export the data to create an export image (which will likely be hundreds of files, but will also be considerably smaller than your original database), then using SWPM again on the target system to install SAP with the export image as a source. Lots of downtime on the source, and generally speaking a more complex process, but much less data to move across the network.
Obviously I am a big fan of using the Backup/Restore or Detach/Attach database-dependent method for homogeneous system copies, and in most cases, this is what I would advise you to choose.
When You Should Choose Export/Import
There is one glaring disadvantage to the Backup/Restore method, however. This method will make an exact copy of your database on your target system, warts and all. Most of the time, that isn’t really an issue, but there are circumstances where you might really wish to reformat the structure of your database to take advantage of options that may not have been available when you originally installed your SAP system, or perhaps to make up for poor choices at the time of original install that you would now like to correct. Well, this is your big opportunity.
What are some of these new options?
- Perhaps you are migrating to new hardware, with many more CPU cores than available on the old hardware, and you see this as a prime opportunity to expand your database across a larger number of files, redistributing the tables and indexes across these files, thus optimizing the I/O load. Backup/Restore will create a target database with the same number of files as the source, with the tables distributed exactly as they were before. You can add more files, but your tables will not be evenly redistributed across them. Export/Import, on the other hand, doesn’t care about your original file layout, and gives the opportunity to choose an entirely new file layout during the import phase.
- Perhaps you are upgrading your DBMS and would like to take advantage of new database compression options. Yes, you can run MSSCOMPRESS online after upgrading to a platform that supports it, but this can have long runtimes. SWPM will, however, automatically compress your database using the new defaults during the import, assuming your target DBMS supports them, so you can achieve migration and compression in a single step. In my experience, compression did not add any noticeable extra time to the import.
Parallel Processing During Export and Import
At the beginning of the export and the import in the SWPM tool, there is a screen where you are asked to provide a Number of Parallel Jobs. The default number is 3. This parameter controls how many table packages can be simultaneously exported or imported, and obviously it can have a huge impact on overall runtime. The System Copy Guide does not give much in the way of advice about choosing an appropriate number, and other documentation is sparse on this topic. Searching around SCN will bring up some old discussion threads in which advice is given ranging from choosing 1 to 3 jobs per CPU, and so forth, but it is difficult to find any empirical data to back up this advice.
This is an area needing more experimentation, but I can share with you my own recent experience with this parameter.
Export on Old Hardware
I exported from two different QAS machines, both using essentially identical hardware: HP ProLiant DL385 Gen1 servers, each with two AMD Opteron 280 2.4 GHz Dual-Core CPUs (a total of 4 cores, no hyperthreading) and 5 GB of RAM, running Windows Server 2003 and SQL Server 2005. I think you can see why I wanted to get off these machines. The application is ERP 6.04 / NetWeaver 7.01 ABAP. The databases were spread across six drive volumes.
Export 1: 3 Parallel Processes on 4 Cores
The first export involved a 490 GB database, which SWPM split into 135 packages. I hadn’t yet figured out what I could get away with in terms of modifying the number of export jobs involved, so I left the parameter at the default of 3. The export took 8 hours 25 minutes. However, the export package at the end was only 50.4 GB in size.
Export 2: 6 Parallel Processes on 4 Cores
By the time I got around to the second export I had learned a thing or two about configuring these jobs. This time the source database was 520 GB, and SWPM split it into 141 packages. I configured the export to use 6 processes. During the export I noted that CPU utilization was consistently 90-93%, so this was probably the maximum the system would handle. This time the export took 6 hours 28 minutes, a two-hour reduction. As most of the time was spent exporting a single very large table in a single process, thus not benefiting at all from parallelization, I probably could have reduced this time considerably more using advanced splitting options. The resulting export package was 57.6 GB in size.
Import on New Hardware
The target machines were not identical to each other, but in both cases the target OS/DBMS was Windows Server 2012 R2 and SQL Server 2012. Both databases would be spread across eight drive volumes instead of the previous six.
Import 1: 3, then 12, then 18 Parallel Processes on 12 Cores
The target of my first export, and thus first import, was an HP ProLiant BL460c Gen8 with two Intel Xeon E5-2630 v2 2.6 GHz six-core CPUs with hyperthreading and 64 GB of RAM. Yeah, now we’re talking, baby! Twelve cores, twenty-four logical processors, in a device barely bigger than my laptop.
At the start of this import, I still didn't really have a handle on how to configure the parallel jobs, so as with the matching export, I left it at the default of 3. After all, the DEV system I had migrated earlier hadn't taken that long, but then, the DEV system had a considerably smaller database.
Five hours into the import I realized only 60 of the 135 packages had completed, and some quick back-of-the-napkin calculations indicated this job wasn't going to finish before Monday morning, when users were expecting to have a system. I did some research and some digging and figured it would be safe to configure one import job per core. However, I really didn't want to start over from scratch and waste the five hours already spent, so with a little more experimentation I found a way to modify the number of running jobs while the import was in progress, with immediate effect. More on this in a bit.
So first I bumped the number of parallel jobs from 3 to 12, and immediately I saw that the future was rosier. I monitored resource usage for a while to gauge the impact, and I saw CPU utilization bouncing between 35% and 45% and memory utilization steady at 46%. Not bad; it looked like we still had plenty of headroom, so I bumped up the processes again, from 12 to 18. The overall import took another impressive leap forward in speed, while CPU utilization rose only 2-3% more and memory utilization didn't change. It's entirely possible this machine could have easily handled many more processes, but I had seen an anecdotal recommendation that parallel processes should be capped at 20 (I'm not sure why, but there is some indication that much beyond this number the overall process may actually go slower, though that may only be true for older hardware). In any case, all but one import package finished within minutes after making this change.
The final package took an additional three hours to import by itself. This was PPOIX, by far the largest table in my database at 170 GB (I have since talked to Payroll Accounting about some housecleaning measures they can incorporate), and thus without using table splitting options this becomes the critical path, the limiting factor in runtime. Still, I had gained some invaluable experience in optimizing my imports.
My new database, which had been 490 GB before export, was now 125 GB after import.
Import 2: 12 Parallel Processes on 8 Cores
The target of my second export, and thus second import, was also an HP ProLiant BL460c, but an older Gen6 with two Intel Xeon 5550 2.67 GHz quad-core CPUs with hyperthreading and 48 GB of RAM. Maybe not quite as impressive as the other machine, but still nice with eight cores, sixteen logical processors.
Based upon my experience running 18 processes on 12 cores, a 1.5:1 ratio, I started this import with 12 processes. I noted CPU utilization at 60-75% and memory utilization at 49%. Still some decent headroom, but I left it alone and let it run with the 12 processes. Despite seemingly matched CPU frequencies, the Gen6 really is not quite as fast as the Gen8, core for core, due to a number of factors that are not really the focus of this blog, and to this I attributed the higher CPU utilization with fewer processes.
This time, 140 of my 141 packages were completed in 2 hours 4 minutes. Again, PPOIX consumed a single import process for 6-1/2 hours by itself, in parallel with the rest of the import, and thus the overall import time was 6 hours 32 minutes. Next time I do this in a test system, I really will investigate table splitting across multiple packages, which conceivably could get the import time down to not much more than two, perhaps two and a half hours, or perhaps even much less should I be willing to bump up the process:core ratio to 2:1 or even 3:1.
The source database, 520 GB before export, became 135 GB after import on the target. Yeah, I’m quite liking this compression business.
Max Degree of Parallelism
In addition to adjusting the number of parallel jobs, I temporarily set the SQL Server parameter Max Degree of Parallelism (also known as MAXDOP) to 4. Normally it is recommended to keep MAXDOP at 1, unless you have a very large system, but as explained in Note 1054852 (Recommendations for migrations using Microsoft SQL Server), the import can benefit during the phase where secondary indexes are built with a higher level of parallelism. Just remember to set this back to 1 again when the import is complete and before starting regular operation of the new system.
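For reference, the temporary MAXDOP change can be made with `sp_configure`. This is a sketch, not official SAP guidance; note that `show advanced options` must be enabled before `max degree of parallelism` becomes configurable:

```sql
-- Before the import: allow parallel index builds.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 4;
RECONFIGURE;

-- After the import: restore the SAP-recommended setting.
EXEC sp_configure 'max degree of parallelism', 1;
RECONFIGURE;
```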
Minimal Logging During Import
The other important factor for SQL Server-based imports is to temporarily set trace flag 610. This enables the minimal logging extensions for bulk load and can help avoid situations where even in Simple recovery mode the transaction log may be filled. For more details see Note 1241751 (SQL Server minimal logging extensions). Again, remember to remove the trace flag after the import is complete.
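The trace flag can be enabled for the running instance with DBCC; a sketch follows. (If SQL Server might be restarted during the import, you would instead add `-T610` to the instance startup parameters so the flag survives the restart.)

```sql
-- Enable the minimal logging extensions globally for the import.
DBCC TRACEON (610, -1);

-- After the import completes, switch it off again.
DBCC TRACEOFF (610, -1);
```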
Adjusting Parallel Processes During Import
During Import 1 I mentioned that I adjusted the number of processes from 3 to 12 and then to 18 without interrupting the import. How did I do that? There is a configuration file, import_monitor_cmd.properties, that SWPM creates from the parameters you enter at the beginning. The file can be found at C:\Program Files\sapinst_instdir\<software variant>\<release>\LM\COPY\MSS\SYSTEM\CENTRAL\AS-ABAP (your path may be slightly different depending upon the options you chose, but it should be fairly obvious). Within the properties file you will find the parameter jobNum. Simply edit this number and save the file. The change takes effect immediately.
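If you prefer to script the change, a minimal Python helper might look like the following. It assumes the jobNum key described above; the path in the usage comment is hypothetical:

```python
import re
from pathlib import Path

def set_job_num(properties_file, jobs):
    """Rewrite the jobNum line in an import_monitor_cmd.properties file.

    The Migration Monitor re-reads this value, so the change takes
    effect on the running import without a restart.
    """
    path = Path(properties_file)
    text = path.read_text()
    # Replace the existing jobNum=<n> line, keeping everything else intact.
    new_text, count = re.subn(r"(?m)^jobNum\s*=\s*\d+", f"jobNum={jobs}", text)
    if count == 0:
        raise ValueError(f"no jobNum entry found in {properties_file}")
    path.write_text(new_text)

# Example (hypothetical path):
# set_job_num(r"C:\Program Files\sapinst_instdir\...\import_monitor_cmd.properties", 18)
```

Editing the file by hand in Notepad works just as well; the point is only that the change is a plain-text one-liner.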
There is no cut-and-dried formula for how many parallel processes to choose. Generally, it seems that a ratio of processes to cores between 1.5:1 and 3:1 should be safe, but this will depend on the speed and performance of your CPU cores and your general system hardware. On the Gen1 processors, 1.5:1 pegged them at over 90% utilization. On the Gen8 processors, 1.5:1 didn't even break 50%, while the Gen6 fell somewhere in between. The only way to know is to test and observe on representative hardware.
There is also a memory footprint for each parallel process, but with anything resembling modern hardware it is far more likely you will be constrained by the number of CPU cores and not the gigabytes of RAM. Still, a number I have seen mentioned is no more than 1 process per 1/2 GB of RAM.
I have seen a suggestion of a maximum of 20 processes, but the reasons for this suggestion are not clear to me, and I suspect this number could be higher with current hardware.
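None of the above is an official formula, but the rules of thumb (a 1.5:1 to 3:1 process-to-core ratio, roughly one process per half-gigabyte of RAM, and a soft cap around 20) can be captured in a small helper. The function below is purely illustrative of those anecdotal heuristics:

```python
def suggest_parallel_jobs(cores, ram_gb, ratio=1.5, cap=20):
    """Suggest a starting number of R3load parallel jobs.

    Heuristics (anecdotal, not official SAP guidance):
      - a ratio of processes to cores between 1.5:1 and 3:1
      - no more than one process per 0.5 GB of RAM
      - a soft cap (commonly cited as 20)
    """
    by_cpu = int(cores * ratio)          # CPU-based limit
    by_ram = int(ram_gb / 0.5)           # memory-based limit
    return max(1, min(by_cpu, by_ram, cap))

# The Gen8 target from Import 1: 12 cores, 64 GB RAM.
print(suggest_parallel_jobs(12, 64))     # 18
# The old Gen1 export host: 4 cores, 5 GB RAM.
print(suggest_parallel_jobs(4, 5))       # 6
```

Treat the result only as a starting point; watch CPU and memory utilization during the run and adjust, as described above.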
If you have one or more tables of significant size, it is worthwhile to use the package splitter tool (part of SWPM) to break them up into multiple packages so that they can benefit from parallelization.
Thanks for following along, and hopefully you will find the above useful. If you have your own experiences and observations to add, please do so in the comments.
Good Observations Matt,
Lol, didn't read! 🙂 This is not my area of expertise, obviously, but I can relate to not having enough information to make good choices. We face the same challenges in ABAP or configuration. E.g., the 'number of parallel jobs' question is very similar to the eternal 'how many JOINs'. SCN is full of urban legends on this, and the lack of 'empirical data' (AKA facts) is evident.
But, on a bright side, this allows us to fill in the gaps with SCN blogs/documents and rake in some sweet, sweet points. 🙂
Quite educational, thanks for sharing!
Thanks! Yeah, I've been doing a lot of these migrations lately (and have a lot more to come over the next month or so), as we are wholesale refreshing our entire SAP hardware base. So, that has given me the opportunity to "experiment" a little bit, within some time constraints (I am really under the gun to get all these done by mid July), so I thought it was good material to share. However, those time constraints mean I don't really have time for a "polished" looking blog with lots of screenshots and pretty pictures, lol.
Great blog and detailed analysis Matt!
Thanks for sharing.
Tagging SAP on SQL Server.
Thank you for this great blog. We will be migrating our SAP systems from AIX to Linux; the DB will remain the same (Oracle), with no DB upgrade. Can you please clarify whether this will be a homogeneous or heterogeneous copy?
As you will be changing the OS from AIX to Linux, it will be an OS/DB Migration (even though the DB stays the same), and so you must use heterogeneous system copy.
Thank you for the reply. Can you please also tell me which file systems should be created in the target system before we start the import? And please correct me if I am wrong: the database software should also be installed during the import? How do we estimate the size of the export directory? Should we use the size in dbsize.xml? Please clarify.
I'm not completely sure I understand the question. Also, I do not work with Oracle, AIX, nor Linux, so I can't speak to concerns specific to those platforms. However, in general, yes, you must install the database software first, but no, you don't install any SAP system first (except possibly a diagnostics agent). SWPM will install your SAP system for you.
Since you are doing a heterogeneous system copy, you also do not create your SAP database ahead of time. SWPM will create the database and load it using the data exported from the source system.
If this is for an ABAP system, then SWPM will use the dbsize.xml file to create a database of sufficient size with an appropriate amount of headroom. You will need to determine how many files to spread the database across (the tool will make a suggestion for you), and you will have an opportunity to adjust those file sizes if necessary.
If this is for a Java system, the same thing happens, except for some reason the tool suggests a default size for the database before it even looks at your export dump, so it hasn't yet interpreted dbsize.xml. Therefore, you will need to ensure you set the database file sizes large enough on your own during the parameter specification phase in the tool. For this, you could examine dbsize.xml yourself, or you could simply look at your source database -- how large is it and how much of it is allocated/used vs free space? Then divide that across 4, 8, or 16 files as appropriate for your target system.
If you are talking about estimating the size of the export dump before running SWPM on the source system, well, again you know the maximum it could possibly be, as it certainly isn't going to be much larger than your original database, is it? In fact, it will likely be much smaller, as the dump doesn't include indexes, only the raw data (and that is often compressed heavily), plus instructions for recreating the indexes. Again, the size of a full backup of your database should represent an absolute upper bound for how large the export could be, but it will probably be much smaller.
It is possible to run the tool on the source just to create a dbsize.xml on its own, i.e. to estimate the export size without actually exporting. I haven't tried this option myself, as it hasn't been necessary (if I'm that tight on disk space, then a better use of my time is convincing my data center manager to give me more).
Thank you for the clarifications. The system copy guide provides instructions on how to prepare the target system; I believe that clears up the question about file systems in the target system. My other doubts are clear now.
Please also be aware that officially, heterogeneous system copies using Software Provisioning Manager are only supported if performed by a certified consultant, as outlined in the system copy guide ("Only perform a heterogeneous system copy if you are a certified system support consultant or a certified SAP Technical Consultant").
I believe it is only for production systems. Isn't it?
we will be migrating only DEV and QA systems of ECC and PI.
No, this is not correct (I just cross-checked it); the official statement is:
I hope this clarifies things. I will make sure that we will sharpen the corresponding statement also in the system copy guides.
Thank you for the clarification. Can you please clarify this statement: "if you are a certified system support consultant or a certified SAP Technical Consultant"? Which certification does this statement refer to? Is it TADM70 specifically?
Yes, TADM70 would be the corresponding training (where you get the knowledge for performing a heterogeneous system copy), the actual certification exam would be C_TADM70_73, as far as I know.
When we migrate (heterogeneous migration) a dual-stack PI system, will sapinst (SWPM) export both the ABAP and Java stacks? Also, during import, will it import both stacks? Please clarify.
Yes, the tool is capable of dual-stack copies, for both export and import. It has been a while since I performed a dual-stack copy myself (I copied a Solution Manager system a couple years ago), so I can't speak directly to how it behaves today, but I do know that the options appear in the menu when you start the tool.
Good blog, thanks for sharing your observations.
Thanks for the information; we had the same confusion before starting this kind of system activity.
For a homogeneous system copy, we can use the Backup/Restore or Attach/Detach method.
My questions are:
1) If the source system is on ECC 6.0 EhP7 and the target system is ECC 6.0 EhP6, can we do a system copy between systems on different EhPs? (In our scenario, all systems in the ECC landscape have moved to EhP7 except the Pre-Production system.)
2) Also, can we do Backup/Restore or Attach/Detach between different Support Packs, like one on SPS 11 and the other on SPS 09?
When you copy a system, you are bringing its software versions along with it. Anything that is in the database is copied, so that means the release, enhancement pack, and support pack versions will be identical between source and target. Only things external to the database are not copied directly, i.e. the kernel and instance/default profiles. It is possible to install the target system with a higher kernel patch level than the source, and sometimes a newer kernel release will be required (in the case of an OS or DBMS upgrade that mandates it). You should never install the target system with a lower kernel release or patch level than the source, however.
So, the answer to both of your questions is no. The very act of making the system copy is going to create (or refresh) your target system to be just like the source system. So, if the source is EhP7 and SPS9, then the target is also going to be EhP7 and SPS9 when you are done.
There is not much to add to what Matt has written: a downgrade (e.g., EhP7 to EhP6) is not possible via system copy (irrespective of the copy method).
To be honest, I do not really understand your use case, because especially for the pre-production system it would be very important to be on (exactly) the same release level as the production system, so that test results on the pre-prod system are valid for production.
This is why many customers regularly refresh their pre-production system from the production system by system copy. Would this be an option for you or are these lower versions required for some special reason?
Thanks Matt and Harald.
In our landscape, we completed the upgrade from SBX to PRD one by one. During that time we used Pre-Production as both the quality and production system. Now the upgrade is complete in SBX, DEV, QA, and PRD.
Now we want to bring Pre-Prod to the same level.
Before that, I have two options:
1) For Pre-Production alone, can I uninstall Pre-Prod entirely and build the target via system copy using Attach/Detach from an EhP7 dump?
2) Or, if I do a system copy from EhP7 (Prod) to EhP6 (Pre-Prod) using Backup/Restore, will the system have the same data?
With either option, as long as you follow the full system copy process (using SWPM), you can build a new pre-prod that matches prod, yes.
Can you please provide some information on table splitting options, with some screenshots or examples?
I didn't use the table splitting option when I did my export, although as I wrote, in retrospect it would likely have helped reduce the time involved. So, unfortunately, I'm unable to provide an example or a screenshot. However, the option for this is clearly displayed in SWPM when you are setting up your export, so it wouldn't be hard to figure out. The key would be to investigate your largest tables in advance to determine if there is one or more that is much larger than the average.
Also, as I wrote, it can be helpful to determine if such a table can be reduced in size through archiving and/or housecleaning measures. In my case, we later dramatically reduced PPOIX by implementing measures to remove old, obsolete posting records to FI from the table. That process itself took a long time (days, running a subset of past years' data per day), so it wouldn't reduce overall time, but as it was done during uptime, it would have reduced overall downtime. However, as reducing table size is not always an option, that's where table splitting would come into play.
Thanks for sharing,
For a dual-stack copy (export/import) of a Solution Manager system, do we need to perform any pre-steps for the Java stack? If so, what steps are required?
For the most part, Solution Manager is copied like any other Dual-Stack NetWeaver system, though I think there may be some extra work around Wily Introscope Enterprise Manager, and so on. So, all the pre- and post-steps for both the Java and ABAP stacks are covered in the dual-stack system copy guide:
http://service.sap.com/instguides -> SAP NetWeaver -> SAP NetWeaver 7.0 (2004s) (if it's SolMan 7.1, otherwise choose appropriately for SolMan 7.2) -> Installation -> 2 - Installation - SAP NetWeaver Systems -> System Copy: Systems Based on SAP NetWeaver 7.0/7.0 EHPs -> <OS>: Dual Stack (ABAP+Java).
We're planning to do a system copy using the Export/Import method (R3load) from Production (source system) to Sandbox (target system).
We assume that it is possible to perform the source system export without updating the kernel to the latest patch level available in the SAP Marketplace.
During the target system installation using the import method, is it possible to use the latest kernel patch level available in the SAP Marketplace?
Or do we have to use the same source system kernel patch for the target system installation? The SAP System Copy Guide says to update to the latest kernel patch in the target system after installing the central instance and before starting the SAP system.
Thanks for the Reply.
To be on the safe side, I would try to have the target system match the kernel patch level of the source system when you first perform the import/installation. However, if that isn't reasonably possible, then I would ensure that it has a higher patch level -- never a lower patch level -- than the source system.
After the import is complete, then I would patch it up to your desired patch level.
Our intention is not to update anything on the production system before trying these activities on the sandbox, development, and test systems.
So, in this case, and also according to your advice, I would do the following:
Can you also confirm from your side?
Mostly. I would do the import on the target system at the same patch level as production. Only after completing the import and validating that the target system works the same as production would I then apply the latest kernel patch (and anything else) that you intend to test before doing so in production.
During the target system import, we have to provide the kernel DVD. In our case, the source system kernel version 722 EXT UC has a compile time of Nov 7 2016, and this DVD is not backed up anymore. So we have to use the new kernel DVD of the same version, but with a different data version, created and changed in 2017 and 2019.
Also, the installed kernel patch level 216 (of the source system) no longer exists in the SAP Marketplace.
I assume that the system export works irrespective of the kernel version, and that during the system import we must use the newly available kernel DVD of the same version due to release restrictions.
That's right, you do what you have to in order to get the process to a successful conclusion. When you are dealing with very old kernel releases and/or patch levels, it may not be possible to use the same on your target system during installation, as you have found. In that case, you use the closest you can to what you had on the source system during the export. What I described is an ideal process, but it may not be possible or practical in all situations.
Thanks for sharing your opinion.
The system export activity processed the task type "run R3SZCHK" and is currently running the task type "Migration Monitor".
"run R3SZCHK" ran for approximately 9 hours. Do you have any idea what the reason for this could be? We assume that database tuning was not performed recently in the system.
Actually, during the process I selected the option "skip statistics update" on the "MaxDB Database Statistics" screen. I assume this isn't the reason for the long runtime of R3SZCHK, because according to the SWPM task list, the task type "MaxDB: Update Statistics" comes later than "run R3SZCHK", that is, after "check database schema for ABAP".
Maybe the R3* tools delivered along with the SWPM tool are an old version?
The export/import method will generally be slower than the backup/restore method, but yes, it will be heavily dependent upon a number of factors. Tuning of the database, up-to-date statistics, all those things can have an impact, but also the tuning of the SWPM export process in order to maximize (but not overrun) your hardware capacity, which of course was the real subject of this whole blog.
Unless your source system is really old and not supported by the latest SWPM, I would always use the latest SWPM tool for the export and the import, regardless of the kernel version you use for the import. Of course, you might be using the 70* version for older NetWeaver 7.0x systems, but that's still an up-to-date tool.
Beyond that, I can't really troubleshoot your situation here, so I would recommend that you open a question in the "Software Logistics" tag, with a secondary tag of "Software Logistics - System Provisioning." There you'll have access to experts from SAP and from the Community to help you.
Thank you for this great blog.
Will a standard or homogeneous system copy install a completely new SAP system?
What is the best system copy method if a target system already exists (e.g., refreshing TEST from PROD)?
Thanks a lot
Thank you for your comment.
Either system copy method can create a new SAP installation. These days, in the SWPM main menu, under System Copy, you will see alternative options for System Copy vs Database Refresh or Move (I'm going by memory, so I may have those options slightly mis-worded). System Copy will create a new installation as a copy of your source system, whereas Database Refresh is intended for refreshing the database of an existing target system using a copy of your source system's database.
It does not matter whether you choose Standard System Copy / Migration (Load-Based) or Homogeneous System Copy (Database Copy Method). Both methods will do this.
So, if your target system already exists, and you just want to "refresh" it with production data (for a test system), then I would select the Database Refresh or Move option in the main SWPM menu, and then when you get to the screen you showed, choose Homogeneous System Copy (Database Copy Method). You'll still need to have copies of the appropriate patch level of SAPEXE.SAR and SAPEXEDB.SAR on hand, but the process won't actually reinstall or overwrite the executables at the operating system level (it won't downgrade your kernel, for instance, if you already have a higher patch level on the target system).
Hope this helps!
We are doing a Java system copy (homogeneous) from source to target using the SWPM tool.
1. While exporting from the source system, exactly what files will be copied to the export media? Only a DB image?
2. We have an MSSQL database. Can we perform backup and restore using SQL Server Management Studio for Java systems, without SWPM?
For a Java system copy, you must use the SWPM-based export/import method. In this blog, I was focused primarily on ABAP systems, where backup/restore is a viable option. If I recall correctly, when you run the export on your source system, SWPM will create a large (but compressed) file containing all the relevant contents of both DB and filesystem needed to create your target system (along with the normal installation media). Then, when you run the import on your target system, it will ask you for the location of this export file, which you will have manually copied over to a file location on the target system.
The procedure is well-described in the system copy guide. Make sure you are choosing the right guide for your NetWeaver release and database platform, and for Java systems.
Thanks a lot for the article,
Is it a must to use the SWPM-based export/import method for a Java system copy? In the documentation (the "System Copy Guide for HANA DB", for example), the backup/restore-based method is also supported.
I have not performed a Java system copy in the past few years, so I cannot speak to recent changes in the procedure from personal experience. I can tell you that as of the time when I wrote this blog post, yes, the SWPM-based export on the source system was required, though there were beginning to be hints even then that this might not always be true in the future, as more and more of the Java system architecture is replicated in the database and not just in the filesystem.
According to the guide that you linked to, this is no longer a requirement, though it remains a requirement to run SWPM on the target system to install it after the copy. So, I would suggest you give it a try and see, as this would truly be an improvement to the procedure.
We are in the process of migrating SAP systems to new hardware (no change in the hardware architecture). Our current source systems are as follows:
Source - Distributed environment
SAP Application - On Windows (separate server)
Database - HANA on Linux (separate server) - No Change
Target - Distributed environment
SAP Application - On Linux (only OS change for the application) (separate server)
Database - HANA on Linux (separate server) - No Change
Here the only change is the SAP application OS, from Windows to Linux. Can we take a backup/restore and rebuild the application?
In the above scenario, which one do we need to consider: homogeneous or heterogeneous?
In this case, you're not actually doing anything with your database, so there isn't really a migration or copy, per se. So, there isn't really a need for a backup/restore; well, taking a backup before the procedure is always wise, but you shouldn't need to restore that backup.
Instead, what you're really doing is simply installing a new central instance (dialog instance plus ASCS instance) on Linux and removing the old one on Windows. The database doesn't change in your scenario. This would be similar to installing a new dialog instance with SWPM for the existing system, except you also want to install a new ASCS (message/enqueue server) and remove the old one. This should be pretty easy.