
RESTORE DATABASE … RENAME – parallel creation of dbfiles

I am currently migrating a relatively small IQ 16 database to a new location. However, each of the 4 Main and 4 Temp dbfiles is pre-allocated at 1 TB. Creating those files sequentially, one by one, in an NFS-based environment takes forever.

RESTORE DATABASE '/sybase/PBW_NLS/data/PBW_NLS.db'

FROM '/sybase/PBW_NLS/saparch_1/SAPPBWDB.IQfullbkp'

RENAME SAPIQDBSPACE001_001 TO '/sybase/PBW_NLS/nlsdata_1/PBW_NLS/PBW_NLS_01.IQ'

RENAME SAPIQDBSPACE001_002 TO '/sybase/PBW_NLS/nlsdata_2/PBW_NLS/PBW_NLS_02.IQ'


Is there any way to create those files in parallel?



      Mark Mumy


      I would recommend that you move this out of the blog area and into the message board for a wider audience.

      When IQ does a restore we must check each device for proper size so that we can actually do the restore.  For filesystem devices this means that we have to rebuild/touch the blocks so as to make sure that the device at size X can actually grow to size X.  Raw devices don't have this issue as one can simply go to the end of the device without having to traverse and check blocks.

      The process over NFS will certainly depend on two things.  First, the version of IQ.  IQ 16 SP8 PL20 and later have some NFS enhancements.  If you are on an older version then performance could suffer.

      Second is the speed and performance of NFS itself. We've had plenty of IQ environments on NFS that work just fine. The key, though, was to make sure that the end-to-end communication from the IQ server through to the NFS host followed proper sizing guidelines. I would try to get 50-100 MB/sec of network throughput per core on the IQ host. If all your NFS mounts are over a single channel, that would likely lead to performance issues, given the upper limit on speed that a single channel can support.

      Roughly, for every 10-15 cores you would want a 10 Gbit network connection to your NFS server. That equates to roughly 70-120 MB/sec per core of throughput. You would also need to make sure that the NFS server and its storage meet the same throughput requirements.
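      The sizing guideline above can be turned into back-of-the-envelope arithmetic. This is an illustrative sketch only, using the rule-of-thumb numbers from this thread (50-100 MB/sec per core, 10 Gbit per link); the function name and defaults are mine, not anything from IQ itself:

      ```python
      # Rough NFS bandwidth sizing for an IQ host, per the guideline above.
      # Illustrative only; assumes the 50-100 MB/sec-per-core rule of thumb.

      def required_links(cores, mb_per_core=100, link_gbit=10):
          """Estimate total MB/sec needed and how many network links that implies.

          cores       -- core count on the IQ host
          mb_per_core -- target throughput per core in MB/sec (50-100 per the guideline)
          link_gbit   -- speed of one network link in Gbit/sec
          """
          link_mb_sec = link_gbit * 1000 / 8        # 10 Gbit/s is roughly 1250 MB/s
          total_mb_sec = cores * mb_per_core
          links = -(-total_mb_sec // link_mb_sec)   # ceiling division
          return int(links), total_mb_sec

      # A 30-core host at 100 MB/sec per core needs 3000 MB/sec,
      # i.e. three 10 Gbit links -- consistent with "10 Gbit per 10-15 cores".
      links, total = required_links(30)
      ```

      The same arithmetic applies on the NFS server side: whatever total figure comes out, the server and its backing storage need to sustain it too.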


      Former Member
      Blog Post Author

      Thanks, Mark. Great information!