
You may face a performance issue when running `alter table` on a very large table that has many indexes.

For example, an `alter table` took around 38 hours on a very large table with around 538 columns and 15 nonclustered indexes.

The table size is approximately 600 GB, with 7.5 million rows.

The cause of the performance issue is that the indexes are rebuilt serially as part of the `alter table`.

To fix this performance issue, follow these steps:

1. Drop the indexes.

2. Run the `alter table`.

3. Recreate the indexes in parallel.
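The steps above can be sketched in Sybase ASE SQL; the table, column, and index names here are hypothetical placeholders, not from the original case:

```sql
-- 1. Drop the existing nonclustered indexes (repeat for each of them).
drop index big_table.idx_big_table_col1
drop index big_table.idx_big_table_col2

-- 2. Run the schema change with no nonclustered indexes in place.
alter table big_table add new_col varchar(30) null

-- 3. Recreate the indexes in parallel: run each create from a
--    separate session (e.g. a separate isql connection) so the
--    builds proceed concurrently instead of one after another.
create nonclustered index idx_big_table_col1 on big_table (col1)
create nonclustered index idx_big_table_col2 on big_table (col2)
```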

Hope this is helpful.




  1. Martin English


How long did the alter take after dropping the indexes, and how long did creating the indexes in parallel take?

    I’m willing to entertain the idea that there is an improvement, but I’d like to know what it actually was.


    1. Former Member
      Post author

Martin,

In the above case, with the table having around 7.5 million records, the whole activity of dropping the indexes, running the `alter table`, and recreating the indexes in parallel took approximately 9 hours.

      1. Former Member

        Hi Gaurav,

Correct me if I’m wrong, but when you say you create the indices in parallel, do you mean you’re using the `with consumers = xxx` clause?

If this is the case, how can you check it’s indeed creating the indices in parallel?

I believe all the consumers that are part of an index creation will have the same fid in the sp_who output, right?



        1. Former Member
          Post author

          Hi Simon,

The `with consumers` clause specifies the number of consumer processes that perform the sort operation when creating a single index.

Creating the indexes in parallel means you can build the nonclustered indexes in separate sessions at the same time.
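As a sketch of the difference (table and index names hypothetical): `with consumers` parallelizes the sort within one index build, while separate sessions let different indexes build concurrently. The two can be combined:

```sql
-- Session 1: build one index, using 4 consumer processes for its sort.
create nonclustered index idx_big_table_col1
    on big_table (col1)
    with consumers = 4

-- Session 2 (a separate connection): build another index at the
-- same time, each build with its own set of consumer processes.
create nonclustered index idx_big_table_col2
    on big_table (col2)
    with consumers = 4
```

As noted above, the worker processes belonging to one index build share the same fid in sp_who output, so two concurrent builds should appear under two different fids.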


