
Performance issues can become a real problem for some SAP systems, especially when the database is large and there is a lot of custom code.

In one of our customers' systems we had a performance issue. We held a meeting with all parties involved in the SAP system: the hardware vendor, the network group, the business, the system administrators and BASIS. The result of the meeting was an action list.

Besides all the architectural actions, one of the actions was to find fragmented indexes and defragment them, and it was my duty to do the needful.


This blog is the output of that action and is about index defragmentation on SQL Server.

I will try to explain the topic in the following order.

  1. List the indexes that are likely fragmented and affecting system performance
  2. Analyze the indexes
  3. Defragment the indexes
  4. Update statistics for the indexes
  5. Results and chart
  6. Pros and cons of defragmenting

And these are the details of the steps:

1. List the Indexes

When listing the fragmented indexes, our scope was the indexes used in expensive SQL statements. We also checked the indexes of tables that were being read sequentially in the process overview.

Go to DBACOCKPIT –> Performance –> SQL Statements

SQL_1.jpg

Double Click on one of the SQL Statements.

Go to Explain Tree Tab to see the index names used in the SQL Statements.

SQL_2.jpg

For other indexes we use SM66 Global Work Process Overview.

Check the sequential reads and related tables.

SQL_3.jpg

After taking the list of related tables, go to SE11 and note the index names.

SQL_4.jpg

2. Analyze the Index

Go to DBACOCKPIT –> Space –> Single Table Analysis

Enter the related table name and select the index name.

Select DBCC showcontig  button.

     SQL_5.jpg

     Run one of the options: Fast, All Levels or Normal.

     SQL_6.jpg

You can also use SQL Server Management Studio to get the DBCC SHOWCONTIG results with queries similar to the one below.

     SQL_7.jpg
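As a minimal sketch of such a query (the exact statement from the screenshot is not reproduced here; <SID>/<sid> are placeholders for the database and SAP schema, and EKPO / EKPO~0 are example object names):

```sql
-- Sketch: measure fragmentation of one SAP index with DBCC SHOWCONTIG.
-- WITH FAST performs a quick scan that skips page-density figures.
USE [<SID>];
DBCC SHOWCONTIG ('<sid>.EKPO', 'EKPO~0') WITH FAST;
```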

DBCC SHOWCONTIG allows you to measure fragmentation and page density levels on indexes.

The result is analyzed according to the document “Microsoft SQL Server 2000 Index Defragmentation Best Practices” mentioned in SAP Note 159316 – Reorganizing tables on SQL Server. You can check the “Analyzing Output from DBCC SHOWCONTIG” section in the document for details.


The document says that "High values for logical scan fragmentation can lead to degraded performance of index scans. Consider defragmenting indexes with 20 percent or more logical fragmentation."


It also states that "indexes containing more than 10,000 pages realized performance gains, with the biggest gains on indexes with significantly more pages (greater than 50,000 pages)."

So we selected all indexes with logical scan fragmentation above 20 percent and a high page count.
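On SQL Server 2005 and later, the sys.dm_db_index_physical_stats DMV can produce the same fragmentation figures as DBCC SHOWCONTIG for the whole database at once. A sketch of such a selection query (the thresholds come from the recommendations quoted above; the query itself is not part of the original work):

```sql
-- Sketch: list indexes over 20% fragmented with more than 10,000 pages.
-- LIMITED mode scans only the upper index levels, which keeps the run fast.
SELECT o.name AS table_name,
       i.name AS index_name,
       s.avg_fragmentation_in_percent,
       s.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS s
JOIN sys.objects AS o ON o.object_id = s.object_id
JOIN sys.indexes AS i ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.avg_fragmentation_in_percent > 20
  AND s.page_count > 10000
ORDER BY s.avg_fragmentation_in_percent DESC;
```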

3. Defragment the Indexes

    

We used the SQL Server Management Studio query editor to run the defragmentation scripts.

Examples:   DBCC INDEXDEFRAG (<SID>, '<sid>.EKPO', 'EKPO~0')

                    DBCC INDEXDEFRAG (<SID>, '<sid>.RFBLG', 'RFBLG~0')

The duration of the defragmentation varies with the index size and the Logical Scan Fragmentation percentage. As you can see in the screenshot below, it took 3 hours; some runs lasted more than a day.

This process also generates transaction logs, so you must monitor your log space during the process. Otherwise your system can stop due to lack of log space.

DBCC INDEXDEFRAG reports the estimated percentage completed every five minutes.

DBCC INDEXDEFRAG can be stopped at any point in the process, and any completed work is retained.

SQL_9.jpg
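DBCC INDEXDEFRAG was later deprecated; on SQL Server 2005 and newer, the equivalent online operation is ALTER INDEX ... REORGANIZE. A sketch with the same placeholder object names as above:

```sql
-- Sketch: online equivalent of DBCC INDEXDEFRAG on SQL Server 2005+.
-- REORGANIZE compacts the leaf level in place, can be interrupted safely,
-- and keeps completed work, just like INDEXDEFRAG.
ALTER INDEX [EKPO~0] ON [<sid>].[EKPO] REORGANIZE;
```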

4. Update Statistics

     We updated the statistics of the related tables after the defragmentation so that the optimizer works with current statistics.

     SQL_10.jpg
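A sketch of such a statistics update for one of the tables mentioned above (the table name is an example; on very large tables a sampled update may be preferable to a full scan):

```sql
-- Sketch: refresh statistics for one table after defragmentation.
-- FULLSCAN reads the whole table for the most accurate statistics.
UPDATE STATISTICS [<sid>].[EKPO] WITH FULLSCAN;
```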

5. Results and Graph

    When we reran the DBCC SHOWCONTIG command after defragmentation, we could easily see the decrease in Logical Scan Fragmentation.

     The value dropped to an acceptable level, as seen in the picture below.

     SQL_11.jpg

     As a result, we took the I/O stall read and write times from the DBACOCKPIT –> Performance –> History –> Database Collector Data Time Series table.

     SQL_12.jpg

     We exported the data to an Excel sheet and took the averages of all read and write times for the whole period (red and orange lines in the chart below).

     The chart shows that the I/O stall values decreased significantly, so the defragmentation was worth the effort.

     SQL_13.jpg


6. Pros and Cons of Defragmenting

  • First of all, SAP explicitly recommends not to reorganize or rebuild any database objects on a regular basis. You should not even reorganize or rebuild objects as an attempt to solve a performance problem as long as it is not evident that fragmentation is the root cause of the problem (which it hardly ever is).
  • You can do this for particular indexes that are highly fragmented, have a lot of pages and are used in expensive SQL statements.
  • It is a time-consuming job.
  • It generates a lot of transaction log, so monitoring is a must.
  • Defragmentation can be done online. DBCC DBREINDEX can be used instead as an offline solution.
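As a sketch of the last point (object names are placeholders, as before): the offline rebuild command used on SQL Server 2000, and the newer syntax that, from SQL Server 2005 Enterprise Edition onward, can also rebuild online:

```sql
-- Sketch: offline index rebuild on SQL Server 2000 (fill factor 0 = keep current).
DBCC DBREINDEX ('<sid>.EKPO', 'EKPO~0', 0);

-- Sketch: online rebuild on SQL Server 2005+ Enterprise Edition.
ALTER INDEX [EKPO~0] ON [<sid>].[EKPO] REBUILD WITH (ONLINE = ON);
```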

Resources:

159316 – Reorganizing tables on SQL Server

1660220 – Microsoft SQL Server: Common misconceptions

Microsoft SQL Server 2000 Index Defragmentation Best Practices


8 Comments


  1. Romel David Rodriguez de la Cruz

    Yuksel, thank you for sharing this. Let me ask you something:

    Was the EWA report showing a missing index on the SAP system? Or did you decide to go with the index defrag even though you had updated your DB statistics?

    Regards.

    1. Yuksel AKCINAR Post author

      Hello Romel,

      There is no missing index on the system, and EWA does not report any either.

      As you know, and as stated in the documentation, SQL Server updates statistics automatically. You do not need to schedule a job for this.

      Also, defragmenting was advised especially for RFBLG.

      And we did it for the other indexes that appear in expensive SQL statements and sequential reads.

      Regards,

      Yuksel AKCINAR

  2. Manuel Herr

    Hi Yuksel, thanks for sharing this information. I also sometimes have problems with fragmented indexes and a DB optimizer that chooses the wrong index because of the high fragmentation. I don't use DBCC INDEXDEFRAG; I just do a complete index rebuild with report MSSCOMPRESS.

    But I don't understand why you have an EKPO with a size of 100 GB. That's very HUGE (ok, I don't know your system).

    Do you do archiving? And second, in your DBACOCKPIT screenshot I didn't see the column "compression". Do you use page compression? We compressed a 2 TB database down to 400 GB. This will give you a performance boost like hell (if you have not used page compression until now).

    Regards

    Manuel

    1. Yuksel AKCINAR Post author

      Hello Manuel,

      My customer is a retail company with a 5.5 TB database.

      Thank you for the reminder.

      We have already done the compression work; all tables and indexes are PAGE compressed.

      We archived some data, but mostly BASIS-related data rather than real business data.

      You can use an index rebuild as well, but we used INDEXDEFRAG as an online solution.

      Thanks and Regards,

      Yuksel AKCINAR

      1. Manuel Herr

        Hi Yuksel, ok, nice to hear. 5.5 TB is still a lot after page compression.

        We archived ~4 TB of data, since this system has grown up from R/2, and yes, archiving business data is hard. You need a process (I'm lucky, we have one).

        Index rebuilds are online in most cases (only a few tables have to be rebuilt offline). You don't need a statistics update afterwards. MSSCOMPRESS takes care of it if you set "online index rebuild only", so the table is not blocked by an offline rebuild.

        The index rebuild status can be checked with this SQL command (I use it a lot, but didn't write it myself):

        ;WITH cte AS
        (
            SELECT
                object_id,
                index_id,
                partition_number,
                rows,
                ROW_NUMBER() OVER (PARTITION BY object_id, index_id, partition_number ORDER BY partition_id) AS rn
            FROM sys.partitions
        )
        SELECT
            object_name(cur.object_id) AS TableName,
            cur.index_id,
            cur.partition_number,
            PercentDone =
                CASE
                    WHEN pre.rows = 0 THEN 0
                    ELSE ((cur.rows * 100.0) / pre.rows)
                END,
            pre.rows - cur.rows AS MissingRows
        FROM cte AS cur
        INNER JOIN cte AS pre
            ON (cur.object_id = pre.object_id)
           AND (cur.index_id = pre.index_id)
           AND (cur.partition_number = pre.partition_number)
           AND (cur.rn = pre.rn + 1)
        ORDER BY 4

        Regards

        Manuel

        1. Yuksel AKCINAR Post author

          Hello Manuel,

          I noticed the rebuild index option in MSSCOMPRESS.

          I will use it from now on instead of INDEXDEFRAG.

          The query does not return any data. Can you tell me how to run it?

          Thanks and Regards

          Yuksel AKCINAR

          1. Manuel Herr

            Hello Yuksel,

            the query only returns something if an index rebuild is in progress.

            I started a compression of table SOOD with MSSCOMPRESS as a background job with data rebuild and index rebuild forced.

            Then I ran the query against the SAP database (not the master DB) while the job was running. You can see which index it is working on and how far it is in percent. (Watch index 1! Be aware: if the table has multiple indexes, the query counts from 0 to 100% for every index. It is not the overall status, just the status of the current index.)

            /wp-content/uploads/2015/02/indexrebuild_649036.jpg

            Joblog of the index rebuild job in SM37: /wp-content/uploads/2015/02/indexrebuildjoblog_649154.jpg

            *edit*

            I forgot (I'm sure you know this, but just a warning): there will be huge transaction logs during the index rebuild, depending on the size of the table. Keep TA log size and backup frequency in mind.

            Regards

            Manuel

            1. Yuksel AKCINAR Post author

              Hello Manuel,

              Thanks a lot for your info.

              Yes, I am aware of the transaction logs. When doing these operations we must be careful about the TA logs, and we must not start too many jobs in parallel, which could bring the SAP system to a halt.

              Regards,

              Yuksel AKCINAR

