
We have all seen cases where the Sybase optimizer generates a substandard plan, and we start saying the optimizer is not good enough because it is not picking the correct plan. Then, to fix such issues, we start forcing indexes and join order.

As a result we end up with procedures full of forced hints. In my experience, in most cases the Sybase optimizer picks the correct plan based on the statistics provided to it. So when you think Sybase is picking a substandard plan, look at the lava operator tree: if there is a large difference between estimated rows and actual rows, that is a likely cause of the bad plan. The lava operator tree is a great tool for fixing performance issues and I use it all the time. Showplan tells you which plan was chosen; the lava operator tree gives you more insight into why it was chosen.
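
The tree is produced with set statistics plancost on, which is used throughout the examples below. As a quick legend for the abbreviations in its output (matching the trees shown later in this post):

set statistics plancost on
go
-- r = actual rows         er = estimated rows
-- l = actual logical IO   el = estimated logical IO
-- p = actual physical IO  ep = estimated physical IO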

In fact, many people had issues when they migrated to ASE 15: due to bad estimates the optimizer chose merge or hash joins instead of nested loop joins, the performance of such queries degraded, and the extra pressure on system resources made response times very slow. Everyone assumed the new Sybase optimizer was at fault, when it was usually a matter of out-of-date statistics.

Today I will show one such example, where bad stats caused the optimizer to use a merge join instead of a nested loop join.

Below is the code to create sample tables.

set nocount on
go
-- the drops will raise an error on the first run if the tables do not exist yet; that is harmless
drop table mytab_1
go
drop table mytab_2
go
-- col0 is an identity column, so the insert below supplies only col1-col4
create table mytab_1(col0 int identity, col1 int not null, col2 datetime, col3 char(100) not null, col4 int not null)
go
-- populate from syscolumns: id -> col1, a staggered datetime -> col2,
-- column name -> col3, colid -> col4
insert into mytab_1
select id
,dateadd(ms,id,getdate())
,isnull(name,cast(colid as varchar(10)))
,colid
from syscolumns
go
create unique clustered index idx_col1_col4 on mytab_1(col1,col4)
go

select count(*) from mytab_1
go
-- copy the table (requires the 'select into/bulkcopy' database option)
-- and give the copy the same clustered index
select * into mytab_2 from mytab_1
go
create unique clustered index idx_col1_col4 on mytab_2(col1,col4)
go

Note down the number of rows in the mytab_1 table. In my case I got 177797 rows.
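
As a side note, if you want to confirm that there are no statistics on col3 at this point, ASE 15.7 and later ship the sp_showoptstats procedure (on older versions the optdiag command-line utility gives the same information):

sp_showoptstats mytab_1, col3
go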

Now I executed a query joining the mytab_1 and mytab_2 tables. In this case they are identical tables, but in real life these would be two related tables, such as a master and a child.

set plan optgoal allrows_dss
go
set showplan on
set statistics io,time,plancost on
go
select * from mytab_1 a
inner join mytab_2 b on a.col1 = b.col1 and a.col4 = b.col4
where a.col3 = 'id'
go

Since we do not have any index or statistics on column col3, the optimizer has no idea how many rows in mytab_1 will have col3 = 'id'. It therefore falls back on the default selectivity for an equality predicate, which is 10%. Based on my data, that gives an estimate of 17780 rows. That is far too many rows, and with that estimate the optimizer decides it is better to do a merge join with mytab_2: both tables have a clustered index on the joining columns, so no sorting would be required.
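
You can check that arithmetic yourself; 10% of the table's row count lands almost exactly on the er value shown in the tree below:

-- 177797 rows * 0.10 default equality selectivity = ~17780 estimated rows
select count(*) * 0.10 as default_estimate from mytab_1
go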

Now let us look at the chosen plan and what the lava operators say.


==================== Lava Operator Tree ====================

                        
                        Emit
                        (VA = 3)
                        r:30 er:1250
                        cpu: 100


             /
            MergeJoin
            Inner Join
            (VA = 2)
            r:30 er:1250


/                      \
TableScan               TableScan
mytab_1 (a)             mytab_2 (b)
(VA = 0)                (VA = 1)
r:30 er:17780           r:171698 er:177797
l:11113 el:0            l:10732 el:0
p:0 ep:0                p:0 ep:0

============================================================
Table: mytab_1 (a) scan count 1, logical reads: (regular=11113 apf=0 total=11113), physical reads: (regular=0 apf=0 total=0), apf IOs used=0
Table: mytab_2 (b) scan count 1, logical reads: (regular=10732 apf=0 total=10732), physical reads: (regular=0 apf=0 total=0), apf IOs used=0
Total writes for this command: 0

Execution Time 1.


Adaptive Server cpu time: 100 ms.  Adaptive Server elapsed time: 133 ms.

Look at the estimated rows versus the actual rows: 17780 vs 30. If the optimizer had known that there are just 30 rows with col3 = 'id', it would surely have gone for the nested loop join. But it did not know that; it estimated 17780 rows, and for that many rows a merge join is better than a nested loop join. So the optimizer made the correct decision with the information that was presented to it.
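
Incidentally, if you want to confirm that the merge join itself is the expensive part before touching statistics, ASE 15 lets you switch off merge joins at the session level. This is a diagnostic sketch only, not something to leave in production code:

-- temporarily rule out merge joins for this session (diagnostic only)
set merge_join off
go
select * from mytab_1 a
inner join mytab_2 b on a.col1 = b.col1 and a.col4 = b.col4
where a.col3 = 'id'
go
set merge_join on
go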

Now I will create statistics on the column col3. You could even create an index instead, but that is a topic for another day; today we are concentrating on the stats only.

update statistics mytab_1(col3)

go
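
A side note for big tables: update statistics scans the column, which can get expensive. ASE supports sampling, which trades some histogram accuracy for a much cheaper scan; the 10 percent below is just an illustrative value:

-- sample a fraction of the table instead of scanning the whole column
update statistics mytab_1(col3) with sampling = 10 percent
go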

Now run the same query again.


set plan optgoal allrows_dss
go
set showplan on
set statistics io,time,plancost on
go
select * from mytab_1 a
inner join mytab_2 b on a.col1 = b.col1 and a.col4 = b.col4
where a.col3 = 'id'
go

Below is the lava operator tree and the io/time output.


==================== Lava Operator Tree ====================

                        
                        Emit
                        (VA = 3)
                        r:30 er:118
                        cpu: 100


             /
            NestLoopJoin
            Inner Join
            (VA = 2)
            r:30 er:118


/                      \
TableScan               IndexScan
mytab_1 (a)             idx_col1_col4 (b)
(VA = 0)                (VA = 1)
r:30 er:159             r:30 er:118
l:11113 el:0            l:90 el:0
p:0 ep:0                p:0 ep:0

============================================================
Table: mytab_1 (a) scan count 1, logical reads: (regular=11113 apf=0 total=11113), physical reads: (regular=0 apf=0 total=0), apf IOs used=0
Table: mytab_2 (b) scan count 30, logical reads: (regular=90 apf=0 total=90), physical reads: (regular=0 apf=0 total=0), apf IOs used=0
Total writes for this command: 0

Execution Time 1.
Adaptive Server cpu time: 100 ms.  Adaptive Server elapsed time: 53 ms.

Now the optimizer estimates around 159 rows, based on the histogram we now have on this column. It is still not perfect, but it is much better than what we had without stats. The optimizer has chosen a nested loop join, because it makes more sense to probe the second table through its clustered index an estimated 159 times than to read the whole table into memory.

Look at the logical IO for mytab_2 in both cases: it has come down from 10732 reads to 90 (30 probes at 3 pages each). The elapsed time has also been reduced from 133 ms to 53 ms. Now imagine these were very large tables, gigabytes in size: we would have incurred lots of physical IO, pulled unnecessary data into cache and displaced other useful tables, so other queries would have suffered as well, and the IO subsystem would have been put under load.

Now you can see what an impact stats can have on the performance of your queries. So keep your stats updated periodically; how often depends on the activity on the table.
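
ASE 15 also provides the datachange() built-in, which reports the percentage of rows modified since statistics were last updated on a column. It is a handy way to decide when a refresh is due (null for the partition argument means the whole table):

-- % of mytab_1 rows changed since the last update statistics on col3
select datachange('mytab_1', null, 'col3') as pct_changed
go
-- refresh the stats once this crosses whatever threshold you choose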

If you want to go a step further, request more histogram steps; that can give you an almost perfect match between estimated and actual rows. (The number of steps is effectively capped by the number of distinct values in the column.)


update statistics mytab_1(col3) using 1000 values

go

Now run the same query again.


set plan optgoal allrows_dss
go
set showplan on
set statistics io,time,plancost on
go
select * from mytab_1 a
inner join mytab_2 b on a.col1 = b.col1 and a.col4 = b.col4
where a.col3 = 'id'
go

Below is the lava operator tree and the io/time output.


==================== Lava Operator Tree ====================

                        
                        Emit
                        (VA = 3)
                        r:30 er:57
                        cpu: 100


             /
            NestLoopJoin
            Inner Join
            (VA = 2)
            r:30 er:57


/                      \
TableScan               IndexScan
mytab_1 (a)             idx_col1_col4 (b)
(VA = 0)                (VA = 1)
r:30 er:36              r:30 er:57
l:11113 el:0            l:90 el:0
p:0 ep:0                p:0 ep:0

============================================================
Table: mytab_1 (a) scan count 1, logical reads: (regular=11113 apf=0 total=11113), physical reads: (regular=0 apf=0 total=0), apf IOs used=0
Table: mytab_2 (b) scan count 30, logical reads: (regular=90 apf=0 total=90), physical reads: (regular=0 apf=0 total=0), apf IOs used=0
Total writes for this command: 0

Execution Time 1.
Adaptive Server cpu time: 100 ms.  Adaptive Server elapsed time: 53 ms.

Now the estimated and actual row counts are quite close, so the plan choices should be spot on.

Do not use "using <step> values" everywhere, though: larger statistics mean the optimizer spends more time during optimization, and the increased size also takes up more space in the cache. Use it only when you have a lot of skew in your data.
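
And if you try column-level statistics out on a real system and want to back them out afterwards, delete statistics removes them:

delete statistics mytab_1(col3)
go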

Thanks
