
Continuing...


The first part was about the idea behind this trilogy and what and how I could or should measure, the last part covered the chart engine, and this part contains the "rest" of the code. I show a lot of macros. But don't be angry with me - all macros are small, really readable, and help to shorten the code a lot. (I swear that I never use macros in normal development.)


To Do List


Now let's see what's to do...

  • Create structures and internal tables

  • Create helpers for the measurement

  • Create table fill routine

  • Create table read routine

  • Put the measurement data in chart engine and ALV grid




Create structures and internal tables


What I want to measure (and you can easily extend this) are a) standard, sorted and hashed tables and b) short, medium and long structures. So I created three structures like this:

TYPES: BEGIN OF local_short,
         key1  TYPE char10,
         key2  TYPE char10,
         data1 TYPE char10,
         data2 TYPE char10,
         data3 TYPE i,
         data4 TYPE sydatum,
         data5 TYPE numc10,
       END OF local_short.

This is the short structure; the medium and long structures look similar, with more fields like data1..5.
To measure the different fill methods -- append with a sort at the end, insert with binary search, insert into a sorted table (unique and non-unique), insert into a hashed table -- I need these tables:

DATA: lt_short_apsort TYPE STANDARD TABLE OF local_short,
      lt_short_binary TYPE STANDARD TABLE OF local_short,
      lt_short_sort_u TYPE SORTED TABLE OF local_short WITH UNIQUE KEY key1 key2,
      lt_short_sort_n TYPE SORTED TABLE OF local_short WITH NON-UNIQUE KEY key1 key2,
      lt_short_hash_u TYPE HASHED TABLE OF local_short WITH UNIQUE KEY key1 key2.

And I need these tables for the medium and long structures, too.


For the ALV grid I did it the simple way: create a structure with one column for each possible measurement. So one column for "short structure hashed table fill", one for "short structure hashed table read", etc. And because I wanted to see how an optimized read (with table key) and a non-optimized read behave, I have two columns for each table read kind. This sums up to 3 (short/medium/long) * 5 (append+sort/binary/sorted-unique/sorted-non-unique/hashed) * 2 (std. read/opt. read) = 30 columns. Adding the 3 * 5 = 15 fill-table columns gives 45 columns. Then I have a column for the number of the "run", because I want to have several runs to fill/read a table, one column for the number of appends or inserts, and one column for the number of table lines (this may differ from the number of inserts/appends because of identical keys). So the first part of the ALV grid table structure looks like

loopindex         TYPE i, " # of the run
iterations        TYPE i, " # of tried appends/inserts
tablelines        TYPE i, " # of lines in the table
fill_apsort_short TYPE i, " these are the runtimes
fill_binary_short TYPE i,
fill_sort_n_short TYPE i,
fill_sort_u_short TYPE i,
fill_hash_u_short TYPE i,
" etc.


Create helpers for the measurement


To do the measurement I have two small macros for "start/continue" and "stop" of a measurement. The macros look like

DEFINE runtime_start.
  GET RUN TIME FIELD lv_runtime_start.
END-OF-DEFINITION.

DEFINE runtime_end.
  GET RUN TIME FIELD lv_runtime_end.
  ls_time-runtime = ls_time-runtime + lv_runtime_end - lv_runtime_start.
END-OF-DEFINITION.


And ls_time is a structure which contains the current measurement info: the # of the run, the # of tried appends/inserts, the # of table lines, the name of the measurement (like "fill_binary_short") and of course the runtime 'until now'. So at first I set ls_time-runtime to zero and then the three steps

  1. call runtime_start

  2. do what is to be measured

  3. call runtime_end

can be done until everything is finished for one run.
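Put together, one measured block looks roughly like this (a minimal sketch, not the original code; the INSERT stands in for whatever statement is to be measured):

CLEAR ls_time-runtime.                      " start a fresh measurement
runtime_start.
INSERT ls_line INTO TABLE lt_short_hash_u.  " the statement(s) to be measured
runtime_end.                                " accumulates into ls_time-runtime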
Afterwards I optionally compute the runtime per iteration, so the effect of the growing table can be seen. There is a small macro for this

DEFINE compute_perline.
  IF perline = 'X'.
    ls_time-runtime = ls_time-runtime * 100 / ls_time-iterations.
  ENDIF.
END-OF-DEFINITION.

Each time a measurement (one run of n reads or one run of n inserts/appends) is done, ls_time is appended to an internal table and COLLECTed into a second table. Later on the COLLECTed table will be converted to XML for the chart engine and the detailed table into the ALV grid structure table.
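In code this pair could look like the following sketch (lt_time and lt_meantime are the table names used further below; note that COLLECT sums the numeric fields over all lines with identical character-type key fields, so the real report has to take care that fields like the size are not summed away):

APPEND ls_time TO lt_time.        " detailed data: one line per measured run
COLLECT ls_time INTO lt_meantime. " aggregated data: numeric fields summed
                                  " per groupname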



Create table fill routine



To ensure we have no influence from the creation of the table lines, I decided to first create a "template" table - a table containing all the entries that are to be inserted. I also want to control the number of keys to be created... So I create a basis table, and in the table fill routine I loop over this table and do the following (implemented in a macro called 'get_line'; a sketch follows the list):

  1. read basis table first line

  2. copy line content to internal table ITAB

  3. move first line of basis table to the end

(ITAB is the table I want to measure)
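A minimal sketch of such a get_line macro (lt_basis and ls_line are assumed names, not necessarily those of the real report):

DEFINE get_line.
  READ TABLE lt_basis INTO ls_line INDEX 1. " 1. read first line of basis table
  DELETE lt_basis INDEX 1.                  " 3. rotate the line to the end,
  APPEND ls_line TO lt_basis.               "    so every run sees the same
                                            "    key sequence
END-OF-DEFINITION.

Step 2 (copying into ITAB) is then simply the measured INSERT/APPEND of ls_line in the fill macros.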


The table fill routine has two versions: one where only the insert/append (and, in the append+sort method, the additional sort) is measured, and one where the complete process is measured (including copy, looping, etc.):
    1. table fill with everything in measurement

      1. runtime_start

      2.  do iterations times

      3.   get_line

      4.   find index to be inserted (only in read binary case)

      5.   insert into ITAB

      6.  enddo

      7.  sort ITAB (only in case of the append+sort)

      8. runtime_end



    2. table fill with only the insert/append in measurement

      1. do iterations times

      2.  get_line

      3.  runtime_start

      4.   find index to be inserted (only in read binary case)

      5.   insert into ITAB

      6.  runtime_end

      7. enddo

      8. runtime_start

      9.  sort ITAB (only in case of the append+sort)

      10. runtime_end



(There are 5 macros for each case (append+sort, read binary, sorted non-unique, sorted unique, hashed), each of which includes both versions; a sketch of the binary-search case follows.)
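To illustrate, a minimal sketch of the binary-search fill macro in version 2, where only the insert is measured (a sketch under assumed names, not the original code):

DEFINE fill_binary.
  DO iterations TIMES.
    get_line.
    runtime_start.
    " find the index to insert at: after an unsuccessful READ ... BINARY
    " SEARCH, sy-tabix contains the position where the line belongs
    READ TABLE lt_short_binary TRANSPORTING NO FIELDS
         WITH KEY key1 = ls_line-key1 key2 = ls_line-key2
         BINARY SEARCH.
    INSERT ls_line INTO lt_short_binary INDEX sy-tabix.
    runtime_end.
  ENDDO.
END-OF-DEFINITION.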



Create table read routine



The table read also has two versions: in the first case the complete loop is measured, in the second case only the read itself. Again I use the basis table to have a fixed list of keys to be accessed (the basis table is created in random order before the table fill is done).


    1. table read with everything in measurement

      1. runtime_start

      2.  do iterations times

      3.   get_line

      4.   read table ITAB

      5.   change_found

      6.  enddo

      7. runtime_end



    2. table read with only the read-table in measurement

      1. do iterations times

      2.  get_line

      3.  runtime_start

      4.   read table ITAB

      5.  runtime_end

      6.  change_found

      7. enddo



You can see an entry 'change_found' in the steps above: this is a macro which accesses the line just read and modifies it in the table. I heard of some optimizations in the internal table handling in case the table is not modified... therefore I inserted this.

(There are 5 macros as in the table fill measurement, but additionally I have one macro where the table read is a standard read: no read binary, no read with table key; this macro is used for each of the table types. A sketch of both read flavours follows.)
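For illustration, a minimal sketch of an optimized read, a standard read and a possible change_found for the hashed table (names and details are assumptions, not the original code):

DATA ls_read TYPE local_short.

" optimized read: the full table key is given, so the hash access is used
READ TABLE lt_short_hash_u INTO ls_read
     WITH TABLE KEY key1 = ls_line-key1 key2 = ls_line-key2.

" standard read: WITH KEY instead of WITH TABLE KEY - no hash access,
" this is the non-optimized flavour the measurement compares against
READ TABLE lt_short_hash_u INTO ls_read
     WITH KEY key1 = ls_line-key1 key2 = ls_line-key2.

" change_found: modify the line that was just read
ls_read-data3 = ls_read-data3 + 1.
MODIFY TABLE lt_short_hash_u FROM ls_read.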



Put the measurement data in chart engine and ALV grid



The measurement data finally ends up in two tables with these columns:


  1. (only in the detailed data) "loopindex", which contains the index of the run (fixed, ranging from 1 to 10)

  2. "groupname", which contains the name of the measurement (e.g. "read_short_hashed")

  3. "runtime", which contains the overall or per-line runtime

  4. "size", which contains the number of tried inserts/appends


The detailed data (internal table lt_time) is shown in the ALV grid and the aggregated data (internal table lt_meantime) is shown in the graphics.

The ALV grid is filled via "ASSIGN COMPONENT" because the "groupname" is identical to the name of the column in the ALV grid structure. The chart engine is filled as shown in the last blog.
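A minimal sketch of this filling (the ALV table lt_alv and its line type ty_alv are assumed names for the 48-column structure described above):

FIELD-SYMBOLS: <ls_alv> TYPE ty_alv,
               <lv_col> TYPE any.

LOOP AT lt_time INTO ls_time.
  " one ALV row per run and table size; create it on first contact
  READ TABLE lt_alv ASSIGNING <ls_alv>
       WITH KEY loopindex  = ls_time-loopindex
                iterations = ls_time-size.
  IF sy-subrc <> 0.
    APPEND INITIAL LINE TO lt_alv ASSIGNING <ls_alv>.
    <ls_alv>-loopindex  = ls_time-loopindex.
    <ls_alv>-iterations = ls_time-size.
  ENDIF.
  " the groupname matches the column name, e.g. 'FILL_BINARY_SHORT'
  ASSIGN COMPONENT ls_time-groupname OF STRUCTURE <ls_alv> TO <lv_col>.
  IF sy-subrc = 0.
    <lv_col> = ls_time-runtime.
  ENDIF.
ENDLOOP.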

It should also be easy to use GRAPH_3D (as proposed by Sharad Agrawal) instead of the chart engine: the structure is simple, the examples for GRAPH_3D are nice, and it is also available before 6.40.



The coding and what else is to do



So now ...

  1. Copy and paste the content of the textbox at the end into a report

  2. Create a screen '0100'.

  3. Go to the screen flow logic and un-comment the modules so they are STATUS_0100 and USER_COMMAND_0100

  4. Add 'OKCODE' as ok-code and add a custom container (resizable and BIG) named 'CC0'

  5. In the GUI status assign some button to 'EXIT'

  6. Give the selection parameters their texts:

      1. ALLTIME - Including data-fill and loop

      2. CHNGLINE - Change read lines

      3. CHRT3D - 3D Chart

      4. CHRTLINE - Lines (instead of columns)

      5. DOLONG - test long structure size

      6. DOMEDIUM - test medium structure size

      7. DOSHORT - test short structure size

      8. DOSLOW - test non-optimized read

      9. FAX - Number of reads

      10. I1 - Table size 1

      11. I2 - Table size 2

      12. I3 - Table size 3

      13. I4 - Table size 4

      14. I5 - Table size 5

      15. I6 - Table size 6

      16. I7 - Table size 7

      17. I8 - Table size 8

      18. I9 - Table size 9

      19. KEYS - Number of keys (% table size)

      20. PERLINE - Compute per read/insert-try

      21. SHOWFILL - Compute table-fill

      22. SHOWREAD - Compute table-read

      23. TYAPSORT - standard table with sort

      24. TYBINARY - read binary insert sy-tabix

      25. TYHASH_U - hashed table with unique key

      26. TYSORT_N - sorted table with non-unique key

      27. TYSORT_U - sorted table with unique key


  7. In case I didn't forget anything: run...






The measurement results



All times are produced with the report above and are the runtime divided by the number of reads or by the number of tried inserts/appends.

My machine is a Linux 64 bit machine with NetWeaver 2004s, and for the table fill it shows

(chart image: table fill runtimes)

as expected.



The table read (optimized) shows this

(chart image: optimized table read runtimes)
and finally a comparison of the fastest and slowest optimized read (hashed and non-unique sorted) with the standard table read

(chart image: hashed and sorted non-unique vs. standard read)

And for comparison a Linux 32 bit machine with 6.40:

The table fill

(chart image: table fill runtimes, 32 bit)


The table read

(chart image: table read runtimes, 32 bit)

And the hashed and non-unique sorted against the standard read

(chart image: hashed and sorted non-unique vs. standard read, 32 bit)