Hello everyone!

This one is for everyone who has gone looking for a way around READ_TEXT, myself included. Many posts on SDN ask for an alternative to the READ_TEXT function module (alternative function module for read_text | SCN) or for a way to use it en masse. Some experienced developers, I must say, are rightly worried about its performance; a few newcomers are still working out how to call it at all (FM – READ_TEXT issue with data declaration | SCN).

I was also looking for an alternative, but all in vain. I did find one good wiki about the usage of the FM: Function Example READ_TEXT ABAP wrapper function – Enterprise Information Management – SCN Wiki. It is great, but I still had two main concerns: 1. performance, and 2. mass reading of the long texts of any object. You could read texts en masse by calling READ_TEXT in a loop, but that is exactly what kills performance, and I don't want the Basis guys cursing me! 😉

So, what I came up with was to avoid READ_TEXT altogether. Now the question is: how? You might think of a big NO, not possible! But remember:

Lots of times people say no when they don't know.

Let me assure you of one thing: I have done this, and it is already working like a charm.

All you need to do is fetch the data, first from the header table STXH and then from the line table STXL. The only question left is how to decompress the long text. That's easy and no big deal: all you need is the IMPORT statement, which reads the text lines from the data cluster stored in STXL.
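To see the core trick in isolation, here is a minimal sketch for a single text whose key you already know. The TDOBJECT/TDNAME/TDID values are placeholders for illustration; note that all SRTF2 rows of the text must go into the raw table, in order, before the IMPORT.

TYPES: BEGIN OF ty_raw,
         clustr TYPE stxl-clustr,
         clustd TYPE stxl-clustd,
       END OF ty_raw.
TYPES: BEGIN OF ty_line,
         srtf2  TYPE stxl-srtf2,
         clustr TYPE stxl-clustr,
         clustd TYPE stxl-clustd,
       END OF ty_line.
DATA: t_lines TYPE STANDARD TABLE OF ty_line,
      w_line  TYPE ty_line,
      t_raw   TYPE STANDARD TABLE OF ty_raw,
      w_raw   TYPE ty_raw,
      t_tline TYPE STANDARD TABLE OF tline.

* fetch ALL cluster rows of the text, in SRTF2 order
SELECT srtf2 clustr clustd
  INTO TABLE t_lines
  FROM stxl
  WHERE relid    = 'TX'
    AND tdobject = 'TEXT'      "placeholder: your text object
    AND tdname   = 'MY_TEXT'   "placeholder: your text name
    AND tdid     = 'ST'        "placeholder: your text ID
    AND tdspras  = sy-langu
  ORDER BY srtf2.

LOOP AT t_lines INTO w_line.
  w_raw-clustr = w_line-clustr.
  w_raw-clustd = w_line-clustd.
  APPEND w_raw TO t_raw.
ENDLOOP.

* decompress the data cluster into TLINE format
IMPORT tline = t_tline FROM INTERNAL TABLE t_raw.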

Now let's see what we have to do and how to do it. Below is the complete program; it runs 4 to 5 times faster than READ_TEXT and is as simple as anything!


*&---------------------------------------------------------------------*
*& Report  ZMA_READ_TEXT
*&---------------------------------------------------------------------*
REPORT zma_read_text.

TYPES: BEGIN OF ty_stxl,
         tdname TYPE stxl-tdname,
         clustr TYPE stxl-clustr,
         clustd TYPE stxl-clustd,
       END OF ty_stxl.
DATA: t_stxl TYPE STANDARD TABLE OF ty_stxl.
FIELD-SYMBOLS: <stxl> TYPE ty_stxl.

* compressed text data without text name
TYPES: BEGIN OF ty_stxl_raw,
         clustr TYPE stxl-clustr,
         clustd TYPE stxl-clustd,
       END OF ty_stxl_raw.
DATA: t_stxl_raw TYPE STANDARD TABLE OF ty_stxl_raw.
DATA: w_stxl_raw TYPE ty_stxl_raw.

* decompressed text
DATA: t_tline TYPE STANDARD TABLE OF tline.
FIELD-SYMBOLS: <tline> TYPE tline.

DATA: t_stxh TYPE STANDARD TABLE OF stxh,
      w_stxh TYPE stxh.

* text headers (restrict by TDOBJECT/TDID/TDNAME for your own use case)
SELECT tdname tdobject tdid
  FROM stxh
  INTO CORRESPONDING FIELDS OF TABLE t_stxh.

* select compressed text lines in blocks of 3000 (adjustable)
SELECT tdname clustr clustd
  INTO TABLE t_stxl
  FROM stxl
  PACKAGE SIZE 3000
  FOR ALL ENTRIES IN t_stxh "with application data and TDNAME
  WHERE relid    = 'TX'     "standard text
    AND tdobject = t_stxh-tdobject
    AND tdname   = t_stxh-tdname
    AND tdid     = t_stxh-tdid
    AND tdspras  = sy-langu.

  LOOP AT t_stxl ASSIGNING <stxl>.
*   decompress text
*   NB: a text longer than one STXL row (SRTF2 > 0) has several CLUSTD
*   lines, which must all be in T_STXL_RAW before IMPORT (see comments)
    CLEAR: t_stxl_raw[], t_tline[].
    w_stxl_raw-clustr = <stxl>-clustr.
    w_stxl_raw-clustd = <stxl>-clustd.
    APPEND w_stxl_raw TO t_stxl_raw.
    IMPORT tline = t_tline FROM INTERNAL TABLE t_stxl_raw.
*   access text lines for further processing
    LOOP AT t_tline ASSIGNING <tline>.
      WRITE: / <tline>-tdline.
    ENDLOOP.
  ENDLOOP.
  FREE t_stxl.
ENDSELECT.


Here is the output. I have not restricted it to any object (obviously you can, for your own needs), and boy, it pulls more than 1300 records almost instantly!

Boom!!

[Screenshot: Output Long Text.png]

There is another function module that fetches multiple texts, RETRIEVAL_MULTIPLE_TEXTS, but I haven't used it. 🙂

Now the last thing: I want to thank Mr. Julian Phillips and Mr. Thomas Zloch. I am thankful to Julian because he posted the question and to Thomas because he gave the solution, which I implemented with some additions. Here is the post I referred to: Mass reading standard texts (STXH, STXL)

I hope you will reuse this code to fetch multiple long texts. Your comments, suggestions and complaints are welcome! 😀

Note: Source code attached!


43 Comments


    1. Mansoor Ahmed Post author

Thanks for the comment. Yes, it does cause performance issues when you need to pull millions of texts at once; obviously, you won't loop READ_TEXT. Hope this makes sense! 🙂

  1. Justin Molenaur

Mansoor, very nice explanation. I had read the post you mention, Mass reading standard texts (STXH, STXL), and had success implementing this as well. Now I have another question that some more skilled ABAPers may be able to help with.

With regard to the IMPORT statement: does it work directly on the fields provided to it and convert them into the text,

    OR

    Does this actually retrieve something else from the database based on the binary types that are provided, like a mapping technique?

    For example, if I have the values in CLUSTR and CLUSTD in any given ABAP system, can I get the values or do I need to execute this in the source where the STXH/STXL table exists?

My requirement is related to SLT replication, where I am trying to convert the values extracted from ECC into SLT with the IMPORT statement. My assumption is that IMPORT is able to convert the CLUSTR and CLUSTD values outside of the source system, but now I am not sure. Maybe the IMPORT statement is just converting the binary location in CLUSTR/CLUSTD and retrieving from some other location in the system?

    Hope this makes sense.

    Regards,

    Justin

    1. Mansoor Ahmed Post author

      Hi Justin. Thanks for the comments.

As far as I know, when IMPORT reads a parameter_list from a data cluster, the data is automatically converted to the current byte order (endianness) and character representation. You can make use of the conversion options to adapt the data to the current platform.
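For illustration only, a minimal sketch of such conversion options. Whether you need them, and which combination, depends on your release and code pages; the additions below are documented for IMPORT, but this particular combination is an assumption, not a recipe:

* decompress while tolerating differences from the exporting system
IMPORT tline = t_tline
  FROM INTERNAL TABLE t_stxl_raw
  ACCEPTING PADDING             "tolerate longer/extended structures
  IGNORING CONVERSION ERRORS.   "don't dump on unconvertible characters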

      Hope this helps!

      1. Justin Molenaur

Not sure I understood you correctly there; you went over my head a bit with the first comment.

If I have one single row of STXL and extract that to another NetWeaver system, can I perform the IMPORT there, or is there some link that needs to be maintained?

        Regards,

        Justin

2. Adrián Mejido

    Hi Mansoor,

Good and very useful document!

Only one question: don't you think that using an INNER JOIN instead of FAE in the SELECT would be better for performance?

    Cheers,

    Adrián

    1. Mansoor Ahmed Post author

      Hi Adrian,

There has been a never-ending debate about whether or not to use FAE. You can try both, trace them with the Performance Analysis tool, and check which one gives you better performance. Thanks. 🙂

      1. Gareth Ryan

And I'm pretty sure that in almost all cases you'll find a join is better than FAE. Have you tried to make your code even better by changing it to use a join? 😉

        1. Thomas Zloch

          Most of the code was taken from my original post, so please allow me to explain.

          In my approach the list of single values for the TDNAME selection does not come from an STXH selection, but rather from a selection of application data like BKPF/BSEG, to name an example. Long texts for FI document items have TDNAME values concatenated from BUKRS, BELNR, GJAHR and BUZEI. You need to build these TDNAMEs in an intermediate step before accessing STXL, that’s why a direct join is not possible, and FAE is the next best option.
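To make that concrete, a rough sketch of the intermediate step (illustrative only, not from the original post; the TDOBJECT for FI item texts is typically 'DOC_ITEM', but verify the object and key layout in your system):

TYPES: BEGIN OF ty_key,
         tdname TYPE stxl-tdname,
       END OF ty_key.
DATA: t_bseg TYPE STANDARD TABLE OF bseg,
      w_bseg TYPE bseg,
      t_keys TYPE STANDARD TABLE OF ty_key,
      w_key  TYPE ty_key.

* build TDNAME = BUKRS + BELNR + GJAHR + BUZEI per FI document item
LOOP AT t_bseg INTO w_bseg.
  CONCATENATE w_bseg-bukrs w_bseg-belnr w_bseg-gjahr w_bseg-buzei
    INTO w_key-tdname.
  APPEND w_key TO t_keys.
ENDLOOP.

* ...then SELECT ... FROM stxl FOR ALL ENTRIES IN t_keys
*    WHERE tdname = t_keys-tdname (plus RELID/TDOBJECT/TDID/TDSPRAS)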

          Thomas

          1. Naveen Rondla

            Hi Thomas,

I have used your logic to read long texts, but sometimes the texts have non-English characters (French characters in my case). When they do, the program dumps at the IMPORT statement. Do you have any suggestions?


            Thanks for the assistance

            Naveen

              1. Naveen Rondla

Error:

Category       Error at ABAP Runtime
Runtime Errors IMPORT_CONTAINER_MISSING

What happened? Error in the SAP kernel: while running the current ABAP program "XXXX", the ABAP processor detected a system error.

IF T_STXL_RAW IS NOT INITIAL.
  IMPORT TLINE = T_TLINE FROM INTERNAL TABLE T_STXL_RAW. "<=== dumps here
IF T_TLINE IS NOT INITIAL.

The marked IMPORT line is where it dumps. I analyzed the text that dumps; I think special characters in words like Montréal and Québec are causing it. Other texts are loading fine.

As the text would be in hexadecimal, is there any way I could check for those special characters before passing it to the IMPORT statement?

                Thanks for the input.

  3. Sandra Rossi

    Hello,

I think the code is not entirely correct: it may dump if a standard text is very long. As you can see, T_STXL_RAW always has exactly one line before IMPORT … FROM INTERNAL TABLE, but theoretically there should be more lines if the original text is very long. I found the standard text ADRS_HEADER_LOGO_PRES provided by SAP, which is very long, and it should demonstrate the bug.

The correction is not obvious, as the loading is done by PACKAGE; consequently the last line of a package may not be the last line of a text. So we need to wait for the next package to complete the text before we can IMPORT it. That requires both a sorted read (ORDER BY) and a buffer table.

    The following code should always work. Please feel free to ask me any question if it’s unclear.

REPORT.

TYPES: BEGIN OF ty_stxl,
         relid    TYPE stxl-relid,
         tdobject TYPE stxl-tdobject,
         tdname   TYPE stxl-tdname,
         tdid     TYPE stxl-tdid,
         tdspras  TYPE stxl-tdspras,
         srtf2    TYPE stxl-srtf2,
         clustr   TYPE stxl-clustr,
         clustd   TYPE stxl-clustd,
       END OF ty_stxl.
DATA: t_stxl        TYPE STANDARD TABLE OF ty_stxl,
      t_stxl_buffer TYPE STANDARD TABLE OF ty_stxl.
FIELD-SYMBOLS: <stxl> TYPE ty_stxl.

* compressed text data without text name
TYPES: BEGIN OF ty_stxl_raw,
         clustr TYPE stxl-clustr,
         clustd TYPE stxl-clustd,
       END OF ty_stxl_raw.
DATA: t_stxl_raw TYPE STANDARD TABLE OF ty_stxl_raw.
DATA: w_stxl_raw TYPE ty_stxl_raw.

* decompressed text
DATA: t_tline TYPE STANDARD TABLE OF tline.
FIELD-SYMBOLS: <tline> TYPE tline.

DATA: t_stxh TYPE STANDARD TABLE OF stxh,
      w_stxh TYPE stxh.

TABLES stxh.
SELECT-OPTIONS s_object FOR stxh-tdobject.
SELECT-OPTIONS s_name   FOR stxh-tdname.
SELECT-OPTIONS s_id     FOR stxh-tdid.
SELECT-OPTIONS s_langu  FOR stxh-tdspras.

SELECT tdname tdobject tdid tdspras
  FROM stxh
  INTO CORRESPONDING FIELDS OF TABLE t_stxh
  WHERE tdobject IN s_object
    AND tdname   IN s_name
    AND tdid     IN s_id
    AND tdspras  IN s_langu.

DATA s_stxl         TYPE ty_stxl.
DATA l_first_tabix  TYPE sytabix.
DATA l_last_tabix   TYPE sytabix.
DATA subrc          TYPE sysubrc.
DATA process        TYPE abap_bool.
CONSTANTS package_size TYPE i VALUE 3000.

* select compressed text lines in blocks of 3000 (adjustable)
DATA cursor TYPE cursor.
OPEN CURSOR cursor FOR
SELECT relid tdobject tdname tdid tdspras srtf2 clustr clustd
  FROM stxl
  FOR ALL ENTRIES IN t_stxh "with application data and TDNAME
  WHERE relid    = 'TX'     "standard text
    AND tdobject = t_stxh-tdobject
    AND tdname   = t_stxh-tdname
    AND tdid     = t_stxh-tdid
    AND tdspras  = t_stxh-tdspras
  ORDER BY PRIMARY KEY. "<=== new

DO.
  FETCH NEXT CURSOR cursor
    APPENDING TABLE t_stxl
    PACKAGE SIZE package_size.
  subrc = sy-subrc.

  IF subrc = 4.
    IF lines( t_stxl ) > 0.
      process = abap_true.
    ELSE.
      process = abap_false.
    ENDIF.

  ELSEIF subrc = 0.
    IF lines( t_stxl ) < package_size.
      process = abap_true.
    ELSE.

*     put lines of last key aside, as there may be other lines for the same key
      DESCRIBE TABLE t_stxl LINES l_last_tabix.
      READ TABLE t_stxl INDEX l_last_tabix INTO s_stxl.
      READ TABLE t_stxl INDEX 1 ASSIGNING <stxl>.

      IF     <stxl>-relid    = s_stxl-relid
         AND <stxl>-tdobject = s_stxl-tdobject
         AND <stxl>-tdname   = s_stxl-tdname
         AND <stxl>-tdid     = s_stxl-tdid
         AND <stxl>-tdspras  = s_stxl-tdspras.

*       the whole package has the same key -> load next lines
        process = abap_false.

      ELSE.

        process = abap_true.

        l_first_tabix = l_last_tabix.
        DO.
          SUBTRACT 1 FROM l_first_tabix.
          READ TABLE t_stxl INDEX l_first_tabix ASSIGNING <stxl>.
          IF sy-subrc <> 0.
            EXIT.
          ENDIF.
          IF NOT (     <stxl>-relid    = s_stxl-relid
                   AND <stxl>-tdobject = s_stxl-tdobject
                   AND <stxl>-tdname   = s_stxl-tdname
                   AND <stxl>-tdid     = s_stxl-tdid
                   AND <stxl>-tdspras  = s_stxl-tdspras ).
            EXIT.
          ENDIF.
        ENDDO.

        ADD 1 TO l_first_tabix.
        APPEND LINES OF t_stxl FROM l_first_tabix TO l_last_tabix TO t_stxl_buffer.
        DELETE t_stxl FROM l_first_tabix TO l_last_tabix.

      ENDIF.
    ENDIF.
  ELSE.
*   can't happen
    ASSERT 0 = 1.
  ENDIF.

  IF process = abap_true.
    LOOP AT t_stxl ASSIGNING <stxl>.

      AT NEW tdspras. "new text key (TDSPRAS and all components left of it)
        REFRESH t_stxl_raw.
      ENDAT.

*     decompress text
      CLEAR w_stxl_raw.
      w_stxl_raw-clustr = <stxl>-clustr.
      w_stxl_raw-clustd = <stxl>-clustd.
      APPEND w_stxl_raw TO t_stxl_raw.

      AT END OF tdspras.
        IMPORT tline = t_tline FROM INTERNAL TABLE t_stxl_raw.
        DESCRIBE TABLE t_stxl_raw.
        FORMAT COLOR 5.
        WRITE: / 'AA', sy-tfill LEFT-JUSTIFIED,
                 <stxl>-tdobject, <stxl>-tdname, <stxl>-tdid, <stxl>-tdspras.
        FORMAT RESET.
        LOOP AT t_tline ASSIGNING <tline>.
          WRITE: / <tline>-tdline.
        ENDLOOP.
        REFRESH t_stxl_raw.
      ENDAT.

    ENDLOOP.
  ENDIF.

  t_stxl = t_stxl_buffer.
  REFRESH t_stxl_buffer. "avoid re-processing the put-aside lines twice

  IF subrc <> 0.
    EXIT.
  ENDIF.
ENDDO.

ASSERT 1 = 1. "(line for helping debug)

    1. Thomas Zloch

Nice catch, and thanks for enhancing the logic. It seems that the threshold of STXL-CLUSTR is 7902 (bytes?); after that, a new line is started.

      My use cases so far were far below this threshold, so it never occurred to me.

      Cheers

      Thomas

    2. Naveen Rondla

Thanks Sandra. I ran into the same issue you discussed: some of the texts I had to deal with were more than 7902 bytes (about an 8-page Word document), and I used your code to fix it. There were a couple of small misses, though: a CLOSE CURSOR statement, and a check that the internal table is not initial before using FAE.

  4. Gurunath Kumar Dadamu

Hi Mansoor,

Thanks for your effort. In a lot of cases ABAPers are using the READ_TEXT FM. From now on we will follow the code you have mentioned above.

    Thanks,

    Gurunath D

  5. Jelena Perfiljeva

You might also want to add an IF t_stxh[] IS NOT INITIAL check before the FOR ALL ENTRIES. Or the program could display some kind of 'No data found' message and just exit in such a case, I guess…
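For example, a sketch of that guard around the block selection from the post (the message text is just a placeholder):

IF t_stxh[] IS NOT INITIAL.  "an empty FAE table would select ALL rows!
  SELECT tdname clustr clustd
    INTO TABLE t_stxl
    FROM stxl
    PACKAGE SIZE 3000
    FOR ALL ENTRIES IN t_stxh
    WHERE relid    = 'TX'
      AND tdobject = t_stxh-tdobject
      AND tdname   = t_stxh-tdname
      AND tdid     = t_stxh-tdid
      AND tdspras  = sy-langu.
*   ...process the package as in the post...
  ENDSELECT.
ELSE.
  MESSAGE 'No text headers found' TYPE 'S'.
ENDIF.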

6. Rüdiger Plantiko

    Hi Mansoor,

the STXL storage mechanism that you proudly found out and exposed here (congratulations) is only a small part of the function module READ_TEXT. There is also:

• validation of TDID, TDOBJECT &c.
• resolving of references: a text can be stored at another location than the one your report is looking in, indicated by the fields TDREFOBJ, TDREFNAME etc.
• buffering in a CATALOG memory segment
• a transactional memory connected to the CATALOG: if the text has already been deleted earlier in this session, NOT_FOUND will be issued; if it has been copied, newly created, etc., it will be served from that memory instead of the DB
• reading from the archive is possible

    Regards,

    Rüdiger

  7. Riaz Momin

Had a similar requirement and SCN came right in to help.

Thanks a lot; your program gave me a base to tweak for my requirements.

🙂 🙂

  8. Mike Mathieson

Hi all. I hope someone is still monitoring this blog…

I'm using this technique (without Sandra's extra code) but getting a dump when the text is very long. I'm going to implement her code, but before I do, I would really like to understand:

Why does the long text dump? I think my problem is that I don't understand the IMPORT statement. I read the help, but it still doesn't seem like this should cause a dump. Can someone please explain?

    Thanks in advance,

    Mike

    1. Jelena Perfiljeva

Sandra's comment already explains the issue. I'm not sure what sort of short dump the long text is causing exactly, but you won't always find this kind of information in the help; it could be a generic dump, like a conversion or out-of-memory error, and not necessarily specific to the IMPORT command.

IMPORT converts the data cluster into "human readable" format. If you look at an STXL record in SE16, you'll see some kind of gibberish in the CLUSTD field. It's the "raw" data that needs to be converted to be readable.

      1. Mike Mathieson

        We are getting IMPORT_CONTAINER_MISSING dump in line 2523…

2512       REFRESH: lt_stxl_raw, lt_stxl_out, lt_tline.
2513
2514       wa_stxl_raw-clustr = <stxl>-clustr.
2515       wa_stxl_raw-clustd = <stxl>-clustd.
2516       APPEND wa_stxl_raw TO lt_stxl_raw.
2517
2518 * Decompress text. The trick is to pretend to import it from a DB.
2519       IMPORT tline = lt_tline FROM INTERNAL TABLE lt_stxl_raw.
2520
2521 * read the 1st decompressed text line and fill the short text of
2522 * output text tab, then append the tab.
>>>>>      READ TABLE lt_tline ASSIGNING <tline> INDEX 1.
2524       wa_stxl_out-stext = <tline>-tdline.
2525       APPEND wa_stxl_out TO lt_stxl_out.

        And it is occurring when processing long text (> 7902 bytes).  So, while I haven’t duplicated the error yet, I’m fairly certain/hopeful this is the problem Sandra solves with her code. 

My problem is that I don't see what she has done to prevent the dump, and in fact I don't see why the dump happens in the first place. Keep in mind, I'm not doubting the solution; I just want to understand why it occurs. I could possibly see why the dump would occur on the IMPORT statement, but not the READ. Sandra's IMPORT/READ loop looks identical to mine, so I don't understand how this solves the problem.

        Mike

          1. Mike Mathieson

I'm using a SELECT. But I've figured it out. In case anyone needs more explanation of something most readers of this blog may already know (unless you're new to the IMPORT statement, like me), from the code above; see the sketch after this list:

1. T_STXL_RAW must contain all the STXL entries for the text you are trying to read. Most of the time that is only one entry, but if the text is over 7902 bytes you will have multiple entries. If you don't have all the entries, the IMPORT will dump.
2. T_STXL_RAW must have its entries in the same order as they are listed in STXL. If not, the program will dump.
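In code terms, a sketch of those two points, reusing the declarations from the blog post and assuming t_stxl was selected ORDER BY PRIMARY KEY so the SRTF2 rows of each text arrive in sequence:

LOOP AT t_stxl ASSIGNING <stxl>.
  w_stxl_raw-clustr = <stxl>-clustr.
  w_stxl_raw-clustd = <stxl>-clustd.
  APPEND w_stxl_raw TO t_stxl_raw.
  AT END OF tdname. "last STXL row of this text: container is now complete
    IMPORT tline = t_tline FROM INTERNAL TABLE t_stxl_raw.
    REFRESH t_stxl_raw.
  ENDAT.
ENDLOOP.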

            Mike

  9. Dennis Keßler

    Hi,


I just read your post concerning the alternative to the function module READ_TEXT.

I have an issue concerning reading and saving a text and bringing it to an Adobe form: I have to store texts with more than 132 characters in one row. Do you have any idea how to achieve this?


    Thanks in advance

  10. Naimesh Patel

Also, the logic doesn't handle the TDREF fields, as Rüdiger Plantiko mentioned in his comment.

The reference text is a critical part of the logic, and it is widely used in many standard transactions where a text just flows from one "object" to another, like the shipping text for a customer flowing to the sales order and on to the delivery. If you use the same logic on a delivery, it will not work, because the text is being used as a reference (STXH-TDREF).

By reading STXH and STXL directly, you should be aware that it may not give you the correct output for certain scenarios.
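A rough sketch of a fallback (the reference field names follow Rüdiger's comment; treat them, the placeholder key values and the lookup itself as assumptions to verify against the STXH definition in your system):

DATA: l_refobj  TYPE stxh-tdrefobj,
      l_refname TYPE stxh-tdrefname,
      l_refid   TYPE stxh-tdrefid.

* if the header points to a referenced text, re-read with that key
SELECT SINGLE tdrefobj tdrefname tdrefid
  FROM stxh
  INTO (l_refobj, l_refname, l_refid)
  WHERE tdobject = 'VBBK'        "placeholder: your text object
    AND tdname   = '0080000001'  "placeholder: your text name
    AND tdid     = '0001'        "placeholder: your text ID
    AND tdspras  = sy-langu.
IF sy-subrc = 0 AND l_refname IS NOT INITIAL.
* ...select STXL using l_refobj/l_refname/l_refid instead...
ENDIF.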

    Thanks,
    Naimesh Patel

