Former Member

Alternative to READ_TEXT Function Module (No more FM needed)

Hello everyone!

This one is for all of those, and for me too. Most of the posts I have seen on SDN are people asking for an alternative to READ_TEXT (alternative function module for read_text | SCN) or for mass usage of READ_TEXT. Some good, or rather better, developers, I must say, are worried about performance issues. A few newbies are still looking for the basic usage of READ_TEXT. Lol. (FM – READ_TEXT issue with data declaration | SCN)

I was also looking for some alternative solution, but all in vain. I found one good wiki about the usage of the FM: Function Example READ_TEXT ABAP wrapper function – Enterprise Information Management – SCN Wiki. It is great, but I had two main concerns: 1. performance, and 2. mass reading of the long texts of any object. There is another way to achieve a mass read of long texts by calling READ_TEXT in a loop (lol, that's funny), but I don't need that either, because I need performance. I don't want the Basis guys cursing me! 😉

So, what I came up with was to avoid READ_TEXT altogether. Now the question is: HOW? You might think a big NO! Not possible! But remember:

Lots of times people say no when they don't know.

Let me assure you of one thing: I have done this, and it is already working like a charm.

All you need to do is fetch the data, first from the header table STXH and then from the line item table STXL. The only question left is: how do you decompress the long text? Well, that's pretty easy and not a big deal; all you need is the IMPORT statement.

Now let's see what we have to do and how to do it. Below is the code; it runs 4 to 5 times faster than READ_TEXT and is as simple as anything!


*&---------------------------------------------------------------------*
*& Report  ZMA_READ_TEXT
*&
*&---------------------------------------------------------------------*
*&
*&
*&---------------------------------------------------------------------*
REPORT  ZMA_READ_TEXT.
TYPES: BEGIN OF TY_STXL,
          TDNAME TYPE STXL-TDNAME,
          CLUSTR TYPE STXL-CLUSTR,
          CLUSTD TYPE STXL-CLUSTD,
        END OF TY_STXL.
DATA:  T_STXL TYPE STANDARD TABLE OF TY_STXL.
FIELD-SYMBOLS: <STXL> TYPE TY_STXL.
* compressed text data without text name
TYPES: BEGIN OF TY_STXL_RAW,
          CLUSTR TYPE STXL-CLUSTR,
          CLUSTD TYPE STXL-CLUSTD,
        END OF TY_STXL_RAW.
DATA:  T_STXL_RAW TYPE STANDARD TABLE OF TY_STXL_RAW.
DATA:  W_STXL_RAW TYPE TY_STXL_RAW.
* decompressed text
DATA:  T_TLINE TYPE STANDARD TABLE OF TLINE.
FIELD-SYMBOLS: <TLINE> TYPE TLINE.
DATA: T_STXH TYPE STANDARD TABLE OF STXH,
       W_STXH TYPE STXH.
* read the text headers first (restrict by TDOBJECT/TDID/TDNAME as needed)
SELECT TDNAME TDOBJECT TDID
   FROM STXH
     INTO CORRESPONDING FIELDS OF TABLE T_STXH.
* and then
* select compressed text lines in blocks of 3000 (adjustable)
SELECT TDNAME CLUSTR CLUSTD
        INTO TABLE T_STXL
        FROM STXL
        PACKAGE SIZE 3000
        FOR ALL ENTRIES IN T_STXH "WITH APPLICATION DATA AND TDNAME
        WHERE RELID    = 'TX'          "standard text
          AND TDOBJECT = T_STXH-TDOBJECT
          AND TDNAME   = T_STXH-TDNAME
          AND TDID     = T_STXH-TDID
          AND TDSPRAS  = SY-LANGU.
   LOOP AT T_STXL ASSIGNING <STXL>.
*   decompress text
     CLEAR: T_STXL_RAW[], T_TLINE[].
     W_STXL_RAW-CLUSTR = <STXL>-CLUSTR.
     W_STXL_RAW-CLUSTD = <STXL>-CLUSTD.
     APPEND W_STXL_RAW TO T_STXL_RAW.
     IMPORT TLINE = T_TLINE FROM INTERNAL TABLE T_STXL_RAW.
*  access text lines for further processing
     LOOP AT T_TLINE ASSIGNING <TLINE>.
       WRITE: / <TLINE>-TDLINE.
     ENDLOOP.
   ENDLOOP.
   FREE T_STXL.
ENDSELECT.


Here is the output. I have not restricted it to any object (obviously you can do that for your own needs), and boy, it pulls more than 1,300 records in the blink of an eye!

Boom!!

(Screenshot: Output Long Text.png)

There is another Function Module to fetch multiple texts: RETRIEVAL_MULTIPLE_TEXTS but I haven’t used it. 🙂

Now for the last thing: I want to thank Mr. Julian Phillips and Mr. Thomas Zloch. Thanks to Julian because he posted the question, and to Thomas because he gave the solution. I implemented the same solution with some additions. Here is the post I referred to: Mass reading standard texts (STXH, STXL)

I hope you will reuse this code to fetch multiple long texts. Your comments, suggestions and complaints are welcome! 😀

Note: Source code attached!


53 Comments
Praveen Nenawa

Hi Mansoor,

Nice shot!!!!
Will use it for sure the next time. As of now, bookmarked 😉.

Thanks for sharing.

Praveen

Former Member (Blog Post Author)

      Thanks for the appreciation. Do let me know when you use it. You can also add some filters to the WHERE clause to restrict your query.

YUNUS KALDIRIM

      Hi Mansoor,

      Nice investigation. Thanks for your kind share about read_text.

Former Member (Blog Post Author)

Thank you, YUNUS KALDIRIM. Keep sharing! 🙂

Bikas Tarway

      Hi Mansoor,

      Thanks for your blog.

I never knew READ_TEXT could also have performance issues 😯

      Thanks

      Bikas

Former Member (Blog Post Author)

Thanks for the comment. Yes, it does give performance issues when you need to pull millions of texts at once. Obviously, you won't loop READ_TEXT. Hope this makes sense! 🙂

Former Member

Mansoor, very nice explanation. I had read the post you mention, Mass reading standard texts (STXH, STXL), and had success implementing this as well. Now I have another question that some more skilled ABAPers may be able to help with.

With regard to the IMPORT statement, does it work directly on the fields provided to it and convert them into the text,

      OR

or does it actually retrieve something else from the database based on the binary values that are provided, like a mapping technique?

For example, if I have the values of CLUSTR and CLUSTD in any given ABAP system, can I get the text values, or do I need to execute this in the source system where the STXH/STXL tables exist?

My requirement is related to SLT replication, where I am trying to convert the values that are extracted from ECC into SLT with the IMPORT statement. My assumption is that IMPORT is able to convert the CLUSTR and CLUSTD values outside of the source system, but now I am not sure. Maybe the IMPORT statement is just converting the binary location in CLUSTR/CLUSTD and retrieving from some other location in the system?

      Hope this makes sense.

      Regards,

      Justin

Former Member (Blog Post Author)

      Hi Justin. Thanks for the comments.

As far as I know, when IMPORT reads a data cluster, the data is automatically converted to the current byte order (endianness) and character representation. You can make use of the conversion-option additions to adapt to the current platform.

      Hope this helps!

Former Member

Not sure I understood you correctly there; you went over my head a bit with the first comment.

If I have one single row of STXL and extract it to another NetWeaver system, can I perform the IMPORT there, or is there some link that needs to be maintained?

      Regards,

      Justin

Former Member (Blog Post Author)

You can use the conversion-option additions of the IMPORT statement. That should work.

Adrián Mejido

      Hi Mansoor,

Good and very useful document!!

Only one question: don't you think that using an INNER JOIN instead of FAE in the SELECT would be better for performance?

      Cheers,

      Adrián

Former Member (Blog Post Author)

      Hi Adrian,

There has been a never-ending debate about whether or not to use FAE. But you can use it, trace it with the Performance Analysis tool, and check which one gives you better performance. Thanks. 🙂

Former Member

And I'm pretty sure that in almost all cases you'll find a join is better than FAE. Have you tried to make your code even better by changing it to use a join? 😉

Former Member (Blog Post Author)

Yeah, absolutely! That's it! I was just telling Adrián Mejido to check it and come back to me 😉

Thomas Zloch

      Most of the code was taken from my original post, so please allow me to explain.

      In my approach the list of single values for the TDNAME selection does not come from an STXH selection, but rather from a selection of application data like BKPF/BSEG, to name an example. Long texts for FI document items have TDNAME values concatenated from BUKRS, BELNR, GJAHR and BUZEI. You need to build these TDNAMEs in an intermediate step before accessing STXL, that's why a direct join is not possible, and FAE is the next best option.

      Thomas

Former Member

      Hi Thomas,

I have used your logic to read long texts. But sometimes the texts I get have non-English characters (French characters in my case). When they occur, the program dumps at the IMPORT statement. Do you have any suggestions?


      Thanks for the assistance

      Naveen

Thomas Zloch

      No practical experience with other languages. Please specify the dump: title, what happened, error analysis.

      Thomas

Former Member

Error:

Category: Error at ABAP Runtime
Runtime Errors: IMPORT_CONTAINER_MISSING

What happened? Error in the SAP kernel: in the current ABAP program "XXXX" the ABAP processor detected a system error.

IF T_STXL_RAW IS NOT INITIAL.
  IMPORT TLINE = T_TLINE FROM INTERNAL TABLE T_STXL_RAW.  "<=== dump occurs here
  IF T_TLINE IS NOT INITIAL.

The marked line is where it dumps. I analyzed the texts on which it dumps; I think special characters in words like Montréal and Québec are causing it. Other texts load fine.

As the text is stored in hexadecimal, is there any way I could check for those special characters before passing it to the IMPORT statement?

Thanks for the input.

abilash n

Very nice research, Mansoor Ahmed... Keep it up.....

Former Member (Blog Post Author)

      Thank you abilash n

Sandra Rossi

      Hello,

EDIT March 1st, 2019: here is a new version of the code at https://gist.github.com/sandraros/8b54e98166809fdb7ae39b1c11a846fe. It's made as a local interface and classes, which should make it easier to process each long text with your own implementation. It also includes the JOIN on STXH and STXL instead of FOR ALL ENTRIES, as proposed in Kenneth's comment, and the corrections proposed by Former Member and Jacques.

Also, you now have the newer function modules READ_MULTIPLE_TEXTS and READ_TEXT_TABLE, as explained in note 2261311 – Function module for reading multiple SAPscript texts.

I think the code is not entirely correct. It may dump in case a standard text is very long. As you can see, T_STXL_RAW always contains exactly one line before IMPORT … FROM INTERNAL TABLE, but theoretically there should be more lines if the original text is very long. I found the standard text ADRS_HEADER_LOGO_PRES provided by SAP, which is very long, and it demonstrates the bug.

       

The correction is not obvious, as the loading is done by PACKAGE; consequently, the last line of a package may not be the last line of a text. So we need to wait for the next package to complete the text before we can run IMPORT. That requires both a sorted read (ORDER BY) and a buffer table.

       

The following code should always work. Please feel free to ask me any questions if it's unclear.

      IMPORTANT: please use the NEW code in the GIST mentioned ABOVE (in the EDIT at the top of the comment). 

       

      REPORT.
      TYPES: BEGIN OF ty_stxl,
                relid     TYPE stxl-relid,
                tdobject  TYPE stxl-tdobject,
                tdname    TYPE stxl-tdname,
                tdid      TYPE stxl-tdid,
                tdspras   TYPE stxl-tdspras,
                srtf2     TYPE stxl-srtf2,
                clustr    TYPE stxl-clustr,
                clustd    TYPE stxl-clustd,
              END OF ty_stxl.
      DATA: t_stxl        TYPE STANDARD TABLE OF ty_stxl,
            t_stxl_buffer TYPE STANDARD TABLE OF ty_stxl.
      FIELD-SYMBOLS: <stxl> TYPE ty_stxl.
      * compressed text data without text name
      TYPES: BEGIN OF ty_stxl_raw,
                clustr TYPE stxl-clustr,
                clustd TYPE stxl-clustd,
              END OF ty_stxl_raw.
      DATA:  t_stxl_raw TYPE STANDARD TABLE OF ty_stxl_raw.
      DATA:  w_stxl_raw TYPE ty_stxl_raw.
      * decompressed text
      DATA:  t_tline TYPE STANDARD TABLE OF tline.
      FIELD-SYMBOLS: <tline> TYPE tline.
      DATA: t_stxh TYPE STANDARD TABLE OF stxh,
             w_stxh TYPE stxh.
      TABLES stxh.
      SELECT-OPTIONS s_object FOR stxh-tdobject.
      SELECT-OPTIONS s_name   FOR stxh-tdname.
      SELECT-OPTIONS s_id     FOR stxh-tdid.
      SELECT-OPTIONS s_langu  FOR stxh-tdspras.
      
      SELECT tdname tdobject tdid tdspras
          FROM stxh
            INTO CORRESPONDING FIELDS OF TABLE t_stxh
          WHERE tdobject IN s_object
            AND tdname   IN s_name
            AND tdid     IN s_id
            AND tdspras  IN s_langu.
      
      DATA s_stxl         TYPE ty_stxl.
      DATA l_first_tabix  TYPE sy-tabix.
      DATA l_last_tabix   TYPE sy-tabix.
      DATA subrc          TYPE sy-subrc.
      DATA process        TYPE abap_bool.
      CONSTANTS package_size TYPE i VALUE 3000.
      
      * select compressed text lines in blocks of 3000 (adjustable)
      DATA cursor TYPE cursor.
      OPEN CURSOR cursor FOR
      SELECT relid tdobject tdname tdid tdspras srtf2 clustr clustd
              FROM stxl
              FOR ALL ENTRIES IN t_stxh "WITH APPLICATION DATA AND TDNAME
              WHERE relid    = 'TX'          "standard text
                AND tdobject = t_stxh-tdobject
                AND tdname   = t_stxh-tdname
                AND tdid     = t_stxh-tdid
                AND tdspras  = t_stxh-tdspras
              ORDER BY PRIMARY KEY. "<=== new
      
      DO.
        FETCH NEXT CURSOR cursor
                APPENDING TABLE t_stxl
                PACKAGE SIZE package_size.
        subrc = sy-subrc.
      
        IF subrc = 4.
          IF lines( t_stxl ) > 0.
            process = abap_true.
          ELSE.
            process = abap_false.
          ENDIF.
      
        ELSEIF subrc = 0.
          IF lines( t_stxl ) < package_size.
            process = abap_true.
          ELSE.
      
            " put lines of last key aside, as there may be other lines for the same key
            DESCRIBE TABLE t_stxl LINES l_last_tabix.
            READ TABLE t_stxl INDEX l_last_tabix INTO s_stxl.
            READ TABLE t_stxl INDEX 1 ASSIGNING <stxl>.
      
            IF <stxl>-relid    = s_stxl-relid
                  AND <stxl>-tdobject = s_stxl-tdobject
                  AND <stxl>-tdname   = s_stxl-tdname
                  AND <stxl>-tdid     = s_stxl-tdid
                  AND <stxl>-tdspras  = s_stxl-tdspras.
      
              " The whole package has same key -> load next lines
      
              process = abap_false.
      
            ELSE.
      
              process = abap_true.
      
              l_first_tabix = l_last_tabix.
              DO.
                SUBTRACT 1 FROM l_first_tabix.
                READ TABLE t_stxl INDEX l_first_tabix ASSIGNING <stxl>.
                IF sy-subrc <> 0.
                  EXIT.
                ENDIF.
                IF NOT ( <stxl>-relid    = s_stxl-relid
                     AND <stxl>-tdobject = s_stxl-tdobject
                     AND <stxl>-tdname   = s_stxl-tdname
                     AND <stxl>-tdid     = s_stxl-tdid
                     AND <stxl>-tdspras  = s_stxl-tdspras ).
                  EXIT.
                ENDIF.
              ENDDO.
      
              ADD 1 TO l_first_tabix.
              APPEND LINES OF t_stxl FROM l_first_tabix TO l_last_tabix TO t_stxl_buffer.
              DELETE t_stxl FROM l_first_tabix TO l_last_tabix.
      
            ENDIF.
          ENDIF.
        ELSE.
          " can’t happen
          ASSERT 0 = 1.
        ENDIF.
      
        IF process = abap_true.
          LOOP AT t_stxl ASSIGNING <stxl>.
      
            AT NEW tdspras.
              REFRESH t_stxl_raw.
            ENDAT.
      
            " decompress text
            CLEAR w_stxl_raw.
            w_stxl_raw-clustr = <stxl>-clustr.
            w_stxl_raw-clustd = <stxl>-clustd.
            APPEND w_stxl_raw TO t_stxl_raw.
      
            AT END OF tdspras.
              IMPORT tline = t_tline FROM INTERNAL TABLE t_stxl_raw.
              DESCRIBE TABLE t_stxl_raw.
              FORMAT COLOR 5.
              WRITE: / 'AA', sy-tfill LEFT-JUSTIFIED, <stxl>-tdobject, <stxl>-tdname, <stxl>-tdid, <stxl>-tdspras.
              FORMAT RESET.
              LOOP AT t_tline ASSIGNING <tline>.
                WRITE: / <tline>-tdline.
              ENDLOOP.
              REFRESH t_stxl_raw.
            ENDAT.
      
          ENDLOOP.
        ENDIF.
      
        t_stxl = t_stxl_buffer.
        CLEAR t_stxl_buffer.
      
        IF subrc <> 0.
          EXIT.
        ENDIF.
      ENDDO.
      
      ASSERT 1 = 1. "(line for helping debug)
      
Thomas Zloch

      Nice catch, and thanks for enhancing the logic. It seems that the threshold of STXL-CLUSTR is 7902 (bytes?), then a new line is started.

      My use cases so far were far below this threshold, so it never occurred to me.

      Cheers

      Thomas

abilash n

      Thanks Sandra for amazing catch.

Thomas Zloch

(image attachment)

abilash n

      Nice pic corresponding to my comment LOL. 🙂 🙂 🙂 🙂

Former Member

Thanks Sandra. I ran into the same issue you discussed. Some of the texts I had to deal with were more than 7902 bytes (about an 8-page Word document). I used your code to fix it. There were a couple of small misses: one is the missing CLOSE CURSOR statement, and the other is checking that the internal table is not initial before using FAE (sketched below).

Jacques Nomssi

      Also, the buffer table t_stxl_buffer must be initialized before usage.

Kenneth Eriksen

I know this is an old post, but just in case anyone is interested: this code is just what I needed for a prototype I am working on! I've been playing around with the code, and I had problems getting it to run on our HANA system. The select on STXL would take a long time and then short dump.

I think this is caused by the "FOR ALL ENTRIES IN" statement. I therefore decided to replace the two SELECTs on STXH and STXL with one, like so (the WHERE clause is a bit different from the original, as I have different requirements):

OPEN CURSOR cursor FOR
  SELECT l~relid l~tdobject l~tdname l~tdid l~tdspras l~srtf2 l~clustr l~clustd
    FROM ( stxl AS l
           JOIN stxh AS h
             ON l~tdobject = h~tdobject AND
                l~tdname   = h~tdname   AND
                l~tdid     = h~tdid     AND
                l~tdspras  = h~tdspras )
    WHERE l~relid    = 'TX'
      AND h~tdobject = l_tdobject
      AND h~tdname   LIKE l_tdname
      AND h~tdid     = l_tdid
      AND h~tdspras  = l_langu
    ORDER BY l~relid l~tdobject l~tdname l~tdid l~tdspras l~srtf2.

      Also, I believe that the buffer table must be cleared once used:

      Change this:
      t_stxl = t_stxl_buffer.

      to this:
      t_stxl = t_stxl_buffer.
      REFRESH t_stxl_buffer.

Sandra Rossi

Thanks Kenneth. I have added a GIST to my post, in which I included the JOIN and the correction, and a better, object-oriented logic.

Sudhakar Devaraya

      Hi,

It seems that OPEN CURSOR has a limit of 17 open cursors.

I am not able to use it. Please suggest!
Sandra Rossi

Your question is not related to my post; please ask it in the SAP forum (and please explain in that question why you leave so many cursors open).

Former Member

Hi Mansoor,

Thanks for your effort. In a lot of cases ABAPers are using the READ_TEXT FM. From now on we will follow the code you have posted above.

      Thanks,

      Gurunath D

Thomas Zloch

      It's fine to use READ_TEXT. The approach described here is meant for performance-critical mass processing.

      Thomas

Jelena Perfiljeva

You might also want to add an IF t_stxh[] IS NOT INITIAL check before the 'FOR ALL ENTRIES', as sketched below. Or the program could display some kind of 'No data found' message and just exit in that case, I guess...

Rüdiger Plantiko

      Hi Mansoor,

the STXL storage mechanism that you proudly found out and exposed here (congratulations) is only a small part of the function module READ_TEXT, which also provides the following (a plain READ_TEXT call is sketched after the list):

• validation of TDID, TDOBJECT etc.
• resolving of references: a text can be stored at another location than the one your report is looking for, indicated by the fields TDREFOBJ, TDREFNAME etc.
• buffering in a CATALOG memory segment
• a transactional memory connected to the CATALOG: if the text has already been deleted earlier in this session, NOT_FOUND will be issued; if it has been copied, newly created, etc., it will be served from memory instead of the DB
• reading from archive is possible

      Regards,

      Rüdiger

Darshak Kathiriya

      Hi Mansoor,

Good post. I have tried RETRIEVAL_MULTIPLE_TEXTS, which you mentioned, to read batch short texts, and it is working fine!

      Thanks,

      Darshak

Former Member

      Wow! Very useful! Thanks for sharing!!!

Former Member

Had a similar requirement and SCN came straight in to help.

Thanks a lot, as your program gave me a base to tweak for my requirements.

      🙂 🙂

SOMENDRA SHUKLA

Nice alternative to READ_TEXT and excellent use of field symbols and the IMPORT statement.

Mike Mathieson

Hi all. I hope someone is still monitoring this blog...

I'm using this technique (without Sandra's extra code) but getting a dump when the text is very long. I'm going to implement her code, but before I do, I would really like to understand...

Why does the long text dump? I think my problem is that I don't understand the IMPORT statement. I read the help, but it still doesn't seem like this should cause a dump. Can someone please explain?

      Thanks in advance,

      Mike

Jelena Perfiljeva

Sandra's comment already explains the issue. I'm not sure what sort of short dump the long text is causing exactly, but you won't always find this kind of information in the help. It could be a generic dump, like a conversion error or out of memory. It is not necessarily specific to the IMPORT command.

IMPORT converts a data cluster into "human readable" format. If you look at an STXL record in SE16 you'll see some kind of gibberish in the CLUSTD field. It's the "raw" data that needs to be converted to be readable.

Mike Mathieson

We are getting an IMPORT_CONTAINER_MISSING dump in line 2523...

2512       REFRESH: lt_stxl_raw, lt_stxl_out, lt_tline.
2513
2514       wa_stxl_raw-clustr = <stxl>-clustr.
2515       wa_stxl_raw-clustd = <stxl>-clustd.
2516       APPEND wa_stxl_raw TO lt_stxl_raw.
2517
2518 * Decompress text. The trick is to pretend to import it from a DB.
2519       IMPORT tline = lt_tline FROM INTERNAL TABLE lt_stxl_raw.
2520
2521 * read the 1st decompressed text line and fill the short text of
2522 * output text tab, then append the tab.
>>>>>       READ TABLE lt_tline ASSIGNING <tline> INDEX 1.
2524       wa_stxl_out-stext = <tline>-tdline.
2525       APPEND wa_stxl_out TO lt_stxl_out.

And it occurs when processing long texts (> 7902 bytes). So, while I haven't duplicated the error yet, I'm fairly certain/hopeful this is the problem Sandra solves with her code.

My problem is that I don't see what she has done to prevent the dump, and in fact I don't see why the dump happens in the first place. Keep in mind, I'm not doubting the solution; I just want to understand why it occurs. I could possibly see why the dump would occur on the IMPORT statement, but not the READ. Sandra's IMPORT-READ (LOOP) looks identical to mine, so I don't understand how this solves the problem?

      Mike

Former Member

      Mike

Are you using a SELECT or a cursor statement to get the data from the STXL table?

Mike Mathieson

I'm using a SELECT. But I've figured it out. In case anyone needs more explanation of something most readers of this blog may already know (unless you're new to the IMPORT statement like me), from the code above (a sketch follows the list):

1. T_STXL_RAW must contain all the STXL entries for the text you are trying to read. Most of the time that is only one entry, but if the text is over 7902 bytes, you will have multiple entries. If you don't have all the entries, the IMPORT will dump.
2. T_STXL_RAW must have its entries in the same order as they are listed in STXL. If not, the program will dump.

      Mike

Bob Ackerman

Thanks Mike, I was just reading this blog because I was performance-tuning a program that uses READ_TEXT in a couple of places. Using the tip from the original poster, I was able to get faster results. But then in testing I got dumps. Your post about how T_STXL_RAW needs to be prepared for the IMPORT to work properly was a huge help. Thanks again.

      Bob

Dennis Keßler

      Hi,


I just read your post concerning the alternative to the function module READ_TEXT.

I have an issue concerning reading and saving a text and bringing it to an Adobe form.

I have to store texts with more than 132 characters in one row. Do you have any idea how to achieve this?


      Thanks in advance

Richard Harper

      Stick to the standard line length and concatenate your texts.

Former Member

      Mansoor,

Thanks for such valuable information.

Naimesh Patel

Also, the logic doesn't handle TDREF, as Rüdiger Plantiko mentioned in his comment.

The reference text is a critical part of the logic, and it is widely used in many standard transactions where a text just flows from one "object" to another, like the shipping text for a customer going from the sales order to the delivery. If you use the same logic on the delivery, it will not work, as the text is being used as a reference (STXH-TDREF).

By reading STXH and STXL directly, you should know that it may not give you the correct output in certain scenarios.

      Thanks,
      Naimesh Patel

Simone Cattozzi

Now you can read faster by removing the FOR ALL ENTRIES; see blog.

       

Hammad Sharif

Very helpful, thanks a lot, Mansoor.

       

Thiago Felipe Tkac

With the help of the SAP Note below (from 2016), 2 new FMs were created to solve this performance issue:

      READ_MULTIPLE_TEXTS

      READ_TEXT_TABLE

      #2261311 - Function module for reading multiple SAPscript texts

       

Adnan Dodmani

What changes should I make to fetch the long text related to a Q-info record?