
What’s it all about?

This is just a short blog for me to share a solution to a common problem when you’re dealing with large volumes of data, where you can’t read it all at once from the database.

An ancient curse upon these cursors!

The old way of doing it was with a cursor:

DATA cursor TYPE cursor.
OPEN CURSOR WITH HOLD cursor FOR
    SELECT stuff FROM database_table WHERE condition.
DO.
  FETCH NEXT CURSOR cursor INTO TABLE internal_table PACKAGE SIZE package_size.
  IF sy-subrc IS NOT INITIAL.
    EXIT.
  ENDIF.
  " Do something
  ...
ENDDO.

The problem is that at the point of “Do something”, if there’s an implicit database commit, a COMMIT WORK or a database rollback, you get an invalid cursor dump: DBSQL_INVALID_CURSOR.

BI extractors often use cursors in this way – the sample function module provided suggests it. I’ve found you can implement parallel processing in such an extractor using CALL FUNCTION … STARTING NEW TASK – and it will work. But you get dumps when testing in RSA3, since some kind of commit is going on there.
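For what it’s worth, the parallel call looked something like this – a sketch only, where Z_SEND_PACKAGE is a hypothetical RFC-enabled function module and on_task_done a hypothetical callback method in the calling class:

```abap
" Fire off the current package in a new task - names are placeholders
DATA lv_task TYPE char12.
lv_task = |PKG{ sy-index }|.

CALL FUNCTION 'Z_SEND_PACKAGE'
  STARTING NEW TASK lv_task
  CALLING me->on_task_done ON END OF TASK
  EXPORTING
    it_package = internal_table.
```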

Reading the help for OPEN CURSOR, there’s a hint of what to do. The help says:

If the addition WITH HOLD is specified, the database cursor is not closed by a database commit executed using Native SQL.

The addition WITH HOLD can be used only in reads performed on the standard database. It cannot be specified together with the addition CONNECTION.

A Native SQL database commit can be performed using the DB_COMMIT function module, for example.
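So, per the help, a WITH HOLD cursor should survive a commit performed at the database level via DB_COMMIT. A minimal sketch, reusing the placeholder names from the snippet above:

```abap
DATA cursor TYPE cursor.
OPEN CURSOR WITH HOLD cursor FOR
    SELECT stuff FROM database_table WHERE condition.
DO.
  FETCH NEXT CURSOR cursor INTO TABLE internal_table PACKAGE SIZE package_size.
  IF sy-subrc IS NOT INITIAL.
    EXIT.
  ENDIF.
  " Do something
  " Native SQL commit - the WITH HOLD cursor stays open
  CALL FUNCTION 'DB_COMMIT'.
ENDDO.
```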

A modern curse on these cursors

So this gave me a hint. The modern (and by this I probably mean post-1995 or something) way of reading large volumes of data from the database in packages is to use something like:

SELECT stuff FROM database_table INTO TABLE internal_table
       PACKAGE SIZE package_size.
" Do something
ENDSELECT.

Now, if a commit occurs in “Do something” you’ll get an invalid cursor dump – this time it’s SAPSQL_SQLS_INVALID_CURSOR. I haven’t found any documentation that explains exactly when you get these dumps. You certainly do if COMMIT WORK is in the “Do something”, and apparently also if there are implicit database commits.

The fundamental interconnectedness of all things

Remember the hint above from the keyword help? That second paragraph talks about the addition CONNECTION.

I’m currently developing a solution that extracts large amounts of data and sends it via proxies to PI. The problem was that the proxy call was causing SAPSQL_SQLS_INVALID_CURSOR dumps. It occurred to me that the proxy call must be making some kind of commit (implicit or otherwise).

I read up a little about what can come after CONNECTION. Normally (or often?) you would use this to connect to a secondary database defined in DBCON. But if you specify, say, CONNECTION r/3*matt, this creates a second connection to your normal database server. What’s important is the r/3* – what comes after the asterisk identifies a unique connection. Call it anything you like.

So now we have:

SELECT stuff FROM database_table CONNECTION r/3*matt
       INTO TABLE internal_table
       PACKAGE SIZE package_size.
  " Do something
ENDSELECT.

A fall in the final furlong

Yes! Now my cursor is protected. Or so I thought…

You see, after my call to the proxy, I’d really like to do a database commit. So I added COMMIT WORK after the call… and hey presto – the dump is back.

I had another think and read the documentation for COMMIT. There’s a version of this command, COMMIT CONNECTION, which is used to – as you’d expect – commit database operations on the specified connection. The thing is, I want a commit on the standard database.

From the help on OPEN CURSOR, I could use FM DB_COMMIT, but a closer look at the documentation for COMMIT CONNECTION shows I can simply use COMMIT CONNECTION default.
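Putting the pieces together, the pattern ends up like this (same placeholder names as above):

```abap
SELECT stuff FROM database_table CONNECTION r/3*matt
       INTO TABLE internal_table
       PACKAGE SIZE package_size.
  " Do something, e.g. the proxy call
  " Commit on the standard connection - the cursor on r/3*matt is unaffected
  COMMIT CONNECTION default.
ENDSELECT.
```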

And that was it – problem solved, everyone happy. Or so I thought…

A curse on FOR ALL ENTRIES

I’ve now discovered that the data being extracted has a SELECT using FOR ALL ENTRIES. On BSEG. And the help for SELECT … PACKAGE SIZE says:

If the addition PACKAGE SIZE is specified with the addition FOR ALL ENTRIES, all selected rows are read initially into an internal system table and the packages are only created when they are passed from the system table to the actual target table. The addition FOR ALL ENTRIES thereby negates the effect of PACKAGE SIZE in preventing memory overflow.

Oh dear. Fortunately, on the systems I’m working with BSEG is a transparent table, so we can rework the selects as INNER JOINs. But that’s a story for another day.
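To give a flavour of the rework: where BSEG is transparent, a FOR ALL ENTRIES read driven from BKPF can become a join, which keeps PACKAGE SIZE effective. This is only a sketch – the field list and the s_budat select-option are illustrative:

```abap
" Instead of:
"   SELECT ... FROM bseg FOR ALL ENTRIES IN lt_bkpf WHERE ...
" join the header and item tables directly:
SELECT b~bukrs b~belnr b~gjahr s~buzei s~dmbtr
       FROM bkpf AS b INNER JOIN bseg AS s
            ON  b~bukrs = s~bukrs
            AND b~belnr = s~belnr
            AND b~gjahr = s~gjahr
       INTO TABLE lt_items
       PACKAGE SIZE package_size
       WHERE b~budat IN s_budat.
  " Do something with the package
ENDSELECT.
```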

RTFM*

One comment I’d like to make: recently I’ve seen an influx of questions in the forums that can easily be solved by reading the documentation, especially the ABAP keyword documentation. I’ve been developing in ABAP for around 20 years. Did you notice that I’m still reading the documentation? There’s no shame in it. Newbies – please try it. You might learn something wonderful!

* Read The Flipping Manual


7 Comments


  1. Andre Schüßler

    Hi Matthew,

    I like your approach.

    When I had to take care of that problem I just did the following:

    DATA: lv_last_vbeln   TYPE vbeln,
          lv_package_size TYPE i VALUE 10000,
          lt_vbak         TYPE TABLE OF vbak.
    
    DO.
    * get contracts as packages
      SELECT * FROM vbak
        WHERE vbeln > @lv_last_vbeln
        ORDER BY vbeln
        INTO TABLE @lt_vbak
        UP TO @lv_package_size ROWS.
    
      IF sy-dbcnt = 0.
        EXIT.
      ENDIF.
    
      lv_last_vbeln = lt_vbak[ lines( lt_vbak ) ]-vbeln.
    *   do something with commit work
    ENDDO.


    But I think your solution might be better regarding performance.

    I will try this out.

    1. Matthew Billingham Post author

      Simple but effective. Very clever. Still doesn’t work with FOR ALL ENTRIES though.

      • If the addition FOR ALL ENTRIES is also specified, all selected rows are initially read into a system table and the addition UP TO n ROWS only takes effect during the passing from the system table to the actual target area. This can produce unexpected memory bottlenecks.
  2. Michelle Crapo

    BUT – it’s so much easier to ask than to read the documentation! I’m kidding of course. I even write documentation for my programs. Now the question is do I read it prior to making a change? “It depends”.

    Nice one – lucky me I haven’t had to use a cursed approach very often. This will help when I HAVE to do it.

    Michelle

    1. Matthew Billingham Post author

      Documenting programs? Hmm… an interesting concept. All my programs are self documenting.

      Seriously, I did once receive a knock back at code review for something like:

      METHOD integrate_over_all_spaces.
        LOOP AT me->all_spaces ASSIGNING FIELD-SYMBOL(<space>).
          ADD <space>->integrate( ) TO r_result.
        ENDLOOP.
      ENDMETHOD.

      on the grounds that it wasn’t documented…


      1. Michelle Crapo

        🙂 Well of course that needs documenting. It makes no sense that you are adding up spaces dynamically.

        And of course, my code is self-documenting…..      Always. I’m smiling. It depends. My documentation not in the programs is sketchy at best. Inside is better.

        Ever have to go in and quickly make a  change? I know unit testing… I’m just starting to add that.

        Oh well! Better late than never,

        Michelle

