
SAP BusinessObjects BI OnDemand Update to Datasets

In an upcoming upgrade this month, SAP BusinessObjects BI OnDemand will improve its support for large datasets. The current limit of 100,000 rows per dataset will be raised to 200,000 rows, and opening and exploring datasets will also be noticeably faster.
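As a practical precaution before uploading (not part of the official upgrade notes), you can confirm locally that a CSV file fits within the new 200,000-row limit. This is a minimal sketch using only the Python standard library; the file name in the usage comment is hypothetical.

```python
import csv

# New per-dataset limit after the upgrade (previously 100,000 rows).
ROW_LIMIT = 200_000

def rows_within_limit(path, limit=ROW_LIMIT):
    """Count data rows in a CSV file (excluding the header row) and
    report whether the file fits within the dataset row limit."""
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader, None)            # skip the header row
        count = sum(1 for _ in reader)
    return count, count <= limit

# Example with a hypothetical export file:
# count, ok = rows_within_limit("sales_export.csv")
# print(count, "rows; within limit:", ok)
```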

New datasets published after the upgrade will benefit from these improvements immediately. Datasets published before the upgrade that are larger than 20,000 rows will need to be reactivated.

To reactivate the data, follow these steps:

  • Open your large dataset.
  • In the Data tab, click Browse.
  • Find the original file that you previously uploaded and click Open. The file must be in the same format as before.
  • Click Refresh.

NOTE: If you had previously applied a filter to the dataset, the filter will need to be reapplied.

The upgrade is planned for March 25th, 2011 (subject to change).

  • Hi Tony,

    Can you clarify: Will existing data sets with more than 20,000 records continue to work with the same performance as before until they are refreshed? Or will they stop working completely until they are refreshed?

    In either case, I’d really recommend providing a prompt to migrate larger data-sets in place rather than requiring a refresh. What if the original data is no longer available? What if the data set has been edited in the context of the on-demand platform? It sounds like people in these situations are going to be stuck.

    Best regards,

    • The large datasets will need to be refreshed. The data will need to be available, either by downloading it before the upgrade date or by already having it on your local machine or server.
          • I don’t want to be too negative here, because this upgrade sounds like a great thing overall, but it is unacceptable for an OnDemand BI system to drop customer data without significant warning and coordination. It sounds like this is what’s happening here, as datasets with over 20,000 records will become unusable.

            Is this an accurate assessment? If I have a dataset with over 20,000 records and I do not download the dataset, I will lose that data during the upgrade?

          • I fully understand your concern.  Downloading the data should not be too time-consuming.  Prior to the upgrade, customers with large datasets will be emailed directly to inform them of the impending change.
          • Maybe this is the wrong forum for this discussion, but the point I’m trying to make is that this is looking like a big mistake, and I’m putting a bit of urgency into my warning because we are talking about an upgrade in 10 days. This needs to be re-thought immediately if it is going to cause data loss.

            Let’s just take me as an example: I have not received an email about this. I don’t have a large dataset loaded currently, but what if I upload one tomorrow? What if someone adds a few thousand records to an existing data set? There should be an announcement on the site in big red letters and all users should be emailed plenty of time in advance (at least 1 month, as you need to allow for vacations).

            In reality, this should be transparent to users. This would eliminate the need to coordinate the migration with users, and is just generally the right thing to do. A large portion of the attraction of BI OnDemand is that we have a reliable, enterprise company managing our data. If SAP feels that it is acceptable to do migrations that result in data loss without extensive prior coordination with users, then I think SAP is giving up its primary edge in the competitive SaaS BI landscape.

          • Planned notifications include email to users with large datasets, forum postings, and website notification.  Unfortunately, the migration cannot be handled automatically for the larger datasets.
          • In discussing with Dick Hirsch, who I believe has been using the API for BI OnDemand in conjunction with his work on River, he asked what happens to large datasets that were populated by APIs. I think it’s an excellent question, as datasets created in this manner will be even more difficult to recreate than uploaded datasets. Is the situation the same for these datasets?

            Also, I’m trying to download a dataset and I’m not seeing how to do it. Can you explain how this is supposed to work?

          • Just to answer my own question: You right-click on the dataset, or you can download from within the dataset view.

            *Except*, this doesn’t work on the 70,000 record dataset I uploaded yesterday to test. It does work on my datasets of a few hundred records as well as a dataset of 3,000 records I uploaded today to test.

            Is it not possible to download large datasets? How should customers go about preserving these datasets?

          • Ethan,
              I’d like to address some of your concerns about refreshing large datasets.

            SAP Engineering has created a process to export a customer’s large dataset to CSV, which in turn can be re-uploaded after the upgrade.  The database server currently in use will remain accessible for a period of time in case a customer does not have a local copy to upload after the upgrade has been completed.  That period of time has not been determined yet; that is a planned discussion for this upcoming week.

            Concerning datasets and the APIs: I received word that the current APIs do not support large datasets, so no action should be required there.

          • Hi Tony,

            Thanks for the update. Putting a process in place to allow customers to migrate their data sets after the migration is the right thing to do. Even better would be for SAP to do this migration in a way that is transparent to the customer, but I’ll take what I can get 🙂

            I look forward to seeing how this works.


          • Ethan,

            Unfortunately, I was not able to address this means of acquiring datasets from the soon-to-be-retired database earlier.  I was fairly confident that Engineering had it covered, but until I received word on a more official basis, I did not want to commit to posting it.

              On a personal note, I want to thank you for your comments and for expressing your concerns.  I truly believe that a lot of customers have been following these posts, and some valuable information was shared.

          • Hi Tony,

            I appreciate your responses as well!

            It appears that the migration has been completed, as I can no longer access my 70,000-record test dataset. It also looks to me like viewing and “exploring” my 3,000-record test dataset is broken, though the data appears to still be there, as I can download it. Is this an unexpected side effect of the migration?

            If a customer wanted to acquire their large dataset from the older database, how would they go about doing this?


          • Ethan,

              I do not believe there should be any issues with the smaller dataset.  In my test account, the smaller datasets worked as expected.  I would be interested to know whether the same problem still occurs.  There were some BIOD issues that cropped up, but they have been remedied.

              If a large dataset needs to be extracted from the old database, a customer need only contact support via the normal email channels and we can go from there.

          • I just tried it in IE8 and it worked. It appears that in Chrome the dataset never loads. Bug?

            I’m not going to waste your time by requesting that my test dataset be restored, but would you mind sharing the expected amount of time it would take to restore a dataset after a customer requested it?


          • Ethan,

              Chrome is not one of the supported browsers, and I have noticed, on several occasions, functionality that was not working correctly.  IE7, IE8, and Firefox 3.x are currently supported.  I have also not had many problems when testing Opera.

              The timeline for extracting data is not set in stone with any SLA.  After being provided with specifics such as account name and dataset information, a ticket would be submitted to the back-end operations/engineering group.  The turnaround time would depend on the availability of an engineer, or whether one would need to be scheduled. A fair “guesstimate” would be 1 to 1.5 days at the outside.