
Yes! SAP HANA can make time travel happen. Literally!!

back to the future.png

(Image source: http://images5.fanpop.com/image/photos/26500000/Back-To-The-Future-Trilogy-back-to-the-future-26581615-1014-574.jpg)

During our exploration of SAP HANA, we found that SAP has made time travel possible via SAP HANA. This is because SAP HANA runs on the SAP In-Memory Computing Engine, which provides support for History Tables. These tables, created as ‘HISTORY COLUMN’ tables, have the session type ‘HISTORY’ and support time travel queries.

‘Time travel’ queries are queries against historical states of the database. In a way, they allow you to fetch historical values (say, some inventory statistics for 1 Jan 2011 in the table CURRENT_INVENTORY) without actually having to load the data from backups taken on 1 Jan 2011.

This concept (and especially the emotional touch of TRAVELLING through time) caught our attention, and we ended up exploring it in the following manner. All the steps and information sources were duly documented, so that other interested enthusiasts can also enjoy this experience in their own labs.

Following is the basic procedure for using Time Travel Queries in SAP HANA:

  1. Create History Tables
  2. Turn Auto Commit Off
  3. Keep track of commit IDs (or connection IDs) dealing with the History tables
  4. Execute Time Travel Queries on the History Tables

Creating History tables

The SAP HANA database provides support for a new table type called History Tables. In such History Tables, updates to and deletions of a record result in a new version of the record, with the original record being invalidated.

Syntax for creating a History Table:

Let us create a simple history table MODEL_HISTORY_TABLE, with a primary key column ‘KEY’ and a column ‘VALUE’ that holds the relevant values.

CREATE HISTORY COLUMN TABLE "PRACTICE_TEST"."MODEL_HISTORY_TABLE" (
"KEY" VARCHAR (2) NOT NULL DEFAULT '',
"VALUE" INTEGER,
PRIMARY KEY ("KEY")
);

If it is subsequently determined that a standard column table requires history, the table can be converted into a history table using the ALTER TABLE command. For more details, see the CREATE TABLE reference and the document ‘0204 – Modeling – Time Travel Example.pdf’, section ‘Introduction’,
which is available as zipped content at http://www.experiencesaphana.com/servlet/JiveServlet/downloadBody/2171-102-4-4396/SAPHANA_TechnicalDocumentsV2.zip.
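As an illustration, such a conversion might look like the sketch below. The table name is hypothetical, the table must already have a primary key, and the exact ALTER TABLE clause should be verified against the SQL reference for your HANA revision:

-- "SOME_COLUMN_TABLE" is a hypothetical standard column table that already has a primary key
ALTER TABLE "PRACTICE_TEST"."SOME_COLUMN_TABLE" CREATE HISTORY;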

Notes [Source: http://help.sap.com/hana/html/sql_create_table_history_time_travel.html]:

  • A history table must have a primary key
  • The session type can be checked from the column SESSION_TYPE of the system table SYS.TABLES. Use the following statement to check the session type:

SELECT SCHEMA_NAME, TABLE_NAME, TABLE_TYPE, SESSION_TYPE FROM SYS.TABLES WHERE TABLE_NAME LIKE 'MODEL_HISTORY_TABLE';

Output:
 
  session_type for a history table.png

     Figure 1. Session_Type for a history table

Turn Auto Commit Off

For time travel to work, the auto-commit option of the SAP HANA database needs to be turned off. This limits the number of commit IDs that need to be worked with. Moreover, if auto-commit is left on, restoring a session will throw an exception with an appropriate error message. (Reference: http://help.sap.com/hana/html/sql_create_table_history_time_travel.html)

Steps for turning off the Auto Commit option:

  1. Open the Properties view for the SQL editor: in SAP HANA Studio, with the SQL editor open, choose Window > Show View > Properties.

      turn off autocommit step 1.png

   2. In the Properties view at the bottom of the screen, set the “Auto Commit” option to ‘Off’.

turn off autocommit step 2.png      

Adding initial data into the table

To insert the first few rows of data, the following statements were used:

INSERT INTO "PRACTICE_TEST"."MODEL_HISTORY_TABLE" VALUES ('AA', 1);
INSERT INTO "PRACTICE_TEST"."MODEL_HISTORY_TABLE" VALUES ('BB', 2);
INSERT INTO "PRACTICE_TEST"."MODEL_HISTORY_TABLE" VALUES ('CC', 2);
COMMIT;
UPDATE "PRACTICE_TEST"."MODEL_HISTORY_TABLE" MERGE DELTA INDEX;

Since auto-commit is turned off, issuing a COMMIT statement and then performing a delta merge after every set of transactions is a must.

Keeping track of commit IDs / connection IDs

The historical state of a history table can be referenced in two ways: by the commit ID, or via the UTC timestamp. Details on the actual usage are described in the next section (Executing Time Travel Queries).

Since auto-commit has been turned off, the COMMIT command is necessary to ensure that the data is actually written to the table. So after each transaction is complete, a COMMIT statement should be used to commit the values to the database. Each commit in SAP HANA is marked internally with a commit ID, which can be viewed using the following syntax:

SELECT LAST_COMMIT_ID FROM M_TRANSACTIONS

WHERE CONNECTION_ID = CURRENT_CONNECTION;

The commit ID can also be obtained from the monitoring view M_HISTORY_INDEX_LAST_COMMIT_ID after each commit. These values are used later to reference the historical state of the database at the time the transactions were performed.
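For example, the following statement should return the last commit ID of the current session; the view and column names here follow the referenced help page, so verify them against your HANA revision:

-- assumption: column names SESSION_ID and LAST_COMMIT_ID as per the referenced help page
SELECT LAST_COMMIT_ID FROM M_HISTORY_INDEX_LAST_COMMIT_ID
WHERE SESSION_ID = CURRENT_CONNECTION;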

The history of transactions can be validated by viewing the hidden columns "$validfrom$" and "$validto$" of the history tables:

SELECT *, "$validfrom$", "$validto$"
FROM "PRACTICE_TEST"."MODEL_HISTORY_TABLE" ('REQUEST_FLAGS'='ALLROWS');

The hidden columns "$validfrom$" and "$validto$" (case sensitive) contain the commit ID at which the record was added ("$validfrom$") and at which it was superseded or deleted ("$validto$").

A question mark (?) in the $validto$ column indicates that the data in that row is still valid (it has no end commit yet).
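Purely as an exploration aid, the same hidden columns could be used to list only the row versions that are still current. This is a sketch under the assumption that the hidden columns can be filtered like regular columns; note Lars Breddemann's comment below – these columns and the REQUEST_FLAGS parameter are not officially supported:

-- sketch only: relies on the unsupported hidden columns and assumes they can appear in a WHERE clause
SELECT "KEY", "VALUE", "$validfrom$", "$validto$"
FROM "PRACTICE_TEST"."MODEL_HISTORY_TABLE" ('REQUEST_FLAGS'='ALLROWS')
WHERE "$validto$" IS NULL;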

Adding more tables and data

To make our example more lively, let us add a few more tables and rows.

Let us add another history table, COLOUR_HISTORY_TABLE, with a column ‘VALUE’ (holding the same values as used in the history table above) and another column ‘HUE’.

Creating another table

CREATE HISTORY COLUMN TABLE "PRACTICE_TEST"."COLOUR_HISTORY_TABLE" (
"VALUE" INTEGER NOT NULL DEFAULT 0,
"HUE" VARCHAR (10),
PRIMARY KEY ("VALUE")
);

Adding data

INSERT INTO "PRACTICE_TEST"."COLOUR_HISTORY_TABLE" VALUES (1, 'GREEN');
INSERT INTO "PRACTICE_TEST"."COLOUR_HISTORY_TABLE" VALUES (2, 'BLUE');
INSERT INTO "PRACTICE_TEST"."COLOUR_HISTORY_TABLE" VALUES (3, 'RED');
COMMIT;
UPDATE "PRACTICE_TEST"."COLOUR_HISTORY_TABLE" MERGE DELTA INDEX;

Viewing data

Now let’s do a simple join to view the data in the two tables:

SELECT M."KEY", C."HUE" FROM "PRACTICE_TEST"."MODEL_HISTORY_TABLE" M
JOIN "PRACTICE_TEST"."COLOUR_HISTORY_TABLE" C ON M."VALUE" = C."VALUE";

The result of the above query, based on the data entered so far, is as follows:

viewing data query result.png

Updating the records to generate some HISTORY

Now let us update some of the records, so that they have a history of changes since the time they were created.

UPDATE "PRACTICE_TEST"."MODEL_HISTORY_TABLE" SET "VALUE" = 4 WHERE "KEY" = 'AA';
UPDATE "PRACTICE_TEST"."MODEL_HISTORY_TABLE" SET "VALUE" = 5 WHERE "KEY" = 'CC';
COMMIT;
UPDATE "PRACTICE_TEST"."MODEL_HISTORY_TABLE" MERGE DELTA INDEX;

Executing Time Travel Queries

There are a number of ways of using “time travel” queries. We can track historic values by using the commit ID of the transactions, by using the UTC timestamps of the transactions, or by setting the session back to some earlier point in time.

For better understanding, let us briefly recap what we have done so far. We created two history tables as follows:

MODEL_HISTORY_TABLE.png    COLOUR_HISTORY_TABLE.png

The graphic below shows the entire history of the updates made in both tables, along with the respective commit IDs and timestamp values.

historic values in tables.png

The above representation is good for quick understanding in a demo, but in real life no such view is available. We therefore need alternative ways of getting the required commit IDs and timestamps. The following section shows how to do that.

Getting all the Commit IDs and UTC Timestamps

The entire list of commit IDs and commit times can be obtained using the following syntax:

SELECT * FROM TRANSACTION_HISTORY ORDER BY COMMIT_ID;

This returns the following list of commit IDs:

list of commit ids.png

Note: In our case the number of transactions was limited, so we have listed the commit IDs for all the transactions made. If this list is huge, the relevant commit IDs can be narrowed down by adding a WHERE clause to the above statement, as in the following example:

SELECT * FROM TRANSACTION_HISTORY

WHERE COMMIT_ID >= 200610

ORDER BY COMMIT_ID;

Now the time travel queries can be executed, i.e. historical values can be retrieved in the following ways:

SELECT VALUES AS OF COMMIT ID:

Use the SELECT statement with the phrase ‘AS OF COMMIT ID’ to refer to the historical values of a history table.

The values of the table as of the first commit (i.e. commit ID 200611) can be retrieved using the following syntax:

SELECT * FROM "PRACTICE_TEST"."MODEL_HISTORY_TABLE"
AS OF COMMIT ID 200611;

select on model @ 200611.png

Until this commit ID, no data had been entered into the second table ("PRACTICE_TEST"."COLOUR_HISTORY_TABLE"), so a join between the two tables as of this commit ID returns an empty result set.

SELECT M."KEY", C."HUE" FROM "PRACTICE_TEST"."MODEL_HISTORY_TABLE" M
JOIN "PRACTICE_TEST"."COLOUR_HISTORY_TABLE" C ON M."VALUE" = C."VALUE"
AS OF COMMIT ID 200611;

join query @ 200611.png

As expected, the result set is empty, since at this commit ID there is no data in COLOUR_HISTORY_TABLE.

The join between the two tables returns rows only for points in time at which both tables contain valid data, which is from commit ID 200721 onwards. So repeating the same join query with commit ID 200721 should fetch valid results.
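For reference, the repeated statement is the same join as before, with only the commit ID changed:

SELECT M."KEY", C."HUE" FROM "PRACTICE_TEST"."MODEL_HISTORY_TABLE" M
JOIN "PRACTICE_TEST"."COLOUR_HISTORY_TABLE" C ON M."VALUE" = C."VALUE"
AS OF COMMIT ID 200721;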

join query @ 200721.png

This way, simple (or more complex) queries can be used to browse through historic data without having to actually load old data or keep multiple copies of it in backups, saving an enormous amount of storage space and effort.

Alternatively, the timestamp information of the transactions (the UTC timestamps) can also be used for browsing historical values.

SELECT VALUES AS OF UTCTIMESTAMP:

Using the SELECT statement with the phrase ‘AS OF UTCTIMESTAMP’ also refers to historical values of the history table, based on the UTC timestamps of the commits made on the history table.

As already discussed, the UTC Timestamp can be identified from the sys.transaction_history table.

If a specific commit ID is known, the corresponding commit time can be identified using the following syntax:

select commit_time from sys.transaction_history

where commit_id = 201039;

Output: '2013-02-04 12:01:42.682'
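Conversely, if only an approximate point in time is known, the latest commit ID at or before that time can be looked up from the same view. A small sketch, assuming the timestamp literal can be compared directly against COMMIT_TIME as in the query above:

-- sketch: find the last commit ID at or before a given point in time
SELECT MAX(COMMIT_ID) FROM SYS.TRANSACTION_HISTORY
WHERE COMMIT_TIME <= '2013-02-04 12:01:42.682';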

This timestamp ('2013-02-04 12:01:42.682') can be used to refer to the historical values of the tables using the following syntax:

SELECT M."KEY", C."HUE" FROM "PRACTICE_TEST"."MODEL_HISTORY_TABLE" M
JOIN "PRACTICE_TEST"."COLOUR_HISTORY_TABLE" C ON M."VALUE" = C."VALUE"
AS OF UTCTIMESTAMP '2013-02-04 12:01:42.682';

join query @ 201039.png

Please note that at commit ID 201039 (or timestamp ‘2013-02-04 12:01:42.682’), the data in the first table (MODEL_HISTORY_TABLE) had already been altered, so there is only one common value left, which appears as the output of the above join statement.

SET HISTORY SESSION TO COMMIT ID

A database session can be set back to a certain point in time, based on a given commit ID, using the SET HISTORY SESSION statement.

SET HISTORY SESSION TO COMMIT ID 201039;

SELECT M."KEY", C."HUE" FROM "PRACTICE_TEST"."MODEL_HISTORY_TABLE" M
JOIN "PRACTICE_TEST"."COLOUR_HISTORY_TABLE" C ON M."VALUE" = C."VALUE";

This results in the same output as above, since we are referring to the same state of the database at exactly the same historic point in time as the above query.

After setting the history session to a particular commit ID, all subsequent queries return data as of that commit ID, until the history session is set to another value or reset by setting it to NOW:

SET HISTORY SESSION TO NOW;

SET HISTORY SESSION TO UTCTIMESTAMP

Similarly to the above method, the database session can be set back to a certain point in time based on a given UTC timestamp.

SET HISTORY SESSION TO UTCTIMESTAMP '2013-02-04 12:01:42.682';

SELECT M."KEY", C."HUE" FROM "PRACTICE_TEST"."MODEL_HISTORY_TABLE" M
JOIN "PRACTICE_TEST"."COLOUR_HISTORY_TABLE" C ON M."VALUE" = C."VALUE";

Again, we get the same output. In a way, these approaches (time travelling via a SELECT statement with a commit ID or a UTC timestamp, or via SET HISTORY SESSION) are all different ways of reaching the same result.

Important Notes:

[Source: http://help.sap.com/hana/html/sql_create_table_history_time_travel.html]

  • Autocommit has to be turned off when a session should be restored (otherwise an exception will be thrown with an appropriate error message)
  • Non-history tables in restored sessions always show their current snapshot
  • Only data query statements (SELECT) are allowed inside restored sessions.

Conclusion

For demonstration purposes, implementing time travel seems to be an easy task. But in real-life scenarios the implementation would be much more complex, and would demand an entirely new set of plans and designs to fully utilize this new concept.

We would love to hear more about real life scenarios where this concept could make a real difference. So do provide your feedback and suggestions about this topic.



31 Comments


  1. Former Member

    Vishal,

    I enjoyed reading the article.

    Some questions come to mind: how is data retention for the history tables configured? I assume that there are limits on how much old data HANA will keep. Is there a way to clean up/archive historical data past a certain time frame?

    Albrecht

      1. Former Member

        Is there no scoping for the DELETE HISTORY statement? What happens if I run out of memory; is it not possible to keep only 6 months of history/deleted records?

        Also, what are the conditions under which the records are not physically removed?

        I also have to admit that this statement confused me:

        You merge in the table delta storage with the table main storage.

         MERGE DELTA OF HIST;

        What is considered the main storage and what is the delta storage? Is there an architecture document that would shed some light on the inner workings of HANA?

        Albrecht

        1. Former Member

          Hi Albrecht,

              First rule of using the DELETE command – never use a DELETE statement without a WHERE clause – ever 🙂 At the bottom of the topic, there is an example of the DELETE HISTORY with a WHERE clause that allows you to control what to remove from HISTORY. Now, I haven’t tried this yet, but in theory, you can use the command:

          SET HISTORY SESSION TO utctimestamp '2012-09-00 00:00:00.000';

          to go back six months and then do a DELETE HISTORY command (with a WHERE clause). Hopefully, someone from SAP will jump in here if I mis-spoke.

             With regards to the MERGE DELTA command, when HANA writes data to a COLUMN table, the default behavior is to write the data to the DELTA. You’ll see the amount of data in DELTA storage using HANA Studio with the Table Distribution command. To move the data into the main storage for the table, you need to use the MERGE DELTA command. This is described somewhat in the http://help.sap.com/hana/html/sql_merge_delta.html topic.

             For the best description that I’ve seen – check out http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/70d16119-ad21-2e10-de8b-eaaedf86b9cd?overridelayout=true&50603304707848 starting at slide 19. This is an awesome deck by Marc Bernard. If you want the nuts and bolts version, check out the ACM paper – http://sites.computer.org/debull/A12mar/hana.pdf

          Regards,

          Bill

        2. Lars Breddemann

          Hi Albrecht

          In addition to Bill’s comprehensive link list, let me add a bit of explanation.

          “main” and “delta” are two storage areas, or rather data stages, for tables in the column store.

          While the “main” structure of the column store is highly read-optimized and compressed, it’s not really easy to change data in it.

          To overcome this, the “delta” structure serves as a “write-as-append” store that makes it easy to add new and changed data.

          For the DB user, both structures combined (and a few others) are presented as one consistent view of a table. At query time, HANA automatically figures out what data from the “main” structure is still valid and what data has to be taken from the “delta” structure (and yes, this is counter-intuitive when you’re used to how classic relational DBMSs implement tables).

          Over time the “delta” structure obviously grows and the query performance might deteriorate. To fix this, the “main” and the “delta” structure can be combined into a new “main” structure.

          Now, coming back to the history tables, what is done here basically is just an extension of this multi-stage approach. When HANA performs the merge operation it obviously can tell what tuples will be changed.

          And these tuples are then moved to a hidden set of “main” and “delta” structures. Think of them as the “history”-shadow of your table that contains all the former versions of the data.

          Since this compound of “main” and “delta” structures works pretty similarly to the normal column store, there’s also a separate merge command for them:

          MERGE HISTORY DELTA OF <table_name>

          By now, with very sharp eyes, one thing could have become obvious:

          If a lot of changes/updates happen to the tuples in the table, then the history-shadow-structures will contain a lot of entries.

          That could easily be multiples of what your “current”-table contains.

          And we all know that currently column store tables can hold a max. of 2 billion rows per partition.

          Now guess what: this limitation also is valid for the “history”-part.

          Luckily, the “history”-part inherits the partitioning scheme from the “current data”-part, so again partitioning can help to overcome this limitation.

          And this, the actual number of tuples, not so much the amount of raw data is what can become an issue (usually, updates to tuples change only smaller parts and thus in total the compression of the data will still be pretty good).

          Hope this explains a bit of how the history tables work.

          – Lars

          1. Former Member

            Lars, thanks.

            My head’s spinning a bit, but I think I can make sense of it. I used to work on a natural language sort product where we had a similar issue of having to record changes for small parts of large datasets; we took a similar approach.

            Albrecht

          2. Former Member

            Hi Lars,

            How can we clean up the history index from a BW perspective? The DSO active data table is a temporal table, and when we delete the Change Log from BW the only things deleted are entries in /BIC/A<DSO>80, which I suspect is just an admin table. The result is that the Change Log calculation view shows less data.

            However if I check the DSO Active data table from HANA Studio using the “time-travel” functionality, I can see that all record-images are still there.

            What I would expect is that when I delete change log data older than x days, all history records before that time should be deleted from the history index as well – or would that be wishful thinking ;-)?

            Cheers,

            Glenn Cheung

            1. Lars Breddemann

              Hi Glenn,

              Why do you think that the DSO management doesn’t take care of the history part of the table?

              I think in this case your expectation is just not right. Once you activated the data in DSO, the change log data information is still there in BW, isn’t it?

              Delete this and the respective entries in the history part of the DSO will be deleted as well.

              In any case, be very clear that IMO-DSOs use the history table functionality for something totally different from doing “time travel”. It’s – again – a sort of data staging.

              – Lars

              1. Former Member

                Hi Lars,

                “Delete this and the respective entries in the history part of the DSO will be deleted as well.”

                Well unfortunately this is not what happens:

                When I delete the Change Log from BW, BW only deletes records in the /BIC/A<DSO>80 table, which contains some admin entries that are used for displaying the Change Log records. Change Log data is not physically stored anymore; it is a calculation view that makes use of the history table functionality to reconstruct the Change Log as we know it in BW.

                Physically nothing is deleted from the Active-data table: if I query the Active-data table from HANA Studio using the hidden columns “$validfrom$” and “$validto$”, I can still see the entire history of changes there.

                Just to be clear, I don’t want to do “time travel querying” on my DSO-data, I want to be able to control the amount of history records that is retained in the History Index, preferably of course from within the BW application without having to do something separately on DB-level or through HANA Studio.

                Hope I was able to clarify my case; if not I’ll post a question complete with screenshots etc. to clarify some more.

                Cheers,

                Glenn

                1. Former Member

                  Hi Steve,

                  I had to chuckle a bit when I read your response regarding the DELETE operations, as DELETE simply marks the record for deletion for time travel purposes. Same goes when you update a record. The updated record is really an insert and the previous record is marked as not current. As Lars pointed out in his 10:45 post, the Richard Snodgrass reference is a must-read for anyone interested in the topic. This whole time travel history thing is like the Heisenberg uncertainty principle in quantum mechanics – that is, it can hurt your head if you think about it too much 🙂

                  After reading the details of DELETE HISTORY more, it looks like there is no way to delete the actual history – just mark the records as deleted. Looks like a feature request for the HANA dev team.

                  Regards,

                  Bill

                  1. Lars Breddemann

                    Hi Bill

                    Bill Ramos wrote:

                                           

                        After readying the details of DELETE HISTORY more – it looks like there is no way to delete the actual history – just mark them as deleted.  Looks like a feature request for the HANA dev team.

                                       

                    This feature has been present and implemented ever since history tables became available in HANA.

                    And it actually does what it is supposed to do, but obviously not in the way one would guess. The physical removal of the data happens in asynchronous data staging processes like MERGE or the garbage collection.

                    This makes it more difficult to actually observe the history data getting removed.

                    – Lars

                2. Lars Breddemann

                  Hi Glenn,

                  Ok, let’s not over-complicate this:

                  • yes, I think a thread of its own would be appropriate, as this is a topic on its own (screenshots not necessary though)
                  • the DELETE FROM HISTORY does not work like a standard DELETE and it can take some time until the HANA internal garbage collection removes the history information physically. You may try and run MERGE HISTORY DELTA and ALTER SYSTEM RECLAIM VERSION SPACE (both documented) manually to predate the garbage collection.
                    Yet, still the history data might be removed in multiple chunks – nevertheless, HANA manages to only deliver the correct rows for any queries against the history.
                  • If you are still in doubt about this working correctly in your system, please open a support message. There had been bugs and fixes for this – so maybe this is what you experience as well.

                  – Lars

  2. Lars Breddemann

    Hi Vishal

    Nice blog post. I really like how much effort you put into it.

    And I have to agree: for real-life scenarios the available functionality is pretty limited and might not satisfy what you would expect when you first read “time travel query”.

    As a matter of fact, developing time-dependency in relational databases is anything but easy. It can be pretty tricky at times.

    If you’re interested in a really good starter book on this topic, I’d recommend the now freely available Developing Time-Oriented Database Applications in SQL by Richard Snodgrass.

    Right now, all HANA history tables can do for you is to hide the tuple validity information in terms of successful transactions in the system (commits).

    But database transactions are not necessarily tied to business transactions.

    Another point is: the factual validity of a tuple might be different from what can be deduced from the commit time. The super easy example for this is of course time-dependent master data that defines some information to be valid from next month onward but that was loaded (and committed) into the system yesterday.

    That’s the kind of data we needed the temporal-join of SPS 5 for. It’s not at all about history tables.

    Other immediate want-to-have features like “get a list of all the changes this tuple has gone through” or “get the creation time of the tuple” are currently hard to implement, if possible at all.

    The hidden columns you mentioned, as well as the query parameter, are actually not at all supported. One reason for this is obvious: by using these, you break the consistency of the current session’s point in time. You see tuples that do not actually belong to the current transaction’s view of the data.


    Another (and maybe more important) reason is: the query parameter highly interferes with how HANA works out which tuples to show and which to hide. Uses of this parameter in the past have shown that this can lead to serious nonsense output.

    And as this is not an official feature (the query parameter and the hidden columns) there likely won’t be an enhancement/correction for this.

    The strong advice on that must be: don’t ever use this for anything else than playing “waste your time”.

    Again, I really like this blog post as it’s well written and produced. I hope to see more of this quality here in SCN.

    – Lars

    1. Former Member Post author

      Hey Lars,

      From your inputs and the other discussions going on around this article, it seems there are lots of hidden mysteries and risks behind this time travel concept (even in movies, time travel is always shown as something full of secrets and dangers, which is what has always attracted me).

      And if time permits, I would love to explore them more and share my experience (since highly risky things often come with good rewards). Maybe this will result in something useful for the entire HANA user community.

      Thanks a lot for your inputs, and also thanks for sharing such interesting material about developing time-oriented databases.

    2. Former Member

      Lars, if the query parameters you referenced are not supported, what is the valid, supported way to make use of the history tables? Only using the SET HISTORY SESSION command? Or are you just saying the hidden columns are not supported, but a query with AS OF is supported?

  3. Former Member

    Very nicely documented. Time travel is primarily about when the record was updated in the table, while most business requirements are on time-dimension data. Any blogs on how HANA handles this?

    1. Lars Breddemann

      Hi Manish,

      the topic has been discussed in the comments section.

      At this level of the topic, SAP HANA is a database management system, and you as the developer will have to come up with a data model to handle time dependency in your application data.

      Look for topics like “bitemporal data”, “data vault” and “slowly changing dimensions” to get some ideas on how to do that.

      – Lars

  4. Former Member

    Hi Vishal,


    Nice post. I am wondering how to alter an existing table to a history table. I didn’t find any syntax for this.


    If I use

    alter TABLE "SAPDBW"."/BIC/AZPMCNDA032" WITH PARAMETERS('SESSION_TYPE'='HISTORY')


    I will get error with:

    Could not execute 'alter TABLE "SAPDBW"."/BIC/AZPMCNDA032" WITH PARAMETERS('SESSION_TYPE'='HISTORY')' in 4 ms 185 µs.

    SAP DBTech JDBC: [2048]: column store error: process parameters error: [2018] A received argument has an invalid value; WITH PARAMETERS: ('SESSION_TYPE'='HISTORY') not supported


    Would you please help on this?


    BR,

    Molly

  5. Kingsley Ufumwen

    We’re trying to use the HISTORY TABLE capability in HANA for recoverability to a point in time if ever data values get hacked/broken and we don’t have the luxury of restoring a
    full db from backup, in the case where for example it’s impossible to restore the entire landscape to the same point in time, due to dependencies.
    We need to urgently understand a few items around Time Travelling in SAP HANA:
    1. Are there any differences in behaviour on HANA when using Time Travelling on a runtime version of HANA vs Enterprise Edition?
    (with Runtime we mean – HANA is a dedicated db under Banking Services or CRM) or are there alternative Time Travel methods available in each SAP Solution on HANA?
    With time travel we mean, the ability to take the entire application back to a point in time, restoring only particular values as per a picture a while back.
    2. As it is a requirement for the history function in HANA to work properly, should Auto-Commit ever be switched off in an SAP Solution Landscape on HANA?
    Especially in a runtime setup? Or is AUTO-COMMIT OFF only applicable when developing your own apps on HANA? (E.g. Enterprise HANA)
    We don't feel comfortable with the fact that we have to interfere with AUTO-COMMIT, e.g. issue explicit COMMIT commands while keeping track of the last committed IDs read from M_TRANSACTIONS.
    What does SAP recommend in this case?
    3. Any ideas where to find Time-Travel info per SAP Solution?

     

  6. Thorsten Niehues

    Is there a guide about memory sizing?

    Assume we want to create history tables of transaction data.

    Is there a rule of thumb (or something more exact) for how the data grows compared to regular tables where updates overwrite values? In other words, how much memory does the history cost us?

    I would assume that it depends on the records changed and how much within the record – how can I get more information about it?

