
Converting a Key Figure model to an Account model is a common enough use case and has been the subject of much discussion in the HANA community, with people employing various techniques both in graphical Calculation Views and on the SQLScript side.

In SPS09, SAP introduced this newfangled thing called Smart Data Integration, which seems to show some promise of making this easier on users, and that’s what I’ll explore in this blog. Please have a look at this excellent series of blog posts on this new feature.

Scenario

Those of you who are familiar with the COEJ table model and/or the KF-to-Account-Model concept would probably like to just skim this section. For others, COEJ is the table that stores budget data (CO Object line items) and this is what it looks like*

COEJ.png

The green fields are the Client, Controlling Area, Document Number and Item. The red fields WKGxxx contain the amounts for each period. So in the above example, 68.17 is the amount for period 001 in the year 1995, 221.56 is the amount for period 002 of the same year, and so on. This style of modeling is known as the Key Figure model.

This isn’t the only way to model such data though – there exists an alternative way called the Account Model. I’ll describe it below.

Our scenario is to transform and transpose this table, as below. Essentially we’ll generate one row for each period. In each row, the green fields (which are not period-specific) will be replicated. The WKGxxx fields (which contain period-specific values) will be transposed into each row of the output.

We’ll need a way to tell which amount belongs to which period, so we’ll also generate a PERIOD column. In addition, we’ll concatenate the Year and Period to generate a “FISCPER” field. This field contains the Fiscal Year and Period in a YYYYPPP format, exactly as BW does it. This can be useful in some kinds of analysis.

This output is what’s known as an Account Model. Our objective here is to convert from the KF model into the account model as shown below. To keep the diagram small, I’ve not shown all fields.

COEJ2.png
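Before we get to SDI, here’s a minimal Python sketch of the same transposition, just to pin down the logic. The field names mirror the simplified COEJ schema above, but the row values and the two-period limit are made up for illustration (the real table has one WKGxxx column per period):

```python
# Sketch of the KF-to-Account-Model transposition described above.
# Sample data with only two period columns; real COEJ has many more.
kf_rows = [
    {"MANDT": "100", "KOKRS": "1000", "BELNR": "100000001",
     "GJAHR": "1995", "WKG001": 68.17, "WKG002": 221.56},
]

def to_account_model(rows, num_periods=2):
    """Generate one output row per period, replicating the non-period fields."""
    out = []
    for row in rows:
        for period in range(1, num_periods + 1):
            out.append({
                "MANDT": row["MANDT"],
                "KOKRS": row["KOKRS"],
                "BELNR": row["BELNR"],
                "GJAHR": row["GJAHR"],
                "PERIOD": period,                      # generated column
                "AMOUNT": row[f"WKG{period:03d}"],     # transposed value
            })
    return out

account_rows = to_account_model(kf_rows)
# One input row becomes two output rows, one per period.
```

This is exactly what the Unpivot transform will do for us declaratively; FISCPER is generated in a later step.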

Setting up the Flowgraph

Here we’ll look at how this can be done in SDI. I’m going to assume you already have COEJ in your HANA system, either replicated from an ECC box or copied.

First off, create a project in the Developer perspective and share it as usual. Next, create a Flowgraph model by right-clicking the project –> New –> Other. Select Flowgraph and give your new flowgraph a name. In the resulting dialog box, make sure to select “Flowgraph for Activation as Task Plan”. This is very important, as the Stored Procedure option for some reason doesn’t allow the transpose logic. Here I’ve named my flowgraph “TRANSPOSATOR”, because the Terminator movie is out and this was the first name that came to mind.

COEJ3.png



Click Finish and a blank flow graph screen will come up.

Now the first step is to add our COEJ table as a source for our flowgraph. Do this by grabbing the table from its schema and dragging it into the flowgraph. The system will ask whether this should be a data source or data sink. We obviously want this to be our data source, so select that.

COEJ4.png

Unpivot! Unpivot! Unpivot!

Hopefully we’ll have more luck than Ross Geller here.

Look at the Palette on the right side. From the Palette’s Data Provisioning folder, select the Unpivot transform and drag it into the flowgraph. Our COEJ data should act as the input to UNPIVOT, so connect the DATA node of COEJ to the INPUT of UNPIVOT.

COEJ6.png

After doing that, select the UNPIVOT node. Now we need to set quite a few properties on our UNPIVOT to tell it which fields are to be transposed. Open the Properties view and go to the General tab. The inputs in the screen below are color-coded like the fields in the Scenario section above, for comparison.

COEJ7.png

Who moved my FISCPER?

Astute readers might have noted that all the fields from the Scenario section have been added to the UNPIVOT properties, except FISCPER. This is because values for FISCPER need to be generated – however, the Unpivot transform allows us to generate only one field, which is the PERIOD field above.

We’ll generate FISCPER using another transform: the Filter transform. This was the only transform I could find that would generate an extra field. So grab the Filter transform from the General tab in the Palette. The Filter transform also has an input and an output node, just like Unpivot. The FISCPER field should be added to the output of the UNPIVOT, so connect the output of UNPIVOT to the INPUT of the FILTER. Then click on the filter step itself and go into its properties. In the properties, go to the Output tab. Add FISCPER by clicking the Add button and enter the data type and length.

COEJ8.png

So now we have the FISCPER, but we haven’t populated it yet. Recall that the plan was to populate it in the YYYYPPP format, for example 1995001 for the first period of 1995. So basically the logic would be to take the Fiscal Year (field: GJAHR) and concatenate it with the PERIOD field.

There’s just one problem: that would give us 19951, 19952 and so on, as PERIOD is an integer. To handle that, we’ll pad the PERIOD with zeroes on the left. The function lpad(PERIOD, 3, '0') will add zeroes on the left side until the value reaches a total length of 3, so a number like “1” will be padded with two zeroes on the left to become “001”.
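In Python terms (purely to illustrate the logic – this is not what runs in HANA), the derivation looks like this:

```python
def lpad(value, length, pad_char):
    """Mimic SQL LPAD: pad on the left until the string reaches `length`."""
    return str(value).rjust(length, pad_char)

def fiscper(gjahr, period):
    """Concatenate year and zero-padded period into the YYYYPPP format."""
    return str(gjahr) + lpad(period, 3, "0")

print(fiscper(1995, 1))   # -> 1995001
print(fiscper(1995, 12))  # -> 1995012
```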

Which is a great story, but where do we actually do all this? The answer: in the Mappings tab of the Filter’s Properties view. In there, you’ll find the FISCPER field sitting by itself on the target side rather unhappily, while all the other fields have nice mappings. Let’s fix that. Select FISCPER, then click the “Edit Expression” button.

COEJ9.png

In the Expression Editor, enter the following formula: CONCAT("INPUT_2"."GJAHR", LPAD("INPUT_2"."PERIOD", 3, '0')). You’ll notice this is just a glorified version of the formula we derived above. Instead of manually typing the field names, you can drag them in from the left side as well. In fact, I’d recommend doing that if the name of the input node isn’t INPUT_2. Click OK and we’re done with the filter.

In fact, we’re almost done with the entire scenario. Notice that even though this is a Filter transform, we didn’t actually do any filtering. This node behaves more like a Projection than a Filter.

All right, let’s finish this off.

Output

We want the output to go into a new table, so let’s configure that by dragging a Data Sink (Template Table) into the flowgraph. Obviously, the output of the filter step should go into this node. Now go into the properties of the newly created Data Sink.

Not much to do here, just enter a table name in the Catalog Object field. The SDI job will create this table, so it should not already exist in the system. The table will be created in the Authoring Schema, which here is _SYS_BIC. Leave the rest of the fields as-is.

COEJA.png

And that’s all. Activate the flow graph and it will get created in the system.

The Proof of the Pudding

So how do we verify that this is working? Click the Execute button in the SDI window. It will fire up an SQL console and start the task that fills the result table.

You can also fire a SELECT statement to pull from the result table.

COEJC.png

As you can see in the results, we have the amounts in a single column and the PERIOD/FISCPER fields tell us which period the amount is for.

Caveats

One caveat about this example: it is insert-only. That is to say, if you run the task again, it will generate the results and insert them into the result table. It will do so on every run, which means you could end up with duplicate entries in the result table if you keep re-running it. I haven’t found a way around that, but if anybody has an idea, please share it in the comments section.
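To make the duplicate problem concrete, here’s a small Python sketch contrasting insert-only semantics with a key-based upsert. The key fields (BELNR, PERIOD) are my illustrative assumption; in practice the full document key plus period would be needed:

```python
# Simulate running the SDI task twice against two kinds of targets.
results = [
    {"BELNR": "100000001", "PERIOD": 1, "AMOUNT": 68.17},
    {"BELNR": "100000001", "PERIOD": 2, "AMOUNT": 221.56},
]

# Insert-only target: each re-run simply appends, creating duplicates.
insert_only = []
for _ in range(2):  # run the task twice
    insert_only.extend(results)

# Upsert target: rows are keyed on the primary key, so re-runs overwrite
# existing rows instead of duplicating them.
upsert = {}
for _ in range(2):
    for row in results:
        upsert[(row["BELNR"], row["PERIOD"])] = row
```

With insert-only semantics we end up with 4 rows after two runs; with the upsert we still have 2. This is why a writer type of Upsert (which needs a primary key on the target) would avoid the duplicates.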

Also, please do let me know if there are easier/better ways to achieve this using SDI.

* I have depicted a much simplified schema of the COEJ table with far fewer fields to illustrate the concept.


Comments


  1. Werner Daehn

    How about adding a Table Comparison Transform right before the Target Table? Or setting the target table to writer_type=Upsert?

    Both require the target table to have a primary key, but that’s no issue, is it?

    Nice post btw!

    1. Suhas Karnik Post author

      Thanks for your comment Werner!

      I looked at table comparison transform. Yes, it will work, but for that the target will have to be a Data Sink, not a template table.

      The trouble with using a template table for this scenario (which I now realize in hindsight) is that there isn’t a way to define a primary key in the template table. Even if you do select a Key generation attribute, it doesn’t become a key – and the template table node will allow only inserts if the target doesn’t have a key (the writer type must be “Insert”). I tried to select Upsert, but no luck.

      1. Werner Daehn

        I see, I did overlook the fact that you are using a template table.

        From my point of view you have two options then:

        1. Go to the upstream Filter transform and mark the key column as primary key. The template table’s structure is the input structure, and the primary key is part of that. Then you could use writer type = Upsert.
        2. Execute the flow once so that the table is created with the proper structure. Then remove the template table and replace it with the created table from the catalog. To this table you can add primary keys, and then you can use Table Comparison as well.
        1. Suhas Karnik Post author

          I tried to set the primary key in the Filter transform, but I don’t see it in there either. Am I missing something here?

          The second option you suggested is interesting. I will try it out, thanks!

          1. Werner Daehn

            aaargh. I filed a bug. The column edit dialog lets you set the datatype, column name and the not-null flag, but should allow you to set/unset the PK flag as well. 😉

