
What is inside-out modelling?

It is a design paradigm that takes a business component and models it so that it can be exposed as a service. The entities and properties thereof are generally driven by the component interface. The most common form of inside-out driver is the RFC function module, although BOR objects and others like GENIL are available.

This is in contrast to outside-in model design, where the service that is required is modelled and the appropriate backend components are located or built to serve the consumption model.

The important phrase for me is “to serve the consumption model”; conversely, inside-out design can be paraphrased as “a service imposed by the business model”. Gateway consumers should not really be concerned with or know about the business model; it is generally a lot bigger and more complex than a particular service use case would need.

I’ve discussed that preferred approach here: Outside-in modelling (or “rehab” for RFC addicts).

I am not a huge advocate of inside-out modelling but where it is deemed necessary, it can often be improved to work in a Gateway OData service context.

Let’s take the SAP expenses application and data concept as a starting point. “Expenses” is quite a complex beast; it has not been designed with modular access in mind and is very monolithic in nature. Despite a revamped UI in Web Dynpro, the backend logic and model haven’t really altered.

Rather than tackle the whole of the expenses component, I’m going to focus on one part – “trip options”. These are essentially the lists of options that you can choose for data entry while completing an expenses form. They are typically used to provide context and fill dropdowns or other value choice controls in a UI. What I found interesting about this part is that it mirrors certain aspects of the expenses component on a macro level.

If you wish to obtain the trip options, you can get the data from the BAPI_TRIP_GET_OPTIONS function.

This function returns 20 tables of various data! Here is a prime example of where inside-out design fails for me – how am I going to map 20 outputs to one entity? Typically one RFC is mapped to an entity to satisfy the READ operation.

At this point I would abandon any hope of providing a well-conceived service and look at the outside-in approach – but more on that later.

Back to the modelling exercise: if I have to do it this way, how do I do it well?

One BAPI, twenty collections

Do I need all 20 tables for my service? Those I don’t need to fetch I can remove from the plan.

For example, I’ll use these:

  • EMP_INFO
  • DEFAULTS
  • EXPENSE_TYPES
  • COUNTRIES
  • CURRENCIES

(In reality I’ll probably need more but 25% is a good cut for an example.)

Now, the approach that the majority of Gateway beginners will take is to try to send all five of these tables back in one read operation, based on the assumption that a simple request/response can return multiple tables (like a BAPI!).

That is not the way to do it; in fact, you can’t. OData is fundamentally based on flat structures.

Here’s where the promised improvement starts…

What you do is create five entities with corresponding entitysets (build sets only if required; EMP_INFO for example has a cardinality of 1:1).

Each entity/collection is read by a separate GET request.
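In URI terms that means one request per collection, along the lines of the following sketch. The service path and entityset names here are illustrative assumptions, not generated names:

```
GET /sap/opu/odata/sap/ZTRIP_OPTIONS_SRV/TripEmployeeInfoSet
GET /sap/opu/odata/sap/ZTRIP_OPTIONS_SRV/DefaultsSet
GET /sap/opu/odata/sap/ZTRIP_OPTIONS_SRV/TripExpenseTypeSet
GET /sap/opu/odata/sap/ZTRIP_OPTIONS_SRV/TripCountrySet
GET /sap/opu/odata/sap/ZTRIP_OPTIONS_SRV/TripCurrencySet
```

(For the 1:1 entities such as EMP_INFO, a keyed single-entity request would be used instead of a set request.)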

This has the following advantages.

  • The service entities are decoupled from the backend model; after all they are siblings, not part of a dependency set.
  • A collection can be read when it is required, rather than obtained as part of another package of collections. One, a few or all of the entities can be accessed as required without any request constraints.
  • More of the collections can be exposed if the service evolves, or conversely, deprecation is easier.

It does carry some disadvantages, however:

  • The BAPI logic obtains all 20 outputs regardless of those that are required; despite the outputs being optional, pretty much all data is returned.
  • Using the BAPI still accesses 100% of the data when I only want 25%.
  • Each read of an entityset obtains the other four entitysets again!

In performance terms, where I only want to read five collections, I end up reading 100 tables (5 calls × 20 tables). Not great!

In this inside-out model, a balance has to be struck between an unnecessarily complex and unclear implementation for obtaining the five collections, and the enforced overhead of excessive data access.

…And the answer is…?

Moving towards a solution, some of you may ask, “If we read the EMP_INFO entity with this BAPI, it would have read the other four data tables for the other entities. Why don’t we just store these tables in memory, then use them to fill the other sets when required?”

Indeed, that would be a good idea, except that the request for EMP_INFO is stateless. If we store the other four tables, they are gone by the time DEFAULTS is requested in a second request, so the BAPI has to go and read them all again, including the EMP_INFO we already had. And so on for the other collections.

Statefulness can be introduced fairly easily. It is possible to get five entitysets in one request with one BAPI call. The key to this is using the $expand option – when a request is “expanded” the required feeds for the expansion are evaluated in the same connection session, therefore the state is maintained for the duration of the request.

One drawback is that the client needs to know how to place the call in the right way to take advantage of this feature; however, the feature is at least available!

The final model design

$expand is commonly used to obtain related entities; however, it can be used to chain “unrelated” GETs into one request as well.

In order to place an expand request there has to be some relation – but there is no clear relation between the entities; they are not hierarchical.

I’ll now take a technique from the world of outside-in modelling in order to help me realise the model. If the entity design is coming from the outside, it does not have to have a direct (or any!) correlation to a data model on the inside. As long as I can devise a way to place meaningful data in the feed or response, that entity is valid.

What I need is a common relation for all of my five (or even the full 20) chosen entities. This is quite obvious – the entities are all BAPI outputs, so it follows that I should look at the input.

In order to provide those 20 outputs, all I require is an employee number. To properly qualify the context I should also add a date and language, which are optional inputs but can make a difference.

Based on this information, I design an entity called TripContext with properties for employee number, trip date and language – I also make sure they are all keys.

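As a sketch, the resulting entity type in the service metadata would look something like this. This EDMX fragment is hand-written for illustration; SEGW generates the real metadata, and the property types and lengths here are assumptions:

```xml
<EntityType Name="TripContext">
  <Key>
    <PropertyRef Name="PersonID"/>
    <PropertyRef Name="TripDate"/>
    <PropertyRef Name="Language"/>
  </Key>
  <Property Name="PersonID" Type="Edm.String" Nullable="false" MaxLength="8"/>
  <Property Name="TripDate" Type="Edm.DateTime" Nullable="false"/>
  <Property Name="Language" Type="Edm.String" Nullable="false" MaxLength="2"/>
</EntityType>
```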

I can then provide an association from my TripContext to each entity and collection in the trip option service.

Because I am going to relate my collections to another entity, I do not pay any attention to the input parameters during the RFC wizard steps. Choosing and realising inputs is another feature of this process that creates a lot of confusion if the RFC is not “mapping friendly”.

I can create all five entities with one import.


Ignore the inputs and choose the five collections from outputs.


Mark a key set within each entity block.


The returned entities need to be renamed (the generated names are upper case and in some cases plural).


Create the sets.


Create associations from TripContext to each of the options collections.


Finally, assign navigation properties to TripContext. Referential constraints are not required and could not be maintained in any case.


With the right implementation, I can now obtain all of the five sets of options with one request.

TripContextSet(PersonID='00000005',Language='EN',TripDate=datetime'2013-10-01T00:00:00.0000000')?$expand=Defaults,TripEmployeeInfo,TripExpenseTypes,TripCountries,TripCurrencies

Explanations:

TripContextSet(PersonID='00000005',Language='EN',TripDate=datetime'2013-10-01T00:00:00.0000000')

The primary entity is the TripContext. The values that I use to GET the entity are actually used to establish the context, i.e. these values become known to the data provider in the initial phase of the request. The entity itself does not exist; it is a “state directive”, stating “this is the person in question, and this is the date and language that may affect the outcome”. I do not need to access any further data in relation to this entity, and the returned entity is the same as the request key.

The trick here is that I have now established the input for the following entities that I wish to obtain.

$expand=Defaults,TripEmployeeInfo,TripExpenseTypes,TripCountries,TripCurrencies

The expand option will locate each of the endpoints of the navigation properties that I defined. In the case of Defaults and TripEmployeeInfo, these are single entity feeds (cardinality 1:1) and the corresponding ‘GET_ENTITY’ method will be called. For the remainder, the corresponding ‘GET_ENTITYSET’ methods will be called.
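Conceptually, the dispatch over the expand clause behaves like the following sketch. This is illustrative Python rather than Gateway's actual ABAP internals; the cardinality table and the handler stand-ins are assumptions mirroring the model described above:

```python
# Cardinality of each navigation property, as defined in the service model
# (this table is an assumption for the sketch)
NAV_CARDINALITY = {
    "Defaults": "1",           # 1:1 -> GET_ENTITY
    "TripEmployeeInfo": "1",   # 1:1 -> GET_ENTITY
    "TripExpenseTypes": "N",   # 1:N -> GET_ENTITYSET
    "TripCountries": "N",
    "TripCurrencies": "N",
}

def process_expand(expand_clause, get_entity, get_entityset):
    """Dispatch each expanded navigation property to the matching handler."""
    feeds = {}
    for nav in expand_clause.split(","):
        if NAV_CARDINALITY[nav] == "1":
            feeds[nav] = get_entity(nav)      # single-entity feed
        else:
            feeds[nav] = get_entityset(nav)   # collection feed
    return feeds

feeds = process_expand(
    "Defaults,TripEmployeeInfo,TripExpenseTypes,TripCountries,TripCurrencies",
    get_entity=lambda nav: {"nav": nav},        # stand-in for *_GET_ENTITY
    get_entityset=lambda nav: [{"nav": nav}],   # stand-in for *_GET_ENTITYSET
)
```

All five dispatches happen within the same connection session, which is what makes the buffering approach below possible.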

Data Provider logic

I’ll make some assumptions for my DPC class.

  1. I’ll only ever want to access a single entity of type TripContext; no collection logic is required.
  2. I’ll only ever want to access TripContext in order to provide the context for an expanded feed. The return from this request is pointless without an expand option.
  3. None of the trip options feeds will work unless the TripContext has been established.
  4. The more expanded elements per request, the more efficient the provider is; however, the returned feeds must actually be required for consumption for this to hold true!

Based on the above assumptions, I can introduce some efficiency.

When TripContext is requested, what is really being requested are the trip options that fit into that context. At this point (TRIPCONTEXT_GET_ENTITY) it would be a good idea to call the BAPI, as we have the input values for it.

There is a slight problem here: the returned data from the BAPI isn’t required just yet; the return feed is a TripContext. However, I do know that the DPC will continue running during this RFC connection; the $expand option is going to invoke calls to the other GET methods.

I’ve got the entity data for those feeds already – so I’m going to store them.

In order to separate the concerns somewhat, I create an access layer class to manage the trip options.

The trip options manager object is then attached to my DPC as an attribute. It reads the BAPI and stores the results in its object space.

When I reach the second (implied by $expand) GET in the process, I can now ask the trip options manager to give me the return data.

I repeat this for each expected feed.
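The pattern can be sketched as follows. This is a conceptual Python model of the ABAP design (the real code lives in the DPC_EXT class and a separate access class); `fake_bapi` is an invented stand-in for BAPI_TRIP_GET_OPTIONS, and its table contents are illustrative:

```python
class TripOptionsManager:
    """Caches the output tables of one BAPI call for the lifetime
    of a single (expanded) request."""

    def __init__(self, call_bapi, person_id, trip_date, language):
        # One BAPI call, made when the TripContext is established;
        # all output tables are kept for the expanded GETs that follow.
        self._tables = call_bapi(person_id, trip_date, language)

    def get(self, table_name):
        # Later GET_ENTITY / GET_ENTITYSET calls just read the cache.
        return self._tables[table_name]


# --- invented stand-in for the RFC call, for illustration only ---
call_count = 0

def fake_bapi(person_id, trip_date, language):
    global call_count
    call_count += 1  # lets us observe how often the "BAPI" actually runs
    return {
        "EMP_INFO": [{"PERNR": person_id}],
        "DEFAULTS": [{"CURRENCY": "GBP"}],
        "EXPENSE_TYPES": [{"TYPE": "MEAL"}, {"TYPE": "HOTEL"}],
        "COUNTRIES": [{"LAND": "GB"}],
        "CURRENCIES": [{"WAERS": "GBP"}],
    }

# TRIPCONTEXT_GET_ENTITY: establish the context, making the single call
manager = TripOptionsManager(fake_bapi, "00000005", "2013-10-01", "EN")

# The $expand-driven GETs reuse the cached tables; no further BAPI calls
expense_types = manager.get("EXPENSE_TYPES")
countries = manager.get("COUNTRIES")
```

The key point is that the manager's lifetime matches the request: within one $expand request the DPC instance, and therefore its manager attribute, survives across the implied GET calls, so the BAPI runs exactly once per request.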

Improved?

I consider this a much better implementation of an inside-out design. A typical implementation of this service, solely based on the BAPI, would not have been very elegant, efficient or as simple to consume.

What could have been a very inefficient and cumbersome implementation is now well-scaled and fairly simple. In traced tests this service can return the full feed in under 100 milliseconds.

However, it still reads more data than required and could be written even more efficiently using the outside-in modelling approach. I intend to tackle this same scenario in an outside-in manner in a future blog.

Regards

Ron.



26 Comments


  1. Vijay Vegesana

    Hi Ron,

    Thanks for the detailed explanation. I am working on a similar issue. Can you please explain how you are passing values to TripContext without any properties defined? You say it is a “state directive” – what does this mean?

    Can you please elaborate..

    Thanks,

    Vijay

  2. b28 guest

    Hi Ron, a really very useful document – thank you very much!!! I think the “hardest” part is taking these full-blown RFCs and transforming them into “just” those flat OData structures.

    Is there a guide for the “other side” – I mean the POST part? A lot of RFCs accept tables as input, when creating sales orders for example. Do you also have a best practice guide for this? That would be great! I assume this will also involve expand, but I am not totally sure how I would put those things into a REST client for testing…

    1. Ron Sargeant Post author

      Hi Denise,

      Thanks for the feedback. I agree, the reverse operations of updating can also be troublesome, however I didn’t go into that side of things here because I think they are quite well covered by other posts in the Gateway forums.

      I suppose I can cover in brief though.

      The important aspect of multi-part inserting/updating/deleting is the maintenance of a logical unit of work. Whilst this could be managed by the client, it shouldn’t be – it is easier for the server to handle it.

      Deep insert takes all data through one payload so it shouldn’t be an issue to collect the various parts and apply them to one update FM.

      Where various resources are created/updated as separate POST/PUT requests, the requests should be “$batched”. Similar to the blog example, the various inputs could actually be stored within the backend session memory and only applied when the changeset end event happens.
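      As a rough sketch of that idea (the service name, entitysets and payloads are invented for illustration), a $batch body wrapping two POSTs in one changeset looks like:

      ```http
      POST /sap/opu/odata/sap/ZSALESORDER_SRV/$batch HTTP/1.1
      Content-Type: multipart/mixed; boundary=batch

      --batch
      Content-Type: multipart/mixed; boundary=changeset

      --changeset
      Content-Type: application/http
      Content-Transfer-Encoding: binary

      POST HeaderSet HTTP/1.1
      Content-Type: application/json

      {"OrderID":"4711"}
      --changeset
      Content-Type: application/http
      Content-Transfer-Encoding: binary

      POST ItemSet HTTP/1.1
      Content-Type: application/json

      {"OrderID":"4711","ItemNo":"10"}
      --changeset--
      --batch--
      ```

      Everything inside one changeset is treated as a single logical unit of work, so the backend can buffer the parts and apply them together when the changeset-end event fires.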

      Cheers

      Ron.

  3. Felix Ortiz

    Thank you,

    I am struggling with the mapping wizard and how to associate these various tables in a BAPI return structure.  Thank you for putting this together.

    1. Ron Sargeant Post author

      Hi Felix, I am not surprised to hear that you are struggling. I have been looking at the mapping tools to see if they actually offer any benefits, and I have to say I find them confusing and more of an obstacle than a help.

  4. Krishna Kishor Kammaje

    Hi Ron,

    Wanted to share some experience.

    Unfortunately, so many demos by SAP sales teams on creating OData services out of BAPIs have really harmed customers. Wherever I go, customers think that having a BAPI is all they require to create an OData service. And they want ABAPers to create OData services well before even a UI wireframe is ready. The question they ask is “we have BAPIs already in the system. SAP says that it has a tool to create OData services out of BAPIs, so why don’t you start exposing all BAPIs as OData services?” Then I say “we have 22 tables in this BAPI and you may not use more than 3 or 4, so it might be a performance overhead.” Then they say “we have Suite on HANA, do not worry about performance”. Then I have to explain about the network load of passing all this unnecessary data. In summary, the marketing stress on creating OData from BAPIs has made customers focus on the wrong approach.

    regards

    Krishna

    1. Ron Sargeant Post author

      Hi Krishna,

      I totally agree with your assessment. I cannot verify how this all came to be but I can make an educated guess – sales and marketing are definitely at fault here.

      When the Gateway framework was introduced, it didn’t produce OData feeds, that came later. There weren’t really any tools for producing services either, any custom tools were only there as a bare necessity. Creating a proper service required a lot of hand-written ABAP.

      There was however, one tool which would a) produce “results” with no coding and b) could hook people in on the basis of familiarity with BAPI and RFC. As is the case with all S&M exercises, the demonstration scenarios stick to best-case uses, which are generally simple and fit an expected pattern. The aim is to get customers to adopt Gateway, then the improvements will start to filter through.

      There are many problems with the BAPI approach.

      • BAPI is a flawed concept even before you start trying to retrofit GW onto it. It is incomplete and cannot be versioned.
      • It does not comply with any data protocols other than what is laid out in the ABAP Dictionary.
      • BAPIs are a mixture of SOA and ROA operations.
      • Many BAPIs try to accomplish too much in one interface.

      I don’t think this was done the right way. When the move to OData happened, the generation tools should have been deprecated. Developers should have learned the outside-in methodology. If a BAPI could be put to use, it is best done within a code-based implementation.

      I have used BAPI functions in many of my services but not one of them used the generator; I coded it all myself. Not only was the resulting service delivered much more quickly, but the data model was light and simple.

      The one phrase that really makes me sad: “…and I’ve built a custom RFC function for my service” – reminds me of a puppy wagging its tail to get petted when it drops your chewed-up shoe at your feet. 😏

      The simple fact here is that the majority of “Gateway developers” are anything but. It’s just another facet of technology that they are forced to explore without training, or are exploring without any real desire to learn properly. Gateway isn’t the only SAP technology that has suffered from these issues; a simple look at the state of most ABAP coding compared to 10 years ago is pretty shocking – this goes for some of SAP’s own code.

      Another question that comes to mind a lot of the time is “why the hell are you using Gateway for this?” There’s a perception that it can be used for everything but, like all things, it has its advantages and disadvantages.

      I don’t know how we avoid the inevitable mess that such misdirection is going to lead to other than to keep trying to inform the community of better options and hope someone listens. But I think the damage has been done 😡

      1. Krishna Kishor Kammaje

        Thanks Ron, your reply made me feel better 🙂 .   As you said, the damage is done.

        I think I have to arm myself with a blog or a solid use case document demonstrating how inside-out and outside-in approaches fare, to counter this ‘quick oData service from BAPI/RFC’ argument.

  5. Nrisimhanadh Yandamuri

    Nice blog Ron and thanks for all that discussion by you & Krishna Kishore.

    I have come across this problem of using existing RFCs/BAPIs throughout. It’s not just the case with Gateway, but with WD as well. Standard BAPIs have been used as Adaptive RFCs/Service Calls, which only increased the weight of the runtime context and thus reduced performance.

    When it comes to developing GW services, the end users of a service are supposed to be non-SAP developers, who get confused as the service gets complex. The lighter the service, the more readable the metadata. I would prefer an outside-in approach rather than an inside-out approach, so that I expose just what is required and the service is tailored totally to the need.

    However, Gateway services are highly scalable, and version management should ensure seamless availability of services (I haven’t tried it though). I opine that using a BAPI would reduce scalability.

  6. Fernando Anastassiades

    Hi Ron, I have a question.

    In the section “Data Provider Logic” , you wrote the following paragraph :

    “In order to separate the concerns somewhat, I create an access layer class to manage the trip options.

    The trip options manager object is then attached to my DPC as an attribute. It reads the BAPI and stores the results in its object space.” 

    Could you explain in detail the steps needed to do this? I don’t understand how to do it in the Gateway Service Builder (SEGW).

    Regards,

    Fernando.

    1. Ron Sargeant Post author

      Hi Fernando

      You won’t do this with the service builder, it’s a modification of the DPC_EXT class in SE24.

      As with normal ABAP OO design, you can attach an object instance to a class as one of its attributes.

      In my opinion it’s better to keep the EXT implementation as simple as possible; if more complex code is required it can be separated into a different class which is focused on providing and storing the native-form data. The DPC_EXT can be agnostic of how this data is obtained.

      This also allows the same low-level data access to be reused in another service that accesses the same business model if a need should arise. For example, some of the trip data entities might appear in separate services for trip plans, requests and actual trips.

      Back to the proposed design: I would probably attach the BAPI call to the constructor of my native data class, thus ensuring that the call is made. The constructor will then fill attribute tables in that class, eg. COUNTRIES, EXPENSE_TYPES, etc.

      The construction of the instance happens when the TripContext is requested, as that contains the input data required by the BAPI.

      When the DPC_EXT runs a GET method, it then simply fetches the relevant pre-fetched table from the native data class (by direct access or using a GET_<somedata> method) and converts it to an entity or set.

      Does that help explain it better?

      Regards

      Ron.

      1. Matt Harding

        Hi Ron,

        Your design advice is very much aligned with how I think OData development needs to be done nowadays, based on the limited reusable GW functionality you can achieve with the current implementation. Worth a blog in itself if you get the time… (I’ve toyed with the idea but have no time currently.) And slightly off topic – wouldn’t it be nice if we had constants for property names generated by SEGW, so we could avoid magic “strings” in our code?

        Cheers,

        Matt

        1. Ron Sargeant Post author

          Hi Matt,

          I agree with the constants idea but there is at least one reason why it’s not practical at the moment. The DPC model has undergone some changes and has some pseudo-redundancy in the method signatures, i.e. there are input parameters that are replicated in the technical request context. For example, you can look at the keys table in the signature and also get the same thing from the IO_TECH_REQUEST_CONTEXT object. Except that they are not the same (hence my use of “pseudo-redundancy”). The model element names in the technical context are always in upper case, the semantic(?) interface is not.

          One advantage of using the technical names is that you can be sure your name string (or constant) needs to be in upper case.

          If you apply constants into the mix you either:

          • Have constants for both naming conventions, or
          • Provide constants for the technical naming only (my preference) 

          Maybe the GW designers will deprecate the separate parameters in the signature at some point and make us focus on using the technical context.

          If you are interested, this is something you could develop yourself and reuse by introducing a custom generation strategy that will inject some model logic for generating element name constants. I did a write-up of the generation strategy concept here:

          Custom generation strategy – how to enhance the generated data provider base classes

          I think your suggestion would involve MPC generation but the process isn’t limited to DPC as far as I can tell, I’ve just not explored the MPC side yet.

          Cheers

          Ron.

  7. Arshad Ansary

    HI Ron,

    Nice blog . But just to get my understanding correct .

    Please correct me if I am wrong.

    The whole thing could be done in the GET_EXPANDED_ENTITYSET method, wherein I declare a structure (containing nested structures for all five entities), call the BAPI, fill the structure and map it to er_entity, as in the link below:

    Implementing  Expand Entity/Entity Set

    I don’t understand why you need to buffer the individual entity sets and then fill the values during GET_ENTITY/GET_ENTITYSET of the individual entity types.

    Regards

    Arshad

    1. Ron Sargeant Post author

      Thanks Monika,

      I still have it in progress. I tried writing it as a comparison exercise, i.e. inside vs outside, but that was too difficult to explain due to the massive differences, and to problems with simply not being able to do some simple things via inside-out. There is a different version in the works; when I get time to complete and publish it, I will link it here.

      Cheers

      Ron.

  8. Micky Oestreich

    Hi Ron,

    Is there any particular reason why you are using this method instead of going for the GET_EXPANDED_ENTITYSET (Arshad Ansary had a question about this too)?

    I read about this (optional) method in the SAP Press book (OData and SAP NW Gateway) and it seemed like a good alternative. Using this method also prevents one from calling certain functionality (expensive BAPI calls) more than once.

    Thanks for yet another good (great) blog. It’s been very useful. Although, by getting to know more about this new tool (Service Builder) and after reading quite a few related blogs about OData/NW GW/Service Builder, I can’t help but think that there are (as is often the case with SAP) more ways of achieving one’s goal. And for me as a newbie, it is sometimes hard to tell what the pros and cons are of one particular method compared to another.

    The same applies when comparing ‘improved inside-out modelling’ with using GET_EXPANDED_ENTITYSET.

    Kind regards,

    Micky.

    1. Ron Sargeant Post author

      Hi Micky (and Arshad),

      There were various reasons for not using that method.

      1) This scenario wasn’t an actual requirement, hence the lack of any ABAP examples – it’s about concepts. Had I been building it for real I might have tried the expanding method.

      2) I’m not sure how flexible that method is in terms of the concept; I might have to do a bit more research.

      3) I wanted to use the buffering abstract in an example as it has other use cases, such as batched operations.

      4) The actual provision method is secondary to the theme, the blog is about fitting an oddly-formed RFC into an OData pattern rather than the reverse.

      In the end, it’s all about time; I wanted to publish the blog to explore the concept.

      It’s turned out to be a nice Socratic exercise anyway as it’s got people thinking about how they might do it differently and better.

      Regards

      Ron.

