Concepts compared: BPC standard and BPC embedded
Following discussions and answering questions in this forum and in the ‘Business Planning’ forum http://scn.sap.com/community/data-warehousing/business-planning I have the impression that it is worth explaining the concepts used in BW-IP/PAK as exposed in ‘BPC embedded’ and comparing these concepts with ‘BPC standard’. This kind of information is usually not included in the documentation.
This comparison is not a feature list and tries to avoid judgments about whether the concepts used are good or bad. The focus is on basic concepts, e.g. metadata concepts, data models, working with transaction data, how the ‘engines’ work and transaction concepts. Only planning is treated here, no consolidation.
I will also fix some notation, since BW and BPC sometimes use the same terminology for different things. This usually causes a lot of confusion.
So let’s just start with the terminology for the very basic BPC objects, namely ‘BPC Environment’ and ‘BPC Model’. In my humble opinion (sorry, my first judgment) these names are not good: ‘environment’ and ‘model’ are too generic and may cause misunderstandings. The old names
- BPC Environment <-> Application Set
- BPC Model <-> Application
are much better, since a ‘BPC Model’ represents a planning application and a ‘BPC Environment’ is a grouping of planning applications that share planning dimensions and are linked to each other. The links may be represented by BPFs, for example, since the applications exchange planning data. Talking about BPC objects I will always use the full term, e.g. ‘BPC Model’. But talking about general concepts like in ‘data model’ I will use just ‘model’ (without ‘BPC’).
A glossary contains some definitions of the terminology used.
2. Why ‘BPC embedded’?
‘BPC standard’ is a planning (ok, also reporting and consolidation) solution based on BW technology, mainly designed to be used by LOB. The technical BW objects needed (e.g. InfoObjects, InfoCubes) are generated and controlled by BPC and not directly exposed in BPC. BPC introduces BPC-specific concepts different from BW concepts. Thus in ‘BPC standard’ one has to copy all master data, hierarchies and transaction data from BW to BPC and to align the copied data with the BPC concepts. In this sense ‘BPC standard’ is a data mart solution. To support this, ‘BPC standard’ re-implemented a lot of functionality that already exists in BW (but in the BPC way).
On the other hand there is BW-IP (since BW 7.0). This is a planning solution in BW (by the way, this is where the name Integrated Planning comes from), i.e. BW-IP directly uses BW objects. Only very few new objects were added to BW to support Integrated Planning (e.g. the InfoProvider ‘aggregation level’ and the planning functions). BW-IP objects are thus ‘native’ BW objects. BW has a WHM and reporting part and thus the requirements were driven by IT scenarios (cf. central data models, single point of truth …). This also applies to BW-IP. As a result, BW-IP is completely generic in nature, whereas ‘BPC standard’ concepts and data models come more from the financial planning (and – of course – consolidation) area.
There are two solutions, optimized for either LOB or IT use cases.
One might take issue with the ‘either/or’ and the ‘price’ one has to pay for it.
- In a data mart solution one has to copy over and map/adjust master data/hierarchies and transaction data, usually from existing high-quality WHM data models. This forum is full of examples of how this can be achieved. This takes some effort (not only once) and is by nature an ETL kind of problem. This is not an easy task for people from LOB. And all this has to be done just to get the flexibility (in the sense of empowerment, not product features) that allows LOB people to model and drive the planning process.
- One can also just change the point of view: in IT scenarios LOB people need adjustments in planning processes. These changes lead to changes in the data model, etc. Getting these changes from IT may take too long.
With respect to the required feature set there is no difference between IT and LOB scenarios. In both cases the full range of scenarios exists, from simple data entry to complex process-driven planning applications working on mass data. Complex requirements lead to some complexity in the data model and usually also have an impact on the way data and planning processes are handled in the application. One usually needs experts to design and maintain these data models and thus the foundation of planning applications. This inherent complexity can be hidden to some extent by making design decisions based on the required feature set. So for the end user the application may still be easy to use. But in the end there exists a ‘conservation law of complexity’. This means that there is no such thing as a ‘one size fits all’ solution. But this does not mean that LOB cannot be empowered to adjust/extend and control planning processes (in fact, that is part of their job, too). How is this possible?
The ‘BPC embedded’ environment type was invented to address this kind of requirement with an ‘extension concept’, i.e. enhancing BW objects such that LOB-owned extensions of BW objects are possible. The guiding principle is: don’t copy the data! This means, first of all, use the BW objects and features. Thus one needs BW-IP/PAK data models and engines. Second, enhance central BW objects with LOB-owned extensions. Here are some examples:
- BW master data can be extended by LOB-owned master data (new values, new attributes; in LOB-owned InfoProviders the LOB values ‘win’)
- Same for BW hierarchies
- LOB-owned InfoProviders
- Transaction data authorizations are granted by IT centrally (a sandbox, so to say) and within this IT-controlled range LOB-controlled restrictions are possible
- … and many more BW objects hopefully will support such a LOB-extension concept in the future …
Four of the above-mentioned features already exist in BPC 10.1. In addition, long-established BPC concepts will be reused in ‘BPC embedded’ where this fits conceptually; take work status and BPFs as examples.
To summarize, ‘BPC embedded’ is designed to reuse BW data models as the foundation in a new type of ‘BPC Environment’. These data models can be enhanced by extensions of BW objects owned by LOB. Technically these enhancements physically belong to ‘BPC environments’ or ‘BPC models’. This concept allows using BPC in a range from pure IT scenarios to pure LOB-owned scenarios and anything in between.
But now it’s time to dig deeper and compare data models in ‘BPC standard’ and ‘BPC embedded’.
3. Data Models
3.1 Visibility
Data models usually have a ‘visibility’. In a BW system all InfoObjects and InfoProviders have global visibility. Please don’t confuse this with authorizations. By ‘visibility’ I mean the following: in BW there exists no ‘container’ other than the BW system itself to which the BW objects physically belong.
In ‘BPC standard’ there exist two physical containers as mentioned in the introduction: the ‘BPC Environment’ to group planning applications and another container for the objects specific to a planning application (e.g. the generated BW InfoCube, the work status table). The latter is the ‘BPC Model’ (i.e. the planning application).
These containers are physical containers in the sense that an object X in ‘BPC Environment’ E1 is ‘compounded’ to E1. It does not make sense to talk about the object X in a different ‘BPC Environment’ E2.
In ‘BPC embedded’ these two containers still exist, but BW objects are only assigned to a ‘BPC Model’ (and thus implicitly also to a ‘BPC Environment’). There is no physical dependency. This is to be expected, since one wants to reuse the BW data model as a foundation (no copy!). By the way, this assignment is also the technical way to make existing BW objects sensitive to BPC features like work status. The LOB-controlled extensions mentioned in section 2, however, belong physically to one of the BPC containers.
3.2 Basic Objects
In planning (and reporting) one is interested in values (not only numbers) depending on a modeled set of dimensions. Take ‘Revenue’ by ‘Fiscal Year’, ‘Version’, ‘Product’ and ‘Customer’ or ‘Classification A, B or C’ by ‘Fiscal Year’, ‘Version’ and ‘Customer’ based on ‘Revenue’.
In ‘BPC standard’ the objects representing the values are called ‘measures’, the dimensions are called dimensions. BW uses the notions key figures and characteristics, respectively. In clients (Admin-Client, EPM Add-In) ‘BPC embedded’ objects use the ‘BPC standard’ terminology for UI consistency reasons.
There are four special dimensions (in fact, dimension types) in ‘BPC standard’, following the CATE model:
- Category (C) to model dimensions for versions
- Account (A); this special dimension represents the measures. There is only one technical measure (key figure) in the BW layer for the numeric values.
- Time (T) to model the time dimension
- Entity (E), motivated by modeling parts of companies for consolidation
These dimensions and their attributes are mainly used to implement business logic needed e.g. for a financial planning application or consolidation. The usage of the account dimension to represent the measures is usually called the ‘account model’. The other dimensions in ‘BPC standard’ are purely generic in nature.
There is no concept of compounded dimensions (characteristics). Hierarchies contain only dimension members of the hierarchy base characteristic. This corresponds to BW hierarchies with ‘postable’ nodes.
In ‘BPC embedded’ BW data models are used, so all dimensions (characteristics) are generic in nature except the 13 time dimensions and the unit/currency dimensions. Time dimensions know about the calendar and also about fiscal years. These concepts come from the Business Suite (cf. fiscal year variant).
Measures (key figures) are of type amount (referencing a currency characteristic), quantity (referencing a unit characteristic), number, date or time.
There is no special characteristic like the ‘account dimension’, so BW data models may be based on a key figure model, an account model or a mixture of both. This depends on how characteristics are used and interpreted in the data model and in BW queries.
Compounded characteristics are supported. Hierarchies can contain members of the hierarchy base characteristic, foreign characteristics values and text nodes.
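The difference between an account model and a key figure model can be sketched with plain records (a minimal Python sketch, not BW or BPC code; all field and member names are invented for illustration):

```python
# Account model: one generic value field; the 'account' member says what
# the number means (this is how 'BPC standard' stores data, with a single
# technical key figure).
account_model = [
    {"year": 2014, "version": "PLAN", "account": "REVENUE", "value": 1000.0},
    {"year": 2014, "version": "PLAN", "account": "COST",    "value": 700.0},
]

# Key figure model: one key figure per measure, fewer records.
key_figure_model = [
    {"year": 2014, "version": "PLAN", "revenue": 1000.0, "cost": 700.0},
]

def profit_account_model(records):
    """Profit must be derived by interpreting the account members."""
    vals = {r["account"]: r["value"] for r in records}
    return vals["REVENUE"] - vals["COST"]

def profit_key_figure_model(records):
    """Profit is a simple formula over key figures."""
    return sum(r["revenue"] - r["cost"] for r in records)

print(profit_account_model(account_model))       # 300.0
print(profit_key_figure_model(key_figure_model)) # 300.0
```

In the account model a derived value such as ‘profit’ has to interpret account members (and, in ‘BPC standard’, account attributes like the sign); in the key figure model it is a plain computation over key figures.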
|Concept||BPC standard||BPC embedded|
|Representation of dimensions||Dimensions, CATE model||Characteristics, generic in nature (except time and unit characteristics)|
|Representation of values||Measures, represented by the account dimension; only one technical key figure in the BPC-generated BW InfoCube||Key figures, of various types|
|Hierarchies||Parent-child relations, correspond to BW hierarchies with postable nodes||BW hierarchies with different node types; in addition display hierarchies in BW queries|
The most important and fundamental difference is that in ‘BPC standard’ one can have ‘base members’ in a dimension and also calculated members, e.g. account C = account A + account B. This is never possible in ‘BPC embedded’: by definition every dimension member is a base member, and calculated dimension members do not exist. This comes from the design of the reporting engines; in ‘BPC standard’ this is an MDX engine and in ‘BPC embedded’ it is the BW Analytic Engine based on BW queries. In ‘BPC embedded’ this kind of computation can be modeled as part of the BW query using the BW Query Designer.
3.3 InfoProviders
InfoProviders are BW objects with persistence or compositions of BW objects with persistence. InfoProviders can also be virtual, so the real persistence may be ‘outside’ BW. The most prominent InfoProviders are InfoCubes, DataStore-Objects and Multiproviders.
The ‘BPC standard’ environment type contains ‘BPC Models’, and per ‘BPC Model’ one BW InfoCube is generated. In BPC this BW InfoCube is hidden, i.e. not exposed directly. From a BW point of view this InfoCube has no time and unit ‘BW dimensions’ (do not confuse this with dimension = characteristic). The generated InfoCube has only one (technical) key figure of type number.
Internally BPC also uses a generated Multiprovider that contains just the generated BW InfoCube, to decouple from other objects in case the BW InfoCube has to be regenerated. This Multiprovider is also not exposed in BPC. So per ‘BPC Model’ there is only one persistence object and no composition concept.
Write-back technically writes delta records into the InfoCube, always to the lowest layer. The concept of ‘unassigned value’ is not used.
The ‘BPC embedded’ environment type also contains ‘BPC Models’. Any number of basic BW InfoProviders (supporting write-back) can be assigned to a ‘BPC Model’. Local InfoProviders physically belong to the ‘BPC Model’. Multiproviders are automatically assigned to a ‘BPC Model’ if at least one of the part providers is assigned to the ‘BPC Model’. This is also true for aggregation levels.
Supported basic BW InfoProviders with write-back (‘planning providers’ for short) are: real-time InfoCubes, DataStore-Objects (direct update, planning), local BW providers and virtual InfoProviders implementing a write interface.
Write-back technically writes delta records into the InfoCube (local InfoProvider, virtual provider) and ‘after-image’ records into the DataStore-Object. In any case there is an aggregation level involved, and only fields (characteristics, key figures) from the aggregation level are filled. The other fields get the ‘unassigned’ value (characteristics) or the neutral element (key figures) with respect to the standard aggregations SUM, MIN, MAX. Using the BW-IP concept of characteristic relationships, characteristics outside the aggregation level can be derived from characteristics in the aggregation level.
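The two write-back styles can be illustrated with a small sketch (plain Python, not the actual BW implementation; the keys and values are invented):

```python
# Hypothetical sketch of the two write-back styles: delta records
# (InfoCube-like) versus after-image records (DataStore-Object-like).
# A key is a tuple of characteristic values; the value is a key figure.

def to_delta_records(old, new):
    """InfoCube-style write-back: store only the differences, so that
    SUM over all stored records yields the current value."""
    return {k: new[k] - old.get(k, 0.0)
            for k in new
            if new[k] != old.get(k, 0.0)}

def to_after_image(new):
    """DataStore-Object-style write-back: store the full new value per
    key (the 'after image' replaces the old record)."""
    return dict(new)

old = {("2014", "PRODUCT_A"): 100.0}
new = {("2014", "PRODUCT_A"): 150.0, ("2014", "PRODUCT_B"): 40.0}

print(to_delta_records(old, new))
# {('2014', 'PRODUCT_A'): 50.0, ('2014', 'PRODUCT_B'): 40.0}
print(to_after_image(new))
```

In the real system the record keys also contain all characteristics of the InfoProvider; those outside the aggregation level carry the ‘unassigned’ value unless derived via characteristic relationships.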
So the main difference is the usage of aggregation levels together with the concept of the ‘unassigned’ value. In addition, ‘BPC standard’ uses the InfoCube as pure storage: no built-in BW concepts are used, e.g. ‘time consistency’ with respect to BW time characteristics or currency/unit characteristics, since these BW InfoObjects simply don’t exist in BPC-generated InfoCubes. Time and currency ‘logic’ in BPC differs from the BW built-in concepts. By contrast, ‘BPC embedded’ uses built-in BW concepts and supports more BW InfoProviders.
3.4 Constraints
Reporting depends on consistent data, and this consistency is already ensured by the applications delivering the data or by ETL processes. Planning should not be bound to static data models (e.g. from the actuals). This is why one can adjust and extend data models and even build new data models for planning. When planning data is created and changed, the system has to ensure data consistency. To support these kinds of checks, planning solutions provide features to model ‘constraints’.
There exist (at least) two kinds of ‘constraints’:
- Constraints for persistent data, e.g. the dimension members used have to exist, combinations of dimension members have to be valid, values for some measures have to be in some range (e.g. balance is 0 for some accounts).
- Constraints of more temporary nature; in most cases this is a data protection concept. Typical examples: in a rolling plan one should not be able to change data in previous planning periods; a planning version is closed, no changes are allowed any more.
Often there exist also client features to model ‘constraints’ e.g. to protect data cells. This section is only about server side constraints.
In ‘BPC standard’, for constraints of type 1 one can use ‘BPC Rules’; this is in fact a concept to validate data records based on some dimensions and measures (allowed values and/or ranges of values).
For constraints of type 2 one can use ‘BPC Work Status’ to protect data from being changed, especially to control the planning process. To protect periods in rolling plans and to lock (protect) a version are typical examples.
In ‘BPC embedded’, for constraints of type 1 one can use compounded characteristics, e.g. to model valid combinations of characteristic values. Example: Fiscal Year Variant and Fiscal Year; another may be Country and City. Compounding cannot be changed easily, so one can instead use characteristic relationships to model admissible combinations of characteristic values. In a retail assortment planning application one may create a relation for the characteristics Product and Assortment. One can also derive characteristic values from other characteristic values.
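Both uses of characteristic relationships, derivation and validity checks, can be sketched as follows (an illustrative Python sketch, not BW code; the mapping table and field names are invented):

```python
# A characteristic relationship as a simple mapping: City determines
# Country. The same relation serves two purposes: deriving a value and
# checking whether a posted combination is admissible.

CITY_TO_COUNTRY = {"Berlin": "DE", "Paris": "FR", "Boston": "US"}

def derive_country(record):
    """Derivation: fill a characteristic outside the aggregation level
    from a characteristic inside it."""
    record["country"] = CITY_TO_COUNTRY[record["city"]]
    return record

def is_valid_combination(record):
    """Check: only admissible (city, country) combinations may be posted."""
    return CITY_TO_COUNTRY.get(record["city"]) == record["country"]

rec = derive_country({"city": "Paris", "amount": 10.0})
print(rec["country"])                                             # FR
print(is_valid_combination({"city": "Berlin", "country": "FR"}))  # False
```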
For constraints of type 2 one can use data slices. Technically this is a filter that describes data regions that cannot be changed. ‘BPC Work Status’ can also be used; it is mapped to technical data slices at run time.
|Concept||BPC standard||BPC embedded|
|Constraints for persistence||BPC Rules (known as ‘validations’ in other products); validations can also be implemented using Script Logic (methods CHECK, DERIVE, CREATE)||Compounded characteristics and characteristic relationships; validations can also be implemented using FOX or exit planning functions|
|Data protection||BPC Work Status||Data slices; BPC Work Status (mapped to technical data slices at run time)|
4. Engines
Reporting and planning engines depend on design decisions made in the data model they are based on. So the design and the way the engines work can be quite different. Traditionally, in planning solutions the reporting and planning engines are also quite different, since reporting comes first and planning is added later. Technically, planning is often more complex, since one needs reasonable transaction handling and one has to build ‘inversions’ of the reporting concepts (e.g. reporting uses aggregation, planning uses disaggregation). In any case, reporting and planning engines have to support many types of calculations; here is where different kinds of data models lead to very different engines.
Take MDX as the first example. MDX is a query language to support multidimensional data access and computations (reporting). It does not work with just tables and views (the query language to be used there is SQL) but uses ‘cubes’ which contain ‘dimensions’ and ‘measures’; there are special dimensions like a ‘time’ dimension, and dimensions can have parent-child hierarchies. How to handle all these objects is built into the very heart of MDX. That one is able to compute within a given dimension is also a design decision of MDX, and the MDX engine is able to handle the consequences of such a design decision (cf. ‘solve order’ when calculated members are used in rows and columns).
MDX has a focus on reporting; planning features don’t really exist, with some exceptions, e.g. disaggregation. One also cannot find a real concept for transaction handling there. The calculations are mainly within the result set of an MDX statement. Thus calculations are modeled based on a grid-like model (rows, columns, and data cells). But in planning one also has to copy, transform and calculate ‘flat’ data records. Using a grid-like format is not optimal in every case.
- Reporting and planning engines are based on design decisions of the data model; different models often lead to different engines
- Calculations are needed both for flat data records and – of course – also to retrieve ‘data cells’ in a grid like format (e.g. MDX result set)
- To support mass data handling (ETL-like but also based on ‘business logic’) one also needs built-in algorithms and/or a script language
- Transaction handling like concurrency control is needed in planning applications
Historically, the main concepts in ‘BPC standard’ come from the MDX world. The SQE (shared query engine) decides whether the data needed in a report is simple to read (only flat records to be read and aggregated, no calculations) or whether, besides aggregation, hierarchies and calculations are needed. In the first case the SQE can use a simple aggregation engine; in the second case the SQE will use an MDX statement to read and calculate data.
For ETL-like jobs ‘BPC standard’ uses the ‘BPC Data Manager’. Here one can also use batch jobs; typical are data load scenarios, preparations of planning data or adjustments after manual planning that are based on algorithms representing ‘business logic’. The BPC-generated real-time InfoCube does not have to be switched to load mode to use the ‘BPC Data Manager’. BPC also has a script language to implement customer-specific business logic.
In ‘BPC standard’ calculations are modeled in dimensions, so ‘formulas’ are dimension members. In this sense ‘formulas’ are defined on the server. Of course calculations also exist on the client level, but we are talking here only about concepts used on the server. Calculations can also be used in the script language; MDX snippets are supported there as well. In addition, ‘BPC standard’ has a lot of built-in logic coming from the data model. Take master data attributes as an example, and here the sign handling together with the usage of hierarchies and the account type (cf. AST, LEQ, INC and EXP handling). As a result, the SQE has to take all this information into account to create the ‘correct’ MDX statement to calculate values. Aggregating accounts without interpreting the above information will lead to a result containing only ‘garbage’.
The persistence data model in ‘BPC standard’ is simple: the BPC-generated BW InfoCubes have one technical key figure. In simple reports or input forms there is thus a one-to-one correspondence between a record in the fact table and a data cell in the result set of a report (or input form).
The BW Analytic Engine is based on the definition of a BW query. The BW query definition can be considered a ‘template’ of all ‘reasonable’ MDX statements one can use for the ‘query cube’ (cf. glossary). The BW query also contains the definition of a default query view, i.e. how to use characteristics: the free characteristics (corresponding to the ‘page axis’ in MDX), which characteristics are drilled down on rows and columns, and where BW hierarchies are used. This is quite similar to the corresponding parts of an MDX statement. On the other hand, this analogy might be misleading, since the BW query is not a concrete ‘query’ (like an SQL query or an MDX query).
The BW Analytic Engine prepares and optimizes the needed data read requests and delegates the data access to the BW Data Manager (not to be confused with the BPC Data Manager; the latter is something different). The BW Data Manager then creates SQL queries to read and aggregate data.
Now we come to the fundamental difference between MDX and the BW Analytic Engine, namely how calculations are modeled. Calculations are defined and modeled in the BW query. The BW query definition contains so-called ‘structures’. There is support for at most two structures, one ‘key figure structure’ and an optional additional structure. One can consider a structure an additional dimension where one can define ‘restrictions’ and calculations. Thus restricted or calculated members in MDX or ‘BPC standard’ have to be modeled as restricted or calculated structure elements in a BW query. Complex ‘exception’ aggregation types are also modeled in the BW query, e.g. where the order of aggregation and formula computation matters (take counting and the special handling of time aggregation as examples). Special computations based on the ‘sign’ of an account can (and have to) be handled with formulas in BW queries. This is also true for YTD reporting. BW queries are needed for reports and input-forms.
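A small numeric example shows why the order of aggregation and formula computation matters (generic Python, not BW code; the records are invented):

```python
# Average price = revenue / quantity, computed in two different orders.

records = [
    {"revenue": 100.0, "quantity": 2.0},
    {"revenue": 300.0, "quantity": 3.0},
]

# Aggregate first, then apply the formula:
total_rev = sum(r["revenue"] for r in records)
total_qty = sum(r["quantity"] for r in records)
print(total_rev / total_qty)   # 80.0

# Apply the formula per record, then aggregate (formula before aggregation):
print(sum(r["revenue"] / r["quantity"] for r in records))  # 150.0
```

Both orders are legitimate depending on the business question, which is why the BW query lets one model this explicitly (exception aggregation, formula calculation before aggregation).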
Now to the algorithms based on flat data records. ‘BPC embedded’ uses the concept of planning function types: the type represents the abstract algorithm, e.g. copy, revaluation, disaggregation, FOX (the BW-IP script language). Since algorithms have parameters, the concrete executable instances of a planning function type are called planning functions. Here the parameters (e.g. copy from, copy to) are specified (BW variables are supported) and also the filter to specify the data to be changed or the reference data (read-only). A planning function is always assigned to an aggregation level. The aggregation level defines the set of characteristics and key figures that are to be changed.
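A ‘copy’ planning function can be sketched over flat records as follows (an illustrative Python sketch; the parameter and field names are invented and do not correspond to real planning function parameters):

```python
# Sketch of a 'copy' planning function: the filter selects the records
# in scope; 'copy from' data is reference data (read-only), new records
# are created in the 'copy to' version.

def copy_version(records, copy_from, copy_to, filter_year):
    """Create records in version copy_to from version copy_from,
    restricted to the given year."""
    new_records = []
    for r in records:
        if r["version"] == copy_from and r["year"] == filter_year:
            target = dict(r)          # keep all other fields
            target["version"] = copy_to
            new_records.append(target)
    return records + new_records

data = [{"year": 2014, "version": "ACTUAL", "revenue": 500.0}]
result = copy_version(data, copy_from="ACTUAL", copy_to="PLAN", filter_year=2014)
print(len(result))  # 2
```

In the real system such a function runs on an aggregation level, so only the characteristics and key figures of that level are visible to the algorithm.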
The aggregation level is the concept that makes it possible to ‘glue’ BW queries and planning functions together and to have a well-defined ‘level’ where data will be changed. This is especially important when BW queries and planning functions are used together in Excel workbooks or web applications. The aggregation level gives an additional structure to the multidimensional data model that allows controlling the data exchange (BW queries with planning functions, in both directions).
Data exchange between BW queries and planning functions is also organized with some kind of planning buffers to support planning and simulation without the need to save the data on the DB. Cf. the next section for more details.
|Concept||BPC standard||BPC embedded|
|Measures||Measures, members of the account dimension||BW structure element in a BW query, restricted key figure|
|Restrict dimension||Restricted measure, dimension||BW structure element in a BW query, structure element of type selection|
|Formulas||Calculated measures, members||BW structure element in a BW query, structure element of type formula|
|Hierarchies in reports||Technically a special hierarchy controlled by BPC||BW hierarchy or display hierarchy, the latter to display sub-totals in rows, columns in a hierarchical way|
|Built-in logic||Built-in account logic, e.g. sign, AST, LEQ, INC and EXP; built-in time logic, e.g. PERIODIC, QTD and YTD||No built-in account or time logic; has to be modeled with formulas and restricted key figures|
|ETL||BPC Data Manager, works also in ‘planning mode’||Real-time InfoCube in load mode: WHM; real-time InfoCube in planning mode: planning functions|
|Business logic||Delivered BPC Script Logic or built-in functionality||Delivered planning function types or built-in functionality like disaggregation and inverse formulas|
|Script language||BPC Script Logic||FOX, a planning function type|
|Write-back||On the lowest level of the BPC-generated BW InfoCube||On the level defined by the aggregation level, for all InfoProviders supported in BW-IP/PAK|
5. Transaction Concept
Using a client-server architecture one has to decide whether to use a stateless or a stateful programming model. Examples of stateless applications are most web applications; stateful applications are useful when on-the-fly simulations are needed without storing intermediate results on the DB. The transaction handling and enqueue concept also depend on the design decision for a stateless or stateful application.
In a stateless planning application one usually retrieves either only the data needed for one ‘interaction step’ or, in the other extreme, all data one might need in the application. Read data will usually be buffered on the client (the latter is the ‘fat client’ case). But the essence here is that the information available on the client is sufficient to write changed data back to the DB, based only on data known in the client and persistent data available on the server. In other words, no kind of buffered data in the user session (on the server) is required.
Stateful planning applications use ‘state’ in the user session (on the server), i.e. in server roundtrips information will be read, buffered and maybe changed. The changes are done only in the user session, not on the DB. These are simulation-like scenarios, but performance may also benefit from such a design, since resources can be acquired only once in the user session.
Stateless versus stateful applications usually also differ with respect to concurrency control. In any case one has to have a transaction data lock concept (cf. SAP enqueue concept) to avoid inconsistent data when two different users change the same data region at the same time.
‘BPC standard’ uses a stateless programming model. The data records read are not protected by SAP enqueues, i.e. in two user sessions the same data records can be changed. Only the time interval needed to process ‘Submit’ (i.e. save on DB) is protected by SAP enqueues (using the BW planning enqueue server). The time interval is defined by the time to compute the delta records to be saved plus the time to write the data records into the InfoCube. The implemented logic is: the last one wins. Simulations are possible, but one has to save ‘intermediate’ data on the DB (‘submit’ and ‘refresh’).
With respect to changed data the system filters out inconsistent records; consistent records can be saved, inconsistent records cannot be saved on DB.
‘BPC embedded’ uses a stateful programming model. From the context of the planning objects the system always knows the data regions to be used in change mode or in display mode.
- Data region in change mode: defined by the (static) filter of a BW query and input-ready structure elements; in a planning function the filter used to run the planning function.
- Data region in read mode: defined by the (static) filter of a BW query and the display-only structure elements; in a planning function this is the filter used to run the planning function merged with the reference data. Reference data come from planning function parameters, take ‘copy from’ as an example.
Data in change mode is protected by exclusive locks; reference data in a planning function is protected by shared locks (cf. SAP enqueue concept). The locked data region is described by filters (technically ‘selection tables’), i.e. ‘BPC embedded’ does not use DB-like locks on DB records but a description of the data regions to be locked. This mechanism protects existing records as well as records to be created in the planning session.
This concept allows buffering already read data and changed data in the user session. ‘BPC embedded’ uses the following ‘buffers’:
- Planning buffer: buffers green requests of an InfoCube, technically the buffer is the OLAP cache; this is a buffer for persistent data
- Delta buffer: buffers changed records in form of ‘delta records’ for all basic InfoProviders supporting a ‘delta handling’, e.g. InfoCubes
- After-Image buffer: buffers changed records logically in the form of after-image records for planning enabled DataStore-Objects
As a result, in ‘BPC embedded’ the clients provide two buttons to process changed data: ‘Transfer’ and ‘Submit’:
- ‘Transfer’: Check the changed data and run the algorithm to process changed data; write changes to the delta or after-image buffers. All BW queries (input-ready or not) or planning functions in the same user session will display/use the most-recent session data automatically. One can also return to the last saved data on the DB via ‘Reset’.
- ‘Submit’, i.e. save data to the DB. Technically the delta records will be taken from the delta or after-image buffer and saved on the DB. Locks will be released and set anew for the BW queries still used in change mode in the planning application.
Assume user U1 has acquired an exclusive lock for data region F1 (filter). A user U2 working on an overlapping data region F2 protected by exclusive locks (data in change mode) will get a lock conflict. So the first user U1 acquires the resource and can change/create data in the data region defined by the filter F1. The second user U2 cannot change data for any filter F overlapping with filter F1: a query will be switched to display mode, a planning function will send an error message.
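The overlap check behind this scenario can be sketched as follows (a simplified Python sketch of filter overlap; real selection tables are more general, e.g. they also support intervals):

```python
# A lock describes a data region as a filter: characteristic -> set of
# allowed values; a missing characteristic means 'unrestricted'. Two
# exclusive locks conflict if their regions overlap in every
# characteristic they both restrict.

def regions_overlap(filter1, filter2):
    for char in set(filter1) & set(filter2):
        if not (filter1[char] & filter2[char]):
            return False   # disjoint in one characteristic -> no overlap
    return True            # overlapping in all shared characteristics

f1 = {"version": {"PLAN"}, "year": {2014}}   # locked by user U1
f2 = {"version": {"PLAN"}, "year": {2015}}
f3 = {"version": {"PLAN"}}                   # year unrestricted

print(regions_overlap(f1, f2))  # False: disjoint years, no conflict
print(regions_overlap(f1, f3))  # True: user U2 would get a lock conflict
```

Note that this protects data regions, not individual DB records, which is why records that do not yet exist are also protected.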
With respect to writing to the delta or after-image buffer the design is an ‘everything or nothing’ concept. This is also true for ‘save to DB’. What is in the delta or after-image buffer is considered ‘consistent’. Exception: a planning function that runs before ‘save to DB’.
Don’t confuse ‘locked’ data with ‘BPC work status’ or ‘locked cells’. ‘Lock’ as used above means transaction data locks (cf. SAP enqueue concept), i.e. the enqueue concept of the BW planning enqueue server.
|Concept||BPC standard||BPC embedded|
|Transaction data locks||Save: yes, exclusive in the time interval delta computation + save on DB||Change mode: exclusive; read mode: shared; save: no locks necessary; release locks, re-lock still required resources|
|Locked data region||Only at ‘save on DB’, extracted from changed records, i.e. a computed ‘selection table’ that contains all changed records. Effect: last one wins||Based on filter and reference data (planning function) and on filter and BW query definition (input-ready property of structure elements in a query). Effect: only one user can change data in the specified data region|
|Lock server||Enqueue server: BW planning||Enqueue server: BW planning|
|Transaction data buffers||No||Planning buffer: green requests (InfoCube); delta buffer: user session delta records; after-image buffer: user session after-image records|
|Simulation||Via ‘save on DB’||Session data will be taken into account in planning functions and BW queries, no ‘save on DB’ needed; ‘reset to last saved state’ possible|
|Consistency||‘Commit’ on DB for consistent data records, inconsistent records will not be saved||Write into the delta or after-image buffer using the ‘everything or nothing’ principle, i.e. all changed/new records have to be consistent to be written into the buffer. Data in the delta or after-image buffer is considered to be consistent and can be saved on DB. Exception: planning function before ‘save on DB’|
Information about ‘BPC standard’ and ‘BPC embedded’ in general:
- SAP Business Planning and Consolidation 10.1 NW FAQs

A small example of how to use inverse formulas:
- BW730: Input-ready formulas in BW Integrated Planning

Information about the SAP Enqueue Server:
- Relationship Between SAP Locks and Database Locks – SAP Lock Concept – SAP Library
- BW Enqueue Server (used in BW-IP and BPC), general information and sizing
Terminology set in bold is explained in this glossary.
- BPC embedded: The ‘BPC environment’ type used when BW-IP/PAK objects are exposed in BPC
- BPC standard: The ‘BPC environment’ type used when no BW objects are exposed directly; all BW objects are generated and controlled by BPC
- BPF: Business Process Flow; a concept to create task lists and assign tasks to users; allows grouping of planning tasks
- BW: Business Warehouse
- BW Analytic Engine: The analytic engine reads, aggregates and computes data based on BW queries
- BW Aggregation Level: A subset of characteristics and key figures of the underlying InfoProvider; is a concept to add structure to an InfoProvider. Is used in BW-IP to define the fields to be changed in the underlying InfoProvider.
- BW Dimension: A set of characteristics in an InfoCube, usually semantically related; part of the InfoCube definition
- BW Enqueue Server: Server infrastructure used to implement an enqueue concept for transaction data records based on data regions defined on selection tables
- BW Hierarchy: Structures characteristic values in a hierarchical way (parent, child). Defined with respect to a characteristic, the base characteristic. Inner hierarchy nodes (nodes with children) that are master data values of the base characteristic are called postable nodes
- BW-IP: BW Integrated Planning
- BW-IP Data Slice: A filter that defines a protected data region; records in this data region cannot be changed
- BW Multiprovider: A union of BW basis InfoProviders
- BW query: A definition to read, aggregate data from InfoProviders and do computations with data; has also a visual part, i.e. design of the initial result set (rows, columns). A rough analogy is to consider a BW query as a ‘template’ for all ‘reasonable’ MDX statements one might create to get result sets from a query cube. BW queries are the basis of the BW Analytic Engine
- CATE: Category, Account, Time, Entity
- Compounded characteristics: Characteristics may depend on other characteristics, so that only the combination has a meaning, e.g. fiscal year depends on fiscal year variant
- Disaggregation: Also called top-down distribution, i.e. taking aggregated values and distributing them down to lower levels using weight factors
- Fiscal Year Variant: Defines properties of fiscal years, e.g. the mapping of posting periods to intervals of calendar days, number of posting periods, year shifts of posting periods, cf. transaction GVAR
- ETL: Extract Transform Load
- FOX: BW-IP/PAK script language, technically a planning function type. FOX comes from formula extension.
- Green request: A real-time InfoCube can have green and yellow requests; green requests come from loaded data, e.g. using WHM, or from closed yellow requests. To avoid too many requests, real-time InfoCubes have a yellow request which is closed after some ‘water mark’ of delta records contained in the request. Green requests can be contained in the planning buffer; yellow requests are never (and cannot be) buffered in the planning buffer.
- InfoCube: A BW InfoProvider with persistence. Uses an ‘insert-only’ design and is optimized for handling delta records.
- InfoCube, real-time: A BW InfoCube, optimized for write-back. In load-mode one can use WHM to load data, in planning mode one can use BW-IP/PAK planning functions and input-ready queries to write data into the InfoCube.
- InfoObject: Basic BW object, namely characteristics and key figures
- InfoProvider: BW abstraction of a BW data persistence or a composition of data persistence
- Inverse Formulas: Concept used in input-ready BW queries to make formulas input-ready and to calculate back to operands of the formula
- LOB: Line of Business
- MDX: Multidimensional Expressions, a query language designed for multidimensional data access and computations
- OLAP Cache: A cache containing persistent transaction data based on a BW query definition and a data read request; contains usually aggregated data in an optimized form needed by the BW Analytic Engine (also sometimes called BW OLAP)
- PAK: Planning Application Kit, i.e. the technology used to run BW-IP algorithms directly on SAP HANA
- SAP Enqueue Server: Server infrastructure to implement the SAP enqueue concept. Supports mainly ‘record based’ enqueues. The SAP enqueue server is used by BW enqueue server to implement enqueue based on data regions, i.e. selection tables
- Script Logic: BPC script language used in ‘BPC standard’
- Stateful: ‘Server state’ is kept between client server roundtrips, e.g. meta data buffer, transaction data and enqueues
- Stateless: No ‘server state’ is kept between client server roundtrips
- Structure element: This is an element in a structure of a BW query; can be of type selection or formula
- Query Cube: ‘Virtual cube’ defined by the (static) filter of a BW query. This is a run-time object.
- Unassigned value: In BW all characteristics support the unassigned value (external representation ‘#’); the unassigned value is always a valid master data value; this value represents the ‘rest’ value; this design is very useful to work with ‘delta records’
- User session, also often called planning session: Server concept that represents ‘server state’ assigned to a user. On the ABAP server this is the so called ‘internal session’, cf. ABAP Keyword Documentation (ABAP-Overview->ABAP Memory Organization – Overview -> General Memory Organization).
- WHM: Warehouse Management
- YTD: Year to Date
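As a small illustration of the ‘Disaggregation’ glossary entry, top-down distribution with weight factors can be sketched like this. The handling of rounding differences (assigning the remainder to the last member) is a simplifying assumption, not the actual BW-IP implementation:

```python
# Sketch of top-down distribution (disaggregation): an aggregated value is
# distributed to lower-level members in proportion to reference weights
# (e.g. last year's actuals). Assigning the rounding remainder to the last
# member is a simplification for the example.

def disaggregate(total, weights):
    """Distribute `total` over the keys of `weights` proportionally."""
    weight_sum = sum(weights.values())
    result = {}
    distributed = 0
    members = list(weights)
    for member in members[:-1]:
        share = round(total * weights[member] / weight_sum, 2)
        result[member] = share
        distributed += share
    result[members[-1]] = round(total - distributed, 2)  # absorb rounding
    return result

# Distribute a yearly plan value of 1200 to regions by last year's actuals
plan = disaggregate(1200, {"North": 300, "South": 100, "West": 200})
assert plan == {"North": 600.0, "South": 200.0, "West": 400.0}
assert round(sum(plan.values()), 2) == 1200
```

The key property is that the distributed values always add up to the aggregated input value again.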
This is a great document. Very comprehensive.
Best article I've read on the topic yet, thanks! Question I have, and I'm not trying to get you to pick sides: if I'm a BPC Standard customer and want to take advantage of some of the key EDW/BW integration benefits you touched on, what would I have to leave behind by migrating to BPC Embedded? I'm not referring to things done in a different way, for instance FOX planning functions vs. Script Logic. I mean key features I won't be able to implement. Consolidation is the big one, and we understand it's coming soon. What is your take: what would be the key business features I wouldn't be able to take with me by migrating over?
BPC standard and BW-IP/PAK are used by many customers, thus I expect in most cases one can find 'the key business features' in both products. As a result, this is also true for BPC embedded. On the other hand, these features might be implemented in a different way and I expect that it really depends on the customer implementation of BPC standard how easy a migration to BPC embedded is: there might exist many BADI implementations and/or special ways to use the EPM AddIn (e.g. API calls).
The good message is that one can use both BPC standard and embedded (the latter on HANA only). And if one is happy with BPC standard, there is no need to do a migration.
In my opinion, in case key business features are missing in BPC embedded, these features should be implemented by SAP in a way that these features fit to the engine and to the other existing (maybe related) features. The Idea Place can be used to make SAP aware of such maybe missing features.
thanks for your great article about the different concepts. One question though: if an LOB user adds/changes master data, does this change the data of the BW InfoObject, or are these changes only valid for one environment?
Both are possible:
- if the LOB user has the authorization to maintain BW master data, he/she can change master data, attributes and hierarchies using the Admin Client
- it is also possible to create 'local' master data values; to do this one creates e.g. a local product ZPRODUCT based on 0PRODUCT. ZPRODUCT is used in a local provider, 0PRODUCT e.g. in a 'central' provider. If one does nothing, ZPRODUCT has the master data and attribute values of 0PRODUCT in the context of the local provider. In addition, one can create new master data values or change attribute values of existing master data values based on ZPRODUCT; these changes are only visible in the context of the local provider.
The idea to integrate these concepts is e.g. to use a composite provider with the central provider and the local provider as part providers and the mapping
- central provider: 0PRODUCT -> ZPRODUCT in composite provider
- local provider: ZPRODUCT -> ZPRODUCT in composite provider
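The mapping above can be illustrated with a small conceptual sketch (plain Python, not the actual BW composite provider implementation; the data rows are invented, the field names follow the ZPRODUCT/0PRODUCT example):

```python
# Conceptual sketch of the composite-provider union described above: the
# central provider's 0PRODUCT and the local provider's ZPRODUCT are both
# mapped to a single ZPRODUCT column in the composite provider. Data rows
# are invented for illustration.

central_provider = [  # uses 0PRODUCT
    {"0PRODUCT": "P100", "amount": 500},
]
local_provider = [    # uses ZPRODUCT, may contain local-only values
    {"ZPRODUCT": "P100", "amount": 40},
    {"ZPRODUCT": "P_LOCAL", "amount": 10},  # local master data value
]

def composite_union(central, local):
    rows = []
    for r in central:
        # mapping: central 0PRODUCT -> ZPRODUCT in the composite provider
        rows.append({"ZPRODUCT": r["0PRODUCT"], "amount": r["amount"]})
    for r in local:
        # mapping: local ZPRODUCT -> ZPRODUCT in the composite provider
        rows.append({"ZPRODUCT": r["ZPRODUCT"], "amount": r["amount"]})
    return rows

union = composite_union(central_provider, local_provider)
assert [r["ZPRODUCT"] for r in union] == ["P100", "P100", "P_LOCAL"]
```

The point of the mapping is that central values and local-only values appear side by side under one characteristic in the composite provider.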
This document is about concepts, not about all features and functions, so not every detail feature might yet be available. Hopefully in the next months more detailed information will be available e.g. about the new features shipped with BW 7.50, SP04 in the context of BPC Embedded.
Many thanks for this. By far one of the better articles on this topic. Well done for putting this together. (Can't believe it's taken me so long to stumble upon it.)
This is the greatest article from the architect standpoint. I am pretty sure that the embedded model addresses many pain points of implementing business requirements.
Many customers that I met mentioned that BPC (standard) has problems of 1) granularity, related to coverage of business requirements, and 2) performance, related to modeling and implementation (standard features). You already explained that the embedded model will improve bullet 1) above. But I am not so sure about 2) performance.
Nowadays, in the standard model, one builds a FLAT model, defines relationships using dimension properties, and calculates data with HANA SPs (or AMDPs), i.e. SQL, so the SQE uses only RSDRI, not MDX. This makes reports faster. This means the query works by SQL only, not by MDX, and the SQE decides how it works. The possibility of using SQL covers business requirements better than before. Even if this compensates for the lack of granularity, I agree the embedded model is better from this standpoint.
The question is: what is better between SQL and BW query from the performance standpoint?
Hi YH Seo,
I am a developer and not a marketing guy. Thus I have to say that general statements about performance without any context information are not possible.
BPC Standard on HANA is using HANA MDX and/or the BW read interface RSDRI; both benefit much from HANA. I think this forum also contains some examples where SQL-based techniques were used in BPC Standard on HANA with great performance improvements; but as far as I remember this is an 'I know what I do' approach. In other words, low-level access may be faster, but the price is that if SAP changes something, the project solution (using e.g. low-level data structures) may have to be adjusted. There is also the danger of producing wrong data if BPC logic is bypassed.
This is why mechanisms such as the BPC SQE exist:
In BPC Embedded the BW query, the BW data manager and also planning functions have a similar purpose and thus provide the needed features including - of course - also data consistency. The latter aspect you might lose with free-style low-level SQL access. But BPC Embedded also supports - so to say - guided low-level access, since PAK supports all kinds of HANA-optimized data processing, including e.g. SQL Script based planning function types, characteristic relationships and data slices; FOX on HANA is compiled into L and thus also HANA-optimized. In addition, you have all the BW on HANA features, such as HANA View support in BW InfoProviders and characteristics.
The following link contains a customer example:
Changing Fox formula to an own planning function type based on AMDP running in memory (PAK)
Summing up: based on project requirements, build prototypes and check the performance as early as possible.
Very useful article for looking in detail at BPC standard vs BPC embedded.
I have a question regarding the latest Embedded Consolidation model that has been released: Is it possible to migrate from BPC standard consolidation (10.1) to BPC embedded consolidation using a migration tool, or does it require a re-build (from scratch) of the standard consolidation solution in embedded consolidation?
The consolidation engine for BPC embedded is shared with BPC standard, which means all the consolidation-related features and configuration are very similar, including business rules, journal, control and ownership management.
But due to the data model difference mentioned in this blog, there is no automated migration tool. By using existing BW and NW tools, it should be quite straightforward to migrate all the data and configuration, but all the reports/input forms need to be migrated manually. Nevertheless, it's not a re-build.
In addition, I would like to know in what kind of scenario there is a requirement to migrate BPC standard consolidation to embedded consolidation?
Best regards, William
Thanks for that information.
I just want to check something else here. I have a BPC Embedded instance and the only Business Rules I seem to have access to are Consolidation / Methods and Method Based Multipliers. See the screenshot.
What about the normal (Classic) rules like Account Transformation / Currency Conversion / Adjustments? Or is that only for BPC Classic?
Methods and Multipliers are environment-level rules, while the regular business rules are model-based. Once you have configured a proper consolidation model in your embedded instance and activated these rules on model level, you should be able to see the entries.
And in the current embedded consolidation, we support most of the business rules from BPC standard except US elimination and equity pickup.
Best regards, William
Thanks for the update, William. Much appreciated.
(It looks like I'll have to create my own Ownership and Rates cube, though).
Will give it a try.
Thanks for the information. In the case of reports/input forms, when you say "migrate manually", I assume they would have to be re-built, given that BPC embedded uses BW queries as a basis while BPC standard does not. Please correct me if I am wrong in this understanding.
Our scenario: we have to implement a consolidation solution and have chosen BPC standard for it, since we are currently not on the right BW version to use BPC embedded consolidation. But we were looking at how future-proof this would be. In the sense that, if a year or so down the line we do upgrade our BW system, would it then be easy to migrate the consolidation solution that we build now in BPC standard to BPC embedded?
Yes, the reports/input forms need to be rebuilt as you mentioned.
Regarding migration, migration of BPC consolidation from standard to embedded will be quite similar.
Best regards, William
If one system's components are as below, can we know whether it is Standard or Embedded?
Hi Manohar... since the BPC component is 801, it's BPC 10 NW, and so the system is SAP BPC 10 NW on HANA. Standard version!
Can the Embedded and Standard versions co-exist on the same system? Has anyone done it in a real project, and what are the technical difficulties?
this blog is about planning concepts, not about consolidation and also not about special features. I recommend posting this question in SAP Business Planning and Consolidation, version for SAP
I always wondered what the difference between them is. And this is the most comprehensive blog about BPC and its family that I have seen so far.