Data Transformation and Storage

Feature testing of the DataModel involves three major aspects for data transformation and storage:

  • Master data in InfoObjects;
  • Operational data in DataStores;
  • Reporting data in Cubes.

This does not include other aspects of the BW system like security, process chains, broadcasting, etc.


Initially, it would take a fair amount of effort to plan and create feature tests for the data modeling involved in data transformation and data storage, and several iterations would be needed to cover the basic range of features. This implies that a decent amount of learning and research would need to be done by the developer when working with unfamiliar features.

“the process of identifying and creating Feature Tests
is a great way to learn data modeling in SAP BW”

To help put this into perspective, the high-level view would be to create a list of the objects and their features, and then a matrix of the relationships between the list items:

  • InfoObject:
      • Character validation;
      • Internal and external data format;
      • Time dependent;
      • Text representation;
      • Compound;
      • Full unique key;
      • Unit of measure;
      • Currency;
      • Attribute;
      • Display attributes;
      • Navigational attributes;
      • Transitive attributes;
      • Key figure attributes;
      • Time dependent attributes;
      • Compounded attributes;
      • Hierarchy;
      • Hierarchy node intervals;
      • Hierarchy aggregation;
      • Hierarchy elimination;
      • Exception aggregation;
      • Numerical precision;
      • Reference / shared master data.
  • DataStore:
      • Key and data fields;
      • Active table;
      • Load requests;
      • Delta update;
      • Activation queue;
      • Change log;
      • Append or update existing records;
      • Handle min / max exception aggregation;
      • Secondary indexes;
      • Meta-data or assumed context;
      • Comply with InfoObject definitions.
  • Cube:
      • Load requests append and never update existing records;
      • Delta update into fact table;
      • Request removal into compressed table;
      • InfoObject data surrogates (SID);
      • Organised storage in dimensions;
      • Dimension data surrogates (DIM ID);
      • MultiProvider hint table;
      • Aggregates;
      • Change run for volatile attribute data;
      • Virtual InfoObjects;
      • InfoObject property for constant value;
      • Sum, min, max aggregation;
      • Exception aggregation;
      • Meta-data or assumed context;
      • Comply with InfoObject definitions.
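One lightweight way to hold this list and its relationship matrix is as plain data structures, with every pairwise feature combination treated as a candidate feature test. A minimal Python sketch of the idea (the feature subsets below are taken from the list above; the pairing logic itself is an assumption, not an SAP-provided structure):

```python
from itertools import combinations

# A small slice of the feature catalogue above, keyed by object type.
FEATURES = {
    "InfoObject": ["Character validation", "Internal and external data format",
                   "Time dependent", "Compound", "Unit of measure",
                   "Exception aggregation"],
    "DataStore": ["Key and data fields", "Delta update", "Change log",
                  "Secondary indexes"],
    "Cube": ["Delta update into fact table", "Aggregates",
             "Exception aggregation"],
}

def relationship_matrix(features):
    """All pairwise feature combinations per object type -- each pair is a
    candidate feature test for how the two features interact."""
    return {obj: list(combinations(names, 2))
            for obj, names in features.items()}

matrix = relationship_matrix(FEATURES)
print(len(matrix["InfoObject"]))  # 6 features -> 15 pairwise combinations
```

Even this toy slice makes the combinatorial growth visible: six InfoObject features already yield fifteen pairwise tests, before triples or cross-object combinations are considered.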

There are additional features involved, but now you can see how the planning and creation of feature tests is an ongoing and evolving process. This is great news, as you can focus on features in a specific order according to what you know and what is actually used within the BW system. For example: are you using inter-company elimination hierarchies for group currency reporting? Are you using transitive attributes? Are you using an exception-aggregated, high-precision, unit-of-measure compounded key figure as an attribute on a time-dependent, compounded, reference characteristic? Really?

Since the above is a growing feature list and relationship matrix, you do not have to do it all at once like a big-bang go-live. You can plan the next deliverable and get to work in an isolated approach, knowing that it will contribute to the larger solution without interfering. This creates a great learning environment, as there are practical objectives that build upon the prior feature tests.

Creating the feature tests could begin with this sequence for InfoObjects:

  • Simple characteristic and attribute;
  • Master data reporting;
  • Transitive attribute;
  • Key figure attribute (no compounding or referencing);
  • Master data reporting with key figure exception aggregation;
  • Compound characteristics and key figures;
  • Hierarchy reporting;
  • Etc …
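A first feature test in that sequence might look like this. The sketch below covers the "simple characteristic and attribute" case, where the expected behaviour is that a master data reload overwrites the attribute value for an existing characteristic key. All names here are hypothetical, and the in-memory dictionary stands in for the master data table that a real test would read via RSRV or ABAP:

```python
def load_master_data(table, records):
    """Simulate a master data load: a later record overwrites the earlier
    one for the same characteristic key (overwrite semantics)."""
    for key, attributes in records:
        table[key] = attributes
    return table

# Feature test: simple characteristic with one display attribute.
master = {}
load_master_data(master, [("C100", {"COLOUR": "red"})])
load_master_data(master, [("C100", {"COLOUR": "blue"})])  # reload overwrites
assert master["C100"]["COLOUR"] == "blue", "attribute reload must overwrite"
print("simple characteristic and attribute: PASS")
```

The value of even a trivial test like this is that the expected behaviour is written down once, so later tests for time-dependent or compounded attributes can be expressed as variations of it.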

Now move on to include DataStores and expand the number of individual feature tests by the number of InfoObject features. This will start off simply and then grow rapidly. Eventually you will reach a point where it becomes obvious that certain combinations are statistically unlikely to be used by most DataModels. You can identify these feature tests and prioritise them to the bottom of the list. For example: min/max exception-aggregated key figures in a DataStore change log table, where the sign of the key figures in the before-image and after-image records has significant importance.
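That change-log example is subtle enough to deserve a sketch. An update to an active record writes a before-image (the old record with its key figure sign reversed) and an after-image (the new record). Summing all images gives the correct net value for SUM aggregation, but for MIN/MAX the reversed-sign before-image cannot simply join the value pool; it has to cancel the image it reverses. A hypothetical Python illustration (the two-field record layout is an assumption, not the actual change-log schema):

```python
# Change log rows: (recordmode, amount). "X" marks a before-image
# (old value with reversed sign); " " marks a new or after-image.
change_log = [
    (" ", 50),   # initial load: amount 50
    ("X", -50),  # before-image of an update (old value, sign reversed)
    (" ", 30),   # after-image of the update (new value)
]

# SUM aggregation: adding all images gives the correct net value.
net_sum = sum(amount for _, amount in change_log)
print("SUM:", net_sum)  # 30 -- correct

# Naive MAX over the raw amounts is wrong: it still sees the original 50.
naive_max = max(amount for _, amount in change_log)
print("naive MAX:", naive_max)  # 50 -- wrong, that value was replaced

# Correct MIN/MAX: replay the images so each before-image removes the
# value it reverses, then aggregate only the surviving values.
active = []
for mode, amount in change_log:
    if mode == "X":
        active.remove(-amount)  # before-image cancels the original image
    else:
        active.append(amount)
print("replayed MAX:", max(active))  # 30 -- computed from [30] only
```

This is exactly why the combination sits near the bottom of the priority list: the behaviour is real, but few DataModels ever exercise it.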

By now it is clear that the mere effort to identify the feature matrix is quite a large task in its own right. It would be good if a tool could be developed to track the main objects, their features and the valid relationships. This feature matrix tool could then be used as the framework to automate the comparison of a BW system against the features that are actually used. The results could be stored as time-dependent snapshots and provide system owners with a long-term understanding of the growing complexity of their environment.
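Such a tool could start as little more than a snapshot store. A hypothetical Python sketch of the idea: record which features are in use at each scan date, so the growth in complexity can be read off by comparing snapshots (the class, feature names and dates are all illustrative, not part of any SAP tool):

```python
from datetime import date

class FeatureUsageTracker:
    """Time-dependent snapshots of which features a BW system uses."""

    def __init__(self):
        self.snapshots = {}  # scan date -> frozenset of features in use

    def take_snapshot(self, scan_date, features_in_use):
        self.snapshots[scan_date] = frozenset(features_in_use)

    def new_since(self, earlier, later):
        """Features that appeared between two scans: the growing complexity."""
        return sorted(self.snapshots[later] - self.snapshots[earlier])

tracker = FeatureUsageTracker()
tracker.take_snapshot(date(2011, 1, 1), {"Compound", "Hierarchy"})
tracker.take_snapshot(date(2012, 1, 1),
                      {"Compound", "Hierarchy", "Transitive attributes"})
print(tracker.new_since(date(2011, 1, 1), date(2012, 1, 1)))
# → ['Transitive attributes']
```

The same diff could run in the other direction to show features that fell out of use, which is equally useful for deciding which feature tests to retire.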

The user interface of transaction RSRV has quite a few good properties that make it an ideal place to add the feature tests. The current RSRV options provide a very good way to analyse object instances and their data quality, including a ‘Correct’ button where issues can be fixed. The feature tests would complement this by testing the standard code that implements the behaviour of these standard objects.

What SAP BW Feature Testing would I prioritise to the top of the list? Why?

Further Reading: SAP Demo Content for Features – OLAP and Integrated Planning