Testing standard BW features seems like a redundant effort, especially given that SAP AG tests the core functionality before releasing patches and enhancement packs. So what makes additional testing and measuring of BW features that should already work worth my time and effort? Let's examine one key object.
A DataStore is a cornerstone of data flowing through BW. It is worth having feature content specifically to create additional metrics that complement the business content for statistical analysis. It doesn't matter what SAP AG does to the code in the programs behind a DataStore's features; we want baseline metrics on how well it performs in our specific landscape.
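As a minimal sketch of what capturing such a baseline could look like in practice (the object names, timings, and file layout below are illustrative assumptions, not a SAP-delivered format), each feature run can be recorded as a timestamped measurement:

```python
import json
import time
from datetime import datetime, timezone

def record_feature_metric(object_name: str, feature: str, elapsed_seconds: float,
                          records_processed: int,
                          baseline_file: str = "bw_feature_baseline.json"):
    """Append one feature-test measurement to a local baseline file.

    Field names are illustrative; a real capture would also note the
    support package and kernel level of the system at the time.
    """
    entry = {
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "object": object_name,              # e.g. a DataStore technical name
        "feature": feature,                 # e.g. "activation"
        "elapsed_seconds": elapsed_seconds,
        "records_processed": records_processed,
        "records_per_second": records_processed / elapsed_seconds,
    }
    try:
        with open(baseline_file) as f:
            baseline = json.load(f)
    except FileNotFoundError:
        baseline = []
    baseline.append(entry)
    with open(baseline_file, "w") as f:
        json.dump(baseline, f, indent=2)

# Example: time a feature run (simulated here) and record it.
start = time.perf_counter()
time.sleep(0.1)  # stand-in for e.g. activating a known test request in a DataStore
record_feature_metric("ZDSO_SALES", "activation", time.perf_counter() - start, 100_000)
```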
Over the years, these metrics would reflect all sorts of changes to the landscape that invalidate a single baseline set of metrics. For example: hardware upgrades to the number of CPUs, clock speed, the size of the CPU's second-level cache, I/O block sizes, buffering, disk latency, RAM, the actual patch level of the kernels, and so on. The ability to compare ‘apples to apples’ becomes more complicated and less accurate over long periods of time, which in turn limits the usefulness of long-term historical analysis of feature metrics. So why bother?
“knowing where to focus the team’s effort
provides the best Return On Investment (ROI)”
The feature metrics of a DataStore can provide a baseline between software patches. By knowing where the native improvements have appeared in the features of standard BW objects (like DataStores, Cubes, Aggregates, Analysis Authorisations, the OLAP Cache, etc.), you know where to start.
Alternatively, any decreases in performance can be communicated ahead of time, feeding Go/No-Go decisions and adjustments to project plans, with new issues raised before you patch the production system.
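To make that Go/No-Go conversation concrete, here is a hedged sketch of comparing two baseline captures (pre-patch and post-patch); the 10% threshold and field names are assumptions carried over from the sketch above:

```python
def compare_baselines(pre: list, post: list, regression_threshold: float = 0.10):
    """Compare per-feature throughput between two baseline captures.

    Returns (improvements, regressions) as human-readable findings.
    A regression is flagged when throughput drops by more than the threshold.
    """
    def by_feature(entries):
        # Keep the latest measurement per (object, feature) pair.
        latest = {}
        for e in entries:
            latest[(e["object"], e["feature"])] = e["records_per_second"]
        return latest

    pre_map, post_map = by_feature(pre), by_feature(post)
    improvements, regressions = [], []
    for key in pre_map.keys() & post_map.keys():
        change = (post_map[key] - pre_map[key]) / pre_map[key]
        finding = f"{key[0]}/{key[1]}: {change:+.1%} throughput"
        if change < -regression_threshold:
            regressions.append(finding)   # raise before patching production
        elif change > 0:
            improvements.append(finding)  # candidate areas to re-factor first
    return improvements, regressions
```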
Do you have a process chain that drops the DataStore and cube contents and does a full re-load from the source system because that was faster than the multiple loads and activations of a DataStore? Back in BW v3.x this would have been a viable DataModeling decision; however, with the improvements made to DataStore activation in BW v7.x, this choice should be re-visited and a mini enhancement activity considered to re-factor this part of the DataModel (see the sketch after this list). For example:
- Improvement: The ‘Request Number’ InfoObject in the key of the DataStore table was changed from storing the Char 30 value to using a ‘Request SID’ InfoObject, which saves space and processing time for larger volumes of data;
- Improvement: Defining a semantic key on an InfoSource enables DataPacket compression at run-time before records are written to the activation queue table of a DataStore;
- Improvement: Introducing a write-optimised DataStore into the ETL reduces the portion of the nightly load window spent in other systems.
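Whether the old drop-and-reload chain still beats delta loads under v7.x activation is exactly the kind of question the feature metrics can answer with arithmetic rather than opinion. A minimal sketch, assuming you have measured timings for both strategies from your own captures (the numbers in the example are illustrative, not measurements):

```python
def reload_vs_delta(full_reload_seconds: float,
                    delta_load_seconds: float,
                    activation_seconds: float) -> str:
    """Decide between drop-and-reload and delta+activation from measured timings.

    All three inputs are assumed to come from your own feature-metric
    baselines; the decision is a straight comparison of nightly window cost.
    """
    delta_total = delta_load_seconds + activation_seconds
    if delta_total < full_reload_seconds:
        saved = full_reload_seconds - delta_total
        return f"Re-factor to delta: saves {saved:.0f}s per nightly load window"
    return "Keep drop-and-reload: the delta path is still slower on this landscape"

# Example with illustrative numbers:
print(reload_vs_delta(full_reload_seconds=5400,
                      delta_load_seconds=1200,
                      activation_seconds=900))
```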
Randomly picking sections of the DataModel for re-factoring is a waste of time, and it is hard to prove the benefits before commencing work. With comparative feature metrics you can clearly show the improvements and make decisions (not guesses) about the outcome to be delivered. This puts the proposed DataModel re-factoring into perspective for all levels of management, as it highlights ‘the simple and obvious’ reasons. For example: ‘It might be a cool technical challenge, but the length of the nightly load window is fine; how about we focus on report runtime instead?’ is a direction based upon metrics, not guesses.
BW feature tests provide metrics focused on how well an object does what it does, which is not the same as the system logs identifying how well the actual programs performed. The difference in perspective is ‘Setting Expectations’, and it is a clear path from being an Expert to becoming a Solution Architect. Knowing which features of an object work well and have improved allows you to focus on using those features ‘more or less’. For example:
- Flagging line item dimensions in a cube;
- Adding DataSet meta-data to transaction data records for use in global restricted key figures, which can then leverage the multi-provider hint table;
- Removing rogue records from DataStores to enable the delta flow as early as possible.
When upgrade project teams utilise feature testing, a single test can provide a level of comfort for every instance where that feature is used. By running known, pre-defined data through the features of the DataModel, we can compare the outcome of the processed data to an expected result. This is unit-testing 101 for the standard BW objects and their features, not the actual data. Half the battle in an upgrade project is confirming that the system behaves the same way as it did prior to all the new code changes (the patches provided by SAP).
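As a sketch of what that ‘unit-testing 101’ could look like for a single feature (the overwrite logic and the expected values here are invented for illustration; the point is fixed input compared against an expected output, before and after patching):

```python
def activate_requests(records: list) -> dict:
    """Toy stand-in for DataStore activation: collapse multiple requests
    for the same key down to the latest value (overwrite semantics)."""
    active = {}
    for rec in sorted(records, key=lambda r: r["request"]):
        active[rec["key"]] = rec["amount"]
    return active

def test_activation_overwrites_by_key():
    # Known, pre-defined input: two requests touching the same key.
    loaded = [
        {"request": 1, "key": "DOC001", "amount": 100.0},
        {"request": 2, "key": "DOC001", "amount": 150.0},
        {"request": 2, "key": "DOC002", "amount": 75.0},
    ]
    # Expected result after activation: the latest request wins per key.
    assert activate_requests(loaded) == {"DOC001": 150.0, "DOC002": 75.0}

test_activation_overwrites_by_key()
print("Feature behaves as expected - safe to compare again after patching.")
```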
- Have you investigated the SAP Feature Content for Cubes and Master Data?
- How can you leverage feature tests in the next upgrade project?
Further Reading: BW Technical Content FC1 – Cube and Master Data Testing