Process Chain Design and Configuration

A process chain should be built in the development system and transported into the test and production systems. Executing the process chain will often run into problems in the development and test systems because the number of available process slots there is greatly reduced.

This often presents itself as a frozen server: all the process slots are in use and no slots remain for other programs. The situation arises because a process chain built in the development system is designed to run in the production system, using the resources available there.

“the main strategy is to control
‘How many processes can run in parallel?’”

This highlights the need for a process chain design strategy that allows every system in the landscape to cope with the differences in available resources. The balance between building a process chain once and using it across all landscape tiers can be achieved by adhering to an implementation strategy in which only certain components are configurable in each system.

There are fundamentally two objectives for a process chain: what work to do, and when to do it.

A process chain contains process variants. The type of process variant defines what it will do. Each process variant usually has configurable parameters to refine the focus of the work to be done.

Every process chain has a ‘start variant’ that contains the scheduling parameters.
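When the start variant is set to be triggered by a meta chain or API rather than by its own schedule, the chain can be started externally through the process chain API (function module RSPC_API_CHAIN_START, described in SAP Note 886102). The following is a minimal sketch, not a definitive implementation; the chain name ZPC_SALES_LOAD is a placeholder, and the exact exceptions raised may differ by release.

```abap
REPORT z_trigger_chain.

" Placeholder chain ID - replace with your own process chain.
CONSTANTS: gc_chain TYPE rspc_chain VALUE 'ZPC_SALES_LOAD'.

DATA: gv_logid TYPE rspc_logid.

" Start the chain via the process chain API (SAP Note 886102).
" This only works when the chain's start variant is configured
" to 'start using meta chain or API'.
CALL FUNCTION 'RSPC_API_CHAIN_START'
  EXPORTING
    i_chain = gc_chain
  IMPORTING
    e_logid = gv_logid
  EXCEPTIONS
    failed  = 1
    OTHERS  = 2.

IF sy-subrc <> 0.
  WRITE: / 'Chain could not be started'.
ELSE.
  " The log ID identifies this run in the RSPC log view.
  WRITE: / 'Chain started, log ID:', gv_logid.
ENDIF.
```

This API-trigger approach is what makes the start variant safe to transport: nothing runs until something in the target system deliberately calls it.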

Process chains have two important aspects:

The definition of a process chain can be accessed using transaction RSPC and then navigating the ‘View’ -> ‘Planning View’ menu.

The run-time logs containing the work results of an executed process chain can be accessed using transaction RSPC and then navigating the ‘View’ -> ‘Log View’ menu.

There is an annoying feature when viewing logs (in BW 7.01): after drilling into a process variant using the ‘maintain variant’ context menu option, you are returned to the ‘planning view’. The more you drill into process variants, the more times you have to switch back to ‘log view’.

Process chains are organised into process components. This is a simple, flat list of folders.

The technical activity of making a process chain is pretty simple:

Later, you will gather the process chain definition into a transport and import it into the test system and the production system (when approved by the change management team). Most BW systems do not have the additional Transport Management System (TMS) configuration for post-transport activities. This configuration is a system user account and password used by the TMS to execute follow-on activities from client 000 in the official target client of that target system.

“it is strongly recommended to never enable the
post-transport configuration for target client log-in”

One post-transport activity that would be executed is scheduling the imported process chain(s). While this might initially seem like a good idea, it is not: the transport will go into the target system and then immediately schedule the process chain.

This usually interferes with the sequence of other go-live/cutover activities. You may still have another two hours’ worth of transports to import, but the imported process chain is now running and generating system locks on objects, which can conflict with the upcoming transports. They would then fail to import for no reason other than that the process chain was running.

Systems with post-transport configuration enabled have also been known to freeze completely because a dozen imported process chains, all configured to ‘start immediately’, consumed every available process slot.

If you are thinking to yourself that this is the fault of the developer, who did not set the start variant to ‘meta-API trigger’ before releasing the transport from development, you would be right in theory but wrong from a practical, real-world support point of view.

This comes down to a choice on which go-live/cutover support activities you want to do in your environment.

Scenario 1: A target system with client 000 able to perform post TMS activities.

Minor process chain changes are auto-scheduled and the nightly load window does not fail due to process chains not being scheduled.

Major process chain changes are auto-scheduled and executed, causing source system jobs to fail and leaving a big mess of clean-up activities. This still leaves you with the original cutover activities to initialise the new objects.

Scenario 2: A target system with NO ability to perform post TMS activities from client 000.

Minor process chain changes have the inconvenience of needing a manual check to ensure the start variants that use meta-API triggering are still scheduled.

Major process chain changes do not have any import issues or clean-up to do; you just get on with initialising the new data model.
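Part of the Scenario 2 follow-up work is confirming that the chains you restarted by hand actually ran. As a hedged sketch, the run status of a chain can be polled with the process chain API function module RSPC_API_CHAIN_GET_STATUS (also covered by SAP Note 886102); the chain name is a placeholder, and the status codes shown are the commonly documented ones, which may vary by release.

```abap
REPORT z_check_chain_status.

" Placeholder chain ID - replace with your own process chain.
CONSTANTS: gc_chain TYPE rspc_chain VALUE 'ZPC_SALES_LOAD'.

DATA: gv_logid  TYPE rspc_logid,  " log ID returned by RSPC_API_CHAIN_START
      gv_status TYPE rspc_state.

" Assumes gv_logid was captured from a previous RSPC_API_CHAIN_START call.
CALL FUNCTION 'RSPC_API_CHAIN_GET_STATUS'
  EXPORTING
    i_chain  = gc_chain
    i_logid  = gv_logid
  IMPORTING
    e_status = gv_status.

" Commonly seen status values: 'A' = active, 'G' = green (success),
" 'R' = red (failed).
CASE gv_status.
  WHEN 'G'.
    WRITE: / 'Chain finished successfully'.
  WHEN 'R'.
    WRITE: / 'Chain failed - check the log view in RSPC'.
  WHEN OTHERS.
    WRITE: / 'Chain still running or not yet triggered, status:', gv_status.
ENDCASE.
```

A small report like this can shorten the manual checking that Scenario 2 requires after a cutover.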

Conclusion: Scenario 2 is the clear winner, especially when you consider that most major changes occur outside business hours and the system administrators doing the work do not want to wait around for several hours. Scenario 1 would have required additional work to clean up auto-scheduled jobs, because the client 000 post-TMS activities triggered jobs that were probably run out of sequence and wrong anyway.

Why would you consider configuring the client 000 ability to perform post TMS activities?

Further Reading: Importing client-dependent objects.

Further Reading: BW after-import post processing (Note 1142930, SMP login required).