Copyright 2024 - BV TallVision IT

These are some lessons learned about background jobs, which may save you from having to learn the same lessons the hard way: poor performance and locating input files.

Dramatic performance increase

In a conversion program we called a BAPI for an update over 1000 times in a row. Run in the foreground, the report consumed about half an hour, which was pushing the TIME_OUT limit set on the system. Hence the report was scheduled to run in the background. It performed its task, but consumed a staggering 6 hours.

How can this be? In the foreground it processes everything in half an hour; in the background it needs 6 hours - for the exact same workload.

After digging into this a bit deeper, the problem turned out to be the "logical unit of work": I called the BAPI but forgot the COMMIT WORK in between BAPI calls. The system was processing my 1000 updates as a single logical unit of work.

Lesson learned: manage logical units of work (perform COMMIT WORK).
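
A minimal sketch of the fix, assuming a hypothetical BAPI name ('BAPI_SOME_UPDATE') and a pre-filled internal table lt_updates. The essential bit is the commit (or rollback) after every single call, so each update becomes its own logical unit of work:

    " Each BAPI call is committed (or rolled back) on its own, so every
    " update is a separate logical unit of work.
    DATA lt_return TYPE STANDARD TABLE OF bapiret2.

    LOOP AT lt_updates INTO DATA(ls_update).

      CLEAR lt_return.
      CALL FUNCTION 'BAPI_SOME_UPDATE'      " placeholder BAPI name
        EXPORTING
          input  = ls_update
        TABLES
          return = lt_return.

      READ TABLE lt_return WITH KEY type = 'E' TRANSPORTING NO FIELDS.
      IF sy-subrc <> 0.
        " No error message: close this logical unit of work right away
        CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
          EXPORTING
            wait = abap_true.
      ELSE.
        CALL FUNCTION 'BAPI_TRANSACTION_ROLLBACK'.
      ENDIF.

    ENDLOOP.

Passing wait = abap_true makes the commit wait for the update task to finish before the loop continues.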

Dramatic performance increase - 2

A background job delimiting infotype 0041 in HR runs through a list of nearly 50k updates. After an hour the progress is checked: the system has consumed 1.4 seconds per update. Based on that, the total run time is estimated, and progress is checked again a few hours later. After 5 hours, the average time per update has increased to 11.7 seconds. Conclusion: this background job is somehow dragging along the updates it has already finished. The output list (WRITE, 1 line per update) had grown to over 400 pages.

The COMMIT WORK setup is in place (see previous paragraph) but did not produce the desired effect.

The actual job start-to-finish cycle needed to be shortened. So for the 50k workload, the report was adjusted to process only 1000 records per run, and the run was repeated 50 times. Result: 0.9 seconds per update.

Lesson learned: manage logical units of work - through separate program runs. Note that a background job can be scheduled with multiple steps (50 in this example).
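
A sketch of that multi-step setup, assuming the report exposes two hypothetical selection parameters p_offset and p_count to pick its chunk; JOB_OPEN, SUBMIT ... VIA JOB and JOB_CLOSE are the standard building blocks for such a job:

    " One background job with 50 steps; each step processes a chunk of
    " 1000 records. Report name and parameters p_offset / p_count are
    " hypothetical.
    DATA: lv_jobname  TYPE btcjob VALUE 'Z_DELIMIT_IT0041',
          lv_jobcount TYPE btcjobcnt,
          lv_offset   TYPE i.

    CALL FUNCTION 'JOB_OPEN'
      EXPORTING
        jobname  = lv_jobname
      IMPORTING
        jobcount = lv_jobcount
      EXCEPTIONS
        OTHERS   = 4.
    CHECK sy-subrc = 0.

    DO 50 TIMES.
      lv_offset = ( sy-index - 1 ) * 1000.
      SUBMIT z_delimit_it0041               " hypothetical report name
        WITH p_offset = lv_offset
        WITH p_count  = 1000
        VIA JOB lv_jobname NUMBER lv_jobcount
        AND RETURN.
    ENDDO.

    CALL FUNCTION 'JOB_CLOSE'
      EXPORTING
        jobname   = lv_jobname
        jobcount  = lv_jobcount
        strtimmed = abap_true               " start immediately
      EXCEPTIONS
        OTHERS    = 4.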

Source files lost

This one presented itself in an LSMW setup: for some of the LSMW objects, execution of the conversion step has to be done in the background. The error message “Unable to read file xxxx.lsmw.read” is thrown and the background job is cancelled. This is what happened:

Processing an LSMW object involves reading data and creating a .read file, which is converted into a .conv file during the conversion step. When the read step is skipped before scheduling the conversion step, LSMW will throw this error and cancel the job. And since the .read file is mandatory for the conversion step, the same error is thrown when the background job runs on a server other than the application server that holds the .read file.

Lesson learned: server files live on application servers.

Thus two steps combined to cause the issue: (1) a file is created in the foreground, and this file ends up on one application server; (2) the conversion step, which needs that file, is started in the background - on another application server.

To solve this: background jobs can also be scheduled to run on a selected application server. As the .read file is available on one specific server only, follow these steps to schedule the job (a programmatic sketch of the same approach follows below):

  1. Use SE38 to create a variant for the conversion program (the name of the conversion program can be determined by starting the conversion step in LSMW and looking up the report name via System => Status; example name: /1CADMC/SAP_LSMW_CONV_00002725).
  2. Run SM36 to create the background job, give the job a name and fill in the execution server on the first screen.
  3. In SM36, add a step and define the program name and variant.
  4. In the start condition, set the job to start immediately and save.
  5. Your job will show up in SM37.

    I stumbled across this in an LSMW setup - but it can happen anywhere files and multiple application servers are involved.
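
The same scheduling can also be done programmatically. A sketch, with placeholder variant and server names (the report name is the example from step 1); the TARGETSERVER parameter of JOB_CLOSE is what pins the job to the application server holding the .read file:

    " Schedule the LSMW conversion report on the server that holds the
    " .read file. Variant and server names are placeholders; TARGETSERVER
    " on JOB_CLOSE binds the job to that application server.
    DATA: lv_jobname  TYPE btcjob VALUE 'LSMW_CONVERSION',
          lv_jobcount TYPE btcjobcnt,
          lv_report   TYPE sy-repid VALUE '/1CADMC/SAP_LSMW_CONV_00002725'.

    CALL FUNCTION 'JOB_OPEN'
      EXPORTING
        jobname  = lv_jobname
      IMPORTING
        jobcount = lv_jobcount.

    SUBMIT (lv_report)
      USING SELECTION-SET 'CONV_VARIANT'    " variant created in SE38 (step 1)
      VIA JOB lv_jobname NUMBER lv_jobcount
      AND RETURN.

    CALL FUNCTION 'JOB_CLOSE'
      EXPORTING
        jobname      = lv_jobname
        jobcount     = lv_jobcount
        strtimmed    = abap_true            " start immediately (step 4)
        targetserver = 'appserver01'.       " server holding the .read file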

Parallel processing

Instead of trying to tune the coding logic into something with better performance, parallel processing was introduced: up to 6 processes were started in parallel (described here: Starting new task).
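
A rough sketch of that pattern, capped at 6 tasks. The RFC-enabled function module 'Z_PROCESS_PACKAGE', its parameter is_package and the package structure are placeholders, and error handling for the RFC exceptions is left out:

    " Dispatch work packages over at most 6 parallel tasks. The RFC-enabled
    " function module 'Z_PROCESS_PACKAGE' and the package structure are
    " placeholders.
    TYPES: BEGIN OF ty_package,
             from_rec TYPE i,
             to_rec   TYPE i,
           END OF ty_package.

    DATA: lt_packages TYPE STANDARD TABLE OF ty_package,
          gv_running  TYPE i,
          gv_taskid   TYPE i,
          lv_task     TYPE char32.

    CLASS lcl_handler DEFINITION.
      PUBLIC SECTION.
        CLASS-METHODS on_finished IMPORTING p_task TYPE clike.
    ENDCLASS.

    CLASS lcl_handler IMPLEMENTATION.
      METHOD on_finished.
        " Collect the result of the finished task and free up a slot
        RECEIVE RESULTS FROM FUNCTION 'Z_PROCESS_PACKAGE'.
        gv_running = gv_running - 1.
      ENDMETHOD.
    ENDCLASS.

    START-OF-SELECTION.

      " ... fill lt_packages ...

      LOOP AT lt_packages INTO DATA(ls_package).

        WAIT UNTIL gv_running < 6.          " never more than 6 tasks in flight

        gv_taskid  = gv_taskid + 1.
        gv_running = gv_running + 1.
        lv_task    = |TASK{ gv_taskid }|.

        CALL FUNCTION 'Z_PROCESS_PACKAGE'
          STARTING NEW TASK lv_task
          CALLING lcl_handler=>on_finished ON END OF TASK
          EXPORTING
            is_package = ls_package.

      ENDLOOP.

      WAIT UNTIL gv_running = 0.            " let the last tasks report back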

Lesson learned: parallel processes consume dialog work processes. It's a very effective way to make the system work hard, but it can also be hard on the system.