Data archiving: WM transfer requirements and orders

This blog will explain how to archive WM transfer orders and requirements via objects RL_TA and RL_TB. Generic technical setup must have been executed already, and is explained in this blog.

Objects RL_TA and RL_TB

Go to transaction SARA and select object RL_TA or RL_TB.

Dependency schedule (no dependencies for either object):

Main tables that are archived:

  • LTAK (transfer order header)
  • LTAP (transfer order item)
  • LTBK (transfer requirement header)
  • LTBP (transfer requirement item)

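Before starting the write runs, it can help to estimate the volume in these tables per warehouse number. Below is a minimal ABAP sketch (the report name is hypothetical; it is not one of the standard archiving programs) that counts transfer order and transfer requirement headers per warehouse:

```abap
REPORT z_wm_arch_count.

" Hypothetical sizing check: count transfer order and transfer requirement
" headers per warehouse number to estimate the volume that the RL_TA and
" RL_TB write runs will have to process.
SELECT lgnum, COUNT( * ) AS cnt
  FROM ltak
  GROUP BY lgnum
  INTO TABLE @DATA(lt_to_headers).

SELECT lgnum, COUNT( * ) AS cnt
  FROM ltbk
  GROUP BY lgnum
  INTO TABLE @DATA(lt_tr_headers).

WRITE / 'Transfer order headers (LTAK) per warehouse:'.
LOOP AT lt_to_headers INTO DATA(ls_to).
  WRITE: / ls_to-lgnum, ls_to-cnt.
ENDLOOP.

WRITE / 'Transfer requirement headers (LTBK) per warehouse:'.
LOOP AT lt_tr_headers INTO DATA(ls_tr).
  WRITE: / ls_tr-lgnum, ls_tr-cnt.
ENDLOOP.
```
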
Technical programs and OSS notes

RL_TA:

Write program: RLREOT00S

Delete program: RLREOT10

Read program: RLRT0001

RL_TB:

Write program: RLREOB00S

Delete program: RLREOB10

Read program: RLRB0001

Relevant OSS notes:

Application specific customizing

RL_TA and RL_TB don’t have application specific customizing.

Typically RL_TA and RL_TB will yield 90 to 100% of documents that can be archived.

Executing the write run and delete run

In transaction SARA, for RL_TA or RL_TB, select the write run:

Select your data, save the variant and start the archiving write run.

Give the archive session a good name that describes the warehouse number and year. This is needed for data retrieval later on.

After the write run is done, check the logs. RL_TA and RL_TB archiving has average speed and a high archiving percentage (90 to 100%).

The deletion run is standard: select the archive file and start the deletion run.

Data retrieval

Start the data retrieval program and fill selection criteria for transfer orders:

Start the data retrieval program and fill selection criteria for transfer requirements:

In the popup screen, select the archive files you want.

Data archiving: CO Order data

This blog will explain how to archive CO Order data via object CO_ORDER. Generic technical setup must have been executed already, and is explained in this blog.

Object CO_ORDER

Go to transaction SARA and select object CO_ORDER.

Dependency schedule:

This means you must first archive the relevant purchase requisitions, purchase orders and financial documents relating to the CO order.

Tables that are archived:

Most important:

  • COEP: CO line items

Technical programs and OSS notes

Pre-processing program: RKOREO01

Write program: RKAARCWR

Delete program: RKAARCD1

Read program: RKAARCS1

Relevant OSS notes:

Application specific customizing

In the application specific customizing for CO_ORDER you have to set two residence times per CO order type:

Residence time 1 determines the time interval (in calendar months) that must elapse between setting the delete flag (step 1) and setting the deletion indicator (step 2).

Residence time 2 determines the time (in calendar months) that must elapse between setting the deletion indicator (step 2) and reorganizing the object (step 3). For example, with residence time 1 set to 3 months and residence time 2 set to 6 months, an order that receives the delete flag in January can receive the deletion indicator in April at the earliest and can be archived from October onwards.

Executing the pre-processing run

The pre-processing run will set the deletion indicator for the CO Orders:

The output is a list of the orders that were processed; for orders that could not be processed, the blocking reason is listed.

Executing the write run and delete run

In transaction SARA, for CO_ORDER, select the write run:

Select your data, save the variant and start the archiving write run.

Give the archive session a good name that describes the date range. This is needed for data retrieval later on.

After the write run is done, check the logs. CO_ORDER archiving has average speed, but a high archiving percentage (up to 100%), because all filtering and checking has already been done by the pre-processing program.

The deletion run is standard: select the archive file and start the deletion run.

Data retrieval

Start the data retrieval program and fill selection criteria:

The result is a list. Double click on an order in the list to display it in the normal GUI layout.

Data archiving: CATS time writing data

This blog will explain how to archive CATS time writing data via object CATS_DATA. Generic technical setup must have been executed already, and is explained in this blog.

Object CATS_DATA

Go to transaction SARA and select object CATS_DATA.

Dependency schedule:

This means there are no dependencies.

The only table that is archived:

  • CATSDB: time writing data

Technical programs and OSS notes

Write program: RCATS_ARCH_ARCHIVING

Delete program: RCATS_ARCH_DELETING

Read program: RCATS_ARCH_READING

Relevant OSS notes:

Application specific customizing

Application specific customizing is not required for CATS_DATA.

Executing the write run and delete run

In transaction SARA, for CATS_DATA, select the write run:

Select your data, save the variant and start the archiving write run.

Give the archive session a good name that describes the date range. This is needed for data retrieval later on.

After the write run is done, check the logs. CATS_DATA archiving has average speed, but a high archiving percentage (up to 100%). Only records with status 30 (approved) or status 60 (cancelled) are archived. See SAP Help.

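If you want to estimate beforehand how many CATSDB records qualify, you can count the records per status. A minimal sketch, using the status values 30 (approved) and 60 (cancelled) mentioned above; the report name is hypothetical:

```abap
REPORT z_cats_arch_estimate.

" Hypothetical sizing check: compare the total number of CATSDB records
" with the number in an archivable status before starting the write run.
SELECT COUNT( * )
  FROM catsdb
  INTO @DATA(lv_total).

" Status 30 = approved, 60 = cancelled: the statuses that get archived.
SELECT COUNT( * )
  FROM catsdb
  WHERE status IN ( '30', '60' )
  INTO @DATA(lv_archivable).

WRITE: / 'Total CATSDB records:      ', lv_total,
       / 'Archivable (status 30/60):', lv_archivable.
```
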
The deletion run is standard: select the archive file and start the deletion run.

Data retrieval

Start the data retrieval program and fill selection criteria:

The result is a simple list:

Data reload program

For emergency cases, there is an undocumented reload program: RCATS_ARCH_RELOADING. Use it at your own risk.

Data archiving: sales orders

This blog will explain how to archive sales orders via object SD_VBAK. Generic technical setup must have been executed already, and is explained in this blog.

Object SD_VBAK

Go to transaction SARA and select object SD_VBAK.

Dependency schedule:

If you use production planning backflush, you must archive those documents first. Then archive material documents, shipment costs (if in use), SD transports (if in use), deliveries (if in use), and purchase orders and purchase requisitions related to the sales order.

Main tables that are archived:

  • NAST (for the specific records)
  • VBAK (sales order header)
  • VBAP (sales order item)
  • VBEP (sales order schedule line data)
  • VBFA (for the specific records)
  • VBOX (SD Document: Billing Document: Rebate Index)
  • VBPA (for the specific records)
  • VBUP (sales order status data)

Technical programs and OSS notes

Preprocessing program: S3VBAKPTS

Write program: S3VBAKWRS

Delete program: S3VBAKDLS

Read program: S3VBAKAU

Relevant OSS notes:

Application specific customizing

In the application specific customizing for SD_VBAK you can maintain the document retention time settings:

Executing the preprocessing run

In transaction SARA, select SD_VBAK. In the preprocessing run the documents to be archived are prepared:

Check the log for the results:

Typically SD_VBAK will yield 30 to 70% of documents that can be archived.

Executing the write run and delete run

In transaction SARA, for SD_VBAK, select the write run:

Select your data, save the variant and start the archiving write run.

Give the archive session a good name that describes the sales organization/shipping point and year. This is needed for data retrieval later on.

After the write run is done, check the logs. SD_VBAK archiving has average speed, but a lower archiving percentage (40 to 90%).

The deletion run is standard: select the archive file and start the deletion run.

Data retrieval

Start the data retrieval program and fill selection criteria:

In the second screen select the archive files. It can take a long time before the data is shown.

For faster retrieval, set up the data archiving infostructures SAP_SD_VBAK_001 and SAP_SD_VBAK_002. These are not active by default, so you have to use transaction SARJ to set them up and later fill the structures (see blog).

Data archiving: SD invoices

This blog will explain how to archive SD invoices via object SD_VBRK. Generic technical setup must have been executed already, and is explained in this blog.

Object SD_VBRK

Go to transaction SARA and select object SD_VBRK.

Dependency schedule:

If you use production planning backflush, you must archive those documents first. Then archive material documents, shipment costs (if in use), SD transports (if in use) and deliveries (if in use).

Main tables that are archived:

  • NAST (for the specific records)
  • VBFA (for the specific records)
  • VBOX (SD Document: Billing Document: Rebate Index)
  • VBPA (for the specific records)
  • VBRK (invoice headers)
  • VBRP (invoice line items)
  • VBUK (invoice status)

Technical programs and OSS notes

Preprocessing program: S3VBRKPTS

Write program: S3VBRKWRS

Delete program: S3VBRKDLS

Read program: S3VBRKAU

Relevant OSS notes:

Application specific customizing

In the application specific customizing for SD_VBRK you can maintain the document retention time settings:

Executing the preprocessing run

In transaction SARA, select SD_VBRK. In the preprocessing run the documents to be archived are prepared:

Check the log for the results:

Typically SD_VBRK will yield 30 to 70% of documents that can be archived.

Executing the write run and delete run

In transaction SARA, for SD_VBRK, select the write run:

Select your data, save the variant and start the archiving write run.

Give the archive session a good name that describes the sales organization/shipping point and year. This is needed for data retrieval later on.

After the write run is done, check the logs. SD_VBRK archiving has average speed, but a lower archiving percentage (40 to 90%).

The deletion run is standard: select the archive file and start the deletion run.

Data retrieval

Start the data retrieval program and fill selection criteria:

In the second screen select the archive files. It can take a long time before the data is shown.

For faster retrieval, set up the data archiving infostructures SAP_SD_VBRK_001 and SAP_SD_VBRK_002. These are not active by default, so you have to use transaction SARJ to set them up and later fill the structures (see blog).

Data archiving: deliveries

This blog will explain how to archive deliveries via object RV_LIKP. Generic technical setup must have been executed already, and is explained in this blog.

Object RV_LIKP

Go to transaction SARA and select object RV_LIKP.

Dependency schedule:

If you use production planning backflush, you must archive those documents first. Then archive material documents, shipment costs (if in use) and SD transports (if in use).

Main tables that are archived:

  • LIKP (delivery header)
  • LIPS (delivery item)
  • NAST (for the specific records)
  • VBFA (for the specific records)
  • VBPA (for the specific records)

Technical programs and OSS notes

Preprocessing program: S3LIKPPTS

Write program: S3LIKPWRS

Delete program: S3LIKPDLS

Read program: S3LIKPAU

Relevant OSS notes:

Application specific customizing

In the application specific customizing for RV_LIKP you can maintain the document retention time settings:

Executing the preprocessing run

In transaction SARA, select RV_LIKP. In the preprocessing run the documents to be archived are prepared:

You must run the program twice: once for inbound and once for outbound deliveries.

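To get a feel for how the delivery headers split between inbound and outbound before the two preprocessing runs, you can count them per SD document category. A minimal sketch; the report name is hypothetical, and the category values ('J' for outbound deliveries, '7' for inbound deliveries) are the values commonly used in ECC, so verify them in your system:

```abap
REPORT z_likp_category_count.

" Hypothetical check before the RV_LIKP preprocessing runs: count delivery
" headers per SD document category. In ECC, 'J' is typically an outbound
" delivery and '7' an inbound delivery; verify the values in your system.
SELECT vbtyp, COUNT( * ) AS cnt
  FROM likp
  GROUP BY vbtyp
  INTO TABLE @DATA(lt_counts).

LOOP AT lt_counts INTO DATA(ls_count).
  WRITE: / ls_count-vbtyp, ls_count-cnt.
ENDLOOP.
```
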
Check the log for the results:

Typically RV_LIKP will yield 30 to 70% of documents that can be archived.

Executing the write run and delete run

In transaction SARA, for RV_LIKP, select the write run:

Select your data, save the variant and start the archiving write run.

Give the archive session a good name that describes the sales organization/shipping point and year. This is needed for data retrieval later on.

After the write run is done, check the logs. RV_LIKP archiving has average speed, but a lower archiving percentage (40 to 90%).

The deletion run is standard: select the archive file and start the deletion run.

Data retrieval

Start the data retrieval program and fill selection criteria:

In the second screen select the archive files. It can take a long time before the data is shown.

For faster retrieval, set up the data archiving infostructures SAP_RV_LIKP_001 and SAP_RV_LIKP_002. These are not active by default, so you have to use transaction SARJ to set them up and later fill the structures (see blog).

Data archiving: material documents

This blog will explain how to archive material documents via object MM_MATBEL. Generic technical setup must have been executed already, and is explained in this blog.

Object MM_MATBEL

Go to transaction SARA and select object MM_MATBEL.

Dependency schedule:

If you use production planning backflush, you must archive those documents first.

Main tables that are archived:

  • MKPF (material document header)
  • MSEG (material document item)
  • NAST (for the specific records)

Technical programs and OSS notes

Write program: RM07MARCS

Delete program: RM07MADES

Read program: RM07MAAU

Relevant OSS notes:

Application specific customizing

In the application specific customizing for MM_MATBEL you can maintain the document lifetime settings:

Executing the write run and delete run

In transaction SARA, for MM_MATBEL, select the write run:

Select your data, save the variant and start the archiving write run.

Give the archive session a good name that describes the plant and year. This is needed for data retrieval later on.

After the write run is done, check the logs. MM_MATBEL archiving is fast and has a high archiving percentage (up to 100%).

The deletion run is standard: select the archive file and start the deletion run.

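SARA normally schedules the write job for you. If you want to trigger the same write program from a custom housekeeping report instead, a minimal sketch could look as follows; the job name and the variant ZPLANT2020 are hypothetical, and the variant must already exist for program RM07MARCS:

```abap
REPORT z_schedule_mm_matbel_write.

" Sketch: submit the MM_MATBEL write program as a background job.
" Job name and variant ZPLANT2020 are hypothetical examples.
DATA: lv_jobname  TYPE btcjob VALUE 'ARCH_MM_MATBEL_WRITE',
      lv_jobcount TYPE btcjobcnt.

CALL FUNCTION 'JOB_OPEN'
  EXPORTING
    jobname  = lv_jobname
  IMPORTING
    jobcount = lv_jobcount.

" Run the write program with a saved selection variant inside the job.
SUBMIT rm07marcs USING SELECTION-SET 'ZPLANT2020'
       VIA JOB lv_jobname NUMBER lv_jobcount
       AND RETURN.

CALL FUNCTION 'JOB_CLOSE'
  EXPORTING
    jobname   = lv_jobname
    jobcount  = lv_jobcount
    strtimmed = 'X'.  " start the job immediately
```
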
Data retrieval

Start the data retrieval program and fill selection criteria:

In the second screen select the archive files. It can take a long time before the data is shown.

Faster data retrieval is via the archive explorer, transaction SARE (for the archive explorer the infostructures must be filled first, see this blog):

Fill out the document criteria:

From the result list, double click on a line to jump to transaction MIGO and view the archived document.

Data archiving: archiving infostructures

Several retrieval functions in SAP data archiving require the setup of archiving infostructures.

More on archiving data retrieval in general can be found in this blog.

Activation of infostructures

Using transaction SARJ you can configure the infostructures for archiving:

Using display, you can see the fields that are put into the archiving infostructure:

On the first screen you can activate the infostructure by pushing the Activate button.

Activation only applies to future archive runs, not to past runs. For past runs you need to fill the structures first.

Filling the structures for existing archive files

Filling the structures for existing archive files is a bit hidden. Go to transaction SARJ and select the object. Then choose menu Environment and choose Fill Structures.

In the next screen select the files you want to fill (these are normally yellow):

After selection, choose the Fill Structures button to start the batch job that fills them. Don't select too many files at once; filling is read-intensive on the archive.

Green ones are done. Red ones have failed.

The most common cause of failures is that the variant for the write program was set up in such a way that the same document was archived twice, into different archive files.

What can be done? If it is acceptable to have the same document in different files, you can ignore the archive session entries with errors in SARI.

To avoid having duplicate keys in the infostructure in future, you can add the filename as an extra key field to the infostructure. This can be done as follows:

– SARJ -> Infostructure -> Display
– Technical data
– Change the field “File Name Processing” from ‘D’ to ‘K’

Archiving infostructure status

Use transaction SARI to check the status of the archiving infostructures:

From here you can go to Status to check the status of the files.

Archive Explorer jumps to transaction SARE for the explorer.

Customizing jumps to the SARJ transaction described above.

Reference OSS notes

2676572 – SARI/KSB5 Archived documents not displayed in line item report

Data archiving: data retrieval

When you perform data archiving, from time to time you need to give support on data retrieval issues.

This blog will explain some of the general data retrieval concepts.

Questions that will be answered in this blog are:

  • How does single record retrieval work?
  • How can I use the archive explorer?
  • How can I get a list of data from the archive?

Single record retrieval

Single record retrieval is different per archiving object.

Some objects (like FI_DOCUMNT) are nicely integrated: in FB03 the system will first check the database, then look in the archive infostructures to see whether the document is archived, and then show the document in the same layout.

Most objects have an archive read program, which you can find in SARA:

Now run the read program:

And fill out the record(s) you need:

Now you need to select the data files:

If you didn't label your files correctly, you need to select them all, which makes data retrieval slow.

Results are shown:

The results might look fine, or very basic; this differs per archiving object.

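Besides running the read program interactively, archived data can also be read programmatically with the ADK (Archive Development Kit) function modules. Below is a minimal sketch for object SD_VBAK that reads the VBAK records of each archived object; the report name is hypothetical, and the exact function module signatures should be verified in your release. When started in dialog, ARCHIVE_OPEN_FOR_READ shows the usual archive file selection popup.

```abap
REPORT z_adk_read_sd_vbak.

" Sketch of reading archived data via the ADK function modules,
" here for object SD_VBAK and table VBAK.
DATA: lv_handle TYPE sytabix,
      lt_vbak   TYPE STANDARD TABLE OF vbak.

CALL FUNCTION 'ARCHIVE_OPEN_FOR_READ'
  EXPORTING
    object         = 'SD_VBAK'
  IMPORTING
    archive_handle = lv_handle.

DO.
  " Position on the next archived business object (one sales order).
  CALL FUNCTION 'ARCHIVE_GET_NEXT_OBJECT'
    EXPORTING
      archive_handle = lv_handle
    EXCEPTIONS
      end_of_file    = 1
      OTHERS         = 2.
  IF sy-subrc <> 0.
    EXIT.
  ENDIF.

  " Read all VBAK records of the current object.
  CALL FUNCTION 'ARCHIVE_GET_TABLE'
    EXPORTING
      archive_handle        = lv_handle
      record_structure      = 'VBAK'
      all_records_of_object = 'X'
    TABLES
      table                 = lt_vbak.

  LOOP AT lt_vbak INTO DATA(ls_vbak).
    WRITE: / ls_vbak-vbeln, ls_vbak-erdat, ls_vbak-auart.
  ENDLOOP.
ENDDO.

CALL FUNCTION 'ARCHIVE_CLOSE_FILE'
  EXPORTING
    archive_handle = lv_handle.
```
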
Use of archive explorer for table level

An alternative is to use the archive explorer, which gives details at table level.

Start transaction SARE:

Fill out the required object and archive infostructure. In this case we used change documents. In the second screen fill out the object:

Now you can see the list of changes:

Double click on the record to see the tables:

Double clicking on the table will give the actual table line content.

Filling infostructures

More on infostructures can be read in this dedicated blog.

List transactions

Some transactions (especially in the FI/CO domain) have reporting that is integrated with the data archive. We will use transaction FBL3N as an example.

Start FBL3N:

Then click on Data Sources, include Archive, and select the needed files:

If you didn't label your files correctly, you need to select them all, which makes data retrieval slow.

FIORI app for monitoring data archiving jobs

SAP has delivered various apps for basis administrators.

This blog will explain about the data archiving batch job monitoring FIORI app.

For generic data archiving technical setup: read this blog.

Activating the app for monitoring data archiving jobs

The full activation manual is published on the FIORI reference library.

Short manual:

  • Activate SICF service bas_ilm_jobmon
  • Activate ODATA service ILM_JOB_MONITOR_SERVICE
  • Manually add the tile in your catalog (use Edit Home Page and then add the app)

Using the app to display data archiving jobs

The main FIORI app tile already shows the number of failed jobs:

When you open the app, the overview screen appears:

On the left hand side you can choose the archiving object. On the right hand side you can see the last archiving jobs for the selected object.

When you click on a job, you can see the details per job:

There are tabs for the job results, job log details and application log.

Bug fix notes

Bug fix OSS notes: