SNOTE tips & tricks

This blog will give tips and tricks for the SAP SNOTE transaction. Questions that will be answered are:

  • How to update SNOTE itself?
  • How to check if there are new versions available for notes?
  • What is TCI?

If you are looking for a way to check which OSS notes are needed, read the ANST blog: the automated notes search tool.

Notes for SNOTE itself

SNOTE itself can also have bugs or receive new functions. Download and implement the most recent version of OSS note 1668882 – Note Assistant: Important notes for SAP_BASIS 730,731,740,750,751 to update SNOTE itself.

Downloading and implementing new versions of OSS notes

SAP regularly updates its own OSS notes. To check in your system if there are new updates for OSS notes relevant to you, go to transaction SNOTE. Then choose “Goto -> SAP Note Browser -> Execute (F8)”, and then choose “Download Latest Version of SAP Notes” in the application toolbar. This will download all the latest versions. Check for the status “Obsolete version implemented” in the implementation state column.

Activation of inactive objects after implementing OSS note

In rare cases, after implementing an OSS note, some of the ABAP objects remain in an inactive state. To activate them, select the menu SAP Note and then Activate SAP Note Manually.

Or you can run program SCWB_NOTE_ACTIVATE to activate the coding of the note:

SCWB_NOTE_ACTIVATE
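If you prefer to trigger the activation from a small custom helper report (for example as part of a post-implementation checklist), a minimal sketch could simply submit the standard report; the Z-name below is an assumption, and the note number is entered on the standard report's own selection screen:

REPORT z_note_activate_helper.
" Sketch: call the standard activation report SCWB_NOTE_ACTIVATE.
" VIA SELECTION-SCREEN displays its own screen, where you enter the note number.
SUBMIT scwb_note_activate VIA SELECTION-SCREEN AND RETURN.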

TCI: transport-based correction instructions

Transport-based correction instructions (TCI) deliver corrections that are too large for normal OSS notes. The tool leverages the SPAM transaction to apply these large packages.

Relevant OSS notes:

Start by reading the PDF document attached to OSS note 2187425 – TCI for customer. It contains the exact instructions to enable TCI-based correction instructions. ABAP developers: please execute these instructions with the help of a basis person; you will need SPAM access rights, and client 000 actions are involved, which are typically done by basis.

TCI only recently received a rollback function. Please check whether you can update/patch to the version where the rollback works. See the PDF document in OSS note 2187425 on the undo function.

Digitally signed OSS notes

For digitally signed OSS notes, see the dedicated blog.


SAP database growth control: HANA data aging

HANA data aging is a method to reduce the memory footprint of the HANA in-memory part without disturbing the end users. It does not reduce your database size.

This blog will answer the following questions:

  • What is HANA data aging?
  • How to switch HANA data aging on?
  • How to set up HANA data aging for technical objects?
  • What about data aging for functional objects?

What is HANA data aging?

HANA data aging is an application method to reduce the memory footprint based on application data logic. It is not a database feature but an application feature. The goal of HANA data aging is not to reduce the database size (which it does not do), but to reduce the actual memory footprint of the HANA in-memory database.

Let’s take idocs as an example: idocs that are processed ok need to be kept in the database for an agreed amount of time before business or audit allows you to delete them. Let’s say you can only delete after 1 year. Every action on idocs now means that a full year of idoc content is occupying main memory. For daily operational tasks you normally only need 2 months of data in memory; for the rest you can accept that it takes a bit longer to read from disk into memory.

This is exactly what data aging does: you partition the data into chunks based on application logic. In this case you can partition the idoc data per month and keep only the last 2 months in active memory. The other 10 months are on disk only. Reading data from the last 2 months is as fast as usual. When reporting on the 10 months on disk, the system first needs to load them from disk into memory, which will be slower.
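A quick back-of-the-envelope example with assumed numbers: if the idoc tables occupy 600 GB spread evenly over the 12 months, keeping only the last 2 months hot leaves roughly 600 GB × 2/12 = 100 GB in active memory, while the remaining ~500 GB resides on disk only.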

To reduce the database size itself, you would still need to do data archiving.

The advantage of data aging is that the more expensive memory footprint can be reduced in such a way that end users are not hampered: data aging is transparent to them. With data archiving, users always need to select a different transaction and data files.

How to switch on data aging?

To switch on data aging at system level, you need to do 2 things:

  1. Set the parameter abap/data_aging to on in RZ11
  2. In SFW5 switch on the switch called DAAG_DATA_AGING

This only enables the system for data aging.
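To quickly verify the parameter from code, a small sketch could read it via the standard profile parameter class; treat the exact method signature as an assumption and verify it in your release:

REPORT z_check_data_aging_param.
" Sketch: read the abap/data_aging profile parameter.
" CL_SPFL_PROFILE_PARAMETER is the standard profile parameter API;
" check the GET_VALUE signature in SE24 for your release before using.
DATA lv_value TYPE spfl_parameter_value.

CALL METHOD cl_spfl_profile_parameter=>get_value
  EXPORTING
    name  = 'abap/data_aging'
  IMPORTING
    value = lv_value.

WRITE: / 'abap/data_aging =', lv_value.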

Switching on data aging for a technical object: example for application logging

With transaction DAGADM you can see the administration status of the data aging objects. At first you see red lights indicating that the objects are not activated for data aging.

Per object there are extra transactions (which unfortunately differ per object…) to set the retention times. For application logging this is transaction SLGR. In this example we choose to age all logs after 180 days:

The advantage of this tailoring is that you can age only some of the objects if you want.

The transaction and OSS note for each of the objects can be found on this SAP blog.

The next step is to set up partitions for the object. To do this, start transaction DAGPTM and open the object you want to partition:

SBAL partitioning

The initial screen is in display mode. Hit the Change button. On the bottom right side, hit the Period button (Selection Time Period). In the popup, enter the desired start date, time buckets (months, years) and number of repetitions:

Partition intervals

Now the partitions are defined. To execute the partitioning, hit the execute button to start it in the background. Wait until the job finishes. Before running this on a productive system, check the runtime first on a non-productive system with roughly the same data size, if possible.

After partitioning the screen should look like this:

Now we can activate the object in transaction DAGADM. Select the object and press the activate button. A popup appears to assign the object to an existing or a new data aging group:

The data aging run will be done per group.

To start the actual data aging run, start transaction DAGRUN.

Here you can schedule a new run with the Schedule new run button.

To see the achieved results of the data aging, go to transaction DAGADM and select the object. Then push the button View current/Historical data.

Functional data aging objects

Functional data aging objects exist as well, for financial documents, sales orders, deliveries, etc. The full list and the minimum required application version can be found on this SAP blog.

Words of caution for functional data aging:

  • The technical data aging objects are more mature in coding and usage. They are used in productive systems and have fewer bugs than the functional objects.
  • Before switching on a functional data aging object, you need to prepare your custom ABAP code. If it is not adjusted properly to take the partitions into account via date selections (or another application-level selection mechanism), all benefits are immediately lost. A Z program that constantly reads the full history will force a continuous read of historical partitions, as the sketch below illustrates.
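A minimal sketch of the difference, using financial documents (table BKPF) as a hypothetical example; the report name, company code and date horizon are assumptions for illustration only:

REPORT z_bkpf_read_example.
" Hypothetical report illustrating why date selections matter with data aging.
DATA: lt_bkpf    TYPE STANDARD TABLE OF bkpf,
      lv_horizon TYPE budat.

" Assumed hot horizon of roughly 2 months
lv_horizon = sy-datum - 60.

" Problematic: no date restriction, so the historical partitions are
" read as well - the data aging benefit is lost.
SELECT * FROM bkpf INTO TABLE lt_bkpf
  WHERE bukrs = '1000'.

" Better: a posting date restriction keeps the read within the hot partitions.
SELECT * FROM bkpf INTO TABLE lt_bkpf
  WHERE bukrs = '1000'
    AND budat >= lv_horizon.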


SAP database growth control: technical cleanup

This blog explains technical cleanup to reduce SAP database growth and to regain control of it.

Questions that will be answered are:

  • How to run the standard SAP cleanup jobs?
  • Where can I find a full list of items that could be cleaned up?
  • How to run the cleanup of some common objects?
  • Is database reorganization needed after cleanup?

This blog assumes you have followed the steps in the blog on getting insight into your fast-growing SAP tables.

If you run ECC on HANA or S/4HANA, check out this blog on data aging.

List of technical cleanup items

A full list of all possible technical cleanup items can be found in OSS note 2388483 – How-To: Data Management for Technical Tables. The chapters below describe the most common ones.

SAP standard cleanup jobs

Using SM36 you can plan all SAP standard jobs (which include a lot of cleanup jobs for spools, dumps, etc.) via the button Standard Jobs.

By hitting the button Default scheduling in an initial system, or after any upgrade or support package, the system will plan its default cleanup schedule.

SM36 standard job scheduling

Cleanup of old idocs

Idoc data is stored in EDI* tables. The largest tables are usually EDI40, EDIDS and EDIDC.

Old idocs can be deleted using transaction WE11.

Idoc deletion

In batch mode you can schedule it as program RSETESTD.

At the bottom of the selection screen are the technical options:

Idoc deletion technical settings

The idoc deletion job can fail if there is too much data to process. If that happens, remove the 4 tickboxes here and use the separate deletion programs: RSWWWIDE, RSARFCER, SBAL_DELETE and RSRLDREL2. These 5 programs combined (RSETESTD plus the 4 listed) delete the same data, but run more efficiently. This procedure is also explained in OSS note 1574016 – Deleting idocs with WE11/RSETESTD.
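A minimal batch sketch, assuming you have created and saved a selection variant for each program that matches your retention rules (the Z-report name and the variant name Z_NIGHTLY are assumptions):

REPORT z_idoc_cleanup_chain.
" Sketch: run the separate deletion programs in sequence as one batch job.
" Create the Z_NIGHTLY variants in SE38 first; schedule this report in SM36.
SUBMIT rsetestd    USING SELECTION-SET 'Z_NIGHTLY' AND RETURN. " idocs
SUBMIT rswwwide    USING SELECTION-SET 'Z_NIGHTLY' AND RETURN. " work items
SUBMIT rsarfcer    USING SELECTION-SET 'Z_NIGHTLY' AND RETURN. " RFC data
SUBMIT sbal_delete USING SELECTION-SET 'Z_NIGHTLY' AND RETURN. " application logs
SUBMIT rsrldrel2   USING SELECTION-SET 'Z_NIGHTLY' AND RETURN. " object links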

Cleanup of table logging

Table logging is stored in table DBTABLOG. Deletion can be done using transaction SCU3 and then choosing the option Edit/Logs/Delete, or by using program RSTBPDEL.

After you apply OSS note 2535552 – SCU3: New authorization design for table logging, the new transaction code SCU3_DEL will be available.

DBTABLOG deletion

Cleanup of application logging

Application logging is stored in tables BALDAT and BALHDR. Deletion can be done using transaction SLG2 or by using program SBAL_DELETE.

The last options, to fine-tune the number of logs per job and the commit counter setting, do not appear by default. Select the menu option Program/Expert mode first.

A tuned setting for the commit counter is described in OSS note 2507213 – SBAL_DELETE runs too long.

Delete old RFC data

Old RFC data can be deleted using transaction SM58: select some data, then in the overview screen choose the menu option Log File/Reorganize. Alternatively, start program RSARFCER.

Delete old change pointers

Old change pointers occupy space in tables BDCP2 and BDCPS. You can use transaction BD22 or report RBDCPCLR/RBDCPCLR2 to delete them.

Delete change pointers

MDG change pointers

If you are using MDG: it has its own set of change pointer tables. The cleanup transaction code is MDGCPDEL. The program for batch job cleanup is RMDGCPCLR.

Workflows

Workflows are stored in many tables starting with SW*.

You can delete workitem history with transaction SWWH or program RSWWHIDE.

Delete workflow item history

This cleanup only removes the work item's technical history, not the workflow itself. Whether the workflow itself can be deleted or has to be archived is a functional decision that depends on business and audit needs.

The workflow deletion program can create a large amount of spool output. If this is not wanted, use the NULL printer.

Workflow deletion

If you want to delete the actual workflow, you have to run program RSWWWIDE.

Before deleting workflows, take care to check that they are not needed for audit or financial proof. Some workflows contain approval steps with a record of who approved what at which time.

Large number of documents in the SAP inbox

If you have a large number of items in your SAP inbox, you can delete them via program RSSODLIN. Background information is in OSS note 63912 – SAPoffice: Delete user sessions.

Updating statistics

If you are running an Oracle database, it is wise to include, as the last step of the technical cleanup job, the online reorganization of tables or indexes using program RSANAORA. See the blog.

Enqueue and lock table issue analysis

Enqueue and lock table issues can be a bit hard to analyze from time to time. They don't occur regularly, and when they do, they can have a big impact on system performance.

This blog will explain:

  • How to detect enqueue issues?
  • How to quickly analyze enqueue issues?

Detecting enqueue issues

Enqueue issues can easily be detected in SM50 and SM66 when work processes get stuck for a long time with status ENQ.
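Besides watching SM50/SM66, you can take a quick programmatic snapshot of the lock table with the standard function module ENQUEUE_READ. A minimal sketch (the report name is an assumption):

REPORT z_enq_snapshot.
" Sketch: list the current lock table entries via ENQUEUE_READ.
DATA: lt_enq   TYPE STANDARD TABLE OF seqg3,
      ls_enq   TYPE seqg3,
      lv_lines TYPE i.

CALL FUNCTION 'ENQUEUE_READ'
  EXPORTING
    gclient = sy-mandt " current client
    guname  = space    " space = locks of all users
  TABLES
    enq     = lt_enq.

DESCRIBE TABLE lt_enq LINES lv_lines.
WRITE: / 'Current number of lock entries:', lv_lines.
LOOP AT lt_enq INTO ls_enq.
  WRITE: / ls_enq-guname, ls_enq-gname, ls_enq-garg.
ENDLOOP.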

First analysis on enqueue issues

The first analysis of enqueue issues can be done in transaction SM12. From the menu, select the options Extras / Diagnosis and Extras / Diagnosis in Update. This will run the diagnostics on the enqueue handling.

The result looks like this:

SM12 check enqueue in update

To get statistics on the enqueue processing, select the menu Extras / Statistics on the same SM12 start screen.

Deeper analysis of enqueue issues

For deeper analysis of the lock issues, you might need to switch to the detailed error handling part of SM12. This is a hidden feature. To switch it on, you must have the correct authorization (S_ENQUE with ALL in the activities). Switching is done by keying in the word TEST in the GUI command field (where you normally key in the tcodes, /n, etc.).

Now you will see an extra menu called Error Handling.

From this menu you can directly launch program RSMONENQ_PERF via the menu option Error Handling / Diagnosis Environment. This program checks the performance of the enqueue handling:

Result from program RSMONENQ_PERF

The Error Handling menu also gives you the option to trace the enqueue processing.

More background can be found in OSS note 2252679 – How to analyze an enqueue lock problem and OSS note 2126913 – ENQU: The enqueue log (specifically on the logging).

Lock table overflow

Lock table overflow can happen when programs set more locks than the allocated memory for locks can hold. In a normal system this will hardly ever occur. But during a conversion that operates on a massive amount of data (sometimes even using parallel jobs), a lock table overflow can happen. If it happens, it will affect ALL users: they will get a lock table overflow error and cannot save their work. More than enough reason to test large conversions first on a test system with production-like sizing and settings.

What can be done about lock table overflow?

Provided you have checked your system sizing, you can increase the lock table memory by increasing the parameter enque/table_size.

Before increasing it, make sure you have read OSS note 746138 – Analyzing lock table overflows.