SAP has improved TAANA so it can count on dynamic subfields. This blog will explain how. More generic information on TAANA can be found in this blog.
Questions that will be answered in this blog are:
How to get the new TAANA function for dynamic subfields?
How to run TAANA dynamic subfields?
How to get the new TAANA function for dynamic subfields?
We will use table JEST as example. This table has a pretty annoying setup. The main field OBJNR is in fact 2 fields: the first 2 characters are the object identification, and the second part is a number for the object. But if you want to analyze how many object types you have, this is problematic with SE16.
In TAANA we can use the dynamic subfields. Start transaction TAANA and create an Ad Hoc Analysis for table JEST. First hit Execute to start, enter table JEST and in this screen hit the Ad Hoc Variant button:
Now select the OBJNR field:
In the Offset field fill in 0, and in Subfield Length fill in 2. This means: take the first 2 characters of field OBJNR. Press OK and start the run in the background.
The end result is a cross section with counts of the object types, based on the first 2 characters of JEST-OBJNR:
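For reference, a minimal ABAP SQL sketch that produces the same counts (a sketch only, assuming a release, roughly 7.5x or higher, where ABAP SQL supports string expressions like substring in the GROUP BY clause):

SELECT substring( objnr, 1, 2 ) AS objtype,
       COUNT(*) AS cnt
  FROM jest
  GROUP BY substring( objnr, 1, 2 )
  INTO TABLE @DATA(lt_counts).
* lt_counts now holds one row per 2-character object type with its count.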
SE16S and SE16H
For some searches, also have a look at SE16S and SE16H.
SE16H is a HANA specific implementation of SE16. This blog will explain the additional functions of SE16H.
Questions that will be answered in this blog are:
How to use SE16H?
Where to find the full list of SE16H functions?
Which bug fix notes for SE16H should I apply?
SE16H: HANA specific implementation of SE16
SE16 and SE16N are among the most used transactions for data analysis on any SAP system. SE16H is the HANA-specific implementation, which leverages some of the HANA-specific strengths.
The transaction code to start is simply SE16H. We now enter VBAK as example table. Just pressing Execute will give a simple list of the first 500 entries. Nothing new.
Now we run again, but tick the Group and Sort tick boxes for the Document Category field:
The output now is a count of the sales orders in table VBAK, grouped by identical Document Category:
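Under the hood this is essentially a GROUP BY aggregation pushed down to the database. A minimal ABAP SQL sketch of the same output (Document Category is field VBTYP of VBAK):

SELECT vbtyp,
       COUNT(*) AS cnt
  FROM vbak
  GROUP BY vbtyp
  ORDER BY vbtyp
  INTO TABLE @DATA(lt_doc_counts).
* One row per document category with the number of sales documents.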
TAANA vs SE16H vs SE16S
If you run on HANA, the SE16H transaction is a faster option than the classical TAANA transaction, since SE16H runs online while TAANA runs in batch.
SE16H is for lookups in a single table. SE16S can search for content in one or multiple tables. More on SE16S in this blog.
This blog addresses the main challenge in SAP data archiving for functional objects: the discussions with the business.
This blog will give answers to the following questions:
When to start data archiving discussion with the business?
How to come to good retention periods?
What are arguments for not archiving certain data?
Data archiving discussion with the business
Unlike technical data deletion, functional data archiving cannot be done without proper business discussion and approval.
Depending on your business several aspects for data are important:
Auditing and Sox needs
Tax and legal retention periods
Product data requirements
And so on…
Here are some rules of thumb you can use before considering starting up the business discussions about archiving:
Rule of thumb 1: the system is pretty new. Wait at least 3 years to get insight into which tables are growing fast and are worth investigating for data archiving.
Rule of thumb 2: if your system is growing slowly, but the infrastructure capabilities grow faster: only perform technical clean up and don't even start functional data archiving.
Rule of thumb 3: if you are on HANA: use NSE (Native Storage Extension) or check if the data aging concept for functional objects is stable enough and without bugs. NSE and data aging do not require too much work: they are purely technical and do not require much business discussion. Data retrieval is transparent from the end user perspective.
Data analysis before starting the discussion
If your system is growing fast and/or you are getting performance complaints, then you need to do proper data analysis before starting any business discussion.
Start with proper analysis of the data. Use the TAANA tool to get insights into the data: what is the distribution of data per document type, per year, per plant/company code, etc.? If you want to propose a retention period of, let's say, 5 years, you can use the TAANA results to show what percentage of data you can move out of the database.
Secondly: if you have an idea which data you want to archive, first execute a trial run on a recent production copy. There might be functional blocks that prevent you from archiving data (like documents that are not closed).
The third important factor is the ease of data retrieval. Some objects have a nice, simple data retrieval function, and some are really terrible. If the retrieval is good, the business will more easily accept a shorter retention period. Read more on technical data retrieval in this blog.
As a last step you can build the business case: how much data will be saved (and hence how much money), how much performance will be gained, and how much time needs to be invested in setting up, checking (testing!) and running the data archiving runs.
In practice a data archiving business case is only present in very large systems of 5 TB and larger. This sizing tipping point shifts over time as hardware gets cheaper and hourly manpower costs go up.
The discussion itself
Take enough time to plan for the discussion itself. It is not uncommon for archiving discussions to take over a year to complete. The better you are prepared, the easier the discussion. It also helps to have a few real performance pain points that can be solved via data archiving. There is normally a business owner for such a pain point who can help push data archiving.
This blog will explain how to execute a data archiving run.
Questions that will be answered in this blog are:
Which settings do I need to make or check before data archiving run?
How to perform the data archiving run?
How to validate the data archiving run?
How to retrieve the archived data?
This blog assumes you have finished the basic technical data archiving setup as described in this blog. It also assumes you have made agreements with your business on the retention periods. For more information and tips on discussions with the business teams on data archiving, read this blog.
If you are looking for specific functional data archiving runs:
Functional data archiving example: purchase requisitions
To explain functional data archiving we will use Purchase Requisitions as example. The technical object name is MM_EBAN.
To see which tables are archived, hit the Database Tables button. Here you can see the list of tables from which data can potentially be archived:
If you want to look the other way around (in which archiving objects a table is used), put in the table as entry point to retrieve the list of archiving objects. In this example, the archiving objects that delete from table EBAN:
Dependency of objects
By clicking the top left button on the archiving object you get the archiving dependency view. For MM_EBAN this is pretty simple: it has no dependencies.
As an example of dependencies, this is the overview for sales orders (SD_VBAK):
Here you can see that before you can archive sales orders, you should archive the billing documents first. And for the billing documents, you should archive the deliveries first.
Functional archiving settings
First we have to make or check the object specific functional archiving settings.
In the case of purchase requisitions we have to set the retention periods per document type:
Pre-processing step
Some archiving objects have a pre-processing step. MM_EBAN has one as well. In this step data is selected and marked for archiving (often by setting a deletion flag or other indicator).
In this step, create the variant (give it a useful name) by putting in the name and pressing Edit. On the next screen fill out your data selection and the log level. Go back to the first screen and set the start date and spool parameters. When both lights are green, hit the Execute button. When you click the Job Log button you can check the results.
Example of result of pre-processing run:
As you can see, not all selected data is archived. Transactions that are not completed from a business point of view will not be flagged for archiving.
Write run
If you have done the pre-processing step, continue with the write step. The principle is the same: select the data and the log level. Important in the write step is to correctly fill the Archiving Session Note with a useful text. This text is put as a label on the archive file for later retrieval:
When done, plan the job and execute. The result looks like:
Depending on your technical system settings, the file will be stored automatically or you still need to do this manually.
Storage run
If you have set up the system to store files in a content server, you first have to execute a storage run. For more details see this dedicated blog.
Deletion run
Finally we can start the deletion run: this is where the actual clean up of old data happens.
Select the data files you want to archive and start the run.
A word of care with deletion: please don't select too many files and subsections in one go. Each file subsection will result in a deletion job. The deletion will put significant load on the database, since it is pushing out a lot of data. If you are not careful, you will easily launch 20 or more heavy deletion jobs that run in parallel and might severely decrease system performance.
Result of archiving deletion run:
Checking archive result
Result checking is possible by looking at the technical correctness of the archive file.
In the archiving object choose the Overview button. Then select the archive file you want to inspect. A correct file should look like this:
In the testing phases and the first production runs, you also want to do record counting. A good way is to run the TAANA transaction for the key tables you want to archive, before and after the archiving. The difference should match the deletion counter in the write and deletion logs. If you find differences: check for bug fix OSS notes.
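As a minimal verification sketch (EBAN as example table; the before-count value is an assumed placeholder you would note down before the archiving session):

DATA lv_count_before TYPE i VALUE 1000000. "placeholder: noted before archiving
SELECT COUNT(*) FROM eban INTO @DATA(lv_count_after).
DATA(lv_deleted) = lv_count_before - lv_count_after.
* lv_deleted should match the deletion counter in the write and deletion logs.
WRITE: / |Deleted from EBAN: { lv_deleted } rows|.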
Data retrieval
Retrieving archived data works differently per archiving object. Some retrieval is nicely integrated into the normal transaction, some requires an extra transaction, and some retrieval runs via a special program.
Data retrieval of purchase requisitions can be done via SARA and choosing the read option.
Here you first need to manually select the archive files to read from (see, I did not fill in the note and regret it, since the file name has no meaning now…):
Before starting to check the data archiving for an object, it is best to check and read the OSS notes for the pre-processing, write, delete and read programs. Apply the bug fix notes and read about certain aspects before you spend a time-consuming effort figuring out that you have hit a bug or a certain feature that is documented inside the notes.
Controlling amount of parallel batch jobs
The deletion phase of archiving can lead to an uncontrolled amount of parallel batch jobs. See this dedicated blog on how you can control it.
Data archiving run statistics
Transaction SAR_DA_STAT_ANALYSIS can be used to collect statistics on the data archiving runs:
FIORI app
If you are running a recent version of S4HANA, you can also use a FIORI app for monitoring the data archiving runs. Read more in this dedicated blog.
This blog will explain the general technical setup to be performed for SAP data archiving.
Questions that will be answered in this blog are:
Which generic settings do I need to make for data archiving in the technology domain?
Why should I use a content server to store archive files?
For getting insights in what to archive, read this dedicated blog first.
Data archiving content server setup
For data archiving you can use the file system for storing the archive files. You can do this for initial testing. For productive use it is best to store the archive files in a content server. It will not be the first time an overzealous basis person in need of file storage deletes some old files in a directory called /archive…
After you install the content server, set up in OAC0 the customizing for the content server to use it for ArchiveLink:
In this initial screen no object is selected. Now press the Customizing button.
Set the Cross-Client File Names/Paths to your needs. You can do that from this menu, or directly from the FILE transaction.
Set the physical path name to be used:
Even when you use a content server, the file will first be written to the physical path for temporary storage.
And check the archive file name:
Technical settings per archiving object
Per archiving object you can set the technical settings. Normally you keep the settings the same per object. Only for very large installations or special archiving needs might you want to deviate.
In the technical settings per data archiving object set the following:
Important settings to set:
Max size in MB or the max number of objects
Check the variants (some variants for production deliberately still have the test tick box switched on: you have to change this)
Best to leave the delete jobs on Not Scheduled (large archiving runs can create many files and make many deletion jobs kick in at the same time): it is best to do this manually in a controlled way
Whether to start storage automatically or manually is your choice
Best to store before deletion. This is the most conservative setting.
Best to delete only from storage system: if the file is not stored properly in any way, the deletion will not happen. This is the most conservative setting.
Actual data archiving runs
How to execute the actual data archiving runs is explained in this dedicated blog.
The upload and processing of the last digit patch file can take a long time (typically 1 hour). If you don't take measures, the system will dump after 10 minutes with a time-out.
Go to RZ11 and set rdisp/max_wprun_time to value 12000 (and undo this after the patching). In newer versions of netweaver the parameter is rdisp/scheduler/max_runtime, which needs to be set to 120m.
Now start program /UI5/UI5_UPLOAD_PATCH_TO_MIME:
The file has to point to the file you have downloaded to your desktop. Use F4 to select the correct file. The request/task must be a valid unreleased workbench request.
First run in test mode. Wait until it is done (1 hour is normal…). If the result is ok, remove the tick box for test mode and run in real mode (yes, 1 more hour to wait).
End result should look like:
After the application of the patch, apply the FLP note (in this case note 2605065).
Now you can start the version overview again to see if the patching was ok:
As you can see, the 1.52 version is now updated to level 1.52.23. The 1.48 version remains the same.
When you want to apply the last digit patch on the Q and P systems, you can move the transport you selected in the upload step. The unfortunate thing is that the import of this transport into Q and P also takes about 1 hour. This means you need to properly plan the import (especially on production: select a time when no users are using FIORI apps).
Patching versus upgrading
The goal of last digit patching is simple: it solves bugs in the SAP delivered UI5 libraries. But it can also bring new bugs.
Best patching strategy: only patch when you have a bug that must be solved. Then patch to the latest version. Don't think latest minus one: since the UI5 patches come every 2 to 4 weeks, just take the latest one. If your system is stable: don't patch.
Upgrading to a higher FIORI frontend server will give you new libraries with new functions. Also, the higher frontend servers have better performance due to a faster ABAP kernel, better caching features, etc. If you are using newer S4HANA solutions, you will be forced to upgrade the frontend server to a specific minimum version.
Best practice for upgrading: if you are using a central FIORI gateway server, plan for an upgrade every year, or every 2 years at minimum. Every year at least apply a support pack: the support pack will also do the last digit patching. After a support pack or full version upgrade, immediately patch to the latest last digit version available before starting the testing.
There are 2 good reasons for mass locking users and ending their validity dates: security and licenses.
Questions that will be answered in this blog are:
How can I mass lock users automatically if they have not logged on for a certain time?
How can I mass set the validity date of the users that did not log on for a certain time?
Automatic lock of user after expired logon
In RZ11 you can set parameter login/password_max_idle_productive with an amount in days.
If the user (including yourself) did not log on to the system within this amount of days, the password is still valid, but it no longer allows a logon.
If the user tries to log on after the period, he will see this error message and cannot continue:
In SU01 such a user looks like this:
If you also want to automatically lock users after you give them a new password, use the parameter login/password_max_idle_initial.
Initial passwords are one of the nice ways for a hacker to enter a system. Especially if the initial password used by the admin is more or less always the same (like Welcome_1234!). Countermeasure: instruct your admins to use the Password Generator. This will generate a long, random, one-off password.
Mass setting of user validity date
For user measurement and security reasons you want to limit the validity period as well. Users who are locked still count for user measurement (see the blog on license measurement tips & tricks). Users locked and unlocked by some method can be a security threat.
Standard SAP program RSUSR_LOCK_USERS (built on top of program RSUSR200) is the tool to achieve this.
It has quite a long selection screen:
In the first block, set the dates for last logon and password change to get a good selection of users.
In the second block it is very important to select Dialog Users only.
First run with Test Selection to get a list. If you are happy with the list, run it with Set End Of Validity Period.
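To illustrate the kind of selection the program makes, here is a hedged ABAP sketch reading the last logon date (TRDAT) and user type (USTYP) from table USR02. The real program offers far more criteria; this is illustration only:

DATA lv_cutoff TYPE d.
lv_cutoff = sy-datum - 365. "example cutoff: one year without logon
SELECT bname, trdat
  FROM usr02
  WHERE trdat < @lv_cutoff
    AND ustyp = 'A' "dialog users only
  INTO TABLE @DATA(lt_idle_users).
* lt_idle_users: dialog users that did not log on since the cutoff date.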
OSS notes
Performance and bug fix notes (OSS search hints: RSUSR200 and RSUSR_LOCK_USERS):
For some batch jobs you want to have the execution done, but you don't want to fill up your system with large spool files from this execution. This blog will explain how to set up printer NULL to have a batch job suppress the output generation.
Questions that will be answered in this blog are:
How do I set up printer NULL?
How to test the setup of printer NULL?
Where to find more background information on printer NULL?
Setup of printer NULL
Start transaction SPAD to define a new printer. Now create a printer called NULL (with long and short name both NULL):
Select a simple Windows driver. Fill in the other mandatory fields. In the message description, state clearly that the output will be lost.
Save the printer definition.
Testing the NULL printer
From the blog explaining the technical clean up we will take program RSWWHIDE. This program generates a huge amount of output (3 to 10 lines per deleted item). We will run the program twice in test mode: once with printer NULL and once with printer LP01 (the default printer). Selecting printer NULL works the same as with any printer:
Result in SM37:
The first run with printer NULL has suppressed the generation of the spool file.
This blog will analyze some of the tables behind the SAP user license measurement.
Warning: the list of tables below is not complete. Do not base any assumptions on the content of these tables in your system. In updates and newer versions all content can change. The tables and the text in this blog are meant to give you insight into the process. In any contract SAP will claim the right to inspect the actual usage of your system versus the license rights in your contract.
Questions that will be answered are:
How do I know which objects are measured?
How are objects measured?
How can I find actual measured objects?
The general user measurement principles are explained in the blog on USMM.
The tables behind license measurement
The best table to start with is the TUAPP table: measurement of applications.
An example is given below:
Here you can see that Advanced ATP is measured via a call to a function module. In SE37 you can look up the function module and see inside the code what exactly is measured:
The other entry in TUAPP we will take as example is Procurement Orders. Its application ID is 5000 and it does not measure via a function module.
First we get the application-to-unit mapping and the unit name from table TUAPP_UNT (the units themselves are defined in table TUUNT):
Now we see that procurement is counting Inquiry, Purchase Order, Contract, Scheduling Agreement and Others.
The actual values read by the measurement for the application counters are stored in table TUCNT:
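If you want a quick raw peek at these counters outside SE16, a trivial sketch (just browsing; the exact key fields differ per release, so no filter is applied here):

SELECT *
  FROM tucnt
  INTO TABLE @DATA(lt_counters)
  UP TO 50 ROWS.
* Raw inspection of the first 50 measurement counter records.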
The tables behind the AC checks
The AC (anti cheating) modules use slightly different tables.
Table TUL_AC_UNIT denotes the table to count on:
Here you see that the main procurement table EKKO has ID number 5018.
In table TUL_ACTTC you can look up this value:
This data is used in a dynamic SQL statement that lists the user names (ERNAM) who did the create or change, using AEDAT (last change or creation date) on table EKKO to count for check 5018.
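A heavily simplified, hedged sketch of such a count (this is not SAP's actual measurement code, and the measurement period start date is an illustrative assumption):

DATA lv_from TYPE d VALUE '20230101'. "illustrative measurement period start
SELECT COUNT( DISTINCT ernam )
  FROM ekko
  WHERE aedat >= @lv_from
  INTO @DATA(lv_user_cnt).
* lv_user_cnt: distinct users that created/changed purchasing documents.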
Be careful with interface users. If an external system posts data into the SAP system with a single background user, but it is clear that in the source system multiple real users are doing the actions, SAP might want to charge you for 'indirect use'.
For live support of an SAP system you typically will have 2 types of support users:
Users for SAP themselves to logon to your system and provide support to you
Fire-call users with elevated authorizations to solve time critical incidents
Both types of users have no direct business goal, but only support usage. You can mark them as type 91 Test user, as long as you have a clear naming convention for these users and a general rule that they are locked unless they are needed.
User deletion as regular activity
The user measurement program (both USMM and USMM2) checks for deletion of users in the last three months. To avoid discussions on user deletion, it is best practice to delete, monthly or bi-monthly, all persons who have left your company.
End validity date
Users who don't have a current validity date are not counted in the user measurement program. You might want to schedule program RSUSR_LOCK_USERS in a regular batch job to automatically end the validity of users that did not log on for a long time. See this blog for more details.
Multiple logon
SAP does measure how many times a user has a double, triple, etc. logon. The results are stored in table USR41_MLD. SAP might argue that the same user ID is used by multiple persons. You can use the contents of table USR41_MLD to prove it was only a mistake. If there are too many multiple logons, you might need to go back to the business to change their behavior.
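To review this yourself, a hedged sketch (the field names BNAME and COUNTER are assumptions; verify the USR41_MLD structure in SE11 on your release first):

SELECT bname, counter
  FROM usr41_mld
  ORDER BY counter DESCENDING
  INTO TABLE @DATA(lt_multi_logon)
  UP TO 20 ROWS.
* Top 20 users with the highest multiple-logon counters.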
You can also forbid multiple logons at system level. SAP system parameter login/disable_multi_gui_login can be set in RZ11 to forbid multiple logons. For some users (like DDIC) you do want to keep multiple logons. These users must be listed in system parameter login/multi_login_users=username1,username2,username3,etc.
Proper consolidation
Use the SLAW or SLAW2 tool to execute a proper consolidation of your measured users. This process will de-duplicate your counted users.
License validation program
Read this dedicated blog to know more about license validation program RSUVM080.
LUI License utilization information
The LUI (license utilization information) tool is an online SAP tool that combines all the information on your on-premise and cloud licenses. For cloud, the usage is automatically visible. For on-premise systems you can upload the usage via the SLAW files. This can give you insight into under-consumption and over-consumption of licenses. Read more in this blog.
Check for bug fix notes in your advantage
SAP might give you a list of OSS notes to apply in your system before the measurement. These notes normally benefit SAP. You can also check for OSS notes that benefit you.