Demo systems hosted at SAP:
Free trial demo systems can be requested via this link.
Free product trials can be requested: follow this link.
A 14-day trial for S4HANA Cloud can be found via this link.
SAP Netweaver Gateway is used to host Fiori apps. Pay attention to the overall performance of the system, to avoid Fiori users complaining about slow response times.
Run program /IWFND/R_SM_CLEANUP in batch to clean up the application log. See note 2499354 – Report /IWFND/R_SM_CLEANUP. You can run it with cleanup scenario * for all scenarios, or select a specific one:
/IWFND/CLEANUP_APPSLOG will clean up the application log.
/IWFND/SUPPORT_UTILITIES will clean up support utilities.
See more on periodic tasks on help.sap.com.
See OSS note 2620821 – Periodical jobs and corresponding reports to also run programs /IWBEP/SUTIL_CLEANUP and /IWBEP/R_CLEAN_UP_QRL regularly (daily).
Run report /UI2/PERS_EXPIRED_DELETE weekly to clean up old personalization data (see note 2031444 – Cleanup of expired Fiori personalization data on the frontend server).
In customizing go to this path:
The setting should look like the following in a productive system:
In a development system you should deactivate this cache.
Check if you have enabled HTTP/2 already. If not, activate it. See this blog.
In a productive system you can reduce the data footprint and improve performance by reducing the log level settings. In case of issues, you can temporarily increase the log levels again.
To avoid excessive SLG1 logging of type “MESSAGE_TEXT”, make sure to apply solution from OSS note 3247600 – Fiori: /IWFND/MED170 – No service found for namespace ‘/IWBEP/’, name ‘MESSAGE_TEXT’, version ‘0001’.
Go to transaction /n/IWFND/ERROR_LOG and select menu Error Log / Global Configuration. The configuration opens:
Set the Error log level in production to Secure, and set the retention days to realistic values (the settings in the screenshot above follow SAP advice).
Go to transaction /n/IWBEP/ERROR_LOG and select menu Error Log / Global Configuration. The configuration opens:
Set the Error log level in production to Secure, and set the retention days to realistic values (the settings in the screenshot above follow SAP advice).
Make sure you don’t keep metering data for too long. You can aggregate and delete it. See this blog.
Check your Fiori search settings. The setup of search is described in this blog and is very powerful. But search might not be needed at all, or most users may only want to search for apps:
See note 2871580 – SAP Fiori Launchpad Settings: new Parameters for Enterprise Search for the settings to optimize, and SAP blog on explanation of the settings.
See note 2885249 – How to disable Enterprise Search in Fiori Launchpad to disable the enterprise search part.
See note 2051156 – Deactivation of search in SAP Fiori launchpad for deactivation.
The number of tiles assigned to a user has a big impact on performance. Keep the number of tiles assigned to a user as low as possible.
See OSS notes 2829773 – Fiori Launchpad performance with a large volume of Tiles and 2421371 – Understanding Launchpad performance Issues.
Specific notes and solutions for tables that are growing fast in a netweaver gateway system:
This should be a simple one, but it isn’t. Sizing of Netweaver Gateway should not be done based on a day or week average. Determine your peak times and check the CPU and memory load at those peak times. If you take averages that include weekends and nights, the system sizing might look totally fine. But to avoid complaints, you must be able to handle the peak load.
SAP has a checklist called “SAP Fiori launchpad best practice verification checklist”. Follow this link for the document.
Good other blog with checklist: follow this link.
If you convert ECC to S4HANA, you need to execute custom code adjustments both for the HANA database migration and for the functional application changes. You can read more in this blog and this blog.
If you only want to migrate an existing database to HANA for a netweaver ABAP stack (either standalone or for SAP ECC), you will also need to adjust custom code.
Questions that will be answered in this blog are:
There are mandatory ABAP changes to be made for HANA database migration. The main ones are:
The first few will not occur often and are relatively easy to fix.
The last one, the statements without ORDER BY, needs some explanation. Some current custom code might work properly on the current database, since some databases present the data to the ABAP application server in a specific sorted way. After migrating to the HANA database, the same records might be presented to the ABAP application server in a different or even random order. This can lead to issues in the further handling in custom code. The solution is to analyze the code and add explicit sorting as needed by the custom program. To scan the usage in a live system, see the chapter on SRTCM below.
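A minimal ABAP sketch of the ORDER BY issue (the report name, table and selection below are purely hypothetical examples, here using the standard sales item table VBAP):

REPORT zdemo_order_by.
* Hypothetical example: reading sales order items from VBAP.
TYPES: BEGIN OF ty_item,
         vbeln TYPE vbap-vbeln,
         posnr TYPE vbap-posnr,
         matnr TYPE vbap-matnr,
       END OF ty_item.
DATA lt_items TYPE STANDARD TABLE OF ty_item.
DATA lv_vbeln TYPE vbap-vbeln VALUE '0000012345'.

* Before: the result order depends on the database and may change after
* the migration to HANA.
SELECT vbeln posnr matnr
  FROM vbap
  INTO TABLE lt_items
  WHERE vbeln = lv_vbeln.

* After: make the required order explicit in the SELECT ...
SELECT vbeln posnr matnr
  FROM vbap
  INTO TABLE lt_items
  WHERE vbeln = lv_vbeln
  ORDER BY vbeln posnr.

* ... or sort in ABAP if the order is only needed for later processing.
SORT lt_items BY vbeln posnr.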
All these changes can be detected with the SCI variant FUNCTIONAL_DB:
Run this SCI variant via the ATC tool on your custom code:
Wait for the run to finish and go to the results. The best overview is when you click the Statistics View button:
Clicking on an item will drill down to the details.
The second set of custom code changes is from the performance side. For this set you need to run the ATC tool with SCI variant PERFORMANCE_DB:
The PERFORMANCE_DB variant has 2 main parts: mandatory fixes and good-to-fix items.
The mandatory fix is the unsafe use of SELECT FOR ALL ENTRIES. If this is not properly checked, it might blow up the system:
What happens here? If the internal table in the FOR ALL ENTRIES clause is empty, the WHERE conditions that refer to it are ignored and the entire database table is read. On the current database this might for whatever reason have gone unnoticed, but on HANA this full table read can blow up the system. To scan the usage in a live system, see the chapter on SRTCM below.
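A minimal ABAP sketch of the required guard (again a hypothetical selection, using VBAK/VBAP only as example tables): run the SELECT only when the driver table has entries.

* Hypothetical example: lt_orders is filled somewhere above.
DATA lt_orders TYPE STANDARD TABLE OF vbak.
DATA lt_items  TYPE STANDARD TABLE OF vbap.

* Unsafe: if lt_orders is empty, the WHERE condition is ignored
* and the whole VBAP table is read:
* SELECT * FROM vbap
*   INTO TABLE lt_items
*   FOR ALL ENTRIES IN lt_orders
*   WHERE vbeln = lt_orders-vbeln.

* Safe: guard the SELECT against an empty driver table.
IF lt_orders IS NOT INITIAL.
  SELECT * FROM vbap
    INTO TABLE lt_items
    FOR ALL ENTRIES IN lt_orders
    WHERE vbeln = lt_orders-vbeln.
ELSE.
  CLEAR lt_items.
ENDIF.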
The other part is the performance best practices for HANA:
This ATC run can yield a very long working list:
Where to start? Even the priority 1 and 2 findings can yield a very long list.
Use the SQLM and SWLT tools. These tools help you to prioritize the ATC results from the PERFORMANCE_DB variant. SQLM collects SQL monitoring data from production, so you can start with the most heavily used programs. SWLT combines this usage data with the ATC results. The output is the list of heavily used programs that can be improved.
The SRTCM tool is specifically designed to scan for 2 main issues: Empty table in FOR ALL ENTRIES clause and Missing ORDER BY or SORT after SELECT. The tool is run on a productive system and lists the actual usage there.
To switch it on, start transaction SRTCM and press the Activate Globally button.
Let the tool run, and later Display Results from either running system or snapshot:
Show results;
Clicking on the line will jump to the direct code point.
Note for Oracle as source database: 3209584 – RTM: RTM_PERIODIC_JOB canceled with runtime error SQL_CAUGHT_RABAX (ORACLE).
Functional issues after HANA database migration are rare. Some might occur on an ECC system if there are minor bugs or issues in the HANA database optimized routines.
Example OSS notes:
SAP solution manager offers a custom code decommissioning cockpit tool. You can use this tool to delete unused custom code. Unused code does not need to be migrated, which saves you effort.
SAP will stop supporting SAP solution manager by 31.12.2027. All its functions will be migrated to SAP Cloud ALM.
The SAP readiness check for Cloud ALM will check the usage of current functions in your SAP solution manager and will validate if replacements in Cloud ALM are already present.
Since Cloud ALM is constantly growing in functionality, you should run this check yearly. It is up to each customer to decide whether to move to SAP Cloud ALM already for part of the functions, or to wait until all functions are available.
There are other readiness checks as well:
Apply the latest version of SAP note 3236443 – SAP Readiness Check for SAP Cloud ALM to your SAP solution manager system. Run program /SDF/RC_ALM_COLLECT_DATA in your solution manager production system.
It might be that your RFC destination WEBADMIN is not set up correctly. This will cause a short dump.
If you are not using the Trace Analysis function, you can remove the code for this call in the local definition of class /SDF/CL_RC_ALM:
Run program /SDF/RC_ALM_COLLECT_DATA in your solution manager production system. Select the option Schedule Analysis:
When the job is finished select Download Analysis Data and save the file.
Upload the result file on the Readiness page in SAP for me on this URL.
After the upload, wait up to 1 hour for SAP to process the results. For a full explanation of the checks, read this SAP blog. A short description is below:
Top left you get an overview of the capabilities that are currently in use in Solution manager, or were used in the past:
Below is the most important overview. This shows whether the functions you use are already available in Cloud ALM, will come in the near future, or only in the far ‘Vision’ future:
Read more on SAP Cloud ALM positioning in this blog.
Read more on SAP Cloud ALM activation in this blog.
For parts of solution manager that you don’t use, you can perform the technical cleanup.
This blog will explain how to archive MM accounting interface posting data via object MM_ACCTIT. Generic technical setup must have been executed already, and is explained in this blog.
MM_ACCTIT is a strange object. Since it is about intermediate data that is normally not viewed by users, there is no read program; for the worst case there is a reload program. Do also read about the option to stop using these intermediate tables if the business does not require them.
Please check OSS note 48009 – Tables ACCTHD, ACCTIT, ACCTCR: Questions and answers to see if you can fully avoid data being written to the tables.
Go to transaction SARA and select object MM_ACCTIT.
Dependency schedule (no dependency):
Tables that are archived:
Write program: MM_ACCTIT_WRI
Delete program: MM_ACCTIT_DEL
Read program: none. Only write and delete.
Relevant OSS notes:
MM_ACCTIT has no application specific customizing.
In transaction SARA, MM_ACCTIT select the write run:
Select your data, save the variant and start the archiving write run.
After the write run is done, check the logs. MM_ACCTIT archiving has a high speed and a high archiving percentage (up to 100%).
The deletion run is standard: select the archive file and start the deletion run.
This blog will explain how to archive CO line items via object CO_ITEM. Generic technical setup must have been executed already, and is explained in this blog.
Go to transaction SARA and select object CO_ITEM.
Dependency schedule (no dependency):
Tables that are archived:
Most important:
Write program: CO_ITEM_WRI
Delete program: CO_ITEM_DEL
Read program: RKCOITS4
Relevant OSS notes:
In the application specific customizing for CO_ITEM you have to set the residence time in months per CO type:
In transaction SARA, CO_ITEM select the write run:
Select your data, save the variant and start the archiving write run.
Give the archive session a good name that describes the date range. This is needed for data retrieval later on.
After the write run is done, check the logs. CO_ITEM archiving has an average speed, but a high archiving percentage (up to 100%).
The deletion run is standard: select the archive file and start the deletion run.
Start the data retrieval program and fill selection criteria:
To avoid reporting issues (see OSS note 2676572 – SARI/KSB5 Archived documents not displayed in line item report), fill the archive infostructure SAP_CO_ITEM_001.
This blog will explain how to archive purchase orders and purchase documents via object MM_EKKO. Generic technical setup must have been executed already, and is explained in this blog.
Go to transaction SARA and select object MM_EKKO.
Dependency schedule:
No dependencies.
Main tables that are archived:
Pre-processing program: RM06EV70
Write program: RM06EW70
Delete program: RM06ED70
Read program: RM06ER30
Relevant OSS notes:
In the application specific customizing for MM_EKKO you can maintain the document retention time settings:
You have to set the residence time per purchase order type:
In transaction SARA, MM_EKKO select preprocessing:
There are quite a few reasons why a purchase order cannot be archived.
In transaction SARA, MM_EKKO select the write run:
Select your data, save the variant and start the archiving write run.
Give the archive session a good name that describes the purchasing group and year. This is needed for data retrieval later on.
After the write run is done, check the logs. MM_EKKO archiving has an average speed and a medium archiving percentage (50 to 90%).
The deletion run is standard: select the archive file and start the deletion run.
For MM_EKKO start the read via SARA:
Then select the archive file(s).
The result is a simple list.
If you set up the archiving infostructures, the popup with the files will be skipped.
This blog will explain how to archive change documents via object CHANGEDOCU. Generic technical setup must have been executed already, and is explained in this blog.
Go to transaction SARA and select object CHANGEDOCU.
Dependency schedule: (none):
Change documents are archived as part of other archiving objects. For specific changes you might want to archive the change documents sooner, to get a grip on the CDHDR and CDCLS/CDPOS table sizes and the number of entries.
Main tables that are archived:
Write program: CHANGEDOCU_WRI
Delete program: CHANGEDOCU_DEL
Read program: CHANGEDOCU_READ
Reload program: CHANGEDOCU_REL
Relevant OSS notes:
No application specific customizing is required for CHANGEDOCU archiving.
In transaction SARA, CHANGEDOCU select the write run:
Select your data, save the variant and start the archiving write run.
Give the archive session a good name that describes change document object(s) and year. This is needed for data retrieval later on.
After the write run is done, check the logs. CHANGEDOCU archiving has an average speed and a very high archiving percentage (up to 100%).
The deletion run is standard: select the archive file and start the deletion run.
Don’t start the data retrieval program from SARA. Start program CHANGEDOCU_READ from SA38 (see OSS note 3395609 – Default read program for CHANGEDOCU in transaction AOBJ):
In the second screen select the archive files. Now wait a long time before the data is shown.
For faster retrieval, set up the data archiving infostructures SAP_CHANGEDOCU1 and SAP_CHANGEDOCU2. These are not active by default, so you have to use transaction SARJ to set them up and later fill the structures (see blog).
Now transaction RSSCD100 can be used for data retrieval:
Don’t forget to select the tick box “Read from archive info system”.
Another option is via transaction SARE (archive explorer) and then choose object CHANGEDOCU with archive structure SAP_CHANGEDOCU1.
This blog will explain how to archive purchase requisitions via object MM_EBAN. Generic technical setup must have been executed already, and is explained in this blog.
Go to transaction SARA and select object MM_EBAN.
Dependency schedule:
No dependencies.
Main table that is archived:
Pre-processing program: RM06BV70
Write program: RM06BW70
Delete program: RM06ID70
Read program: RM06BR30
Relevant OSS notes:
In the application specific customizing for MM_EBAN you can maintain the document retention time settings:
You have to set the residence time per requisition type:
In transaction SARA, MM_EBAN select preprocessing:
There are quite a few reasons why a purchase requisition cannot be archived.
In transaction SARA, MM_EBAN select the write run:
Select your data, save the variant and start the archiving write run.
Give the archive session a good name that describes the purchasing group and year. This is needed for data retrieval later on.
After the write run is done, check the logs. MM_EBAN archiving has an average speed and a medium archiving percentage (50 to 90%).
The deletion run is standard: select the archive file and start the deletion run.
For MM_EBAN start the read via SARA:
Then select the archive file(s).
The result is a simple list:
This blog will explain how to archive WM transfer orders and requirements via objects RL_TA and RL_TB. Generic technical setup must have been executed already, and is explained in this blog.
Go to transaction SARA and select object RL_TA or RL_TB.
Dependency schedule (no dependencies for both):
Main tables that are archived:
RL_TA:
Write program: RLREOT00S
Delete program: RLREOT10
Read program: RLRT0001
RL_TB:
Write program: RLREOB00S
Delete program: RLREOB10
Read program: RLRB0001
Relevant OSS notes:
RL_TA and RL_TB don’t have application specific customizing.
Typically RL_TA and RL_TB will yield 90 to 100% of documents that can be archived.
In transaction SARA, RL_TA or RL_TB select the write run:
Select your data, save the variant and start the archiving write run.
Give the archive session a good name that describes the warehouse and year. This is needed for data retrieval later on.
After the write run is done, check the logs. RL_TA and RL_TB archiving has an average speed and a high archiving percentage (90 to 100%).
The deletion run is standard: select the archive file and start the deletion run.
Start the data retrieval program and fill selection criteria for transfer orders:
Start the data retrieval program and fill selection criteria for transfer requirements:
In the popup screen select the wanted archive files.