This blog explains how to mass stop and mass start batch jobs as admin. This is especially useful when putting the SAP system into maintenance mode. Maintenance mode can be needed for an upgrade, support package patching or a data conversion.
Questions that will be answered are:
How to mass stop batch jobs?
Can I plan new jobs I need during the suspend mode?
Downloading and implementing new versions of OSS notes
SAP regularly updates its own OSS notes. To check whether there are new updates for OSS notes relevant to you, go to transaction SNOTE. Then choose “Goto -> SAP Note Browser -> Execute (F8)”, and then choose “Download Latest Version of SAP Notes” in the application toolbar. This will download all the latest versions. Check for the status “Obsolete version implemented” in the implementation state column.
Issues with OSS note downloads
In rare cases, OSS note download and extraction might fail.
Activation of inactive objects after implementing OSS note
In rare cases, after implementing an OSS note, some of the ABAP objects remain in an inactive state. To activate them, select the menu option SAP Note and then Activate SAP Note Manually.
Or you can run program SCWB_NOTE_ACTIVATE to activate the coding of the note:
Transport-based correction instructions (TCI) are used to deliver notes that are larger than normal OSS notes. The TCI tool leverages the SPAM transaction to apply these large packages.
Start by reading the PDF document attached to OSS note 2187425: TCI for customer. It contains the exact instructions to enable TCI-based correction instructions.
TCI only recently received a rollback function. Please check if you can update/patch to the version where the rollback works. See the PDF document in OSS note 2187425 on the undo function.
Applying TCI note
There are 2 ways to upload a TCI note.
Basis way: you will need SPAM access rights, and actions in client 000 are involved. Upload the TCI file in SPAM in client 000. Then apply the note via SNOTE in the main client. The note tool will ask you to confirm the use of the TCI mechanism.
ABAP way: you will need SPAM access rights. In transaction SNOTE use menu option Goto / Upload TCI. After uploading the file, choose Decompress. Now apply the note via SNOTE. The note tool will ask you to confirm the use of the TCI mechanism.
During the implementation, you may be forced to delete all BI queues.
Transporting obsolete TCI packages
If you upgraded earlier to S4HANA or another recent version, some of the TCI notes might be obsolete. There is an issue moving these through the landscape. Read and apply the solution from OSS note 3116396 – How to Adjust Obsolete TCI Notes in Downstream Systems for the fix.
For digitally signed OSS notes, see the special blog.
KBA notes
Some notes don’t contain coding updates, but are KBAs: Knowledge Base Articles. You have to read the note in detail; it contains manual instructions or explanations.
In newer Netweaver versions, SNOTE has been revamped. You can apply this version earlier if you want to use it. Read more on the SNOTE revamp in this blog.
Applying notes in shadow during upgrade
In rare cases you might need to apply an OSS note in the shadow system during a system upgrade. The basis team will usually use the SUM tool for this. Applying notes to the shadow system during an upgrade can be needed to solve bugs that stop the upgrade.
Always handle with care. If you are not experienced with upgrades, let a senior colleague handle it.
HANA data aging is a method to reduce the memory footprint of the HANA in-memory part without disturbing the end users. It does not reduce your database size.
This blog will answer the following questions:
What is HANA data aging?
How to switch HANA data aging on?
How to set up HANA data aging for technical objects?
What about data aging for functional objects?
What is HANA data aging?
HANA data aging is an application method to reduce the memory footprint based on application data logic. It is not a database feature but an application feature. The goal of HANA data aging is not to reduce the database size (which it does not do), but to reduce the actual memory footprint of the HANA in-memory database.
Let’s take idocs as an example: idocs that are processed ok need to be kept in the database for an agreed amount of time before business or audit allows you to delete them. Let’s say you can only delete them after 1 year. Every action on idocs now means that a full year of idoc content is occupying main memory. For daily operational tasks you normally only need 2 months of data in memory; for the rest you can accept that it takes a bit longer to read from disc into memory.
This is exactly what data aging does: you partition the data into chunks based on application logic. In this case you can partition the idoc data per month and keep only the last 2 months in active memory. The other 10 months are on disc only. Reading data from the last 2 months is as fast as usual. When reporting on the 10 months on disc, the system first needs to load them from disc into memory, which is slower.
To reduce the database size itself, you would still need to do data archiving.
The advantage of data aging is that the more expensive memory footprint can be reduced in such a way that the end users are not hampered: data aging is transparent for them. With data archiving, the users will always need to select a different transaction and archive files.
How to switch on data aging?
To switch on data aging on system level you need to do 2 things:
Set the parameter abap/data_aging to on in RZ11
In SFW5 switch on the switch called DAAG_DATA_AGING
This only enables the system for data aging.
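If you want to verify the parameter setting programmatically, you can read it from ABAP. A minimal sketch; the use of class CL_SPFL_PROFILE_PARAMETER and its GET_VALUE method is an assumption based on the common profile parameter API, so check the exact signature in your release:

" Sketch: read the current value of abap/data_aging.
DATA lv_value TYPE spfl_parameter_value.

IF cl_spfl_profile_parameter=>get_value(
     EXPORTING name  = 'abap/data_aging'
     IMPORTING value = lv_value ) = 0.
  WRITE: / 'abap/data_aging =', lv_value.
ELSE.
  WRITE: / 'Parameter abap/data_aging could not be read.'.
ENDIF.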
Data aging switch on for a technical object: example for application logging
With transaction DAGADM you can see the administration status of the data aging objects. At first you will see red lights indicating that the objects are not activated for data aging.
Per object there are extra transactions (which unfortunately differ per object…) to set the retention times. For application logging this is transaction SLGR. In this example we choose to age all logs after 180 days:
The advantage of this tailoring is that you can age only some of the objects if you want.
The transaction and OSS note for each of the objects can be found on this SAP blog.
The next step is to set up partitions for the object. To do this, start transaction DAGPTM and open the object you want to partition:
The initial screen is in display mode. Hit the Change button. On the bottom right side, hit the Period button (Selection Time Period). In the popup, enter the desired start date, time buckets (months, years) and number of repetitions:
Now the partitions are defined. Hit the execute button to start the partitioning in the background. Wait until the job finishes. Before running this on a productive system, check the runtime first on a non-productive system with about the same data size, if possible.
After partitioning, the screen should look like this:
Now we can activate the object in transaction DAGADM. Select the object and press the activate button. A popup appears to assign the object to an existing or a new data aging group:
The data aging run will be done per group.
To start the actual data aging run, start transaction DAGRUN.
Here you can schedule a new run with the Schedule new run button.
To see the achieved results of the data aging, go to transaction DAGADM and select the object. Then push the button View current/Historical data.
Functional data aging objects
Functional data aging objects exist as well, for financial documents, sales orders, deliveries, etc. The full list and minimal application versions can be found on this SAP blog.
Words of caution for functional data aging:
The technical data aging objects are more mature in coding and usage. They are used in productive systems and have fewer bugs than the functional objects.
Before switching on a functional data aging object, you need to prepare your custom ABAP code. If it is not adjusted properly to take the partitions with the date selections (or other application selection mechanism) into account, all benefits are immediately lost. A Z program that constantly reads the full history will force a continuous read of the historical partitions, as the sketch below illustrates.
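A minimal sketch of the difference, assuming a 7.40+ system and a custom report over the idoc control table EDIDC (the same principle applies to any aged table):

* Bad: reads the full history and drags all historical partitions into memory.
SELECT docnum, status
  FROM edidc
  INTO TABLE @DATA(lt_all_idocs).

* Better: restrict on the creation date so mostly hot partitions are touched.
DATA lv_from TYPE d.
lv_from = sy-datum - 60. " only the last 2 months
SELECT docnum, status
  FROM edidc
  WHERE credat >= @lv_from
  INTO TABLE @DATA(lt_recent_idocs).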
This blog focuses on archiving of technical data objects and clean up by performing deletion. If you want to set up functional archiving, start by reading this blog.
Using SM36 you can plan all SAP standard jobs (which include a lot of clean up jobs for spools, dumps, etc) via the button Standard Jobs.
By hitting the button Default scheduling in an initial system, or after any upgrade or support package, the system will plan its default clean up schedule.
S4HANA has a different setup of standard jobs. See this blog.
Clean up of old idocs
Idoc data is stored in EDI* tables. The largest tables are usually EDI40, EDIDS and EDIDC.
Old idocs can be deleted using transaction WE11.
In batch mode you can schedule it as program RSETESTD.
At the bottom of the selection screen are the technical options:
The idoc deletion job can fail if there is too much data to process. If this happens, remove the 4 tick boxes here and use the separate deletion programs: RSWWWIDE, RSARFCER, SBAL_DELETE and RSRLDREL2. These programs combined will delete the same data, but run more efficiently; see the sketch below. This procedure is also explained in OSS note 1574016 – Deleting idocs with WE11/ RSETESTD.
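A minimal sketch for chaining these deletion programs into one background job via the standard JOB_OPEN / JOB_CLOSE function modules. The variant name Z_CLEANUP is an assumption: a saved selection variant with that name must exist for each report (error handling omitted for brevity):

DATA: lv_jobname  TYPE btcjob VALUE 'Z_IDOC_CLEANUP',
      lv_jobcount TYPE btcjobcnt.

* Open the background job.
CALL FUNCTION 'JOB_OPEN'
  EXPORTING
    jobname  = lv_jobname
  IMPORTING
    jobcount = lv_jobcount.

* Add one step per deletion report; each runs with its own saved variant.
SUBMIT rswwwide    USING SELECTION-SET 'Z_CLEANUP'
  VIA JOB lv_jobname NUMBER lv_jobcount AND RETURN.
SUBMIT rsarfcer    USING SELECTION-SET 'Z_CLEANUP'
  VIA JOB lv_jobname NUMBER lv_jobcount AND RETURN.
SUBMIT sbal_delete USING SELECTION-SET 'Z_CLEANUP'
  VIA JOB lv_jobname NUMBER lv_jobcount AND RETURN.
SUBMIT rsrldrel2   USING SELECTION-SET 'Z_CLEANUP'
  VIA JOB lv_jobname NUMBER lv_jobcount AND RETURN.

* Close the job and start it immediately.
CALL FUNCTION 'JOB_CLOSE'
  EXPORTING
    jobcount  = lv_jobcount
    jobname   = lv_jobname
    strtimmed = 'X'.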
Table logging is stored in table DBTABLOG (general information on table logging can be found in this blog). Deletion can be done using transaction SCU3 and then choosing the option Edit/Logs/Delete, or by using program RSTBPDEL.
Application logging (SLG1) is stored in tables BALDAT and BALHDR (for general information on the use of the application log, read this blog). Deletion can be done using transaction SLG2 or by using program SBAL_DELETE.
The last options to fine tune the number of logs per job and the commit counter setting do not appear by default. Select menu option Program/Expert mode first.
Old RFC data can be deleted using transaction SM58: select some data, then in the overview screen choose the menu option Log File / Reorganize. Or start program RSARFCER.
If you are using MDG: it has its own set of change pointer tables (MDGD_CP_REP_STAT). The clean up transaction code is MDGCPDEL. The program for batch job clean up is RMDGCPCLR.
Workflows are stored in many tables starting with SW*.
You can delete work item history with transaction SWWH or program RSWWHIDE.
This clean up only removes the technical work item history and not the workflow itself. Whether the workflow itself can be deleted or should be archived is a functional decision that depends on business and audit needs.
The workflow deletion program can create a large amount of spool output. If this is not wanted, use the NULL printer.
If your business is using the GOS (generic object services) to see workflows linked to a business document, and they cannot retrieve the archived work item, please follow carefully the instructions in OSS note 2356250 – Not able to view archived workflows.
If you want to delete the actual workflow you have to run program RSWWWIDE.
Take care: before deleting workflows, check that they are not needed for audit or financial proof. Some workflows will contain approval steps with a recording of who approved what at which time.
If you have a large number of items in your SAP inbox, you can delete them via program RSSODLIN. Background is in OSS note 63912 – SAPoffice: Delete user sessions.
Test this first and check with the data owner that the documents are no longer needed.
For a full explanation on deleting SAP office documents (including all the pre-programs to run) and bug fix notes: read this dedicated blog on SAP office document deletion.
Usually the business will not allow deletion of SAP office documents (unless they are very old). You might end up with a SOFFCONT1 table of 100 GB or more.
Instead of deleting SAP office documents, you can also migrate them to a content server. Read more in this blog.
Change documents
Change documents contain business data changes to business objects. If tables CDHDR and CDPOS grow very big, start with an age analysis (see the sketch below). You can propose to the business to delete change documents older than 10 years, since 10 years is the legal retention time for a lot of data. Deletion is done via program RSCDOK99. If the business does not want to delete the data but keep it in an archive, you can use data archiving object CHANGEDOCU. Retrieval of archived change documents is done via transaction RSSCD100.
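A quick count of old entries can already give direction for the age analysis. A minimal sketch (7.40+ syntax; the 10-year cutoff is an assumption matching the proposal above):

* Count change document headers older than roughly 10 years.
DATA lv_cutoff TYPE d.
lv_cutoff = sy-datum - 3650.

SELECT COUNT(*)
  FROM cdhdr
  WHERE udate < @lv_cutoff
  INTO @DATA(lv_old_docs).

WRITE: / 'Change documents older than 10 years:', lv_old_docs.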
If you have large SYS_LOB tables, most likely these are filled with attachments. Consider setting up an SAP content server (see blog) and then migrating the documents from the SAP database to the content server (see blog).
To analyze SYS_LOB tables, follow the instructions in this dedicated blog.
You can schedule program RSAUPURG or program RSAU_FILE_ADMIN with the right variant to delete old Audit log data:
Before deleting audit log data, first agree with your security officer on the retention period. More on audit log in this blog.
Clean up of user role assignment data
If you have an older system, you will find that many users have double roles assigned, or roles with validity dates in the past. This leads to a large number of entries in table AGR_USERS. You can clean up by compressing this data with program PRGN_COMPRESS_TIMES. Read more in this blog.
Large WBCROSSGT table
Table WBCROSSGT is used to store the ABAP where-used index. It might be large after an upgrade. Use program RS_DEL_WBCROSSGT to delete the index and program SAPRSEUB to recreate it.
For clean up of a solution manager system, read this dedicated blog.
Clean up for SAP Focused Run
For clean up of a SAP Focused Run system, read this dedicated blog.
Updating statistics
If you are running an Oracle database, it is wise to include, as the last step of the technical clean up job, the online reorganization of tables and indexes using program RSANAORA. See this blog.
Enqueue and lock table issue analysis can be a bit hard from time to time. These issues don’t occur regularly, and when they do, they can have a big impact on system performance.
This blog will explain:
How to detect enqueue issues?
How to quickly analyze the enqueue issues?
Detecting enqueue issues
Enqueue issues can easily be detected in SM50 and SM66 when work processes get stuck for a long time with status ENQ.
First analysis on enqueue issues
The first analysis of enqueue issues can be done in transaction code SM12. From the menu, select the options Extras / Diagnosis and Extras / Diagnosis in Update. This will run the diagnostics on the enqueue handling.
Result looks like:
To get statistics on the enqueue processing, on the same SM12 start screen select the menu option Extras / Statistics.
Deeper analysis of enqueue issues
For deeper analysis of the lock issues, you might need to switch to the detailed error handling part of SM12. This is a hidden feature. To switch it on you must have the correct authorization (S_ENQUE with ALL in the activities). Switching is done by keying in the word TEST in the GUI command field (where you normally key in the transaction codes, /n, etc.).
Now you will see an extra menu called Error Handling.
From this menu you can directly launch program RSMONENQ_PERF via the menu option Error Handling / Diagnosis Environment. This program will check the performance of the enqueue handling:
The Error Handling menu will also give you option to trace the enqueue processing.
Lock table overflow can happen when programs set more locks than the memory allocated for locks can hold. In a normal system this will hardly ever occur. But during a conversion that operates on a massive amount of data (sometimes even using parallel jobs), lock table overflow can happen. If it happens, it will affect ALL users. They will get a lock table overflow error and cannot save their work. More than enough reason to test a large conversion first on a test system with production-like sizing and settings.
If you are running an older ECC system, the lock table settings in the profile parameters might be set quite low. Newer upgraded ECC systems can handle much higher values of the enque/table_size parameter.
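To keep an eye on the lock table filling up during such a massive conversion, you can count the current lock entries from ABAP. A minimal sketch, assuming the classic function module ENQUE_READ (the one behind SM12) is available; check its interface and the meaning of an empty user name in your release:

* Sketch: count the current entries in the lock table.
DATA lt_locks TYPE STANDARD TABLE OF seqg3.

CALL FUNCTION 'ENQUE_READ'
  EXPORTING
    guname = space   " assumption: empty user name selects locks of all users
  TABLES
    enq    = lt_locks.

DATA(lv_count) = lines( lt_locks ).
WRITE: / 'Current lock entries:', lv_count.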
After starting transaction ST22, select menu item Goto / Overview. Fill out the dates and you get the overview, including statistics on the occurrences:
Detecting Z code in a dump is normally easy if it is a Z program. Some dumps, however, are caused by Z code in a user-exit, which in turn calls SAP code. Such a dump looks 100% standard SAP, but when you scroll down in the Call Stack you will see the Z code:
Before raising an OSS message to SAP: make sure the call stack does not contain custom Z code.
RFC_NO_AUTHORITY dump
The RFC_NO_AUTHORITY dump is a special kind of dump and typically looks like this:
The first things to get from the dump are the user ID and the calling system (is it an internal call or a call from a different system?), and whether the user ID belongs to a human user or a system user.
The second thing to determine is: is this a valid call or not?
In case of a valid call, look in the dump to see which authorization is missing and what needs to be added. After the addition is done, keep an eye on the dumps, since a new dump might appear for a different authorization object.
In case of an invalid call, you need to determine how the call was initiated and take action to avoid the initiation. This is not always a simple job.
Why is checking this dump important? Complete business flows might be disrupted if this happens. It is hard for the end users to detect what is going on. It takes time for them to raise an incident and for functional people to determine the cause. A lot of valuable time can be lost this way.
What can also happen: people try to connect via RFC methods to read data. This will give a lot of dumps which are hard to follow up on.
If you get too many of these dumps and you can’t solve them, you can switch parameter rfc/signon_error_log to value -1. The dumps are then no longer shown in ST22, but instead moved to the SM21 system log with less detail. If you need the details again, switch the parameter back (it is dynamic). Background on the parameter rfc/signon_error_log can be found in OSS note 402639 – Meaningful error message texts (RFC/Workplace/EBP).
CALL_FUNCTION_SINGLE_LOGIN_REJ dump
A bit similar to the above dump is the CALL_FUNCTION_SINGLE_LOGIN_REJ dump. Here a user tries to log in via RFC to the SAP system, from a different SAP system or from a JCo-based connector.
Again: first determine if the call is valid or not. If not valid, determine the calling source (can be hard!).
If it is a valid call, scroll down in the details section for this dump and look for the part below:
There are two codes: the T-RC code and the L-RC code. Check both codes. In the case above, the user ID validity was no longer ok.
Depending on the codes, a different solution needs to be applied.
Why is checking this dump important? Complete business flows might be disrupted if this happens to a system user. If it happens to a single user, they might get grumpy. It is hard for the end users to find out what is going on. It takes time for them to raise an incident and for functional people to determine the cause. A lot of valuable time can be lost this way.
TIME_OUT dumps
If an online query takes longer than the time set in parameter rdisp/max_wprun_time, a TIME_OUT dump will happen. By default, and as a best practice, this time-out parameter is set to 10 minutes. This is also the case in most systems.
This dump will look like:
If you scroll down (or click in the left section) to the User and Transaction section, you can see the ID of the user who caused this and the transaction.
The first reaction of the average basis person is to call/mail the user and ask them to run the report in batch mode. This is indeed one of the solutions.
Alternative potential solutions:
Analyze with the end user whether they can fill out more selection criteria (hence reducing the time needed to select the data)
Analyze with the end user whether they can run the report in multiple smaller sets
Check if there are known performance OSS notes for the transaction the user is running (the root cause might simply be an SAP bug)
Check if the database statistics of the queried tables are up to date
In some cases the selection criteria are ok and the batch output only gives a few results: here the creation of a special index might be the solution. This can happen with check reports that look for business exceptions.
Why is checking this dump important? Users tend to get very frustrated with the system if they hit this dump. They have to wait 10 minutes and have no result. Sometimes you see this dump a couple of times in a row. Imagine yourself being the user with a boss demanding a report which crashes after 10 minutes…
MESSAGE_TYPE_X dumps from program SAPLOLEA
A MESSAGE_TYPE_X dump can point to a very serious issue. But the ones generated by program SAPLOLEA point towards one type: the SAP GUI to server interaction.
This dump typically looks like this: a main dump MESSAGE_TYPE_X and calling program SAPLOLEA.
This dump can have 3 main root causes:
Issue in ABAP code (hit the SAP correction notes button to search for solutions)
Issue in the local SAP GUI installation of the end user
Issue in the SAP kernel
If you see many dumps with the same user ID, this typically points towards an old local SAP GUI installation. The solution is to update the local SAP GUI for that user to the latest version that is supported in your company.
In rare cases the SAP kernel causes these kinds of dumps. They are hard to find and detect. The only remedy here is to update the kernel at regular intervals.
To find which users use which SAP GUI version: go to transaction SM04 and add the field SAP GUI version:
From ABAP code, use function module TH_USER_LIST to get the list of sessions. The GUI version is in the field GUIVERSION of output table USRLIST.
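A minimal sketch of that approach. The line type of USRLIST is assumed here to be UINFO2 with a user name field BNAME; check the function module interface in SE37:

* Sketch: list logged-on users with their SAP GUI version.
DATA lt_users TYPE STANDARD TABLE OF uinfo2.

CALL FUNCTION 'TH_USER_LIST'
  TABLES
    usrlist = lt_users.

LOOP AT lt_users INTO DATA(ls_user).
  WRITE: / ls_user-bname, ls_user-guiversion.
ENDLOOP.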
For more background on SAP GUI patching read this dedicated blog.
These dumps are caused by a missing entry in the RFC callback positive list. See OSS note 2981184 – What to do in case of CALL_FUNCTION_BACK_REJECTED short dump. The solution is to add the function module to the RFC callback positive list. Do not reduce RFC security by lowering the RFC callback security parameter rfc/callback_security_method. Read this blog on how callback RFC can be abused for hacking, and why you should not lower the security.
Coding and table generation dumps
Dumps can happen because coding and tables were not generated properly. When this happens during a transport import, it is normal. If it persists after the import, you need to act. Best practice notes:
This blog explains how to get insight into SAP database growth and how to control the growth.
Questions that will be answered are:
Do I have a database growth issue?
What are my largest tables?
How do I categorize my tables?
Why control database growth?
There are several reasons to control database growth:
When converting to S/4 HANA you could end up with a smaller physical HANA blade and need to buy fewer memory licenses from SAP
Less data storage leads to lower costs (think also about production data copied back to acceptance, development and sandbox systems)
Backup/restore procedures take longer with large databases
Performance is better with smaller databases
Database growth
The easiest way to check whether the database is growing too fast is the Database Growth section in the SAP EWA (EarlyWatch Alert). The EWA has both a graphical and a table representation of the growth:
You now have to determine whether the growth is acceptable or not. This depends a bit on the usage of the system, the number of users, the business data, and whether you have already stretched your infrastructure.
General rules of thumb:
1. Growth < 1 GB/month: do not spend time.
2. Growth > 1 GB/month and < 5 GB/month: implement technical clean up.
3. Growth > 5 GB/month: implement technical clean up and check for functional archiving opportunities.
Which are my largest tables?
To find the largest tables and indexes in your system, start transaction DB02. Select the option Space/Segments/Detailed Analysis and select all tables larger than 1 GB (or 1000 MB):
Then wait for the results and sort them by size:
You can also download the full list.
Analysis of the large tables
Processing of the tables usually starts with the largest tables first.
You can divide the tables into the following categories:
Technical data where deletion and clean up can be done (logging you don’t want any more, like some idoc types, or application logging older than 2 years, etc.): see the blog on technical clean up
Technical data where archiving or storing can be done (idocs you must keep but don’t need fast access to, attachments)
In Oracle based systems, you might find large SYS_LOB tables. To analyze these, read this special blog.
SAP has a best practice document called “Data Management Guide for SAP Business Suite”, also known as the “DVM guide”. This document is updated every quarter to half year. The publication location is a bit hidden by SAP under their DVM (data volume management) service. At the bottom there, go to SAP Support and open the How-to-guides section. Or search on Google for the term “Data Management Guide for SAP Business Suite” (you might end up with a slightly older version). The guide gives you options per large table to delete and/or archive data.
Common technical objects
Most common technical tables you will come across:
EDIDC, EDIDS, EDI40: idocs
DBTABLOG: table changes
BALHDR, BALDAT: application logging
SWW* (all that start with SWW): workflow tables
SYS_LOB…..$$: attachments (office attachments and/or DB storage of attachments and/or GOS, generic object services, attachments)
Detailed table analysis for functional tables: TAANA tool
For detailed analysis on functional tables the TAANA (table analysis) tool can be used. Simply start transaction TAANA.
Now create a table analysis variant by entering the table name and selecting the analysis variant:
The default variant will only do a record count. Some tables (like BKPF in this example) come with a predefined ARCHIVE variant. This is the most useful option. If this option does not fit your needs, you can also push the Create Ad Hoc Report button and define your own variant.
Caution: with the ad hoc variant, select your fields with care, since the analysis will count all combinations of the fields you select. Never select table key fields.
Results of TAANA are visible after the TAANA batch job is finished.
By running the proper TAANA analysis for a large functional table, you get insight into the distribution per year, company code, plant, document type, etc. This will also help you estimate the benefits of archiving a specific object.
For TAANA improvement on dynamic subfields, please check this blog.
If you run on HANA, you can also use SE16H for the table analysis.
SAP data volume management via SAP solution manager
SAP offers an option to report on data volume management via SAP Solution Manager directly, or as a subsection in the EWA. Experience so far: setup takes too long and the function is too buggy. The methods described above are much, much faster and give you insight in a matter of hours. The DVM setup will take you hours of work and days/weeks of waiting for results…. TAANA and SE16H are way faster.
This blog explains how to set your company logo on the SAP logon screen. If you prefer text or hyperlinks on the first screen or after the logon screen, please check this blog on text on the logon screen. For integrating an ABAP Web Dynpro page, see this blog.
Questions that will be answered are:
How to set your company or project logo on the SAP login page?
Why is the picture not shown?
Can I have multiple logon pictures?
Setting the logon picture
Start with transaction SMW0 and select the binary option:
Press execute and show the list.
Check in the menu Settings / Define MIME types that the .gif or .jpg MIME type is defined. If it is not there, define it.
Now go back to the main list and upload your company logo:
The object name will be re-used later.
Optionally you can display the picture. For this you might need to set the mime editor option (in the menu Settings / Set Mime Editor).
Now the picture is uploaded.
In transaction SM30 edit the contents of table SSM_CUST (in case your admin does not want you to use SM30, you can also use transaction SM30_SSM_CUST to maintain it):
Here add three parameters:
START_IMAGE with value ZCOMPANYLOGO (or the name you have given when uploading the image)
RESIZE_IMAGE with value NO
HIDE_START_IMAGE with value NO
Now log off and log on again: your picture should appear.
My picture does not appear, what did I do wrong?
Check that the value of HIDE_START_IMAGE in SSM_CUST is set to NO. If it is correctly set, try to log off and log on again.
If that fails, the most common cause is a simple personalized GUI setting. In the logon screen, select menu Extras / Settings and make sure the “Do not display picture” checkbox is not marked. New GUI installs have this set to on by default. Remove the checkmark and the picture will appear.
If you want, you can also embed a webpage instead of a picture (longer loading times might occur depending on the speed of the embedded webpage). Follow the instructions in OSS note 1387086 – HTML viewer in SAP Easy Access screen.
Multiple logon pictures
Multiple logon pictures are possible with SAP GUI 8.0. Read more in this blog from SAP.
This blog explains the options and tools you have for S/4HANA sizing, both for new installations and for upgrades.
Questions that will be answered are:
How can I execute S/4HANA sizing?
How do I execute the memory sizing for upgrading an existing ECC system on a non-HANA database to S/4HANA?
How do I execute CPU sizing for S/4HANA?
How do I execute disc storage sizing for S/4HANA?
Executing S/4HANA sizing
For both greenfield implementations and existing ECC systems, the SAP-specific quick sizer for S/4HANA can be used: go to the S4HANA quicksizer page, then launch the tool from that page:
For an existing system you can pull data from that system; for greenfield you have to take either existing numbers from the legacy system or input from the project team.
The term quick sizing can be a bit misleading. The tool is nowadays pretty advanced and requires quite some input.
SAP has delivered a tool to help with memory sizing for S4HANA when upgrading an existing system. In your current ECC system you need to apply OSS note 1872170 – Business Suite on HANA and S/4HANA sizing report. This will deliver the ABAP report /SDF/HDB_SIZING. Test it on the development system and transport it to production for the productive run.
It is best to run this in the background (see the sketch below). You can then get the results from the spool of the batch job.
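A minimal sketch for scheduling the sizing report as a background job via JOB_OPEN / JOB_CLOSE. The variant name ZPROD is an assumption: a saved variant for /SDF/HDB_SIZING with that name must exist (error handling omitted for brevity):

DATA: lv_jobname  TYPE btcjob VALUE 'Z_HDB_SIZING',
      lv_jobcount TYPE btcjobcnt.

* Open the background job.
CALL FUNCTION 'JOB_OPEN'
  EXPORTING
    jobname  = lv_jobname
  IMPORTING
    jobcount = lv_jobcount.

* Run the sizing report with the saved variant as a job step.
SUBMIT /sdf/hdb_sizing USING SELECTION-SET 'ZPROD'
  VIA JOB lv_jobname NUMBER lv_jobcount AND RETURN.

* Close the job and start it immediately.
CALL FUNCTION 'JOB_CLOSE'
  EXPORTING
    jobcount  = lv_jobcount
    jobname   = lv_jobname
    strtimmed = 'X'.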
The results give as good an estimation as possible of the memory sizing after the database conversion.
SAP has released S4HANA readiness check 2.0. Please read this blog on the new tool version.
If you want to use the old version, please read on.
This blog explains the new tool for SAP customers to prepare for an S/4 HANA upgrade: the S/4 HANA readiness check.
Questions that will be answered are:
What is the S/4 HANA readiness check?
How to execute it?
What results can I expect?
S/4 HANA readiness check
The S/4 HANA readiness check is a tool from SAP that can help you prepare for an S/4 HANA upgrade. It is a web-based online tool running in the SAP cloud that uses 2 files with data from your system:
An extract of your custom code
Usage data of transactions measured in your system (based on ST03N data)
The outcome is an online report with a list of potential improvements in S/4 HANA that might be relevant for your business, and a list of potential upgrade issues caused by custom code or by generic changes by SAP.
The end user guide of the tool can be found on the SAP site.
Execution of S/4 HANA readiness check
The main note for the readiness check is 2290622. This note describes that there are 2 ways to run the check:
Via solution manager
Directly
The direct approach is the easiest. The exact steps are always kept up to date in OSS note 2310438. Carefully implement all the prerequisite notes mentioned in this note.
After this is done, 2 programs will be available.
Program SYCM_DOWNLOAD_REPOSITORY_INFO will download the ABAP custom developments.
The program will check whether the where-used index is up to date. If not, it will refer to OSS note 2234970. This note can be a bit confusing, but basically what you need to do is run program SAPRSEUB in the background (and wait up to 2 days on a larger system with much custom code!).
Please note the following: as a prerequisite for SAP Note 2185390 or the program SYCM_DOWNLOAD_REPOSITORY_INFO, start only the program SAPRSEUB! Do not start SAPRSEUC. If you use an MSSQL database, you must implement SAP Note 1554667 before starting SAPRSEUB; otherwise, database problems occur. More on the ABAP where-used index via SAPRSEUB can be found in this blog.
The second program, TMW_RC_DOWNLOAD_ANALYSIS_DATA, will capture the analysis data.
You will have to start this program a few times. Every time it will launch a new batch job for each tick box you have selected.
Both programs will deliver a zip file that you store on your local PC or laptop.
When the analysis is finished you first enter the dashboard:
When zooming in you will reach the detailed screens with all the small details and relevant OSS note references:
Top right in the details list there is a button to create the results document. This makes it easier to share the results with management, since they typically don’t have an S user to log on to the tool.
Running S4HANA ABAP checks in your own system
With the remote ATC tool and the special variant S4HANA Readiness, you can run the ABAP checks in your own system. Read this blog for more information.
New content for new S4HANA versions
With every new version of S4HANA (and its intermediate feature packs) SAP will update the simplification list and the corresponding OSS notes. This also impacts the analysis programs. OSS note 2399707 – Simplification Item Check lists which note version you need to apply to your system to have the checks for the S4HANA version of your choice. For the newer notes you will have to use the TCI-based OSS notes (see the blog on notes tips & tricks).
If you have installed the latest TCI note, you also get a new program called /SDF/RC_START_CHECK. After starting this program you get this screen:
You can now immediately see whether there are new versions of OSS notes to apply to get the most recent checks.
And after the run, you can use the button Application Log to see a more detailed result list on the simplification checks carried out in your system.
Custom ABAP code analysis
For a more detailed analysis of your custom ABAP code, you can use the remote ATC tooling. See this blog for details.