SAP Focused Run self monitoring

When using Focused Run you monitor systems. But what about the health and stability of the monitoring tool itself? This is where self monitoring plays an important role.

Questions that will be answered are:

  • What to check in self monitoring?
  • How many metrics are collected and stored in my Focused Run installation?
  • How can I check all grey metrics in system monitoring?

Self monitoring

Self monitoring can be started with the Self Monitoring Fiori tile:

If you click the tile, the overview page opens (this page will take some time to load):

The interesting part, the CPU utilisation, is unfortunately hidden lower in the screen (you need to scroll).

The other interesting part is the amount of data collected and stored. This is less interesting for yourself, but more interesting for your manager, to show how much data HANA can handle, or to show how much work is really automated.

Simple diagnostics agent

In the icons at the top left, click on the Simple DA agent button to get the agent overview screen:

Important here:

  • Check that all agents are up
  • Check that all agent versions are not too old

Monitoring and Alerting Infrastructure

The next option is to check the MAI (monitoring and alerting infrastructure) data collection:

Important here is to fix the systems in error.

Wily Introscope

The Wily option shows whether the Wily Introscope connected to Focused Run is OK. Wily is used for special use cases like JAVA and Business Objects products.

Managed system overview

The managed system overview gives an overview of the various systems and their application status:

Any red or yellow item can indicate a setup issue, but it can also be caused by missing authorizations and privileges of the Focused Run technical user in the connected managed system.

Central component monitoring

The central component monitoring shows the overview of the central components:

Identifying all grey metric in System Monitoring

In SAP Focused Run there is no standard mechanism to identify and display all grey metrics in System Monitoring. A grey metric can cause critical situations to go uncaptured and unalerted, hence we need to monitor such grey metrics.

In this blog we explain how you can list all the grey metrics by directly reading from database tables that store the monitoring data.

Focused Run system monitoring metric aggregate data is stored in table AEM_METRIC_AGGR. We can filter on metric status = Grey to see the list of grey metrics.

Open the table in transaction SE16:

Increase the width and the number of hits and click on execute:

Now you have all the data, which you can export to an Excel sheet. For this, select the following menu option.

Select file type as Text with Tabs.

Provide the path and filename to save the file and then click on the Generate button.

Now open the .txt file in MS Excel.

In the Home tab, select the option for filtering as shown below.

Now set the following filter for the column LAST_RAT

Now you will get the list of all grey metrics as shown below.

Note: The CONTEXT_ID value gives you the ID of the managed object, METRIC_TYPE_ID gives you the ID of the metric name, and LAST_TEXT gives you the return text of the last data collection, which explains the reason for the grey metric.
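If you prefer a programmatic check over the SE16 export, a minimal ABAP sketch reading the same table could look like the one below. The grey rating value used for LAST_RAT is an assumption: verify it against a known grey metric in SE16 first and adjust accordingly.

  REPORT z_list_grey_metrics.

  " Minimal sketch, not a standard SAP report: list grey metrics directly
  " from the metric aggregate table instead of exporting from SE16.
  DATA gv_grey TYPE aem_metric_aggr-last_rat.
  gv_grey = '0'. " assumed grey rating value - check a known grey metric first

  SELECT context_id, metric_type_id, last_text
    FROM aem_metric_aggr
    WHERE last_rat = @gv_grey
    INTO TABLE @DATA(gt_grey).

  LOOP AT gt_grey INTO DATA(gs_grey).
    " CONTEXT_ID = managed object, METRIC_TYPE_ID = metric, LAST_TEXT = reason
    WRITE: / gs_grey-context_id, gs_grey-metric_type_id, gs_grey-last_text.
  ENDLOOP.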

To get the managed object name and metric type, use transaction MAI_TOOLS –> Metric Event Alert Details.

In the selection screen, enter the Context ID from the Excel sheet as Managed Object ID and the Metric Type ID from the Excel sheet as Metric Type ID. Also select the checkboxes as shown below and execute.

Now you get the information on the template as well as the metric name that is currently grey.

SAP instructions for grey metrics

You can also check the instructions from SAP in OSS note 2859574 – How to list all current Grey Metrics in FRUN.

Alerting on critical metrics turning grey

It is essential to activate alerting on critical metrics turning grey, to avoid critical issues going undetected by Focused Run.

Since SAP Focused Run 3.0 FP2, a new metric has been added to the self monitoring template in System Monitoring: the Grey Metrics metric measures what percentage of critical metrics is grey.

By default this metric uses a threshold of 30% for a Yellow and 70% for a Red rating. You can change the threshold so that the rating turns red as soon as the value exceeds 1%, so an alert is raised whenever any critical metric turns grey.

Note: This metric only considers metrics that are marked as critical in the Self Monitoring app. The percentage is calculated as the share of the designated critical metrics that is currently grey. For example, if 2 out of 50 designated critical metrics are grey, the metric value is 4%.

To designate a metric as a critical metric, navigate to the Self Monitoring app in the Focused Run launchpad under Infrastructure Administration.

In the Self Monitoring app navigate to MAI Data Quality.

In the Overview screen select the Managed System type to go to its details screen.

The Details page shows the list of systems with their critical metrics that are grey. To modify the list of designated critical metrics, click on the Change button.

In the new popup use the text search button to enter the text of the metric you want to add to the list.

Finally click on the “+” button and then click on close to save the added metric in the critical metric list.

Now the added metric will be considered a critical metric when calculating the percentage of grey metrics.

<< This blog was originally posted on SAP Focused Run Guru by Frank Umans and Manas Tripathy (Simca). Repost done with permission. >>

Activating early watch report (EWA) for ABAP and JAVA managed systems in SAP Focused Run

EWAs in Focused Run

When you have performed Simple System Integration (SSI) for a connected managed system on the Focused Run system, SSI itself activates SAP EarlyWatch Alert (EWA) reporting by default, provided the managed system has the IT Admin Role set to Production system. For more information on how to set the IT Admin Role, read this blog.

For non productive systems you can also manually activate EWA.

Additionally, for all ABAP systems (production and non-production), you need to configure SDCCN in the managed system so that EWA data is sent to the Focused Run system.

Note: In a Focused Run environment, EWAs are not generated on the Focused Run system, but at the SAP side. Only the data is collected at the Focused Run side; the EWA is then available in Service Messages at SAP. To access the EWAs you can navigate from the launchpad using the EWA Workspace tile. For more details click here or here.

General EWA tips and tricks can be found in this blog.

Steps for activating EWA for ABAP Systems

STEP 1: Activate EWA on the Focused Run System

Go to the launchpad and click on the SAP Early Watch Alert Status tile.

In the EWA Status application, first select the scope to include all ABAP systems as shown below.

To activate EWA, change the Active column from NO to YES for the respective system.

After activation, the status changes as shown below. Refresh after 5 minutes to ensure that the first circle is green.

STEP 2: Configure SDCCN on Managed System

Once the EWA session for an ABAP system is activated, the SDCCN of the respective ABAP system must read the EWA session data from the Focused Run system. For more details read OSS note 2359359 – SDCC Enhancement for ABAP Systems Connecting to Focused Run.

For this, first create an HTTP destination from the managed ABAP system to the Focused Run system using report /BDL/CREATE_HTTP. Provide the following input and execute the report.

  1. HTTP Destination: By default it shows as SDCC_SID; it is best to change the SID part to the real target system ID. This is the suggestion, but you can specify any name you want.
  2. SSL Status (HTTPS): This checkbox determines whether HTTPS is used for the communication. By default, only HTTP is used.
  3. Path Prefix: By default, the service used by SDCCN is /sap/bc/sdf/sdcc/. You cannot change this unless you mark the checkbox “Force Mode”.
  4. User Name: FRN_EWA_<FRUN SID>, which you created during the initial setup of the FRUN system.
  5. Run report /BDL/CONFIGURE_SDCCN_HTTP to activate SDCCN. Provide the HTTP destination created above and change the job user. Only the first checkbox must be selected (only on FRUN 1.0 both checkboxes must be selected).
  6. The job user must have authorization SAP_SDCCN_ALL.

After running this report with above parameters, the SDCC_OSS RFC will be removed from RFC destinations, and the new HTTP destination will be added to RFC destinations.

STEP 3: Create Maintenance Package on SDCCN in Managed System

Now you need to create a maintenance package in transaction SDCCN of the managed system.

To verify that the EWA activation is properly completed, go to transaction SDCCN on the managed system and check that the EWA sessions for the managed system are registered.

When the EWA data is sent to Focused Run and processed at the SAP side, you will see all circles green for the respective ABAP system as shown below.

Activating EWA for JAVA Systems

Go to the launchpad and click on the SAP Early Watch Alert Status tile.

In the EWA Status application, first select the scope to include all Java systems as shown below.

To activate EWA, change the Active column from NO to YES for the respective system.

After activation, the status changes as shown below. Refresh after 5 minutes to ensure that the first circle is green.

To verify that the EWA activation is properly completed, go to transaction SDCCN on the FRUN system and check that the EWA session for the respective Java system is registered.

When the EWA data is sent to FRUN and processed, you will see all circles green for the respective JAVA system as shown below.

EWA troubleshooting

In case of issues you can follow the link to the troubleshooting guide of SAP:

Data retention

The EWA data is kept for 1 year. To change this, read the blog on housekeeping settings.

Relevant OSS notes

OSS notes:

Guided answers

SAP guided answers:

<< This blog was originally posted on SAP Focused Run Guru by Manas Tripathy from Simac. Repost done with permission. >>

SAP Focused Run cloud monitoring overview

The integration and cloud monitoring function of SAP Focused Run consists of 2 main functions:

  • Cloud monitoring between on premise and cloud SAP products
  • Interface monitoring between SAP systems (read more on interface monitoring in this blog)

This blog will give an overview of the Cloud monitoring between SAP on premises systems and SAP cloud solutions.

Questions that will be answered in this blog are:

  • What does the Cloud monitoring in SAP Focused Run look like?
  • How much detail and history can I see in SAP Focused Run cloud monitoring?
  • Can I link a Cloud monitoring event to an alert?
  • Which Cloud monitoring scenarios are supported?
  • How to monitor messages to and from SAP CPI?
  • How to set up the monitoring towards SAP CPI?
  • How to monitor messages to and from SAP Ariba?
  • How to set up the monitoring towards SAP Ariba?
  • How to set up alert notification from SAP BTP?

Cloud monitoring

To start the cloud monitoring click on the Fiori tile:

Select the cloud scenarios:

You now reach the scenario overview screen:

Click on the tile for details (we will take Ariba as an example):

Click on the red line between the on premise and the cloud system:

Click on the red errors number for the error overview:

Click on specific error:

Supported cloud scenarios

Not all cloud products and scenarios of SAP are supported via SAP Focused Run Cloud monitoring. On the SAP Focused Run Expert Portal the following scenarios are currently published:

Read the scenario details carefully! The details might show that less is monitored than you were expecting.

CPI message monitoring

SAP Focused Run Cloud Monitoring can be used to monitor messages to and from the SAP BTP CPI solution. CPI stands for Cloud Platform Integration.

End result of CPI message monitoring

The configuration of the scenario is described in the next chapter. We start by explaining the end result.

Select the scenario and the overview tile appears:

Click on the card to go to the scenario topology:

Zoom into the overview screen of the errors:

And drill down to any specific error:

Set up of the CPI monitoring scenario

Follow the steps from the SAP expert portal for CPI monitoring to set up the STRUST in SAP Focused Run for the CPI URL.

Validate in the SAP Focused Run ABAP stack that these two parameters are set in RZ11:

  • icm/HTTPS/client_sni_enabled = TRUE
  • ssl/client_sni_enabled = TRUE

If this is done, go to the cloud setup FIORI tile:

Add a new end point for CPI:

The application key, client ID and client secret will need to be provided by the basis person or functional consultant maintaining the CPI interface configurations on the BTP cloud. Depending on the security setup, a proxy is required as well.

After entering the details, check the connection to verify that connectivity is working as expected.

Now go to the configuration of the interface scenario and create a new cloud service for Cloud Platform Integration:

On the monitoring screen specify filters for specific IFlows if required:

On the alerting tab you can set up any alerting wanted:

Set the filter for alerting (in this case all failed flows):

Assign alert receivers and make sure everything is saved and activated.

Now you can model the scenario graphically as well:

Cloud monitoring: Ariba

SAP Focused Run Cloud Monitoring can be used to monitor messages to and from the Ariba solution.

End result of Ariba cloud monitoring

The configuration of the scenario is described in the next chapter. We start by explaining the end result.

Select the scenario and the overview tile appears:

Click on the card to go to the scenario topology:

Click on the red line to zoom into the communication error details:

Click on the message to zoom into the details:

Set up of the Ariba monitoring scenario

Follow the steps from the SAP expert portal for Ariba monitoring to set up the STRUST in SAP Focused Run for the Ariba URL.

Validate in the SAP Focused Run ABAP stack that these two parameters are set in RZ11:

  • icm/HTTPS/client_sni_enabled = TRUE
  • ssl/client_sni_enabled = TRUE

If this is done, go to the cloud setup FIORI tile:

Add a new end point for Ariba:

The application key, client ID and client secret will need to be provided by the basis person or functional consultant maintaining the Ariba interface configurations on the Ariba cloud. Depending on the security setup, a proxy is required as well.

After entering the details check the connection:

Now go to the configuration of the interface scenario and create a new cloud service for Ariba Network Transaction:

On the Monitoring tab connect to the end point created above and set the wanted filters:

If you want, you can also set up alerting in the third tab.

Save and activate the setup.

Now you can model the scenario graphically as well:

Alert notification from BTP

The BTP platform has a function called Alert Notification. This is a generic function that can be used to send alerts. It can be used to send alerts from applications, but also to send alerts from the HANA Cloud database.

SAP Focused Run can pick up these alert notifications from the BTP platform. From there Focused Run can be used to further relay the alert to notification teams.

Setup of the scenario

First you need to prepare your BTP environment to allow SAP Focused Run to collect data from the Alert Management application from your tenant and subaccount. This will give you the URL, client ID and client secret (be careful this is only shown once). To do this, follow the steps on the SAP Focused Run expert portal in this link.

For the setup of the scenario in SAP Focused Run, go to the FIORI tile for cloud setup:

Set up the OAUTH end point:

After the setup save the details and test the connection.

Now this end point can be used in the scenario setup.

In the Scenario configuration create the Cloud Service and select the SAP Cloud Platform Alert Notification Service:

In the monitoring details set the Endpoint you just created and filter on the events:

In the third tab Alerting you can set up the alerts if wanted.

Save and activate.

In the scenario modelling you can now use an on premise system and the Cloud Service you set up above to model a graphical scenario:

End result of alert notification from BTP

Select the configured scenario:

In this case we have setup the alert notification for HANA Cloud. Click on the card tile for details, and click on the interface line:

Now select the errors and the overview screen opens:

Click on a single line to go to the specific error details:

<< This blog was originally posted on SAP Focused Run Guru by Frank Umans. Repost done with permission. >>

SAP Focused Run demo system and OpenSAP course

The SAP site for Focused Run has a link to the online Focused Run demo system and a link to videos.

First visit the general SAP Focused Run site. Now scroll down to the resources part:

On the right hand side are the videos.

In the middle is the Demo System link. You can also access it directly via this URL.

Scroll down to the landscape overview. To access the system press the blue “SAP Focused Run launchpad” button.

The user ID and password are in the table below the button.

SAP training on SAP Focused Run

SAP is providing a free training on SAP Focused Run. Follow this link for the main training.

Content of the training:

  • Week 1 Unit 1: Focused Run overview
  • Week 1 Unit 2: Architecture and demo system
  • Week 1 Unit 3: Integration and exception monitoring
  • Week 1 Unit 4: Real user monitoring
  • Week 1 Unit 5: Synthetic user monitoring
  • Week 1 Unit 6: Job & automation monitoring
  • Week 2 Unit 1: Health monitoring
  • Week 2 Unit 2: System monitoring
  • Week 2 Unit 3: Alert management
  • Week 2 Unit 4: Operations Analytics (overview)
  • Week 2 Unit 5: Operations Analytics (dashboard examples)
  • Week 2 Unit 6: Operations Intelligence
  • Week 3 Unit 1: Configuration and security analysis
  • Week 3 Unit 2: System analysis
  • Week 3 Unit 3: Trace analysis and file system browser
  • Week 3 Unit 4: Operations automation
  • Week 3 Unit 5: IT calendar & work mode management
  • Week 3 Unit 6: Service availability management
  • Week 3 Unit 7: Focused Run summary

Older background material

Some older, but still useful, background material can be found on this link.

<< This blog was originally posted on SAP Focused Run Guru by Frank Umans. Repost done with permission. >>

SAP Focused Run housekeeping and technical clean up

Housekeeping in SAP Focused Run is needed for 3 reasons:

  1. Keep performance high
  2. Reduce data footprint on the database
  3. Stay within the licensed volume (see more on licenses of SAP Focused Run in this blog)

Questions that will be answered in this blog are:

  • Which housekeeping settings can I make in SAP Focused Run?
  • Which technical clean up can I do in SAP Focused Run?

Housekeeping for alert and event management

For alert and event management housekeeping, schedule the following program:

Housekeeping for system analysis and root cause analysis

For system analysis housekeeping, schedule programs SRAF_LOG_HOUSEKEEPING and WEA_AGG_STORE_PARTITIONING. For root cause analysis, schedule program RCA_HOUSEKEEPING.

Detailed settings for RCA housekeeping are done in table RCA_HKCONFIG. You can maintain this table with SM30:

Housekeeping for application integration monitoring

For application integration monitoring housekeeping schedule program /IMA/HOUSEKEEPING for older releases and /IMA/HOUSEKEEPING_NEW for FRUN 3.0 FP01 onwards.

In the tile for integration monitoring you maintain the detailed settings and retention periods:

Press the change button to alter the data retention periods towards your need:

Housekeeping for EWA data

For EWA data housekeeping schedule program FRUN_DELETE_SERVICE_DATA:

Important: by default 1 year of EWA data is kept. If you need more, increase the number of days kept. If you want to clean up more, you can reduce the days.

Housekeeping for health monitoring

For health monitoring housekeeping, schedule program OCM_HOUSEKEEPING.

Housekeeping for statistical records

For housekeeping of statistical records, schedule program AI_STATRAGG_HOUSEKEEPING:

Also read this note, which explains that it takes time before the clean up is reflected: 3478938 – Housekeeping of System Analysis data in SAP Focused Run.

Housekeeping for work mode management

For housekeeping of work mode management, schedule program WMM_HOUSEKEEPING:

Housekeeping for security and configuration validation

In the Configuration and configuration analytics Administration tile, choose the configuration icon:

Here you can set the retention period.

Technical clean up

There are also technical tables in SAP Focused Run that can grow fast and consume memory in your HANA database.

Fast growing table LMDB_P_CHANGELOG

See OSS note 2610122 – Cleaning up the change history in the LMDB: run program RLMDB_CLEAR_CHANGELOG.

Fast growing SISE_LOG table

Run program SISE_LOG_DELETE to clean up SISE_LOG table. See OSS note 2984789 – Scenario F4-help not working for SISE_LOG_DELETE report.

Idoc and PI monitoring data fast growing

If you get too much data for idoc monitoring, apply OSS note 3241688 – Category wise table cleanup report (IDOC, PI). This note delivers program /IMA/TABLE_CLEANUP_REPORT for clean up.

Invalid entries in MAI_UDM_PATHS

If table MAI_UDM_PATHS is getting large, follow the instructions from OSS note 3030652 – Cleanup invalid entries from database table mai_udm_paths. The clean up is explained in more detail in OSS note 3250729 – Housekeeping for metric paths. For ad hoc clean up, read OSS note 3424812 – MAI housekeeping does not allow ad hoc execution.

<< This blog was originally posted on SAP Focused Run Guru by Frank Umans. Repost done with permission. >>

SAP Focused Run alert management outbound integration to ServiceNow

The SAP Focused Run alert management function can send out alert mails to mail addresses (see this blog).

SAP Focused Run can also call an outbound integration to an ITIL tool like ServiceNow. This can help to speed up incident creation.

It needs an implementation at ABAP level. The coding is given at the end of the blog.

Questions that will be answered in this blog are:

  • What does the high level integration between SAP Focused Run and ServiceNow look like?
  • Where can I find information on the to-be-implemented ABAP BADI?
  • How can I send an alert directly to ServiceNow from the Alert management detailed page?
  • How can I automate in template settings to send an alert via outbound integration towards ServiceNow?
  • How do I connect from the ABAP stack towards the midserver?
  • Which BADI do I need to activate for the outbound integration?
  • How do I call the midserver connection from the BADI?
  • How do I deal with the differences in severity definition between ServiceNow and SAP Focused Run?
  • If I want to set up the connection via web services, what do I need to do?
  • How can I include application logging in such a way that I can monitor the calls and issues in SLG1?
  • Where can I find the ABAP code needed?

Setting up the integration

For setting up the integration to ServiceNow the AEM third party consumer connection BADI must be implemented. The full manual for the BADI itself can be found on the SAP Focused Run Expert portal.

The documents describes the BADI in generic way.

To call ServiceNow you have to use one of the following 2 integration methods:

  • Call webservice: in this case you import the WSDL from ServiceNow, generate the proxy and execute the SOAMANAGER settings to log on to ServiceNow. You need ABAP code in the BADI to call the proxy. See this blog for the generic setup of webservice consumption in an ABAP stack. Available webservices for ServiceNow can be found on the ServiceNow page.
  • Call the ServiceNow midserver: in this case you call a REST interface, for which you need to set up an HTTP RFC connection to the midserver. ABAP code in the BADI is needed to make the REST call. See this blog for the generic use of REST calls in an ABAP stack. REST API references from ServiceNow can be found on the ServiceNow page.

Alert trigger integration

If you are inside an alert, you can trigger the alert reaction:

Then select the reaction to forward to ServiceNow:

Within a few seconds the alert is created in ServiceNow:

Alert reaction automation in template settings

The alert reaction to ServiceNow can also be automated as Outbound Integration. If you are in template maintenance mode, switch to Expert mode.

In the alerts tab now configure the alert type for Forward to and Outbound Connector:

Assign the correct variant.

If you click on the variant you go to the variant configuration screen:

Then select the outbound integration name to see the details:

Important here is the where used list, which shows you from which templates and template elements the connector is called.

Whenever the alert is raised, the outbound integration connector to ServiceNow is called as well.

Set up the RFC destination

In SM59, set up the RFC connection towards the MID server as a type H RFC connection:

Activation of the enhancement spot and BADI

The details of the enhancement spot and BADI implementation are in the SAP document published on the SAP Focused Run Expert Portal.

Use transaction SE18 or SE80 to activate enhancement spot ACC_REACTION_EXTERNAL and then activate BADI BADI_ACC_REACTION_EXT.

The result looks as follows:

Double click on the implementation:

Double click on the REACT_TO_ALERT interface to go to the code. The code we implemented looks as below:

METHOD if_acc_reaction_ext~react_to_alert.

    IF is_alert-ref->get_type( ) <> 'ALERT'.
      RETURN.
    ENDIF.

    " Init. the application log.
    me->zgo_logger = NEW zcl_snow_bi_logger( ).

    " Add info message to notify reaction was triggered
    me->zgo_logger->bal_log_add_message(
      EXPORTING
        ziv_msgty = zif_snow_constants=>zgc_message_types-info
        ziv_msgno = '002'
    ).

    " Send message to ServiceNow
    NEW zcl_snow_bi_common( )->zif_snow_bi_common~send_message_to_snow(
      EXPORTING
        zii_logger           = me->zgo_logger
        zii_snow_message     = NEW zcl_snow_event_message( zii_alert = is_alert-ref )
        ziv_resolution_state = zif_snow_constants=>zgc_resolution_state-new
    ).

    " Save the application log
    me->zgo_logger->bal_log_save( ).

  ENDMETHOD.

What do we do in the code:

  1. We only react on type ALERT
  2. We make an entry in the application log (so we can check later on in SLG1)
  3. We call the actual interface, which we have implemented in class ZCL_SNOW_BI_COMMON

The implementation code

The sending code in method ZIF_SNOW_BI_COMMON~SEND_MESSAGE_TO_SNOW that was just called looks as follows:

  METHOD zif_snow_bi_common~send_message_to_snow.

    " Retrieve JSON body for the request
    DATA(zlv_snow_event_json) = zii_snow_message->get_json( ziv_resolution_state ).

    me->add_json_to_bal_log( EXPORTING zii_logger   = zii_logger
                                       ziv_json     = zlv_snow_event_json ).


    TRY.
        " Execute HTTP Post, send data to the MiD API
        DATA(zlv_http_status_code) = NEW zcl_snow_mid_api( )->post( ziv_event_messages = zlv_snow_event_json ).

        " Add HTTP response code to the log..
        IF zlv_http_status_code-code >= 200 AND zlv_http_status_code-code < 300.
          DATA(zlv_msgty) = zif_snow_constants=>zgc_message_types-success.
        ELSE.
          zlv_msgty = zif_snow_constants=>zgc_message_types-error.
        ENDIF.

        zii_logger->bal_log_add_message(
           EXPORTING
             ziv_msgty = zlv_msgty
             ziv_msgno = '001'
             ziv_msgv1 = |{ zlv_http_status_code-code } { zlv_http_status_code-reason }|
         ).

      CATCH zcx_snow_mid_api INTO DATA(zlo_exception).
        "Add exception to the application log
        zii_logger->bal_log_add_message(
          EXPORTING
            ziv_msgid = zlo_exception->if_t100_message~t100key-msgid
            ziv_msgno = zlo_exception->if_t100_message~t100key-msgno
            ziv_msgv1 = CONV #( zlo_exception->if_t100_message~t100key-attr1 )
            ziv_msgv2 = CONV #( zlo_exception->if_t100_message~t100key-attr2 )
            ziv_msgv3 = CONV #( zlo_exception->if_t100_message~t100key-attr3 )
            ziv_msgv4 = CONV #( zlo_exception->if_t100_message~t100key-attr4 )
        ).
    ENDTRY.

  ENDMETHOD.

What happens here:

  1. The data object is built in the data definition (details follow below)
  2. This is logged
  3. The actual call is performed by calling class ZCL_SNOW_MID_API (details follow below)
  4. The result is checked (an HTTP status code in the 200 range means OK)
  5. The error result is logged in case of issues

The code for the message content

For the message content, we first define the message event type:

INTERFACE zif_snow_message
  PUBLIC .

  TYPES:
    BEGIN OF zgts_event,
      "! Source
      source           TYPE string,
      "! Name of the object
      node             TYPE string,
      "! Type of object, host, instance
      type             TYPE string,
      "! Severity
      severity         TYPE string,
      "! Date/Time(YYYY-MM-DD HH:MM:SS)
      time_of_event    TYPE string,
      "! Alert description/name
      description      TYPE string,
      "! SAP System ID
      event_class      TYPE string,
      "! Unique ID
      message_key      TYPE string,
      "! Alert state
      resolution_state TYPE string,
      "! Resource
      resource         TYPE string,
    END OF zgts_event.

  METHODS get_json  IMPORTING ziv_resoultion_state TYPE string
                    RETURNING VALUE(zrv_json)      TYPE /ui2/cl_json=>json.

ENDINTERFACE.

This event is used in the actual message build code:

  METHOD zif_snow_message~get_json.

    DATA zlv_events TYPE /ui2/cl_json=>json.

    " Get current time stamp
    GET TIME STAMP FIELD DATA(zlv_time_stamp_now).

    DATA(zlv_event_json) = /ui2/cl_json=>serialize(
      EXPORTING
        data        =  VALUE zif_snow_message~zgts_event(
          source              = |{ syst-sysid } - FRUN |
          node                = zgi_alert->get_managed_object_name( )
          type                = zgi_alert->get_managed_object_type( )
          severity            = me->convert_severity( zgi_alert->get_severity( ) )
          time_of_event       = me->convert_alert_timestamp( ziv_timestamp = zlv_time_stamp_now )
          description         = COND #( LET custom_description = me->remove_html_tags( zgi_alert->get_custom_description( ) ) IN
                                        WHEN strlen( custom_description ) > 0 THEN custom_description
                                        ELSE me->remove_html_tags( zgi_alert->get_sap_description(  ) ) )
          event_class         = substring( val = zgi_alert->get_managed_object_name( ) off = 0 len = 3 )
          message_key         = zgi_alert->get_type_id( )
          resolution_state    = ziv_resoultion_state
          resource            = zgi_alert->get_name( )
        )
        pretty_name = /ui2/cl_json=>pretty_mode-low_case
    ).

    IF zlv_events IS INITIAL.
      zlv_events = zlv_event_json.
    ELSE.
      zlv_events = zlv_events && ',' && zlv_event_json.
    ENDIF.

    IF zlv_events IS NOT INITIAL.
      zrv_json =  '{ "records": [' && zlv_events && '] }'.
    ENDIF.

  ENDMETHOD.

Simple method codes:

  METHOD if_acc_mea~get_managed_object_name.
    rv_managed_object_name = ms_mea-context_name.
  ENDMETHOD.
  METHOD if_acc_mea~get_managed_object_type.
    rv_managed_object_type = ms_mea-context_type.
  ENDMETHOD.
  METHOD if_acc_mea~get_severity.
    rv_severity = ms_mea-severity.
  ENDMETHOD.
  METHOD convert_alert_timestamp.

    " Convert the timestamp
    CONVERT TIME STAMP ziv_timestamp TIME ZONE 'UTC'
      INTO DATE DATA(zlv_date) TIME DATA(zlv_time)
      DAYLIGHT SAVING TIME DATA(zlv_dls_time).


    zrv_date_time = |{ zlv_date+0(4) }-{ zlv_date+4(2) }-{ zlv_date+6(2) } { zlv_time+0(2) }:{ zlv_time+2(2) }:{ zlv_time+4(2) }|.

  ENDMETHOD.
  METHOD if_acc_mea~get_name.
    rv_name = ms_mea-name.
  ENDMETHOD.
  METHOD if_acc_mea~get_type_id.
    rv_type_id = ms_mea-type_id.
  ENDMETHOD.

Helper method to remove HTML tags:

  METHOD remove_html_tags.

    IF ziv_description IS INITIAL.
      RETURN.
    ENDIF.

    DATA(zlv_description) = ziv_description.

    DATA(zlv_newline) = cl_abap_char_utilities=>newline.

    REPLACE ALL OCCURRENCES OF '<h2>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '</h2>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '<strong>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '</strong>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '<p>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '</p>' IN zlv_description WITH zlv_newline IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '<b>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '</b>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '<u>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '</u>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '<i>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '</i>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '<ul>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '</ul>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '<li>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '</li>' IN zlv_description WITH zlv_newline IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '<a href="' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF REGEX '">[A-Za-z0-9_\~\-+=&[:space:]]*</a>' IN  zlv_description WITH '' IGNORING CASE.

    REPLACE ALL OCCURRENCES OF ' ' IN zlv_description WITH '_'  IGNORING CASE.
    REPLACE ALL OCCURRENCES OF ':' IN zlv_description WITH ':'     IGNORING CASE.

    zrv_desription = zlv_description.


  ENDMETHOD.

Method to convert severity:

  METHOD convert_severity.

    CASE ziv_sm_severity.
      WHEN 1 OR 2 OR 3 OR 4.
        zrv_sn_severity = 5. " Info
      WHEN 5.
        zrv_sn_severity = 4. " Warning
      WHEN 6.
        zrv_sn_severity = 3. " Minor
      WHEN 7.
        zrv_sn_severity = 2. " Major
      WHEN 8 OR 9.
        zrv_sn_severity = 1. " Critical
      WHEN OTHERS.
        zrv_sn_severity = 0. " Clear
    ENDCASE.

  ENDMETHOD.

This method converts the SAP Focused Run severity codes 1 to 9 to the codes used in ServiceNow. Adjust the codes to your requirements.

The sending code

The sending code is as follows (for more details on ABAP REST calls, read this blog):

  METHOD post.

    " Initialize the HTTP client
    me->init_http_client( ).

    " Set header
    me->zgo_http_client->request->set_method( method = me->zgo_http_client->request->co_request_method_post ).

    me->zgo_http_client->request->set_content_type( content_type = zgc_content_type ).

    " Set body
    me->zgo_http_client->request->set_cdata( EXPORTING data = ziv_event_messages ).

    " Send the data (POST)
    me->send( ).

    " Receive the response; needed to get the http status code
    me->receive( ).

    " Get the status code
    me->zgo_http_client->response->get_status(
      IMPORTING
        code   = zrs_status-code      " HTTP status code
        reason = zrs_status-reason    " HTTP status description
    ).

  ENDMETHOD.

Subimplementations of the methods:

  METHOD init_http_client.

    cl_http_client=>create_by_destination(
      EXPORTING
        destination              = zgc_destination      " Logical destination (specified in function call)
      IMPORTING
        client                   = me->zgo_http_client  " HTTP Client Abstraction
      EXCEPTIONS
        argument_not_found       = 1
        destination_not_found    = 2
        destination_no_authority = 3
        plugin_not_active        = 4
        internal_error           = 5
        OTHERS                   = 6
    ).

    IF sy-subrc NE 0.
      me->raise_exception_for_sys_msg( ).
    ENDIF.

  ENDMETHOD.
  METHOD send.

    me->zgo_http_client->send(
      EXCEPTIONS
        http_communication_failure = 1
        http_invalid_state         = 2
        http_processing_failed     = 3
        http_invalid_timeout       = 4
        OTHERS                     = 5
    ).

    IF sy-subrc NE 0.
      me->raise_exception_for_sys_msg( ).
    ENDIF.

  ENDMETHOD.
  METHOD receive.

    me->zgo_http_client->receive(
      EXCEPTIONS
        http_communication_failure = 1
        http_invalid_state         = 2
        http_processing_failed     = 3
        OTHERS                     = 4
    ).

    IF sy-subrc NE 0.
      me->raise_exception_for_sys_msg( ).
    ENDIF.

  ENDMETHOD.
  METHOD raise_exception_for_sys_msg.

    RAISE EXCEPTION TYPE zcx_snow_mid_api
      EXPORTING
        textid = VALUE scx_t100key(
                      msgid = syst-msgid
                      msgno = syst-msgno
                      attr1 = syst-msgv1
                      attr2 = syst-msgv2
                      attr3 = syst-msgv3
                      attr4 = syst-msgv4
                  ).

  ENDMETHOD.

What is important here is that the constant ZGC_DESTINATION holds the name of the type H RFC destination towards the midserver.
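As a hypothetical illustration (the destination name below is made up; use the name of the type H destination you created in SM59), the constant could simply be defined as:

  " Made-up example: ZGC_DESTINATION holds the name of the type H RFC
  " destination towards the ServiceNow MID server created in SM59.
  CONSTANTS zgc_destination TYPE rfcdest VALUE 'SNOW_MIDSERVER'.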

Helper code: zcl_snow_event_message

Method ZIF_SNOW_MESSAGE~GET_JSON with input ZIV_RESOULTION_STATE type STRING and returning ZRV_JSON type /UI2/CL_JSON=>JSON, code:

  METHOD zif_snow_message~get_json.

    DATA zlv_events TYPE /ui2/cl_json=>json.

    " Get current time stamp
    GET TIME STAMP FIELD DATA(zlv_time_stamp_now).

    DATA(zlv_event_json) = /ui2/cl_json=>serialize(
      EXPORTING
        data        =  VALUE zif_snow_message~zgts_event(
          source              = |{ syst-sysid } - FRUN |
          node                = zgi_alert->get_managed_object_name( )
          type                = zgi_alert->get_managed_object_type( )
          severity            = me->convert_severity( zgi_alert->get_severity( ) )
          time_of_event       = me->convert_alert_timestamp( ziv_timestamp = zlv_time_stamp_now )
          description         = COND #( LET custom_description = me->remove_html_tags( zgi_alert->get_custom_description( ) ) IN
                                        WHEN strlen( custom_description ) > 0 THEN custom_description
                                        ELSE me->remove_html_tags( zgi_alert->get_sap_description(  ) ) )
          event_class         = substring( val = zgi_alert->get_managed_object_name( ) off = 0 len = 3 )
          message_key         = zgi_alert->get_type_id( )
          resolution_state    = ziv_resoultion_state
          resource            = zgi_alert->get_name( )
        )
        pretty_name = /ui2/cl_json=>pretty_mode-low_case
    ).

    IF zlv_events IS INITIAL.
      zlv_events = zlv_event_json.
    ELSE.
      zlv_events = zlv_events && ',' && zlv_event_json.
    ENDIF.

    IF zlv_events IS NOT INITIAL.
      zrv_json =  '{ "records": [' && zlv_events && '] }'.
    ENDIF.

  ENDMETHOD.

Method Constructor with input ZII_ALERT type IF_ACC_MEA, code:

  METHOD constructor.

    me->zgi_alert = zii_alert.

  ENDMETHOD.

Method CONVERT_ALERT_TIMESTAMP input ZIV_TIMESTAMP type TIMESTAMP, returning ZRV_DATE_TIME type STRING. Code:

  METHOD convert_alert_timestamp.

    " Convert the timestamp
    CONVERT TIME STAMP ziv_timestamp TIME ZONE 'UTC'
      INTO DATE DATA(zlv_date) TIME DATA(zlv_time)
      DAYLIGHT SAVING TIME DATA(zlv_dls_time).


    zrv_date_time = |{ zlv_date+0(4) }-{ zlv_date+4(2) }-{ zlv_date+6(2) } { zlv_time+0(2) }:{ zlv_time+2(2) }:{ zlv_time+4(2) }|.

  ENDMETHOD.

Method CONVERT_SEVERITY input ZIV_SM_SEVERITY type AC_SEVERITY, returning ZRV_SN_SEVERITY type INT4. Code:

  METHOD convert_severity.

    CASE ziv_sm_severity.
      WHEN 1 OR 2 OR 3 OR 4.
        zrv_sn_severity = 5. " Info
      WHEN 5.
        zrv_sn_severity = 4. " Warning
      WHEN 6.
        zrv_sn_severity = 3. " Minor
      WHEN 7.
        zrv_sn_severity = 2. " Major
      WHEN 8 OR 9.
        zrv_sn_severity = 1. " Critical
      WHEN OTHERS.
        zrv_sn_severity = 0. " Clear
    ENDCASE.

  ENDMETHOD.

Method REMOVE_HTML_TAGS, input ZIV_DESCRIPTION type STRING, returning ZRV_DESRIPTION type STRING. Code:

  METHOD remove_html_tags.

    IF ziv_description IS INITIAL.
      RETURN.
    ENDIF.

    DATA(zlv_description) = ziv_description.

    DATA(zlv_newline) = cl_abap_char_utilities=>newline.

    REPLACE ALL OCCURRENCES OF '<h2>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '</h2>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '<strong>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '</strong>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '<p>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '</p>' IN zlv_description WITH zlv_newline IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '<b>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '</b>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '<u>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '</u>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '<i>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '</i>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '<ul>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '</ul>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '<li>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '</li>' IN zlv_description WITH zlv_newline IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '<a href="' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF REGEX '">[A-Za-z0-9_\~\-+=&[:space:]]*</a>' IN  zlv_description WITH '' IGNORING CASE.

    REPLACE ALL OCCURRENCES OF ' ' IN zlv_description WITH '_'  IGNORING CASE.
    REPLACE ALL OCCURRENCES OF ':' IN zlv_description WITH ':'     IGNORING CASE.

    zrv_desription = zlv_description.


  ENDMETHOD.

Helper code: zcx_snow_mid_api

Exception class ZCX_SNOW_MID_API is a subclass of CX_ROOT with a re-implemented constructor; inputs TEXTID type IF_T100_MESSAGE=>T100KEY and PREVIOUS type PREVIOUS.

Code:

  METHOD constructor ##ADT_SUPPRESS_GENERATION.

    CALL METHOD super->constructor
      EXPORTING
        previous = previous.

    CLEAR me->textid.

    IF textid IS INITIAL.
      if_t100_message~t100key = if_t100_message=>default_textid.
    ELSE.
      if_t100_message~t100key = textid.
    ENDIF.

  ENDMETHOD.

Helper class zcl_snow_bi_logger.

Method: ZIF_SNOW_BI_LOGGER~BAL_LOG_SAVE:

 METHOD zif_snow_bi_logger~bal_log_save.

    IF me->zgo_bal_log IS BOUND.

      me->zgo_bal_log->save( ziv_commit = abap_true ).

    ENDIF.

  ENDMETHOD.

Method: ZIF_SNOW_BI_LOGGER~BAL_LOG_ADD_MESSAGE

Import parameters: ZIV_MSGTY, ZIV_MSGID, ZIV_MSGNO and ZIV_MSGV1 to ZIV_MSGV4

Code:

  METHOD zif_snow_bi_logger~bal_log_add_message.

    IF me->zgo_bal_log IS BOUND.

      me->zgo_bal_log->add_message(
        EXPORTING
          ziv_msgty = ziv_msgty
          ziv_msgid = ziv_msgid
          ziv_msgno = ziv_msgno
          ziv_msgv1 = ziv_msgv1
          ziv_msgv2 = ziv_msgv2
          ziv_msgv3 = ziv_msgv3
          ziv_msgv4 = ziv_msgv4
      ).

    ENDIF.

  ENDMETHOD.

Method: ZIF_SNOW_BI_LOGGER~BAL_LOG_ADD_FREE_TEXT

Inputs ZIV_MSGTY type SYMSGTY and ZIV_TEXT type ZCL_BC_BAL_LOG=>ZGTV_FREE_TEXT (which is TYPES zgtv_free_text TYPE c LENGTH 200).

Source:

  METHOD zif_snow_bi_logger~bal_log_add_free_text.

    IF me->zgo_bal_log IS BOUND.

      me->zgo_bal_log->add_free_text(
        EXPORTING
          ziv_msgty = ziv_msgty
          ziv_text  = ziv_text
      ).

    ENDIF.

  ENDMETHOD.

Constructor code:

  METHOD constructor.

    me->zgo_bal_log = zcl_bc_bal_log=>factory(
        ziv_object     = zif_snow_constants=>zgc_bal_log-object
        ziv_sub_object = zif_snow_constants=>zgc_bal_log-sub_object
    ).

  ENDMETHOD.

Using the ServiceNow web services

The ServiceNow webservices including instructions on how to download the WSDL are published on the ServiceNow help web pages.

Download the WSDL file and follow the instructions from this blog to import the WSDL file into SE80 and generate the ABAP web service proxy object. In SOAMANAGER, set up the logical port towards your ServiceNow installation and make sure the connection is working.

Then implement the ABAP code as above.

Instead of calling the REST service, you now call the generated ABAP proxy:

* Data Declarations
DATA: zcl_proxy TYPE REF TO zco_zbapidemowebservice, " Proxy Class
      zdata_in  TYPE zzbapidemo, " Proxy Input
      zdata_out TYPE zzbapidemoresponse, " Proxy Output
      zfault    TYPE REF TO cx_root. " Generic Fault

* Instantiate the proxy class providing the Logical port name
CREATE OBJECT zcl_proxy EXPORTING logical_port_name = 'ZDEMOWS'.

* Set Fixed Values
zdata_in-zimport = '1'.

TRY .
    zcl_proxy->zbapidemo( EXPORTING input = zdata_in
                          IMPORTING output = zdata_out ).
    WRITE: / zdata_out-zexport.
  CATCH cx_root INTO zfault.
* here is the place for error handling

ENDTRY.

Of course you will use the generated input and output data types from the generated service.

<< This blog was originally posted on SAP Focused Run Guru by Frank Umans. Repost done with permission. >>

SAP Focused Run system analysis overview

System analysis is part of the Root Cause Analysis functions of Focused Run. It can be used for analysis of current issues and for longer term trending.

Questions that will be answered in this blog are:

  • How can I execute System Analysis for a system?
  • Which type of systems can be analysed with System Analysis?
  • How can I use the System Analysis tool for immediate analysis of issues with a system?
  • How can I use the System Analysis tool for getting insight in the longer term trends inside a system?
  • How to set up System Analysis for performance analysis?

System analysis

Start the system analysis function by clicking on the Fiori tile for System Analysis:

Select the system you need to analyze for issues in the scope selection screen. In the first case we take an ABAP stack with time frame of the last 6 hours:

This overview might be a bit overwhelming the first time. But you can see the performance was bad in the middle of the day (see the top middle graph on average response time). The bottom middle graph shows that the CPU of some application servers was at 100%. And at the same time there were many dumps (right middle graph). This gives a clear direction where to look for issues.

The system analysis overview adjusts the information automatically to its content. This is the information for a HANA system:

Note that the time frame here is the last month. This gives a longer term overview of the system behaviour. You can get this longer term overview by changing the time frame of the system analysis tool.

Page catalog

You can select a specific view from the page catalog list on the left button bar on the screen:

So you can easily filter the specific page for the type of system you need to analyze.

Performance analysis in System Analysis

In the system analysis function there is a special function to monitor system performance based on ST03 system data from the managed system.

Choose the menu option for ABAP performance:

The performance overview will now open:

You can click on many items now to get to the details.

Setup of Performance Analysis

To make the above function work, click on the settings wheel and then on Configure Collection of ABAP performance data:

Make sure the system you need analysis data from is activated correctly.

If the data collection is not ok, check the Collector Status button and Agent logs. Also check the backend system user used to see if this user has sufficient authorization to fetch the required data.

<< This blog was originally posted on SAP Focused Run Guru by Frank Umans. Repost done with permission. >>

SAP Focused Run license and usage

SAP Focused Run is a licensed product. The license metric is the amount of GB stored in the application.

The more systems, the more detailed metrics with short measurement intervals, and the more functions you use, the more GB you will consume.

Questions that will be answered in this blog are:

  • How to check the current license usage?
  • What drives the usage?
  • How can I get a cost estimate?
  • How can I create a business case for Focused Run?

Checking the license usage

In SE38 start program FRUN_USAGE_UPDATE:

Now you can see which Focused Run function uses how many MB’s.

Read this note that explains the slower clean up on system analysis data: 3478938 – Housekeeping of System Analysis data in SAP Focused Run.

What drives the usage?

Usage is driven by:

Getting a cost estimate

Your SAP account manager or the Focused Run team in Germany can give you a good cost estimate. Material number for Focused Run in the price list is 7019453.

Input for cost estimate: sizes and numbers of systems, functions of Focused Run you want to deploy, and the retention period of the data.

Output: cost estimate.

Creating the business case

The business case has 2 aspects:

  • Cost: infrastructure, license, implementation
  • Benefits

Benefits are easier to quantify if your IT service is more mature.

Elements to consider:

  • How much does an hour of outage cost on your main ECC or S4HANA core system? For larger companies, this is easily 10.000 Euro per hour or more.
  • How much does your complaint handling cost per ticket?
  • How much time is currently spent on manual monitoring?

The benefits of SAP Focused Run are then in avoiding half of the outages through faster insights and in reducing the outage costs. You cannot avoid all outages, but you can act faster.

Further benefits of Focused Run are improved clean up and issue solving. This reduces both the issues in your systems and the complaints and tickets you need to handle.

For larger system landscapes (more than 50 systems) the business case is quite easy to create and will quickly be positive.

<< This blog was originally posted on SAP Focused Run Guru by Frank Umans. Repost done with permission. >>

SAP Focused Run interface monitoring overview

The integration and cloud monitoring function of SAP Focused Run consists of 2 main functions:

  • Interface monitoring between SAP systems
  • Cloud monitoring between on premise and cloud SAP products (see blog)

This blog will give an overview of the interface monitoring between SAP systems.

Questions that will be answered in this blog are:

  • What does the interface monitoring in SAP Focused Run look like?
  • How much detail and history can I see in SAP Focused Run interface monitoring?
  • How can I enable my systems for interface monitoring?
  • How do I set up a scenario to monitor?
  • How do I setup alerting for an interface scenario?
  • How do I setup Idoc monitoring?
  • How do I setup ODATA monitoring?
  • How do I setup qRFC monitoring?
  • How do I setup webservice monitoring?
  • How do I setup RFC monitoring?
  • How do I setup SLT monitoring?

Interface monitoring

To start the interface monitoring click on the Fiori tile:

In the next screen you now select one or multiple integration scenarios:

Then you reach the scenario overview screen:

You can immediately see from the red colored scenarios that there is an issue.

Click on the red scenario to open the details of the scenario topology:

The topology indicates most of the interfaces are correct. To see the detailed issue, click on the red line:

Click on the red error for the details:

On the right side of the screen you can click on the Dashboard icon to get a historical overview:

Link with alert management

Interface errors can be the trigger for an alert in the Alert Management function.

Technical scenario setup

The concept for interface monitoring is unfortunately a bit confusing at first.

There are 2 main things to remember:

  1. Systems data collection and alerting: this is where the action happens
  2. Graphical representation: this is where you make it visible

Unfortunately this means you have to do a lot of double work.

Set up systems

Go to the Integration and cloud monitoring Fiori tile. On top right click on the configuration icon to change or add a scenario:

First add the systems:

Select the system:

Select the configuration categories:

Select the monitoring:

Here you must add the connections you want to monitor.

The alerting configuration is empty initially:

We will fill this later if we want alerts for a specific interface connection.

Save this system and repeat for the rest of the systems.

The system determines the actual data collection and actual alerting. The system can be re-used in multiple scenarios.

Scenario configuration

On the configuration screen now add the new scenario. Add a name and description for the scenario:

In the topology screen now add the systems in the drop down for Node Selection and use the + icon to add them to the screen:

Now select the source system (we will have 1 CUA central and 2 child systems) and select the Action box:

Select Add link to and then select the system.

Now add a filter to the link by clicking on the line:

In the dialog screen on the right now add the details:

Start by giving the group a name. Now add the filter. Give the filter a name (in this case RFC1). Select the central component and the category (in this case Connection monitoring SM59). Now add the RFC connection type (3) and connection name to be monitored.

Very important here: press OK first to transfer the data, and only then press Save. Otherwise your data is lost. The SAP UI is not ok in this area.

Repeat for the second scenario. The end result is that the dotted lines are replaced by straight lines:

Then Save.

The scenario is active now:

Reminder: you also have to add the same information at system level in the Technical System: this performs the actual data collection. If this is not done, the scenario overview will show grey results for missing data collection.

The scenario is used to make the interfaces graphically visible.

Adding alert

When you have monitored the scenario long enough to see it is stable, the next step is to set up alerting so you get notified in the central alert inbox.

First add the alert in the Technical System as shown above. This will be the actual alert definition.

To add an alert to the graphical overview, go to the scenario definition and select the source system. Press the button alerts for component:

On the right hand side now add the alert by clicking the + button:

Then select the wanted Alert Category. And select the filter options. Add the connections for which you want to alert:

Give the filter also a name.

On the Description field you can set the alert to active:

You can also set the frequency of checking, and whether a notification should be sent as well (via mail or towards an outbound connector).

Also important here: first press Ok, then Save. Otherwise the data is lost. 

Set up Summary and final check

After you have finished the graphical topology, you need to go back to the Systems overview to validate if everything is activated ok for both monitoring and alerting:

Reminder: there is a split between the graphical representation in the topology and scenarios, and the actual system monitoring and alerting in the Technical System overview.

Interfaces that can be monitored

Full list of interfaces that can be monitored is published on the Focused Run expert portal.

Specific interface monitoring topics are explained below:

  • Idoc monitoring
  • ODATA Gateway monitoring
  • qRFC monitoring
  • RFC monitoring
  • SLT monitoring
  • Web Service monitoring

Idoc monitoring

SAP Focused Run can report both on idoc errors and on delays in idoc processing. A delay in idoc processing can cause business impact and is sometimes hard to detect, since the idocs are in status 30 for outbound, or 64 for inbound, but are not processed. SAP Focused Run is one of the few tools I know of that can alert on delays in idoc processing.
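Focused Run does this backlog determination centrally, but to illustrate what an idoc backlog means, a minimal, hypothetical sketch of a similar check run directly in a managed ABAP system against the standard idoc control table EDIDC could look like this (the 60 minute threshold is just an example, and the midnight rollover is ignored for brevity):

  " Illustrative sketch only: count outbound idocs sitting in status 30
  " (ready for dispatch) for more than 60 minutes.
  DATA(gv_cutoff_date) = sy-datum.
  DATA(gv_cutoff_time) = sy-uzeit.
  gv_cutoff_time = gv_cutoff_time - 3600. " one hour ago, ignoring midnight rollover

  SELECT COUNT(*)
    FROM edidc
    WHERE status = '30'
      AND direct = '1' " outbound
      AND ( credat < @gv_cutoff_date
         OR ( credat = @gv_cutoff_date AND cretim < @gv_cutoff_time ) )
    INTO @DATA(gv_backlog).

  WRITE: / |Outbound idocs in backlog for more than 60 minutes: { gv_backlog }|.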

The monitoring starts with the Integration & Cloud monitoring tile. Then select the modelled idoc scenario (modeling is explained later in this blog):

On the alert ticker you can already see that there are alerts both for idocs in error and for idocs in delay:

In the main overview screen click on the interface line to get the overview of idocs sent:

You can now see the number of idocs that were sent successfully, which ones are still in transit and which ones are in error. Click on the number to zoom in:

Click on the red error bar to zoom in further to the numbers:

Click on the idoc number to get further details:

Unfortunately, you cannot jump from SAP Focused Run into the managed system where the idoc error occurred.

Documents monitor for idocs

A different view on the idocs is available via the documents monitor. You can select the documents monitor tool on the left side of the screen:

Now you go to the overview:

You can click on the blue numbers to dive into the details. Or you can click the Dashboards icon top right of the card to go into the dashboard mode:

This will show you the summary over time and per message type. Clicking on the bars will again bring you to the details.

Data collection and alerting setup for idoc monitoring

In the configuration for interface monitoring in the Technical System settings, go to the monitoring part and activate the data collection for Idoc monitoring:

In the monitoring filter, you can restrict the data collection to certain idoc types, receivers, senders, etc. Or leave all entries blank to check every idoc:

The graphical modelling for idocs is similar to the explanation of the example above.

Alerting for idoc errors

First alert we set up is the alert for errors.

Create a new alert and select the alert for idocs in status ERROR for longer than N minutes:

Now we add the filter. In our case we filter on outbound idocs of type DESADV:

A bit hidden at the bottom of this screen is the setting for N, the number of minutes:

The time setting depends on your technical setup of idoc reprocessing jobs (see for example this blog), and on the urgency of the idocs for your business.

In the description tab, add the notification variant in case you also want a mail to be sent next to the FRUN alert (setup is explained in this blog).

You can set up multiple alerts. This means you can have different notification groups for different message types, different directions, or different receiving parties.

Save the filter and make sure it is activated.

Idoc alert for backlog

Next to alerting on errors, Focused Run can also alert on delay of idocs. This can be done for both inbound and outbound idocs.

To set up an alert for backlog choose the option idocs in status BACKLOG for longer than N minutes:

In the filter tab set the idoc filter and at the bottom fill out the value for N minutes of backlog that should be alerted:

And in the final tab set the notification variant if wanted:

Save the filter and make sure it is activated.

Definition of delayed and error idocs

The SAP Focused Run expert portal page on idocs gives this definition for determining idocs in delay and in error:

Data clean up idoc monitoring

If you get too much data for idoc monitoring, apply OSS note 3241688 – Category wise table cleanup report (IDOC, PI). This note delivers program /IMA/TABLE_CLEANUP_REPORT for clean up.

ODATA gateway monitoring

We assume in this use case that end users are using the ODATA in FIORI apps. In case ODATA is consumed by external applications like Tibco, Mulesoft, Mendix, etc., you have to replace USER with the corresponding application.

Model end users in LMDB

Before we can start the scenario modelling, we first need to model the end users in LMDB as an Unspecific Standalone Application System, just like we did for TIBCO in this blog.

Name the ‘system’ USER:

Make sure the status is Active.

Add this new system USER to the Technical System list in the Integration Monitoring setup.

The system will be display only.

Data collection and alerting setup for ODATA interfaces

In the configuration for interface monitoring in the Technical System settings, go to the monitoring part and activate the data collection for Gateway Errors:

In the monitoring settings, you can filter on specific items if wanted, or leave everything blank to report on any error:

In the tab alerting setup the alerting:

The filter for monitoring and alerting can be different. It could be that you want to monitor all errors, but only activate alerting for specific important ones.

Save your monitoring data collection and alerting settings.

Graphical modelling of ODATA interfaces

In the graphical modelling add the backend system and the system created for USER:

Now add the link starting with USER towards the backend system:

Save your changes.

Also here: first scroll down to see the OK button. Press OK before pressing Save, or you might lose the data and have to re-enter it. This is a bit annoying in the UI.

Monitoring usage of ODATA interfacing

The end result in operations looks as follows:

In the graphical overview click on the red line. The screen with the exceptions opens. Click on the red number to see the overview:

Here you can see the trends and zoom into the specific errors:

qRFC monitoring

qRFC connections are frequently used in communication from ECC to EWM and SCM systems. For generic tips and tricks for qRFC handling, read this blog.

OSS notes for bug fixing qRFC monitoring

Please make sure bug fix OSS note 3014667 – Wrong parameter for QRFC alerts is applied before starting with qRFC monitoring.

Other OSS notes:

Data collection and alerting setup for qRFC monitoring

In the configuration for interface monitoring in the Technical System settings, go to the monitoring part and activate the data collection for qRFC Errors:

In the monitoring settings, you can filter on specific queues, direction and RFC name, or leave everything blank to report on everything:

In the alerting part you can choose between the age of qRFC entries and the number of entries:

And set the filters for which ones, and the metric threshold for CRITICAL errors:

The filter for monitoring and alerting can be different. It could be that you want to monitor all errors, but only activate alerting for specific important ones.

Save your monitoring data collection and alerting settings.

Queued RFCs normally go back and forth between 2 systems. If this is the case you have to make the settings for both systems.
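
As a rough illustration of what the number of entries metric measures: the outbound queues you see in transaction SMQ1 live in table TRFCQOUT (the inbound side is SMQ2 / TRFCQIN). The sketch below simply counts entries per queue name against an assumed threshold of 100; it is only meant to show the idea, the report name is made up, and it is not how the Focused Run collector actually works.

  " Illustrative sketch: count outbound qRFC entries per queue (the SMQ1 view).
  " Not the Focused Run data collector, just a conceptual backlog count on TRFCQOUT.
  REPORT zqrfc_backlog_sketch.

  SELECT qname, COUNT( * ) AS entries
    FROM trfcqout
    GROUP BY qname
    INTO TABLE @DATA(lt_queues).

  LOOP AT lt_queues INTO DATA(ls_queue).
    IF ls_queue-entries > 100.              " assumed threshold for a backlog alert
      WRITE: / 'Backlog in queue', ls_queue-qname, ls_queue-entries, 'entries'.
    ENDIF.
  ENDLOOP.

In practice you let Focused Run collect this and only tune the queue filter and threshold in the alerting settings above.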

Graphical modelling of qRFC interfaces

In the graphical modelling add the filter between two systems for the qRFC monitoring:

Also here: first scroll down to see the OK button. Press OK before pressing Save, or you might lose the data and have to re-enter it. This is a bit annoying in the UI.

Queued RFCs normally go back and forth between 2 systems. If this is the case you have to make the settings for both systems: you first model one direction and then model the direction back:

Monitoring qRFC usage

The end result in operations looks as follows:

You can see here that qRFC is modelled back and forth between the 2 systems. The blue line indicates messages in process. The red line has been clicked on: here you can see both messages in process and errors. Clicking on the red error number gives the details:

Monitoring RFCs between SAP systems

RFCs with a fixed user ID

See the example above on CUA idoc monitoring.

Trusted RFCs

If you have to set up RFC monitoring for a trusted RFC (for example between a Netweaver Gateway system and an ECC system), then you have to take care of the user IDs and authorizations. The system from which the SM59 test runs will use the Focused Run user ID to log on to the other system. If your user IDs are unique per system, you have to create the user ID in the other systems with the rights to execute a ping and logon for the test.
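
Conceptually, the availability part of the check is little more than a ping over the SM59 destination with the Focused Run user, which is exactly why that user needs logon rights in the trusted target system. A minimal sketch, assuming a hypothetical destination name ZFRUN_TEST and a made-up report name, could look like this:

  " Minimal sketch of an availability check against an SM59 destination.
  " ZFRUN_TEST is a hypothetical destination name; Focused Run runs its own,
  " more elaborate ping, latency and logon tests.
  REPORT zrfc_ping_sketch.

  DATA lv_msg TYPE c LENGTH 255.

  CALL FUNCTION 'RFC_PING' DESTINATION 'ZFRUN_TEST'
    EXCEPTIONS
      communication_failure = 1 MESSAGE lv_msg
      system_failure        = 2 MESSAGE lv_msg
      OTHERS                = 3.

  IF sy-subrc = 0.
    WRITE: / 'Destination reachable'.
  ELSE.
    WRITE: / 'Destination not reachable:', lv_msg.
  ENDIF.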

End result RFC checks

The end result of the RFC checks is a list of RFCs with the latency time, availability and logon test overview:

Transactional RFC towards external system

To monitor transactional RFC (type T) towards an external system like TIBCO, Mulesoft, etc., you first need to model the external system in the LMDB. To do this go to the LMDB maintenance Fiori app:

Then select Single Customer Network and select the option Technical Systems. In this section choose the Type Unspecific Standalone Application System:

And press Create:

Fill out the details and Save. Make sure the status is Active.

Now the system can be added in the configuration of technical systems in the Interface monitoring configuration:

Now you can model the tRFC interface connection monitoring:

OSS notes for RFC monitoring

Relevant OSS notes:

SLT integration monitoring

This blog focuses specifically on SLT integration monitoring. Monitoring an SLT system itself is explained in this dedicated blog.

Set up SLT integration scenario

Start the integration and exception monitoring FIORI tile:

On the configuration add the SLT system:

Select SLT as specific scenario:

On the Monitoring part you can filter on a specific source system and/or SLT schema:

On the 3rd tab you can set the Alerting in cases of errors:

Now save and activate. The monitoring is active now.

Next step is to use this system in a model for your scenario:

Using the SLT integration monitor

If you open the Fiori tile and you have selected your scenario, you still need to perform an extra click to go to the SLT monitor:

First you get an overview of your system(s):

You need to click on the blue numbers to drill down:

This gives an overview of errors, source connection status and target connection status.

You cannot drill down further on this tile. If you see an error, you need to go to your SLT server and start transaction LTRO to see all detailed errors and start fixing from there. Transaction LTRO can show errors that are not visible in transaction LTRC. Focused Run uses LTRO data.

Web service monitoring

Web services monitoring automates the monitoring in transaction SRT_MONI, which is extensively explained in this blog.

This monitoring does not check the connection availability of the web service. To make that happen, you would need to install a custom program from this blog that writes an entry to SM21. From the SM21 entry, you can create a custom monitoring metric that alerts on the connection issue. How to set up custom metrics is explained in this blog.

SAP reference for web service monitoring can be found here.

Data collection and alerting setup for web service monitoring

In the configuration for interface monitoring in the Technical System settings, go to the monitoring part and activate the data collection for Web Service Errors:

In the monitoring settings, you can filter on specific criteria, or leave everything blank to report on everything:

In the alerting part you can choose between the number of entries and the number of error entries:

And set the filters for the alerting:

The filter for monitoring and alerting can be different. It could be that you want to monitor all errors, but only activate alerting for specific important ones.

Save your monitoring data collection and alerting settings.

Graphical modelling of web services monitoring

In the graphical modelling add the filter between two systems for the web service monitoring:

Also here: first scroll down to see the OK button. Press OK before pressing Save, or you might lose the data and have to re-enter it. This is a bit annoying in the UI.

Monitoring usage of web services

The end result looks as follows:

You can click on the errors or success messages and zoom all the way down to individual messages:

<< This blog was originally posted on SAP Focused Run Guru by Frank Umans. Repost done with permission. >>

SAP Focused Run batch job monitoring overview

This blog will explain SAP Focused Run capabilities for batch job monitoring. The first part of the blog explains the functional capabilities. The second part covers the technical setup of batch job monitoring.

Batch job monitoring

Batch job monitoring in SAP Focused Run is part of Job and Automation monitoring:

After opening the start screen and selecting the scope you get the total overview:

Click on the round red error counts at the top to zoom in to the details (you can’t drill down on the cards below):

Click on the job to zoom in:

Systems overview

Click on the system monitoring button:

On the screen, zoom out on the overview by clicking the blue Systems text top left:

Now you get the overview per system:

Batch job analysis

Batch job analysis is a powerful function. Select it in the menu:

The result screen shows 1 week of data by default:

The default sorting is on total run time.

Useful sortings:

  1. Total run time: find the jobs that run long in your system in total. These will most likely also be the ones that cause high load, or that the business is waiting a long time on to deliver results.
  2. Average run time: find the jobs that take a long time to run on average. By optimizing the code or the batch job variant, the run time can be improved.
  3. Failure rate: find the jobs that fail with a high percentage. Identify the issues and then address them (a conceptual sketch of this calculation follows after this list).
  4. Total executions: some jobs might simply be planned too frequently. Reduce the run frequency.
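
To make the failure-rate idea concrete, the sketch below computes the percentage of aborted runs for one job from the standard job status table TBTCO (status 'A' = aborted). The job name ZMY_JOB and the report name are made up, and this is only a conceptual approximation; Focused Run derives these figures from its own collected job data.

  " Illustrative sketch: failure rate of one job name, read from table TBTCO.
  " Focused Run calculates this from its own collected data; this only shows the idea.
  REPORT zjob_failure_rate_sketch.

  DATA: lv_total  TYPE i,
        lv_failed TYPE i,
        lv_rate   TYPE i.

  SELECT COUNT( * ) FROM tbtco
    WHERE jobname = 'ZMY_JOB'
    INTO @lv_total.

  SELECT COUNT( * ) FROM tbtco
    WHERE jobname = 'ZMY_JOB' AND status = 'A'   " 'A' = aborted
    INTO @lv_failed.

  IF lv_total > 0.
    lv_rate = 100 * lv_failed / lv_total.        " rounded to whole percent
    WRITE: / 'ZMY_JOB failure rate:', lv_rate, 'percent of', lv_total, 'runs'.
  ENDIF.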

By clicking on the job trend icon at the end of the line you jump to the trend function.

Job trend function

From the analysis screen or by selecting the Trend graph button you reach the job trend function:

Select the job and it will show the trend for last week:

You can see whether the executions went fine or not, and bottom right you can see the average time the job took to complete.

Technical setup of job monitoring

For batch job monitoring settings, open the configuration and start with the global settings:

Here you can see the data volume used and set the retention time for how long aggregated data is kept.

You can also set generic rating rules:

Activation per system

In the activation per system select the system and it will open the details:

First switch on the generic activation for each system:

Activation for specific jobs to be monitored

Now you can start creating a job group. First select left Job groups, then the Plus button top right:

Add a job by clicking the plus button and search for the job:

Press Save to add the job to the monitoring.

Grouping logic

You can group jobs per logical block. For example you can group all basis jobs, all Finance jobs, etc. Or you can group jobs per system. The choice is up to you. Please first read the part on alerting below; this might make you reconsider the grouping logic.

Adding alerting to job monitor

The jobs added to the group are monitored. But alerting is a separate action.

Go to the Alerting part of the job group and add an alert. First select the Alert type (critical status, delay, runtime, missing a job). Assign a notification variant (who will get the alert mail), and decide on alert grouping or atomic alerts.

If you do not specify a filter, the alert will apply to the complete group. You can also apply a filter here to select a subgroup of the job group.

Based on the alerting you might want to reconsider the grouping.

Monitoring Number of Long Running Jobs

In SAP Focused Run there is no standard way of monitoring the number of long running jobs, either in System Monitoring or in Job Monitoring.

With Job Monitoring you can monitor whether a specific job is running for more than a specific amount of time, but you cannot monitor whether there are more than a specific number of jobs that are running for more than a specific amount of time.

And in System Monitoring you can only monitor whether there are more than a specific number of cancelled jobs or jobs in status running, but you cannot monitor long running jobs.

However, the Focused Run Guided Procedure Framework provides a pre-built plugin with which you can create a custom guided procedure to check the number of long running batch jobs in a managed system.
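
Under the hood, such a check essentially counts how many jobs have been in status running longer than a given threshold. The sketch below shows that idea against the standard job status table TBTCO (status 'R' = running), with an assumed threshold of 60 minutes and a made-up report name; the actual check is delivered by the SAP plugin, so no custom code is required.

  " Illustrative sketch only: count jobs running longer than 60 minutes via TBTCO.
  " The Focused Run plugin 'ABAP Long Running Jobs' delivers this check out of the box.
  REPORT zlong_running_jobs_sketch.

  CONSTANTS lc_threshold_seconds TYPE i VALUE 3600.   " 60 minutes

  DATA: lv_now   TYPE timestamp,
        lv_start TYPE timestamp,
        lv_count TYPE i.

  GET TIME STAMP FIELD lv_now.

  SELECT jobname, jobcount, strtdate, strttime
    FROM tbtco
    WHERE status = 'R'                                " jobs currently running
    INTO TABLE @DATA(lt_jobs).

  LOOP AT lt_jobs INTO DATA(ls_job).
    CONVERT DATE ls_job-strtdate TIME ls_job-strttime
      INTO TIME STAMP lv_start TIME ZONE sy-zonlo.
    IF cl_abap_tstmp=>subtract( tstmp1 = lv_now tstmp2 = lv_start ) > lc_threshold_seconds.
      lv_count = lv_count + 1.
    ENDIF.
  ENDLOOP.

  WRITE: / lv_count, 'jobs are running longer than 60 minutes'.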

Setup of long running jobs

To access the Guided Procedures App, navigate to the Advanced System Management area on the Focused Run launchpad and click on the System Management Guided Procedure App.

Then in the Guided Procedure app, navigate to the Guided Procedure Catalog page.

On the Catalog Page click on the + button to create a new custom guided procedure.

In the next pop-up screen, provide a name and description. Optionally, select a package if you want to transport your guided procedures, e.g. from a DEV Focused Run system to a PROD Focused Run system.

Then back in the Guided Procedure Catalog screen click on the guided procedure you just created to open it for editing.

In the guided procedure edit screen click on the Edit button.

In the Properties section provide a name for the step and a description as shown below.

Then in the Step Content section click on the New button to add a step.

In the next popup select the plugin ABAP Long Running Jobs and provide the time threshold above which a job is considered long running. You can also specify the graphic type as a table or as a bar chart. After selecting, click on the Ok button to continue.

Then Save and Activate the custom guided procedure.

Now your custom guided procedure is available for execution.

Go back to the Guided Procedures Instances page to execute your guided procedure. Click on the + button to start a new execution.

In the next popup, select the guided procedure you just created and then click on the + button to add managed systems to the scope of execution.

After selecting the managed system click on Perform Manually to create the instance of the guided procedure for the selected scope.

Then click on the instance to start the guided procedure page.

Click on Perform to execute the guided procedure.

Now you will see the result as shown below.

For more information regarding available plugins you can refer to the SAP documentation here.

Relevant OSS notes

<< This blog was originally posted on SAP Focused Run Guru by Frank Umans and Manas Tripathy (Simac). Repost done with permission. >>