SAP Focused Run alert management outbound integration to ServiceNow

The SAP Focused Run alert management function can send out alert emails to configured mail addresses (see this blog).

SAP Focused Run can also call an outbound integration to an ITIL tool like ServiceNow. This can help to speed up incident creation.

This requires an implementation on ABAP level. The required code is given in this blog.

Questions that will be answered in this blog are:

  • What does the high-level integration between SAP Focused Run and ServiceNow look like?
  • Where can I find information on the to-be-implemented ABAP BADI?
  • How can I send an alert directly to ServiceNow from the Alert management detailed page?
  • How can I automate sending an alert via the outbound integration to ServiceNow in the template settings?
  • How do I connect from the ABAP stack towards the midserver?
  • Which BADI do I need to activate for the outbound integration?
  • How do I call the midserver connection from the BADI?
  • How do I deal with the differences in severity definition between ServiceNow and SAP Focused Run?
  • If I want to set up the connection via web services, what do I need to do?
  • How can I include application logging in such a way that I can monitor the calls and issues in SLG1?
  • Where can I find the ABAP code needed?

Setting up the integration

For setting up the integration to ServiceNow, the AEM third-party consumer connection BADI must be implemented. The full manual for the BADI itself can be found on the SAP Focused Run Expert Portal.

The document describes the BADI in a generic way.

To call ServiceNow you have to use one of the following 2 integration methods:

  • Call web service: in this case you import the WSDL from ServiceNow, generate the proxy, and maintain the SOAMANAGER settings to log on to ServiceNow. You need ABAP code in the BADI to call the proxy. See this blog for the generic setup of web service consumption in the ABAP stack. Available web services for ServiceNow can be found on the ServiceNow page.
  • Call the ServiceNow MID server: in this case you call a REST interface, for which you need to set up an HTTP RFC connection (type H) to the MID server. ABAP code in the BADI is needed to make the REST call. See this blog for the generic use of REST calls in the ABAP stack. REST API references from ServiceNow can be found on the ServiceNow page.

Alert trigger integration

If you are inside an alert, you can trigger the alert reaction:

Then select the reaction to forward to ServiceNow:

Within a few seconds the alert is created in ServiceNow:

Alert reaction automation in template settings

The alert reaction to ServiceNow can also be automated as Outbound Integration. If you are in template maintenance mode, switch to Expert mode.

In the alerts tab now configure the alert type for Forward to and Outbound Connector:

Assign the correct variant.

If you click on the variant you go to the variant configuration screen:

Then select the outbound integration name to see the details:

Important here is the where used list, which shows you from which templates and template elements the connector is called.

Whenever the alert is raised, the outbound integration connector to ServiceNow is also called.

Set up the RFC destination

In SM59, set up the RFC connection towards the MID server as a type H RFC connection:
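As an illustration, the destination could look roughly like this (all values below are placeholders, not real hosts or paths; use the endpoint exposed by your own MID server):

  RFC Destination  : SNOW_MIDSERVER           (this name is used later in the ABAP code)
  Connection Type  : H (HTTP connection to external server)
  Target Host      : midserver.example.com
  Service No.      : 8443
  Path Prefix      : /your/mid/rest/endpoint  (the REST endpoint of your MID server)
  Logon & Security : SSL active, basic authentication with an integration user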

Activation of the enhancement spot and BADI

The details of the enhancement spot and BADI implementation are in the SAP document published on the SAP Focused Run Expert Portal.

Use transaction SE18 or SE80 to activate enhancement spot ACC_REACTION_EXTERNAL and then activate BADI BADI_ACC_REACTION_EXT.

The result looks as follows:

Double click on the implementation:

Double click on the REACT_TO_ALERT method of the interface to go to the code. The code we implemented looks as follows:

METHOD if_acc_reaction_ext~react_to_alert.

  IF is_alert-ref->get_type( ) <> 'ALERT'.
    RETURN.
  ENDIF.

  " Initialize the application log
  me->zgo_logger = NEW zcl_snow_bi_logger( ).

  " Add info message to notify that the reaction was triggered
  me->zgo_logger->bal_log_add_message(
    EXPORTING
      ziv_msgty = zif_snow_constants=>zgc_message_types-info
      ziv_msgno = '002'
  ).

  " Send message to ServiceNow
  NEW zcl_snow_bi_common( )->zif_snow_bi_common~send_message_to_snow(
    EXPORTING
      zii_logger           = me->zgo_logger
      zii_snow_message     = NEW zcl_snow_event_message( zii_alert = is_alert-ref )
      ziv_resolution_state = zif_snow_constants=>zgc_resolution_state-new
  ).

  " Save the application log
  me->zgo_logger->bal_log_save( ).

ENDMETHOD.

What do we do in the code:

  1. We only react to objects of type ALERT
  2. We write an entry to the application log (so we can check it later in SLG1)
  3. We call the actual send interface, which we have implemented in class ZCL_SNOW_BI_COMMON (a sketch of this interface follows below)
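The interface ZIF_SNOW_BI_COMMON itself is not listed in this blog. Based on the parameters used in the call above, a minimal sketch of its definition could look like this (parameter types are assumptions where they are not visible in the listings):

INTERFACE zif_snow_bi_common
  PUBLIC.

  " Build the JSON body from the alert and post it to the ServiceNow MID server
  METHODS send_message_to_snow
    IMPORTING
      zii_logger           TYPE REF TO zif_snow_bi_logger   " application log wrapper
      zii_snow_message     TYPE REF TO zif_snow_message     " alert message abstraction
      ziv_resolution_state TYPE string.                     " ServiceNow resolution state

ENDINTERFACE.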

The implementation code

The sending code in method ZIF_SNOW_BI_COMMON~SEND_MESSAGE_TO_SNOW that was just called looks as follows:

  METHOD zif_snow_bi_common~send_message_to_snow.

    " Retrieve JSON body for the request
    DATA(zlv_snow_event_json) = zii_snow_message->get_json( ziv_resolution_state ).

    me->add_json_to_bal_log( EXPORTING zii_logger   = zii_logger
                                       ziv_json     = zlv_snow_event_json ).


    TRY.
        " Execute HTTP Post, send data to the MiD API
        DATA(zlv_http_status_code) = NEW zcl_snow_mid_api( )->post( ziv_event_messages = zlv_snow_event_json ).

        " Add HTTP response code to the log..
        IF zlv_http_status_code-code >= 200 AND zlv_http_status_code-code < 300.
          DATA(zlv_msgty) = zif_snow_constants=>zgc_message_types-success.
        ELSE.
          zlv_msgty = zif_snow_constants=>zgc_message_types-error.
        ENDIF.

        zii_logger->bal_log_add_message(
           EXPORTING
             ziv_msgty = zlv_msgty
             ziv_msgno = '001'
             ziv_msgv1 = |{ zlv_http_status_code-code } { zlv_http_status_code-reason }|
         ).

      CATCH zcx_snow_mid_api INTO DATA(zlo_exception).
        "Add exception to the application log
        zii_logger->bal_log_add_message(
          EXPORTING
            ziv_msgid = zlo_exception->if_t100_message~t100key-msgid
            ziv_msgno = zlo_exception->if_t100_message~t100key-msgno
            ziv_msgv1 = CONV #( zlo_exception->if_t100_message~t100key-attr1 )
            ziv_msgv2 = CONV #( zlo_exception->if_t100_message~t100key-attr2 )
            ziv_msgv3 = CONV #( zlo_exception->if_t100_message~t100key-attr3 )
            ziv_msgv4 = CONV #( zlo_exception->if_t100_message~t100key-attr4 )
        ).
    ENDTRY.

  ENDMETHOD.

What happens here:

  1. The data object (the JSON body) is built from the message definition (details follow below)
  2. The JSON body is written to the application log
  3. The actual call is performed by class ZCL_SNOW_MID_API (details follow below)
  4. The result is checked (any HTTP status code in the 2xx range counts as OK)
  5. In case of issues, the error result is logged

The code for the message content

For the message content, we first define the message event type:

INTERFACE zif_snow_message
  PUBLIC .

  TYPES:
    BEGIN OF zgts_event,
      "! Source
      source           TYPE string,
      "! Name of the object
      node             TYPE string,
      "! Type of object, host, instance
      type             TYPE string,
      "! Severity
      severity         TYPE string,
      "! Date/Time(YYYY-MM-DD HH:MM:SS)
      time_of_event    TYPE string,
      "! Alert description/name
      description      TYPE string,
      "! SAP System ID
      event_class      TYPE string,
      "! Unique ID
      message_key      TYPE string,
      "! Alert state
      resolution_state TYPE string,
      "! Resource
      resource         TYPE string,
    END OF zgts_event.

  METHODS get_json  IMPORTING ziv_resoultion_state TYPE string
                    RETURNING VALUE(zrv_json)      TYPE /ui2/cl_json=>json.

ENDINTERFACE.

This event is used in the actual message build code:

  METHOD zif_snow_message~get_json.

    DATA zlv_events TYPE /ui2/cl_json=>json.

    " Get current time stamp
    GET TIME STAMP FIELD DATA(zlv_time_stamp_now).

    DATA(zlv_event_json) = /ui2/cl_json=>serialize(
      EXPORTING
        data        =  VALUE zif_snow_message~zgts_event(
          source              = |{ syst-sysid } - FRUN |
          node                = zgi_alert->get_managed_object_name( )
          type                = zgi_alert->get_managed_object_type( )
          severity            = me->convert_severity( zgi_alert->get_severity( ) )
          time_of_event       = me->convert_alert_timestamp( ziv_timestamp = zlv_time_stamp_now )
          description         = COND #( LET custom_description = me->remove_html_tags( zgi_alert->get_custom_description( ) ) IN
                                        WHEN strlen( custom_description ) > 0 THEN custom_description
                                        ELSE me->remove_html_tags( zgi_alert->get_sap_description(  ) ) )
          event_class         = substring( val = zgi_alert->get_managed_object_name( ) off = 0 len = 3 )
          message_key         = zgi_alert->get_type_id( )
          resolution_state    = ziv_resoultion_state
          resource            = zgi_alert->get_name( )
        )
        pretty_name = /ui2/cl_json=>pretty_mode-low_case
    ).

    IF zlv_events IS INITIAL.
      zlv_events = zlv_event_json.
    ELSE.
      zlv_events = zlv_events && ',' && zlv_event_json.
    ENDIF.

    IF zlv_events IS NOT INITIAL.
      zrv_json =  '{ "records": [' && zlv_events && '] }'.
    ENDIF.

  ENDMETHOD.
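For illustration, the JSON body produced by this method looks roughly like the example below. All values are made up; the field names follow the zgts_event structure defined above, serialized in lower case:

  { "records": [ {
      "source": "FRN - FRUN ",
      "node": "PRD~ABAP",
      "type": "ABAP",
      "severity": "2",
      "time_of_event": "2024-05-13 09:41:05",
      "description": "High number of aborted jobs",
      "event_class": "PRD",
      "message_key": "0050568A12341EDB9FE2C2E05A1B2C3D",
      "resolution_state": "New",
      "resource": "Aborted jobs" } ] }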

The simple method implementations:

  METHOD if_acc_mea~get_managed_object_name.
    rv_managed_object_name = ms_mea-context_name.
  ENDMETHOD.
  METHOD if_acc_mea~get_managed_object_type.
    rv_managed_object_type = ms_mea-context_type.
  ENDMETHOD.
  METHOD if_acc_mea~get_severity.
    rv_severity = ms_mea-severity.
  ENDMETHOD.
  METHOD convert_alert_timestamp.

    " Convert the timestamp
    CONVERT TIME STAMP ziv_timestamp TIME ZONE 'UTC'
      INTO DATE DATA(zlv_date) TIME DATA(zlv_time)
      DAYLIGHT SAVING TIME DATA(zlv_dls_time).


    zrv_date_time = |{ zlv_date+0(4) }-{ zlv_date+4(2) }-{ zlv_date+6(2) } { zlv_time+0(2) }:{ zlv_time+2(2) }:{ zlv_time+4(2) }|.

  ENDMETHOD.
  METHOD if_acc_mea~get_name.
    rv_name = ms_mea-name.
  ENDMETHOD.
  METHOD if_acc_mea~get_type_id.
    rv_type_id = ms_mea-type_id.
  ENDMETHOD.

Helper method to remove HTML tags:

  METHOD remove_html_tags.

    IF ziv_description IS INITIAL.
      RETURN.
    ENDIF.

    DATA(zlv_description) = ziv_description.

    DATA(zlv_newline) = cl_abap_char_utilities=>newline.

    REPLACE ALL OCCURRENCES OF '<h2>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '</h2>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '<strong>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '</strong>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '<p>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '</p>' IN zlv_description WITH zlv_newline IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '<b>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '</b>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '<u>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '</u>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '<i>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '</i>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '<ul>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '</ul>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '<li>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '</li>' IN zlv_description WITH zlv_newline IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '<a href="' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF REGEX '">[A-Za-z0-9_\~\-+=&[:space:]]*</a>' IN  zlv_description WITH '' IGNORING CASE.

    REPLACE ALL OCCURRENCES OF ' ' IN zlv_description WITH '_'  IGNORING CASE.
    REPLACE ALL OCCURRENCES OF ':' IN zlv_description WITH ':'     IGNORING CASE.

    zrv_desription = zlv_description.


  ENDMETHOD.

Method to convert severity:

  METHOD convert_severity.

    CASE ziv_sm_severity.
      WHEN 1 OR 2 OR 3 OR 4.
        zrv_sn_severity = 5. " Info
      WHEN 5.
        zrv_sn_severity = 4. " Warning
      WHEN 6.
        zrv_sn_severity = 3. " Minor
      WHEN 7.
        zrv_sn_severity = 2. " Major
      WHEN 8 OR 9.
        zrv_sn_severity = 1. " Critical
      WHEN OTHERS.
        zrv_sn_severity = 0. " Clear
    ENDCASE.

  ENDMETHOD.

This method converts the SAP Focused Run severity codes (1 to 9) to the severity codes used in ServiceNow. Adjust the mapping to your requirements.

The sending code

The sending code is as follows (for more details on ABAP REST calls, read this blog):

  METHOD post.

    " Initialize the HTTP client
    me->init_http_client( ).

    " Set header
    me->zgo_http_client->request->set_method( method = me->zgo_http_client->request->co_request_method_post ).

    me->zgo_http_client->request->set_content_type( content_type = zgc_content_type ).

    " Set body
    me->zgo_http_client->request->set_cdata( EXPORTING data = ziv_event_messages ).

    " Send the data (POST)
    me->send( ).

    " Receive the response; needed to get the http status code
    me->receive( ).

    " Get the status code
    me->zgo_http_client->response->get_status(
      IMPORTING
        code   = zrs_status-code      " HTTP status code
        reason = zrs_status-reason    " HTTP status description
    ).

  ENDMETHOD.

The implementations of the methods called above:

  METHOD init_http_client.

    cl_http_client=>create_by_destination(
      EXPORTING
        destination              = zgc_destination      " Logical destination (specified in function call)
      IMPORTING
        client                   = me->zgo_http_client  " HTTP Client Abstraction
      EXCEPTIONS
        argument_not_found       = 1
        destination_not_found    = 2
        destination_no_authority = 3
        plugin_not_active        = 4
        internal_error           = 5
        OTHERS                   = 6
    ).

    IF sy-subrc NE 0.
      me->raise_exception_for_sys_msg( ).
    ENDIF.

  ENDMETHOD.
  METHOD send.

    me->zgo_http_client->send(
      EXCEPTIONS
        http_communication_failure = 1
        http_invalid_state         = 2
        http_processing_failed     = 3
        http_invalid_timeout       = 4
        OTHERS                     = 5
    ).

    IF sy-subrc NE 0.
      me->raise_exception_for_sys_msg( ).
    ENDIF.

  ENDMETHOD.
  METHOD receive.

    me->zgo_http_client->receive(
      EXCEPTIONS
        http_communication_failure = 1
        http_invalid_state         = 2
        http_processing_failed     = 3
        OTHERS                     = 4
    ).

    IF sy-subrc NE 0.
      me->raise_exception_for_sys_msg( ).
    ENDIF.

  ENDMETHOD.
  METHOD raise_exception_for_sys_msg.

    RAISE EXCEPTION TYPE zcx_snow_mid_api
      EXPORTING
        textid = VALUE scx_t100key(
                      msgid = syst-msgid
                      msgno = syst-msgno
                      attr1 = syst-msgv1
                      attr2 = syst-msgv2
                      attr3 = syst-msgv3
                      attr4 = syst-msgv4
                  ).

  ENDMETHOD.

Important here: the constant ZGC_DESTINATION holds the name of the type H RFC destination towards the MID server.
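The class definition of ZCL_SNOW_MID_API is not listed in this blog; only the method implementations are shown. A minimal sketch of how the definition could look, derived from the calls above (the destination value and the status type name are assumptions):

CLASS zcl_snow_mid_api DEFINITION
  PUBLIC
  CREATE PUBLIC.

  PUBLIC SECTION.
    TYPES:
      " HTTP result returned by POST
      BEGIN OF zgts_status,
        code   TYPE i,
        reason TYPE string,
      END OF zgts_status.

    " Post the JSON event payload to the MID server and return the HTTP status
    METHODS post
      IMPORTING ziv_event_messages TYPE /ui2/cl_json=>json
      RETURNING VALUE(zrs_status)  TYPE zgts_status
      RAISING   zcx_snow_mid_api.

  PRIVATE SECTION.
    CONSTANTS:
      " Name of the type H RFC destination created in SM59 (placeholder value)
      zgc_destination  TYPE rfcdest VALUE 'SNOW_MIDSERVER',
      " Content type of the request body
      zgc_content_type TYPE string  VALUE 'application/json'.

    DATA zgo_http_client TYPE REF TO if_http_client.

    METHODS init_http_client            RAISING zcx_snow_mid_api.
    METHODS send                        RAISING zcx_snow_mid_api.
    METHODS receive                     RAISING zcx_snow_mid_api.
    METHODS raise_exception_for_sys_msg RAISING zcx_snow_mid_api.

ENDCLASS.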

Helper code: zcl_snow_event_message

Method ZIF_SNOW_MESSAGE~GET_JSON with input ZIV_RESOULTION_STATE type STRING and returning ZRV_JSON type /UI2/CL_JSON=>JSON, code:

  METHOD zif_snow_message~get_json.

    DATA zlv_events TYPE /ui2/cl_json=>json.

    " Get current time stamp
    GET TIME STAMP FIELD DATA(zlv_time_stamp_now).

    DATA(zlv_event_json) = /ui2/cl_json=>serialize(
      EXPORTING
        data        =  VALUE zif_snow_message~zgts_event(
          source              = |{ syst-sysid } - FRUN |
          node                = zgi_alert->get_managed_object_name( )
          type                = zgi_alert->get_managed_object_type( )
          severity            = me->convert_severity( zgi_alert->get_severity( ) )
          time_of_event       = me->convert_alert_timestamp( ziv_timestamp = zlv_time_stamp_now )
          description         = COND #( LET custom_description = me->remove_html_tags( zgi_alert->get_custom_description( ) ) IN
                                        WHEN strlen( custom_description ) > 0 THEN custom_description
                                        ELSE me->remove_html_tags( zgi_alert->get_sap_description(  ) ) )
          event_class         = substring( val = zgi_alert->get_managed_object_name( ) off = 0 len = 3 )
          message_key         = zgi_alert->get_type_id( )
          resolution_state    = ziv_resoultion_state
          resource            = zgi_alert->get_name( )
        )
        pretty_name = /ui2/cl_json=>pretty_mode-low_case
    ).

    IF zlv_events IS INITIAL.
      zlv_events = zlv_event_json.
    ELSE.
      zlv_events = zlv_events && ',' && zlv_event_json.
    ENDIF.

    IF zlv_events IS NOT INITIAL.
      zrv_json =  '{ "records": [' && zlv_events && '] }'.
    ENDIF.

  ENDMETHOD.

Method Constructor with input ZII_ALERT type IF_ACC_MEA, code:

  METHOD constructor.

    me->zgi_alert = zii_alert.

  ENDMETHOD.

Method CONVERT_ALERT_TIMESTAMP input ZIV_TIMESTAMP type TIMESTAMP, returning ZRV_DATE_TIME type STRING. Code:

  METHOD convert_alert_timestamp.

    " Convert the timestamp
    CONVERT TIME STAMP ziv_timestamp TIME ZONE 'UTC'
      INTO DATE DATA(zlv_date) TIME DATA(zlv_time)
      DAYLIGHT SAVING TIME DATA(zlv_dls_time).


    zrv_date_time = |{ zlv_date+0(4) }-{ zlv_date+4(2) }-{ zlv_date+6(2) } { zlv_time+0(2) }:{ zlv_time+2(2) }:{ zlv_time+4(2) }|.

  ENDMETHOD.

Method CONVERT_SEVERITY input ZIV_SM_SEVERITY type AC_SEVERITY, returning ZRV_SN_SEVERITY type INT4. Code:

  METHOD convert_severity.

    CASE ziv_sm_severity.
      WHEN 1 OR 2 OR 3 OR 4.
        zrv_sn_severity = 5. " Info
      WHEN 5.
        zrv_sn_severity = 4. " Warning
      WHEN 6.
        zrv_sn_severity = 3. " Minor
      WHEN 7.
        zrv_sn_severity = 2. " Major
      WHEN 8 OR 9.
        zrv_sn_severity = 1. " Critical
      WHEN OTHERS.
        zrv_sn_severity = 0. " Clear
    ENDCASE.

  ENDMETHOD.

Method REMOVE_HTML_TAGS, input ZIV_DESCRIPTION type STRING, returning ZRV_DESRIPTION type STRING. Code:

  METHOD remove_html_tags.

    IF ziv_description IS INITIAL.
      RETURN.
    ENDIF.

    DATA(zlv_description) = ziv_description.

    DATA(zlv_newline) = cl_abap_char_utilities=>newline.

    REPLACE ALL OCCURRENCES OF '<h2>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '</h2>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '<strong>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '</strong>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '<p>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '</p>' IN zlv_description WITH zlv_newline IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '<b>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '</b>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '<u>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '</u>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '<i>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '</i>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '<ul>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '</ul>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '<li>' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '</li>' IN zlv_description WITH zlv_newline IGNORING CASE.
    REPLACE ALL OCCURRENCES OF '<a href="' IN zlv_description WITH '' IGNORING CASE.
    REPLACE ALL OCCURRENCES OF REGEX '">[A-Za-z0-9_\~\-+=&[:space:]]*</a>' IN  zlv_description WITH '' IGNORING CASE.

    REPLACE ALL OCCURRENCES OF ' ' IN zlv_description WITH '_'  IGNORING CASE.
    REPLACE ALL OCCURRENCES OF ':' IN zlv_description WITH ':'     IGNORING CASE.

    zrv_desription = zlv_description.


  ENDMETHOD.

Helper code: zcx_snow_mid_api

Exception class ZCX_SNOW_MID_API is a subclass of CX_ROOT with a re-implemented constructor; inputs TEXTID type IF_T100_MESSAGE=>T100KEY and PREVIOUS type PREVIOUS.

Code:

  METHOD constructor ##ADT_SUPPRESS_GENERATION.

    CALL METHOD super->constructor
      EXPORTING
        previous = previous.

    CLEAR me->textid.

    IF textid IS INITIAL.
      if_t100_message~t100key = if_t100_message=>default_textid.
    ELSE.
      if_t100_message~t100key = textid.
    ENDIF.

  ENDMETHOD.
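Only the constructor of the exception class is shown above. A minimal sketch of the class definition, here assumed to follow the usual pattern of a checked T100 exception inheriting from CX_STATIC_CHECK (adjust to your own conventions):

CLASS zcx_snow_mid_api DEFINITION
  PUBLIC
  INHERITING FROM cx_static_check
  CREATE PUBLIC.

  PUBLIC SECTION.
    " Enables message-class based exception texts (t100key used in the constructor)
    INTERFACES if_t100_message.

    METHODS constructor
      IMPORTING
        textid   LIKE if_t100_message=>t100key OPTIONAL
        previous LIKE previous OPTIONAL.

ENDCLASS.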

Helper class zcl_snow_bi_logger.
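The interface ZIF_SNOW_BI_LOGGER is not listed as a whole in this blog. A minimal sketch of its definition, derived from the calls made in the code above (parameter types are assumptions):

INTERFACE zif_snow_bi_logger
  PUBLIC.

  " Add a T100 message to the application log
  METHODS bal_log_add_message
    IMPORTING
      ziv_msgty TYPE symsgty OPTIONAL
      ziv_msgid TYPE symsgid OPTIONAL
      ziv_msgno TYPE symsgno OPTIONAL
      ziv_msgv1 TYPE symsgv  OPTIONAL
      ziv_msgv2 TYPE symsgv  OPTIONAL
      ziv_msgv3 TYPE symsgv  OPTIONAL
      ziv_msgv4 TYPE symsgv  OPTIONAL.

  " Add a free text line to the application log
  METHODS bal_log_add_free_text
    IMPORTING
      ziv_msgty TYPE symsgty
      ziv_text  TYPE zcl_bc_bal_log=>zgtv_free_text.

  " Persist the application log (visible in SLG1 afterwards)
  METHODS bal_log_save.

ENDINTERFACE.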

Method: ZIF_SNOW_BI_LOGGER~BAL_LOG_SAVE:

 METHOD zif_snow_bi_logger~bal_log_save.

    IF me->zgo_bal_log IS BOUND.

      me->zgo_bal_log->save( ziv_commit = abap_true ).

    ENDIF.

  ENDMETHOD.

Method: ZIF_SNOW_BI_LOGGER~BAL_LOG_ADD_MESSAGE

Import parameters: ZIV_MSGTY, ZIV_MSGID, ZIV_MSGNO and ZIV_MSGV1 to ZIV_MSGV4 (the message type, message class, message number and message variables)

Code:

  METHOD zif_snow_bi_logger~bal_log_add_message.

    IF me->zgo_bal_log IS BOUND.

      me->zgo_bal_log->add_message(
        EXPORTING
          ziv_msgty = ziv_msgty
          ziv_msgid = ziv_msgid
          ziv_msgno = ziv_msgno
          ziv_msgv1 = ziv_msgv1
          ziv_msgv2 = ziv_msgv2
          ziv_msgv3 = ziv_msgv3
          ziv_msgv4 = ziv_msgv4
      ).

    ENDIF.

  ENDMETHOD.

Method: ZIF_SNOW_BI_LOGGER~BAL_LOG_ADD_FREE_TEXT

Inputs ZIV_MSGTY type SYMSGTY and ZIV_TEXT type ZCL_BC_BAL_LOG=>ZGTV_FREE_TEXT (which is TYPES zgtv_free_text TYPE c LENGTH 200).

Source:

  METHOD zif_snow_bi_logger~bal_log_add_free_text.

    IF me->zgo_bal_log IS BOUND.

      me->zgo_bal_log->add_free_text(
        EXPORTING
          ziv_msgty = ziv_msgty
          ziv_text  = ziv_text
      ).

    ENDIF.

  ENDMETHOD.

Constructor code:

  METHOD constructor.

    me->zgo_bal_log = zcl_bc_bal_log=>factory(
        ziv_object     = zif_snow_constants=>zgc_bal_log-object
        ziv_sub_object = zif_snow_constants=>zgc_bal_log-sub_object
    ).

  ENDMETHOD.
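The constants interface ZIF_SNOW_CONSTANTS used throughout the code is also not listed in this blog. A minimal sketch of what it could contain, based on the references above (all concrete values are assumptions; pick your own application log object and sub-object):

INTERFACE zif_snow_constants
  PUBLIC.

  " Message types used for application log entries
  CONSTANTS:
    BEGIN OF zgc_message_types,
      info    TYPE symsgty VALUE 'I',
      success TYPE symsgty VALUE 'S',
      error   TYPE symsgty VALUE 'E',
    END OF zgc_message_types.

  " Resolution states sent to ServiceNow
  CONSTANTS:
    BEGIN OF zgc_resolution_state,
      new TYPE string VALUE 'New',
    END OF zgc_resolution_state.

  " Application log object and sub-object (placeholder values, see transaction SLG0)
  CONSTANTS:
    BEGIN OF zgc_bal_log,
      object     TYPE balobj_d  VALUE 'ZSNOW',
      sub_object TYPE balsubobj VALUE 'ZOUTBOUND',
    END OF zgc_bal_log.

ENDINTERFACE.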

Using the ServiceNow web services

The ServiceNow web services, including instructions on how to download the WSDL, are published on the ServiceNow help pages.

Download the WSDL file and follow the instructions from this blog to import the WSDL file into SE80 and generate the ABAP web service proxy object. In SOAMANAGER, set up the logical port towards your ServiceNow installation and make sure the connection is working.

Then implement the ABAP code as above.

Instead of calling the REST service, you now call the generated ABAP proxy:

* Data Declarations
DATA: zcl_proxy TYPE REF TO zco_zbapidemowebservice, " Proxy Class
      zdata_in  TYPE zzbapidemo, " Proxy Input
      zdata_out TYPE zzbapidemoresponse, " Proxy Output
      zfault    TYPE REF TO cx_root. " Generic Fault

* Instantiate the proxy class providing the Logical port name
CREATE OBJECT zcl_proxy EXPORTING logical_port_name = 'ZDEMOWS'.

* Set Fixed Values
zdata_in-zimport = '1'.

TRY .
    zcl_proxy->zbapidemo( EXPORTING input = zdata_in
                          IMPORTING output = zdata_out ).
    WRITE: / zdata_out-zexport.
  CATCH cx_root INTO zfault.
* here is the place for error handling

ENDTRY.

Of course, you will use the generated input and output structures from your own generated service.

<< This blog was originally posted on SAP Focused Run Guru by Frank Umans. Repost done with permission. >>

SAP Focused Run system analysis overview

System analysis is part of the Root Cause Analysis functions of Focused Run. It can be used for the analysis of current issues and for longer-term trending.

Questions that will be answered in this blog are:

  • How can I start System Analysis for a system?
  • Which type of systems can be analysed with System Analysis?
  • How can I use the System Analysis tool for immediate analysis of issues with a system?
  • How can I use the System Analysis tool for getting insight in the longer term trends inside a system?
  • How to set up System Analysis for performance analysis?

System analysis

Start the system analysis function by clicking on the Fiori tile for System Analysis:

Select the system you need to analyze for issues in the scope selection screen. In the first case we take an ABAP stack with a time frame of the last 6 hours:

This overview might be a bit overwhelming the first time. But you can see the performance was bad in the middle of the day (see the top middle graph on average response time). The bottom middle graph shows the CPU of some application servers was at 100%. And at the same time there were many dumps (right middle graph). This gives a clear direction where to look for issues.

The system analysis overview adjusts the information automatically to its content. This is the information for a HANA system:

Note that the time frame here is the last month. This is for getting a longer-term overview of the system behaviour. You can get this longer-term overview by changing the time frame of the system analysis tool.

Page catalog

You can select a specific view from the page catalog list on the left button bar on the screen:

So you can easily filter the specific page for the type of system you need to analyze.

Performance analysis in System Analysis

In the system analysis function there is a special function to monitor system performance based on ST03 system data from the managed system.

Choose the menu option for ABAP performance:

The performance overview will now open:

You can click on many items now to get to the details.

Setup of Performance Analysis

To make the above function work, click on the settings wheel and click on Configure Collection of ABAP performance data:

Make sure the system you need analysis data from is activated correctly.

If the data collection is not ok, check the Collector Status button and Agent logs. Also check the backend system user used to see if this user has sufficient authorization to fetch the required data.

<< This blog was originally posted on SAP Focused Run Guru by Frank Umans. Repost done with permission. >>

SAP Focused Run license and usage

SAP Focused Run is a licensed product. The license metric is the amount of GB stored in the application.

The more systems you have, the more detailed the metrics, the shorter the measurement intervals and the more functions you use, the more GB you will consume.

Questions that will be answered in this blog are:

  • How to check the current license usage?
  • What drives the usage?
  • How can I get a cost estimate?
  • How can I create a business case for Focused Run?

Checking the license usage

In SE38 start program FRUN_USAGE_UPDATE:

Now you can see which Focused Run function uses how many MB.

Read this note that explains the slower clean up on system analysis data: 3478938 – Housekeeping of System Analysis data in SAP Focused Run.

What drives the usage?

Usage is driven by:

  • The number and size of the monitored systems
  • The Focused Run functions you deploy
  • The granularity and measurement frequency of the metrics
  • The retention period of the data

Getting a cost estimate

Your SAP account manager or the Focused Run team in Germany can give you a good cost estimate. Material number for Focused Run in the price list is 7019453.

Input for cost estimate: sizes and numbers of systems, functions of Focused Run you want to deploy, and the retention period of the data.

Output: cost estimate.

Creating the business case

The business case has 2 aspects:

  • Cost: infrastructure, license, implementation
  • Benefits

Benefits are easier to quantify if your IT service is more mature.

Elements to consider:

  • How much does an hour of outage cost on your main ECC or S/4HANA core system? For larger companies, this is easily 10,000 Euro per hour or more.
  • How much does your complaint handling cost per ticket?
  • How much time is currently spent on manual monitoring?

The benefits of SAP Focused Run are then in avoiding half of the outages through faster insight and in reducing the outage costs. You cannot avoid all outages, but you can act faster.

Further benefits of Focused Run are in improved clean-up and issue solving. This will both reduce issues in your systems and reduce the complaints and tickets you need to handle.

For larger system landscapes (more than 50 systems) the business case is quite easy to create and will quickly be positive.
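A simple illustrative calculation (all numbers are assumptions, purely to show the mechanics of the business case, not benchmarks):

  Avoided outage hours:     5 hours per year x 10,000 Euro    =  50,000 Euro
  Avoided tickets:          1,000 tickets per year x 25 Euro  =  25,000 Euro
  Manual monitoring saved:  500 hours per year x 75 Euro      =  37,500 Euro
  Estimated yearly benefit                                    = 112,500 Euro

Compare this against the yearly license, infrastructure and implementation cost to see whether the case is positive.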

<< This blog was originally posted on SAP Focused Run Guru by Frank Umans. Repost done with permission. >>

SAP Focused Run interface monitoring overview

The integration and cloud monitoring function of SAP Focused Run consists of 2 main functions:

  • Interface monitoring between SAP systems
  • Cloud monitoring between on premise and cloud SAP products (see blog)

This blog will give an overview of the interface monitoring between SAP systems.

Questions that will be answered in this blog are:

  • What does the interface monitoring in SAP Focused Run look like?
  • How much detail and history can I see in SAP Focused Run interface monitoring?
  • How can I enable my systems for interface monitoring?
  • How do I set up a scenario to monitor?
  • How do I set up alerting for an interface scenario?
  • How do I set up Idoc monitoring?
  • How do I set up ODATA monitoring?
  • How do I set up qRFC monitoring?
  • How do I set up web service monitoring?
  • How do I set up RFC monitoring?
  • How do I set up SLT monitoring?

Interface monitoring

To start the interface monitoring click on the Fiori tile:

In the next screen you now select one or multiple integration scenarios:

Then you reach the scenario overview screen:

You can immediately see from the red colored scenarios that there is an issue.

Click on the red scenario to open the details of the scenario topology:

The topology indicates most of the interfaces are correct. To see the detailed issue, click on the red line:

Click on the red error for the details:

On the right side of the screen you can click on the Dashboard icon to get a historical overview:

Link with alert management

Interface errors can be the trigger for an alert in the Alert Management function.

Technical scenario setup

The concept for interface monitoring is unfortunately a bit confusing at first.

There are 2 main things to remember:

  1. Systems data collection and alerting: this is where the action happens
  2. Graphical representation: this is where you make it visible

Unfortunately this means you have to do a lot of double work.

Set up systems

Go to the Integration and cloud monitoring Fiori tile. On top right click on the configuration icon to change or add a scenario:

First add the systems:

Select the system:

Select the configuration categories:

Select the monitoring:

Here you must add the connections you want to monitor.

The alerting configuration is empty initially:

We will fill this later if we want alerts for a specific interface connection.

Save this system and repeat for the rest of the systems.

The system determines the actual data collection and actual alerting. The system can be re-used in multiple scenarios.

Scenario configuration

On the configuration screen now add the new scenario. Add a name and description for the scenario:

In the topology screen now add the systems in the drop down for Node Selection and use the + icon to add them to the screen:

Now select the source system (we will have 1 CUA central and 2 child systems) and select the Action box:

Select Add link to and then select the system.

Now add a filter to the link by clicking on the line:

In the dialog screen on the right now add the details:

Start by giving the group a name. Now add the filter. Give the filter a name (in this case RFC1). Select the central component and the category (in this case Connection monitoring SM59). Now add the RFC connection type (3) and connection name to be monitored.

Very important here: press OK first to transfer the data, and only then press Save. Otherwise your data is lost. The SAP UI is not very forgiving in this area.

Repeat for the second scenario. The end result is that the dotted lines are replaced by straight lines:

Then Save.

The scenario is active now:

Reminder: you also have to add the same information at system level in the Technical System: that part performs the actual data collection. If this is not done, the scenario overview will show grey results due to missing data collection.

The scenario is used to make the interfaces graphically visible.

Adding alert

When you have monitored the scenario long enough to see it is stable, the next step is to setup alerting so you get notified in the central alert inbox.

First add the alert in the Technical System as shown above. This will be the actual alert definition.

To add an alert to the graphical overview, go to the scenario definition and select the source system. Press the button alerts for component:

On the right hand side now add the alert by clicking the + button:

Then select the wanted Alert Category. And select the filter options. Add the connections for which you want to alert:

Give the filter also a name.

On the Description field you can set the alert to active:

You can also set the frequency of checking, and whether a notification is to be sent as well (via mail or towards the outbound connector).

Also important here: first press Ok, then Save. Otherwise the data is lost. 

Setup summary and final check

After you have finished the graphical topology, you need to go back to the Systems overview to validate if everything is activated ok for both monitoring and alerting:

Reminder: there is a split between the graphical representation in the topology and scenarios, and the actual system monitoring and alerting in the Technical System overview.

Interfaces that can be monitored

The full list of interfaces that can be monitored is published on the Focused Run Expert Portal.

Specific interface monitoring topics are explained below:

  • Idoc monitoring
  • ODATA Gateway monitoring
  • qRFC monitoring
  • RFC monitoring
  • SLT monitoring
  • Web Service monitoring

Idoc monitoring

SAP Focused Run can report both on idoc errors and on delays in idoc processing. A delay in idoc processing can cause business impact and is sometimes hard to detect, since the idocs are in status 30 for outbound, or 64 for inbound, but are not processed. SAP Focused Run is one of the few tools I know which can alert on delays in idoc processing.
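To make the backlog concept concrete: an outbound idoc counts as delayed when it sits in status 30 longer than a threshold. The sketch below only illustrates that idea against table EDIDC; it is not how the Focused Run data collector itself is implemented, and the message type filter is just an example:

  " Illustrative only: outbound DESADV idocs still in status 30 ("ready for dispatch").
  " The collector then compares the creation date/time (credat/cretim) of each
  " candidate against the configured N minutes to decide whether it is in backlog.
  SELECT docnum, mestyp, rcvprn, credat, cretim
    FROM edidc
    WHERE direct = '1'          " 1 = outbound (inbound backlog looks at status 64)
      AND status = '30'         " ready for dispatch, but not yet processed
      AND mestyp = 'DESADV'
    INTO TABLE @DATA(lt_backlog_candidates).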

The monitoring starts with the Integration & Cloud monitoring tile. Then select the modelled idoc scenario (modeling is explained later in this blog):

On the alert ticker you can already see that there are alerts both for idocs in error and for idocs in delay:

In the main overview screen click on the interface line to get the overview of idocs sent:

You can now see the number of idocs that were sent successfully, which ones are still in transit and which ones are in error. Click on the number to zoom in:

Click on the red error bar to zoom in further to the numbers:

Click on the idoc number to get further details:

Unfortunately, you cannot jump from SAP Focused Run into the managed system where the idoc error occurred.

Documents monitor for idocs

A different view on the idocs can be done using the documents monitor. You can select the documents monitor tool on the left side of the screen:

Now you go to the overview:

You can click on the blue numbers to dive into the details. Or you can click the Dashboards icon top right of the card to go into the dashboard mode:

This will show you the summary over time and per message type. Clicking on the bars will again bring you to the details.

Data collection and alerting setup for idoc monitoring

In the configuration for interface monitoring in the Technical System settings, go to the monitoring part and activate the data collection for Idoc monitoring:

In the monitoring filter, you can restrict the data collection to certain idoc types, receivers, senders, etc. Or leave all entries blank to check every idoc:

The graphical modelling for idocs is similar to the explanation of the example above.

Alerting for idoc errors

The first alert we set up is the alert for errors.

Create a new alert and select the alert for idocs in status ERROR for longer than N minutes:

Now we add the filter. In our case we filter on outbound idocs of type DESADV:

A bit hidden at the bottom of this screen is the setting for N, the number of minutes:

The time setting depends on your technical setup of idoc reprocessing jobs (see for example this blog) and on the urgency of the idocs for your business.

In the description tab, add the notification variant in case you also want a mail to be sent next to the FRUN alert (the setup is explained in this blog).

You can set up multiple alerts. This means you can have different notification groups for different message types, different directions, different receiving parties.

Save the filter and make sure it is activated.

Idoc alert for backlog

Next to alerting on errors, Focused Run can also alert on delay of idocs. This can be done for both inbound and outbound idocs.

To set up an alert for backlog choose the option idocs in status BACKLOG for longer than N minutes:

In the filter tab set the idoc filter and at the bottom fill out the value for N minutes of backlog that should be alerted:

And in the final tab set the notification variant if wanted:

Save the filter and make sure it is activated.

Definition of delayed and error idocs

On the SAP Focused Run expert portal on idocs, there is this definition of the determination of idocs in delay and error:

Data clean up idoc monitoring

If you get too much data for idoc monitoring, apply OSS note 3241688 – Category wise table cleanup report (IDOC, PI). This note delivers program /IMA/TABLE_CLEANUP_REPORT for clean up.

ODATA gateway monitoring

We assume in this use case that end users are using the ODATA in FIORI apps. In case ODATA is consumed by external applications like Tibco, Mulesoft, Mendix, etc., you have to replace USER with the corresponding application.

Model end users in LMDB

Before we can start the scenario modelling, we first need to model the end users in LMDB as an Unspecific Standalone Application System, just like we did for TIBCO in this blog.

Name the ‘system’ USER:

Make sure the status is Active.

Add this new system USER to the Technical System list in the Integration Monitoring setup.

The system will be display only.

Data collection and alerting setup for ODATA interfaces

In the configuration for interface monitoring in the Technical System settings, go to the monitoring part and activate the data collection for Gateway Errors:

In the monitoring settings, you can filter on specific items if wanted, or leave everything blank to report on any error:

In the alerting tab, set up the alerting:

The filter for monitoring and alerting can be different. It could be that you want to monitor all errors, but only alert on specific important ones.

Save your monitoring data collection and alerting settings.

Graphical modelling of ODATA interfaces

In the graphical modelling add the backend system and the system created for USER:

Now add the link starting with USER towards the backend system:

Save your changes.

Also here: first scroll down to see the OK button. Press OK first before pressing Save, or you might lose the data and have to re-enter it. This is a bit annoying in the UI.

Monitoring usage of ODATA interfacing

The end result in operations looks as follows:

In the graphical overview click on the red line. The screen with the exceptions opens. Click on the red number to see the overview:

Here you can see the trends and zoom into the specific errors:

qRFC monitoring

qRFC connections are frequently used in communication from ECC to EWM and SCM systems. For generic tips and tricks for qRFC handling, read this blog.

OSS notes for bug fixing qRFC monitoring

Please make sure bug fix OSS note 3014667 – Wrong parameter for QRFC alerts is applied before starting with qRFC monitoring.

Other OSS notes:

Data collection and alerting setup for qRFC monitoring

In the configuration for interface monitoring in the Technical System settings, go to the monitoring part and activate the data collection for qRFC Errors:

In the monitoring settings, you can filter on specific queues, direction and RFC name, or leave everything blank to report on everything:

In the alerting part you can choose between the age of qRFC entries and the number of entries:

And set the filters for which ones, and the metric threshold for CRITICAL errors:

The filter for monitoring and alerting can be different. It could be that you want to monitor all errors, but only alert on specific important ones.

Save your monitoring data collection and alerting settings.

Queued RFCs normally go back and forth between 2 systems. If this is the case, you have to make the settings for both systems.

Graphical modelling of qRFC interfaces

In the graphical modelling add the filter between two systems for the qRFC monitoring:

Also here: first scroll down to see the OK button. Press OK first before pressing Save, or you might lose the data and have to re-enter it. This is a bit annoying in the UI.

Queued RFCs normally go back and forth between 2 systems. If this is the case, you have to make the settings for both systems. You first model one direction and then model the direction back:

Monitoring qRFC usage

The end result in operations looks as follows:

You can see here that qRFC is modelled back and forth between the 2 systems. The blue line indicates messages in process. The red line is the one clicked on. Here you can see both messages in process and errors. Clicking on the red error number gives the details:

Monitoring RFCs between SAP systems

RFCs with fixed user ID

See the example above on CUA idoc monitoring.

Trusted RFCs

If you have to set up RFC monitoring for a trusted RFC (for example between a Netweaver Gateway system and an ECC system), then you have to take care of the user IDs and rights. The system from which the SM59 test runs will use that Focused Run user ID to log on to the other system. If your user IDs are unique per system, you have to create the user ID in the other systems with the rights to execute a ping and logon for the test.

End result RFC checks

The end result of the RFC checks is a list of RFCs with the latency time, availability and logon test overview:

Transactional RFC towards external system

To monitor transactional RFC (type T) towards an external system like TIBCO, Mulesoft, etc., you first need to model the external system in the LMDB. To do this, go to the LMDB maintenance Fiori app:

Then select Single Customer Network and select the option Technical Systems. In this section choose the Type Unspecific Standalone Application System:

And press Create:

Fill out the details and Save. Make sure the status is Active.

Now the system can be added in the configuration of technical systems in the Interface monitoring configuration:

Now you can model the tRFC interface connection monitoring:

OSS notes for RFC monitoring

Relevant OSS notes:

SLT integration monitoring

This blog focuses specifically on SLT integration monitoring. Monitoring an SLT system itself is explained in this dedicated blog.

Set up SLT integration scenario

Start the integration and exception monitoring FIORI tile:

On the configuration add the SLT system:

Select SLT as specific scenario:

On the Monitoring part you can filter on a specific source system and/or SLT schema:

On the third tab you can set up the alerting in case of errors:

Now save and activate. The monitoring is active now.

Next step is to use this system in a model for your scenario:

Using the SLT integration monitor

If you open the Fiori tile and you have selected your scenario, you still need to perform an extra click to go to the SLT monitor:

First you get an overview of your system(s):

You need to click on the blue numbers to drill down:

This gives an overview of errors, source connection status and target connection status.

You cannot drill down further on this tile. If you see an error, you need to go to your SLT server and start transaction LTRO to see all detailed errors and start fixing from there. Transaction LTRO can show errors that are not visible in transaction LTRC. Focused Run uses LTRO data.

Web service monitoring

Web services monitoring automates the monitoring in transaction SRT_MONI, which is extensively explained in this blog.

This monitoring does not check the connection availability of the web service. To make that happen, you would need to install a custom program from this blog, that writes an entry to SM21. From the SM21 entry, you can create a custom monitoring metric that alerts on the connection issue. How to setup custom metrics is explained in this blog.

SAP reference for web service monitoring can be found here.

Data collection and alerting setup for web service monitoring

In the configuration for interface monitoring in the Technical System settings, go to the monitoring part and activate the data collection for Web Service Errors:

In the monitoring settings, you can filter on specific criteria, or leave everything blank to report on everything:

In the alerting part you can choose between the number of entries and the number of error entries:

And set the filters for the alerting:

The filter for monitoring and alerting can be different. It could be that you want to monitor all errors, but only alert on specific important ones.

Save your monitoring data collection and alerting settings.

Graphical modelling of web services monitoring

In the graphical modelling add the filter between two systems for the web service monitoring:

Also here: first scroll down to see the OK button. Press OK first before pressing Save, or you might lose the data and have to re-enter it. This is a bit annoying in the UI.

Monitoring usage of web services

The end result looks as follows:

You can click on the errors or success messages and zoom all the way down to individual messages:

<< This blog was originally posted on SAP Focused Run Guru by Frank Umans. Repost done with permission. >>