BCI Scheduling via 3rd party

Informatica BCI and Third Party Scheduling
Objective:
A large number of Informatica customers use third-party software to schedule workflows and tasks. The
current BCI implementation delivered by Informatica uses an LMAPI SDK plugin to serially execute BCI send requests
and process mappings, while SAP logical systems are able to process multiple extractors concurrently. The
new mappings and workflows delivered with this solution enhance the BCI process to improve performance and
recoverability when using any external scheduling software. I have also included the following additional enhancements:
1. Partitioned the “source_for_bci” table and modified the cleanup mapping to truncate the
   partition instead of deleting from the table.
2. Enhanced the indicator table with additional columns for use by the updated send request
   mapping (a hedged sketch of these columns follows this list).
3. Added a database view that reports on the extracts that have been executed.
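The indicator table itself is delivered as a relational target in the XML import, so the following is only a hedged sketch of the kind of columns the process description refers to; the names here are assumptions, not the actual layout.

    -- Illustrative only: the real Indicator table is created from the imported
    -- relational target; these column names are assumptions based on the process
    -- description (END_TIMESTAMP and PROC_FLAG are referenced in the BCI Process section).
    CREATE TABLE INDICATOR (
        DATASOURCE      VARCHAR2(30),   -- extractor name, e.g. 0ARTICLE_ATTR
        REQUEST_ID      VARCHAR2(30),   -- requestID taken from the RSINFO IDoc
        RQSTATE         NUMBER(2),      -- request state reported by SAP
        START_TIMESTAMP DATE,
        END_TIMESTAMP   DATE,           -- set by the send request mapplet when the extract is complete
        PROC_FLAG       CHAR(1)         -- set to 'Y' once a send_request run has consumed the row
    );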
Requirements:
This solution is currently built on Oracle but should be transferable to DB2 or SQL Server with some
minor changes to the SQL in the mapplet and to the cleanup SQL that truncates a partition. You will need to
configure your source SAP system for Informatica BCI. The zip file is located at
https://sourceforge.net/projects/infabcischeduling/files/?source=navbar
Zip file contents:
- BCI_Listener_third_party_scheduling.XML – Informatica XML file for all objects needed for the solution.
- source_for_bci_partitioned_table_create.sql – SQL to create the partitioned SOURCE_FOR_BCI table.
- Sample parameter file.ipf – Sample parameter file to use for the send_request workflow; it contains the
  parameters needed by the Java transformation.
- JTX jar files folder – Two jar files that need to exist on the Integration server. These are used by the
  Java transformation and the send_request session.
- Encryptstring folder – A small program that encrypts a string; it is needed for the parameter file and the
  Java transformation. It is similar to the pmpasswd program.
Installation steps:
1. Import the PowerCenter objects from the “BCI_Listener_third_party_scheduling.XML” file.
2. Copy the JTX jar files to your Integration server.
3. Copy the sample parameter file to your Integration server.
4. Create database tables for the following relational targets:
   a. Source_For_BCI (see the SQL script provided as an example for partitioning; a hedged sketch also
      follows this list)
   b. RSINFOStaging
   c. Indicator
   d. DocumentNumber
   e. IDOC_INTERPRETER_FOR_ZSIN1000
   f. CONTROLRECORD
   g. BCILOOKUPTABLE
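The actual DDL is provided in source_for_bci_partitioned_table_create.sql; purely as a hedged illustration of the list-partitioning approach on Oracle, it could look something like the sketch below. The column names and partition values are placeholders, not the actual SOURCE_FOR_BCI structure.

    -- Illustration only: the real DDL ships in source_for_bci_partitioned_table_create.sql.
    -- Column names and partition values below are placeholders.
    CREATE TABLE SOURCE_FOR_BCI (
        DATASRC   VARCHAR2(30) NOT NULL,   -- data source name used as the partition key
        IDOC_DATA VARCHAR2(1000),          -- stand-in for the IDoc segment columns
        LOAD_DATE DATE
    )
    PARTITION BY LIST (DATASRC) (
        PARTITION ARTICLE_ATTR VALUES ('0ARTICLE_ATTR'),
        PARTITION OTHERS       VALUES (DEFAULT)
    );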
Configuration:
1. Edit the Router transformation in the Listener mapping to verify that it matches the basic IDoc type in the
   SAP source system.
2. Edit the Properties tab of the send request session in the processing workflow to point to the path on the
   Integration server where the above jar files were copied. Alternatively, you can set this at the
   Integration server level and not have to set it in the session.
3. Execute the “Encryptstring” Java program with the password of the database where the Indicator table is
   located and put the encrypted result in the parameter file.
BCI Process:
The process below shows how one data source is requested and processed. With this new process you can
execute multiple workflows concurrently to speed up execution time for your SAP extracts and loads.
1. Start the BCI Listener workflow.
   a. The BCI Listener waits for an RSSEND or RSINFO IDoc type sent to the logical system
      from SAP. If the IDoc type is RSINFO, it is passed through the IDoc RSINFO interpreter to
      extract the requestID and RQState, and the record is written to the RSINFOStaging table. If the RQState is
      greater than 2, the data is inserted into the indicator table (illustrated in the sketch below). (Note: the
      LMAPI target is no longer needed in the mapping.)
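The listener mapping implements this with a filter condition and a relational target rather than hand-written SQL; the statement below only illustrates the equivalent logic, reusing the assumed column names from the indicator sketch above.

    -- Equivalent logic only; in the mapping this is a filter on RQState feeding a
    -- relational target, not a SQL statement. Column names are assumptions.
    INSERT INTO INDICATOR (DATASOURCE, REQUEST_ID, RQSTATE, START_TIMESTAMP, PROC_FLAG)
    SELECT r.DATASOURCE, r.REQUEST_ID, r.RQSTATE, SYSDATE, 'N'
      FROM RSINFOSTAGING r
     WHERE r.RQSTATE > 2;   -- only requests with RQState greater than 2 are recorded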
2. Execute the wf_process_0ARTICLE_ATTR workflow.
   a. The send_request and processing workflows are now combined into one workflow.
3. The “Send Request mapping” is executed:
   a. The first pipeline in the mapping sends an IDoc request to SAP.
   b. The second pipeline reads the output of the data-generating command “echo 0ARTICLE_ATTR” in
      SQ_ff_input and passes that string into the mplt_get_rqst_status mapplet.
      i. The mapplet has a Java transformation that executes a query on the database
         and loops until the data source name inserted by the BCI Listener is found in the
         indicator table. Once the entry is found, the END_TIMESTAMP is updated to the
         system date and time and the PROC_FLAG column is set to ‘Y’ so that the program
         no longer looks at that row in future runs (see the sketch after this step).
   c. The send request session succeeds after the mapplet ends, indicating that all of the data for that
      data source has been extracted and inserted into the SOURCE_FOR_BCI table.
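The actual query is embedded in the Java transformation inside mplt_get_rqst_status; the statements below are only a rough SQL rendering of the loop it performs, again using the assumed indicator columns from the earlier sketch.

    -- Rough rendering of the Java transformation's polling loop; table and column
    -- names follow the assumed indicator sketch above.
    -- Repeated until a row appears for the requested data source:
    SELECT COUNT(*)
      FROM INDICATOR
     WHERE DATASOURCE = '0ARTICLE_ATTR'
       AND PROC_FLAG <> 'Y';

    -- Once found, the row is stamped and flagged so later runs skip it:
    UPDATE INDICATOR
       SET END_TIMESTAMP = SYSDATE,
           PROC_FLAG     = 'Y'
     WHERE DATASOURCE = '0ARTICLE_ATTR'
       AND PROC_FLAG <> 'Y';
    COMMIT;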
4. The “process” mapping executes (no changes have been made to this mapping).
5. The “BCI_Cleaning_Staging_Area” mapping executes.
   a. The mapping reads the output of the data-generating command “echo ARTICLE_ATTR” in
      SQ_ff_input and passes that string into the expression that builds the SQL
      statement. The SQL statement is executed on the SOURCE_FOR_BCI table. On
      Oracle, you can truncate a single partition, which takes seconds even for millions
      of rows:
      i. alter table SOURCE_FOR_BCI truncate partition ARTICLE_ATTR;
      ii. DB2 and SQL Server offer similar functionality, which may be easier to implement by creating a
          stored procedure and having the SQL transformation execute it (a hedged sketch follows this step).
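As one hedged example, on SQL Server 2016 or later the cleanup could be wrapped in a stored procedure like the sketch below and called from the SQL transformation; the partition number is a placeholder for whichever partition of SOURCE_FOR_BCI actually holds the data source being cleaned.

    -- SQL Server sketch (2016+); partition number 2 is a placeholder for the
    -- partition that holds the 0ARTICLE_ATTR rows in your partition scheme.
    CREATE PROCEDURE dbo.clean_bci_partition_article_attr
    AS
    BEGIN
        TRUNCATE TABLE dbo.SOURCE_FOR_BCI WITH (PARTITIONS (2));
    END;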
6. I have created a view that parses the RSINFOStaging data and joins it with other tables for
   reporting on BCI processing; a rough sketch of such a view follows.
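The delivered view definition is part of the XML import; as a rough illustration only, it could combine the indicator and RSINFOStaging data along these lines (the join columns are assumptions).

    -- Hedged sketch only; the delivered view is part of the imported XML and its
    -- join columns will differ. Names reuse the assumptions from the earlier sketches.
    CREATE OR REPLACE VIEW BCI_EXTRACT_REPORT AS
    SELECT i.DATASOURCE,
           i.REQUEST_ID,
           i.RQSTATE,
           i.START_TIMESTAMP,
           i.END_TIMESTAMP,
           i.PROC_FLAG
      FROM INDICATOR i
      JOIN RSINFOSTAGING r
        ON r.REQUEST_ID = i.REQUEST_ID;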
A sample output is shown below.