Introduction to Pipeline
A pipeline comprises multiple stages that facilitate schema and data migration, code transformation, validation, and batch execution of your workloads. These stages include:
- Migration: Converts the schema to its target equivalent and migrates data to the target tables.
- Transformation: Transforms legacy EDW scripts into target-native code.
- Validation: Validates the migrated data and the transformed queries.
- ETL Conversion: Converts ETL scripts, such as Informatica, DataStage, and Ab Initio, into target-compatible workflows.
- Orchestration: Transforms legacy scheduler scripts into workflows for modern scheduling tools.
- BI Conversion: Transforms legacy BI scripts into their modern target-platform equivalents.
- Analytics Conversion: Transforms legacy analytics workloads, such as SAS, into their modern target-platform equivalents.
- Mainframe Conversion: Transforms mainframe scripts, such as COBOL and JCL, for the target cloud platform.
- Execution: Executes the transformed artifacts on the target environment.
You can configure a pipeline either as an integrated, end-to-end flow or as individual operations, as required. The integrated approach simplifies configuration because each subsequent stage in the pipeline automatically consumes the output of the previous stage.
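To make the integrated flow concrete, here is a minimal Python sketch of the stage-chaining idea, assuming a hypothetical `Pipeline` class and stubbed stage functions (`migrate`, `transform`, `validate`, `execute`); it is not the product's actual API, only an illustration of how each stage consumes the previous stage's output while a single stage can also run on its own.

```python
from dataclasses import dataclass, field
from typing import Callable

# Stubbed stage functions standing in for the Migration, Transformation,
# Validation, and Execution stages described above (hypothetical).
def migrate(workload: dict) -> dict:
    # Convert the schema and move data to target tables (stubbed).
    return {**workload, "schema_migrated": True}

def transform(workload: dict) -> dict:
    # Convert legacy EDW scripts to target-native code (stubbed).
    return {**workload, "scripts_transformed": True}

def validate(workload: dict) -> dict:
    # Check the migrated data and transformed queries (stubbed).
    return {**workload, "validated": True}

def execute(workload: dict) -> dict:
    # Run the transformed artifacts on the target environment (stubbed).
    return {**workload, "executed": True}

@dataclass
class Pipeline:
    """Chains stages so each one consumes the previous stage's output."""
    stages: list[Callable[[dict], dict]] = field(default_factory=list)

    def add(self, stage: Callable[[dict], dict]) -> "Pipeline":
        self.stages.append(stage)
        return self

    def run(self, workload: dict) -> dict:
        result = workload
        for stage in self.stages:
            result = stage(result)  # one stage's output feeds the next
        return result

# Integrated configuration: all stages wired into a single flow.
integrated = Pipeline().add(migrate).add(transform).add(validate).add(execute)
print(integrated.run({"name": "sales_edw"}))

# Individual operation: a single stage run on its own, as required.
standalone = Pipeline().add(validate)
print(standalone.run({"name": "sales_edw"}))
```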
The easy-to-use interface also lets you perform specific tasks through individual transformations: drag and drop the required transformation, configure it with the required workloads, execute the pipeline, and review the reports.