The inputs for each target type are described below.

Target Type: Spark
- In Output Type, the default output type for the transformation is set to Python.
- In Source Database Connection, select the source database from which the data is loaded, such as Oracle, SQL Server, Teradata, or Netezza.
- In Default Database, select the default database to use for queries whose database type is not defined in the uploaded artifacts. If you select Not Sure, only the queries whose database type is available are converted.
- In Attainable Automation, select how the system calculates the achievable automation for transforming the source scripts.
  - Assessment-Based: Calculates the level of automation based on assessment logic. The conversion-config.json file contains a pre-defined automation percentage for each component, which you can modify as required.
  - Transformation-Based: Calculates the level of automation based on the actual conversion. In this method, the automation percentage for each component is calculated from its used, supported, and unsupported properties.
- In File Base Path, specify the base path for input files and output data.
- In Artifacts Location, specify the location from which external files, such as parameter files and orchestration scripts, are called.
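For Assessment-Based automation, the pre-defined percentages live in conversion-config.json. As a rough illustration only, such a file might look like the sketch below; the component names, keys, and values here are assumptions, not the product's actual schema:

```json
{
  "attainableAutomation": {
    "mode": "assessment-based",
    "components": [
      { "name": "SELECT", "automationPercent": 90 },
      { "name": "JOIN", "automationPercent": 85 },
      { "name": "STORED_PROCEDURE", "automationPercent": 60 }
    ]
  }
}
```

Editing a component's percentage in this file would adjust how much automation the assessment reports for that component.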
Target Type: AWS Glue Studio
- In Source Database Connection, select the source database from which the data is loaded, such as Oracle or SQL Server.
- In Default Database, select the default database to use for queries whose database type is not defined in the uploaded artifacts. If you select Not Sure, only the queries whose database type is available are converted.
- In Attainable Automation, select how the system calculates the achievable automation for transforming the source scripts.
  - Assessment-Based: Calculates the level of automation based on assessment logic. The conversion-config.json file contains a pre-defined automation percentage for each component, which you can modify as required.
  - Transformation-Based: Calculates the level of automation based on the actual conversion. In this method, the automation percentage for each component is calculated from its used, supported, and unsupported properties.
- In Artifacts Location, specify the location from which external files, such as parameter files and orchestration scripts, are called.
- In S3 Bucket Base Path, provide the path to the S3 storage repository where the source and target files are stored.
Target Type: AWS Glue Job
- In Source Database Connection, select the source database from which the data is loaded, such as Oracle, SQL Server, Teradata, or Netezza.
- In Default Database, select the default database to use for queries whose database type is not defined in the uploaded artifacts. If you select Not Sure, only the queries whose database type is available are converted.
- In Attainable Automation, select how the system calculates the achievable automation for transforming the source scripts.
  - Assessment-Based: Calculates the level of automation based on assessment logic. The conversion-config.json file contains a pre-defined automation percentage for each component, which you can modify as required.
  - Transformation-Based: Calculates the level of automation based on the actual conversion. In this method, the automation percentage for each component is calculated from its used, supported, and unsupported properties.
- In Artifacts Location, specify the location from which external files, such as parameter files and orchestration scripts, are called.
- In S3 Bucket Base Path, provide the path to the S3 storage repository where the source and target files are stored.
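One plausible reading of Transformation-Based automation is that a component's percentage is the share of its used properties that are supported. The formula below is an assumption for illustration, not the product's documented logic:

```python
# Hedged sketch: transformation-based automation as the supported share of
# used properties. This formula is an assumption, not the product's exact logic.
def automation_pct(used: int, supported: int) -> float:
    """Return the percentage of used properties that are supported."""
    if used == 0:
        return 100.0  # nothing used, so nothing needs conversion
    return round(100.0 * supported / used, 2)

# Example: a component uses 40 properties, of which 30 are supported.
print(automation_pct(used=40, supported=30))  # 75.0
```

Under this reading, unsupported properties (used minus supported) are what lower the achievable automation for a component.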
Target Type: Databricks Lakehouse
- In Source Database Connection, select the source database from which the data is loaded, such as Oracle, SQL Server, or Teradata.
- In DBFS File Base Path, specify the DBFS (Databricks File System) location from which input files are fetched and to which the transformed data is stored. In other words, it is the base path for input files and output data.
- In Default Database, select the default database to use for queries whose database type is not defined in the uploaded artifacts. If you select Not Sure, only the queries whose database type is available are converted.
- In Attainable Automation, select how the system calculates the achievable automation for transforming the source scripts.
  - Assessment-Based: Calculates the level of automation based on assessment logic. The conversion-config.json file contains a pre-defined automation percentage for each component, which you can modify as required.
  - Transformation-Based: Calculates the level of automation based on the actual conversion. In this method, the automation percentage for each component is calculated from its used, supported, and unsupported properties.
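The base-path fields above (DBFS File Base Path, and likewise S3 Bucket Base Path and File Base Path for the other targets) act as a prefix that input and output locations are resolved against. A minimal sketch, with a hypothetical base path and file names:

```python
# Sketch: how a base path prefixes input and output locations.
# "dbfs:/mnt/etl" and the file names are illustrative assumptions only.
from posixpath import join

base_path = "dbfs:/mnt/etl"                          # hypothetical DBFS File Base Path
input_file = join(base_path, "input", "orders.csv")  # where an input file is fetched from
output_dir = join(base_path, "output", "orders")     # where transformed data is stored

print(input_file)  # dbfs:/mnt/etl/input/orders.csv
print(output_dir)  # dbfs:/mnt/etl/output/orders
```

The same pattern applies with an `s3://bucket/prefix` base path for the AWS Glue targets.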
Target Type: Databricks Notebook
- In Output Type, the default output type for the transformation is set to Spark Scala.
- In Source Database Connection, select the source database from which the data is loaded, such as Oracle or SQL Server.
- In DBFS File Base Path, specify the DBFS (Databricks File System) location from which input files are fetched and to which the transformed data is stored. In other words, it is the base path for input files and output data.
- In Default Database, select the default database to use for queries whose database type is not defined in the uploaded artifacts. If you select Not Sure, only the queries whose database type is available are converted.
- In Attainable Automation, select how the system calculates the achievable automation for transforming the source scripts.
  - Assessment-Based: Calculates the level of automation based on assessment logic. The conversion-config.json file contains a pre-defined automation percentage for each component, which you can modify as required.
  - Transformation-Based: Calculates the level of automation based on the actual conversion. In this method, the automation percentage for each component is calculated from its used, supported, and unsupported properties.