The inputs required for each target type are listed below.

**Target Type: Spark**
- In Output Type, select Python 2 or Python 3 Jobs as the output format for the generated artifacts.
- In Source Database Connection, select the source database from which the data is loaded, such as Oracle, SQL Server, Teradata, or Netezza.
- In Attainable Automation, select how the system calculates the achievable automation for transforming the source scripts.
  - Assessment-Based: Calculates the level of automation based on assessment logic. The conversion-config.json file contains a predefined automation percentage for each component, which you can modify as required.
  - Transformation-Based: Calculates the level of automation based on the actual conversion. In this method, the automation percentage for each component is derived from its used, supported, and unsupported properties (a sketch of such a calculation follows this list).
- In File Base Path, specify the base path for input files and output data.
- In Artifacts Location, specify the location from which external files, such as parameter files and orchestration scripts, are called.
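The documentation does not spell out the Transformation-Based formula, but as a rough illustration, the percentage could be derived from how many of a component's used properties the converter supports. The Python sketch below is hypothetical; the function and property names are assumptions, not the tool's actual logic.

```python
# Hypothetical sketch of a Transformation-Based automation calculation.
# The tool's real formula is not documented here; this only illustrates
# how used, supported, and unsupported properties could combine.

def automation_percentage(used: set[str], supported: set[str]) -> float:
    """Share of a component's used properties that the converter supports."""
    if not used:
        return 100.0  # nothing used, nothing to convert
    unsupported = used - supported
    return 100.0 * (len(used) - len(unsupported)) / len(used)

# Example: a component uses 8 properties, 6 of which are supported.
used = {f"prop_{i}" for i in range(8)}
supported = {f"prop_{i}" for i in range(6)}
print(f"{automation_percentage(used, supported):.1f}% automation")  # 75.0%
```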
**Target Type: AWS Glue Studio**
- In Source Database Connection, select the source database from which the data is loaded, such as Oracle or SQL Server.
- In Attainable Automation, select how the system calculates the achievable automation for transforming the source scripts.
  - Assessment-Based: Calculates the level of automation based on assessment logic. The conversion-config.json file contains a predefined automation percentage for each component, which you can modify as required (a sketch of such an edit follows this list).
  - Transformation-Based: Calculates the level of automation based on the actual conversion. In this method, the automation percentage for each component is derived from its used, supported, and unsupported properties.
- In S3 Bucket Base Path, specify the S3 bucket path where the source and target files are stored.
- In Artifacts Location, specify the location from which external files, such as parameter files and orchestration scripts, are called.
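For the Assessment-Based option, only the conversion-config.json file name comes from the documentation; its internal structure is not shown here. The snippet below is a minimal sketch, assuming a per-component percentage map, of how such a file could be edited before an assessment run.

```python
import json

# Hypothetical structure for conversion-config.json; only the file name
# comes from the documentation, the keys below are assumptions.
config = {
    "assessment": {
        "automation": {
            "join": 90,        # predefined automation % per component
            "lookup": 75,
            "stored_procedure": 60,
        }
    }
}

# Adjust a component's predefined percentage before running the assessment.
config["assessment"]["automation"]["stored_procedure"] = 70

with open("conversion-config.json", "w") as f:
    json.dump(config, f, indent=2)
```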
**Target Type: AWS Glue Job**
- In Source Database Connection, select the source database from which the data is loaded, such as Oracle, SQL Server, Teradata, or Netezza.
- In Attainable Automation, select how the system calculates the achievable automation for transforming the source scripts.
  - Assessment-Based: Calculates the level of automation based on assessment logic. The conversion-config.json file contains a predefined automation percentage for each component, which you can modify as required.
  - Transformation-Based: Calculates the level of automation based on the actual conversion. In this method, the automation percentage for each component is derived from its used, supported, and unsupported properties.
- In S3 Bucket Base Path, specify the S3 bucket path where the source and target files are stored (an illustrative job skeleton using this path follows this list).
- In Artifacts Location, specify the location from which external files, such as parameter files and orchestration scripts, are called.
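To make the S3 Bucket Base Path concrete, here is a minimal sketch of what a Glue job reading from and writing under that path might look like. The S3_BASE_PATH job parameter and the source/target folder layout are assumptions; only the use of a shared S3 base path comes from the field description above.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# JOB_NAME is supplied by Glue; S3_BASE_PATH is a hypothetical job parameter
# standing in for the S3 Bucket Base Path configured above.
args = getResolvedOptions(sys.argv, ["JOB_NAME", "S3_BASE_PATH"])

glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read source files from, and write target files under, the shared base path.
df = glue_context.spark_session.read.parquet(f"{args['S3_BASE_PATH']}/source/orders")
df.write.mode("overwrite").parquet(f"{args['S3_BASE_PATH']}/target/orders")

job.commit()
```

Passing the base path as a job parameter rather than hard-coding it keeps the script independent of any one bucket.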
**Target Type: Databricks Lakehouse**
- In Source Database Connection, select the source database from which the data is loaded, such as Oracle or SQL Server.
- In Attainable Automation, select how the system calculates the achievable automation for transforming the source scripts.
  - Assessment-Based: Calculates the level of automation based on assessment logic. The conversion-config.json file contains a predefined automation percentage for each component, which you can modify as required.
  - Transformation-Based: Calculates the level of automation based on the actual conversion. In this method, the automation percentage for each component is derived from its used, supported, and unsupported properties.
- In DBFS File Base Path, specify the DBFS (Databricks File System) location from which input files are fetched and to which transformed data is written; that is, the base path for input files and output data (an example read and write against this path follows this list).
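As an illustration of the DBFS File Base Path, the following PySpark sketch fetches input files from, and writes transformed output under, an assumed base path; the dbfs:/mnt/migration value and the input/output subfolders are hypothetical.

```python
from pyspark.sql import SparkSession

# Hypothetical DBFS base path; substitute the value entered in
# DBFS File Base Path. The input/output subfolders are assumptions.
BASE_PATH = "dbfs:/mnt/migration"

spark = SparkSession.builder.getOrCreate()

# Fetch input files from the base path...
df = spark.read.option("header", True).csv(f"{BASE_PATH}/input/customers.csv")

# ...and store the transformed data back under it.
df.dropDuplicates().write.mode("overwrite").parquet(f"{BASE_PATH}/output/customers")
```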
**Target Type: Databricks Notebook**
- In Output Type, select Spark Scala as the output format for the generated artifacts.
- In Source Database Connection, select the source database from which the data is loaded, such as Oracle or SQL Server.
- In Attainable Automation, select how the system calculates the achievable automation for transforming the source scripts.
  - Assessment-Based: Calculates the level of automation based on assessment logic. The conversion-config.json file contains a predefined automation percentage for each component, which you can modify as required.
  - Transformation-Based: Calculates the level of automation based on the actual conversion. In this method, the automation percentage for each component is derived from its used, supported, and unsupported properties.
- In DBFS File Base Path, specify the DBFS (Databricks File System) location from which input files are fetched and to which transformed data is written; that is, the base path for input files and output data (a parameterized example follows this list).
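Since the generated Databricks Notebook artifacts are Spark Scala, a Scala notebook would normally consume this path; to keep the examples in one language, the sketch below shows the same pattern in Python (dbutils.widgets behaves identically in Scala). The base_path widget is a hypothetical convention for injecting the DBFS File Base Path into a notebook, not something the tool is documented to generate.

```python
# Inside a Databricks notebook. dbutils and spark are provided by the
# runtime; the "base_path" widget name is a hypothetical convention.
dbutils.widgets.text("base_path", "dbfs:/mnt/migration")
base_path = dbutils.widgets.get("base_path")

# Use the configured DBFS File Base Path for reads and writes.
df = spark.read.parquet(f"{base_path}/input/sales")
df.write.mode("overwrite").parquet(f"{base_path}/output/sales")
```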