Target Type | Input |
Spark |
- To convert the sequence jobs to their Airflow equivalents and generate the corresponding Python artifacts, turn on the Convert Sequence Jobs to Airflow toggle. If this toggle is off, the sequence jobs are converted to equivalent Spark jobs.
- In Output Type, select Python 2 or Python 3 as the output format for the generated artifacts.
- To perform syntax validation, turn on the Validation toggle.
- In Source Data Source, select the data source (DDL) that contains the corresponding metadata to ensure accurate query conversion.
- In Target Data Source, select the target data source to perform syntax validation.
|
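When sequence jobs are converted to Airflow, the generated Python artifacts encode each step's upstream dependencies as a directed acyclic graph. As a rough, standard-library illustration of that mapping (not the tool's actual output; the step names below are hypothetical):

```python
from graphlib import TopologicalSorter

# Hypothetical sequence-job steps mapped to their upstream dependencies,
# the same dependency structure a generated Airflow DAG would declare.
sequence_job = {
    "extract_orders": set(),       # no upstream steps
    "extract_customers": set(),
    "transform_sales": {"extract_orders", "extract_customers"},
    "load_warehouse": {"transform_sales"},
}

# static_order() yields a valid execution order, analogous to how the
# Airflow scheduler resolves task dependencies at run time.
execution_order = list(TopologicalSorter(sequence_job).static_order())
print(execution_order)
```

Both extract steps are guaranteed to run before the transform, and the load step always comes last.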
Snowflake |
- To perform syntax validation, turn on the Validation toggle.
- In Source Data Source, select the data source (DDL) that contains the corresponding metadata to ensure accurate query conversion.
- In Target Data Source, select the target data source to perform syntax validation.
|
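Syntax validation against a target data source typically amounts to asking the target engine to parse a statement without executing it; on Snowflake, one common technique is to prepend EXPLAIN. The product's Validation toggle may use a different mechanism internally, but the idea can be sketched as:

```python
def to_validation_query(sql: str) -> str:
    """Wrap a converted statement in EXPLAIN so the target engine
    parses and compiles it without executing it (one common
    validation technique; illustrative only)."""
    return f"EXPLAIN {sql.strip().rstrip(';')}"

# Hypothetical usage: submit the wrapped query to the target data
# source and treat a compilation error as a validation failure.
print(to_validation_query("SELECT * FROM orders;"))
```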
Delta Live Tables |
Databricks Notebook |
- In Output Type, select Python 3 or Jupyter as the output format for the generated artifacts.
- To convert the DataStage sequence jobs to their Databricks Workflows equivalents and generate the corresponding JSON artifacts, turn on the Convert Sequence Jobs to Databricks Workflows toggle.
- To perform syntax validation, turn on the Validation toggle.
- In Source Data Source, select the data source (DDL) that contains the corresponding metadata to ensure accurate query conversion.
- In Target Data Source, select the target data source to perform syntax validation.
|
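With the Convert Sequence Jobs to Databricks Workflows toggle on, the output is a JSON job definition. The exact schema the tool emits is product-specific, but a minimal sketch in the shape of the Databricks Jobs API (task keys, notebook tasks, dependencies; all names below are hypothetical) might look like:

```python
import json

# Hypothetical Databricks Workflows job definition in the shape of the
# Databricks Jobs API; task keys and notebook paths are made up.
workflow = {
    "name": "converted_sequence_job",
    "tasks": [
        {
            "task_key": "stage_data",
            "notebook_task": {"notebook_path": "/Converted/stage_data"},
        },
        {
            "task_key": "load_target",
            "depends_on": [{"task_key": "stage_data"}],
            "notebook_task": {"notebook_path": "/Converted/load_target"},
        },
    ],
}

# The generated JSON artifact would serialize this structure.
artifact = json.dumps(workflow, indent=2)
```

Each sequence-job trigger becomes a `depends_on` edge between tasks, so the workflow preserves the original execution order.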
Databricks Lakehouse |
- In Output Type, select Python or DBT as the output format for the generated artifacts.
- To perform syntax validation, turn on the Validation toggle.
- In Source Data Source, select the data source (DDL) that contains the corresponding metadata to ensure accurate query conversion.
- In Target Data Source, select the target data source to perform syntax validation.
|
AWS Glue Studio |
- In Database Name, Schema Name, and Prefix, provide the database name, schema name, and prefix, respectively. If a prefix is provided, the table name displays in the prefix_database_tablename format.
- In AWS Glue Catalog Database, provide the AWS Glue catalog database connection details to connect to the database and schema.
- In S3 Bucket Base Path, specify the S3 storage repository path to store the files.
- In UDF File Location and UDF Jar Location, specify the paths to the UDF file and Jar, respectively, to define the new UDF location.
- In Target Connection Name, provide the connection name to add the predefined connection to Glue.
- To perform syntax validation, turn on the Validation toggle.
- In Source Data Source, select the data source (DDL) that contains the corresponding metadata to ensure accurate query conversion.
- In Target Data Source, select the target data source to perform syntax validation.
|
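The prefix naming rule above can be illustrated with a small helper (hypothetical, for clarity only; the behavior when no prefix is supplied is not specified here, so this sketch falls back to the bare table name):

```python
def glue_table_name(database: str, table: str, prefix: str = "") -> str:
    """Illustrative only: apply the prefix_database_tablename naming
    rule described for AWS Glue Studio targets."""
    if prefix:
        return f"{prefix}_{database}_{table}"
    # Unprefixed behavior is an assumption, not documented above.
    return table
```

For example, prefix `stg` with database `sales` and table `orders` yields `stg_sales_orders`.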
AWS Glue Job |
Matillion ETL |
- In Output Type, select JSON as the output format for the generated artifacts.
- To perform syntax validation, turn on the Validation toggle.
- In Source Data Source, select the data source (DDL) that contains the corresponding metadata to ensure accurate query conversion.
- In Target Data Source, select the target data source to perform syntax validation.
|