| Source | Target | Input |
|--------|--------|-------|
| Oracle | Amazon Redshift | Select the Output Type as Python. |
| | Amazon Aurora PostgreSQL | Select the Output Type as SQL. |
| | Databricks Lakehouse | Select the Output Type as Python 3. |
| | Databricks Notebook | Select the Output Type as Python. |
| | Google BigQuery | Select the Output Type as Python. |
| | Hive | Select the Output Type as Java, Scala, or Python. |
| | Spark | Select the Output Type as Java, Scala, Python 2, or Python 3. |
| | Snowflake | Select the Output Type as Python or Snowflake Scripting. |
| Teradata | Amazon Redshift | Select the Output Type as Python or RSQL. |
| | Databricks Lakehouse | Select the Output Type as Python 3. |
| | Databricks Notebook | Select the Output Type as Python. |
| | Google BigQuery | Select the Output Type as Python. |
| | Hive | Select the Output Type as Java, Scala, HPL/SQL, or Python. |
| | Spark | Select the Output Type as Java, Scala, Python 2, or Python 3. |
| | Snowflake | Select the Output Type as Python or Snowflake Scripting. |
| DataStage | AWS Glue Job | - Provide the Database Name, Schema Name, and Prefix.<br>- In AWS Glue Catalog Database, provide the Glue Catalog database connection details used to connect to the database and schema.<br>- In S3 Bucket Base Path, provide the S3 path where the source and target files are stored (see the first sketch after this table). |
| | Databricks Lakehouse | Select the Output Type as Python or DBT. |
| | Databricks Notebook | Nil |
| | Delta Live Tables | Nil |
| | Spark | Select the Output Type as Python 2 or Python 3. |
| | Snowflake | Nil |
| Informatica | AWS Glue Studio | - To read from the required databases, turn on the Is JDBC Required toggle and then select them (for example, Oracle or SQL Server).<br>- Provide the Database Name, Schema Name, and Prefix.<br>- In AWS Glue Catalog Database, provide the Glue Catalog database connection details used to connect to the database and schema.<br>- In S3 Bucket Base Path, provide the S3 path where the source and target files are stored (see the first sketch after this table). |
| | Databricks Lakehouse | Nil |
| | Databricks Notebook | Nil |
| | Delta Live Tables | Nil |
| | Spark | - Enable the Spark-Native: External toggle and then select the databases to fetch input data from an external source (such as Oracle, Netezza, or Teradata), process that data in Spark, and move the output data to an external target (see the JDBC sketch after this table).<br>- Select the Output Type as Python 2 or Python 3. |
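For the AWS Glue Job and AWS Glue Studio targets, the AWS Glue Catalog Database and S3 Bucket Base Path values must point at resources that already exist. The sketch below is a hypothetical pre-flight check, not part of the product: it verifies both with boto3, and the database name and bucket path are placeholders standing in for whatever you enter in those fields.

```python
# Hypothetical pre-flight check for the AWS Glue Job / AWS Glue Studio targets:
# confirms the Glue Catalog database exists and the S3 bucket base path is
# reachable before the conversion runs. All names below are placeholders.
import boto3

GLUE_CATALOG_DATABASE = "sales_db"          # value entered in AWS Glue Catalog Database
S3_BASE_PATH = "s3://my-etl-bucket/leap/"   # value entered in S3 Bucket Base Path


def check_glue_inputs(database: str, s3_base_path: str) -> None:
    glue = boto3.client("glue")
    s3 = boto3.client("s3")

    # Raises glue.exceptions.EntityNotFoundException if the database is missing.
    glue.get_database(Name=database)

    # Split "s3://bucket/prefix" into bucket and prefix, then probe the bucket.
    bucket, _, prefix = s3_base_path.removeprefix("s3://").partition("/")
    s3.list_objects_v2(Bucket=bucket, Prefix=prefix, MaxKeys=1)


if __name__ == "__main__":
    check_glue_inputs(GLUE_CATALOG_DATABASE, S3_BASE_PATH)
    print("Glue Catalog database and S3 base path are reachable.")
```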
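The Spark-Native: External toggle describes a read-process-write pattern: fetch input over JDBC from an external database, transform it in Spark, and land the result in an external target. A minimal PySpark sketch of that pattern follows; the connection URLs, credentials, and table names are placeholders, and the code the conversion actually generates will differ.

```python
# Minimal PySpark sketch of the external-source pattern behind the
# Spark-Native: External toggle. All connection values are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("external-jdbc-demo").getOrCreate()

# Read input data from an external Oracle source over JDBC.
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:oracle:thin:@dbhost:1521/ORCLPDB1")  # placeholder URL
    .option("dbtable", "SALES.ORDERS")
    .option("user", "etl_user")
    .option("password", "***")
    .option("driver", "oracle.jdbc.OracleDriver")
    .load()
)

# Process the data in Spark.
daily_totals = orders.groupBy("ORDER_DATE").sum("AMOUNT")

# Move the output data to an external target (here, another JDBC database).
(
    daily_totals.write.format("jdbc")
    .option("url", "jdbc:postgresql://whhost:5432/warehouse")  # placeholder URL
    .option("dbtable", "public.daily_totals")
    .option("user", "wh_user")
    .option("password", "***")
    .mode("overwrite")
    .save()
)
```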
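For targets such as Databricks Lakehouse or Databricks Notebook, selecting Python or Python 3 as the Output Type yields PySpark code that runs on the target. Purely as an illustration, and not the tool's actual generated output, a converted workload on Databricks might look like the following; the table names are hypothetical.

```python
# Illustration only: a Python 3 cell of the kind a converted workload might
# run on a Databricks target. Table names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Read the migrated source table from the Lakehouse catalog.
orders = spark.table("bronze.orders")

# Apply the translated transformation logic.
cleaned = orders.filter(F.col("amount") > 0)

# Persist the result as a managed Delta table.
cleaned.write.format("delta").mode("overwrite").saveAsTable("silver.orders_clean")
```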