Azure Data Factory Assessment Report

This topic contains information about the Azure Data Factory assessment report. The assessment analyzes your workloads and produces in-depth insights that help you plan the migration. The Azure Data Factory assessment accepts only ZIP files (ARM templates) as input.

To learn how to export an ARM template from the Azure portal, click Export ARM Template.
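Because the assessment consumes the exported ZIP directly, it can help to inspect the archive before uploading it. The Python sketch below pulls the factory template JSON out of an exported archive; the member name ARMTemplateForFactory.json matches the file name referenced later in this topic, but verify it against your own export.

```python
import json
import zipfile

def load_arm_template(zip_path: str) -> dict:
    """Read the factory ARM template JSON out of an exported ZIP.
    'ARMTemplateForFactory.json' is the usual member name in ADF
    exports; verify it against your archive's contents."""
    with zipfile.ZipFile(zip_path) as zf:
        member = next(n for n in zf.namelist()
                      if n.endswith("ARMTemplateForFactory.json"))
        with zf.open(member) as fh:
            return json.load(fh)
```

For example, `load_arm_template("adf_export.zip")` returns the parsed template as a dictionary, ready for the kind of inventory counting the assessment performs.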

In This Topic:

  • Highlights
    • Summary
    • Pipelines
    • Queries
  • Analysis
    • Files
    • Pipelines
    • Resources
  • Downloadable Reports
    • Insights and Recommendations
    • Source Inventory Analysis

Highlights

The highlights section gives you a high-level overview of the analytics performed on the selected workloads. It includes information about resource types and pipelines.

Summary

This section summarizes the source scripts and the associated workload inventory. It includes information about pipelines, activities, resources, procedures, and so on.

  • Datasets: Displays the number of datasets present in the source files.
  • Triggers: Displays the number of triggers present in the source files.
  • Dataflows: Displays the number of data flows present in the source files. In ADF, data flow is used to visually design and perform complex data transformations within pipelines.
  • Pipelines: Displays the number of pipelines present in the source files. A pipeline is a logical grouping of activities used to accomplish a task or workflow.
  • Linked Services: Displays the number of linked services present in the input source files.
  • Activities: Displays the number of activities in the source files. Each pipeline can contain multiple activities. Activities are primary components that specify actions such as validating data, deleting data, obtaining metadata, etc.
  • Files: Displays the total number of input files.
  • Entity: Displays the number of entities used in the datasets.
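The counts above are derived from the resources declared in the ARM template. As an illustration (not LeapLogic's actual implementation), the following Python sketch tallies resources by the last segment of their `type` field; the template fragment is hypothetical.

```python
from collections import Counter

def count_adf_resources(arm_template: dict) -> Counter:
    """Tally ARM-template resources by their short type name (the last
    segment of e.g. 'Microsoft.DataFactory/factories/pipelines')."""
    counts = Counter()
    for resource in arm_template.get("resources", []):
        kind = resource.get("type", "").rsplit("/", 1)[-1]
        counts[kind] += 1
    return counts

# Hypothetical template fragment for illustration:
template = {
    "resources": [
        {"type": "Microsoft.DataFactory/factories/pipelines", "name": "pl_load"},
        {"type": "Microsoft.DataFactory/factories/datasets", "name": "ds_src"},
        {"type": "Microsoft.DataFactory/factories/datasets", "name": "ds_tgt"},
        {"type": "Microsoft.DataFactory/factories/triggers", "name": "tr_daily"},
    ],
}

print(dict(count_adf_resources(template)))
# {'pipelines': 1, 'datasets': 2, 'triggers': 1}
```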

Pipelines

This section provides information about the total number of pipelines in the entire inventory along with the complexity breakup.

Queries

This section provides information about the total, unique, analyzed, and unanalyzed queries in the entire inventory.

  • Total Queries: Displays the total number of queries.
  • Unique Queries: Displays the number of unique queries.
  • Analyzed: Displays the number of analyzed queries.
  • Not Analyzed: Displays the number of unanalyzed queries.
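The total/unique distinction can be reproduced with a simple normalization step. Below is a minimal sketch; the normalization rule (lowercasing and trimming whitespace) is an assumption for illustration, not LeapLogic's actual matching logic.

```python
def query_stats(queries: list) -> dict:
    """Count total vs. unique queries, treating queries that differ only
    in case or surrounding whitespace as duplicates (assumed rule)."""
    normalized = [q.strip().lower() for q in queries]
    return {"total": len(normalized), "unique": len(set(normalized))}

stats = query_stats([
    "SELECT * FROM sales",
    "select * from sales",   # duplicate after normalization
    "DELETE FROM staging",
])
print(stats)  # {'total': 3, 'unique': 2}
```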

Analysis

This topic provides a detailed examination of the source files.

Files

This section provides a comprehensive report of the source files along with information about the total number of files, pipelines, activities, datasets, and so on.

  • File Name: Displays the file name, following the convention folder_name/file_name.
  • Pipelines: Displays the number of pipelines in the file. A pipeline is a set of activities that accomplishes a task.
  • Activities: Displays the number of activities. Each pipeline can contain multiple activities. Activities are primary components that specify actions such as validating data, deleting data, obtaining metadata, etc.
  • Datasets: Displays the number of datasets. A dataset is a collection of data used in various activities.
  • Procedures: Displays the number of procedures. A procedure is a set of SQL queries that performs an action or task.
  • Queries: Displays the number of queries.

Pipelines

This section provides detailed information about pipelines, including their activities, complexity, dependency conditions, and relative file paths.

  • Pipeline Name: Displays the name of the pipeline.
  • Relative File: Displays the relative file path on which the pipeline is available.
  • Activities: Displays the number of activities in each pipeline.
  • Complexity: Displays the pipeline complexity.
  • Dependency Conditions: Provides details about the resources on which the pipeline depends. For instance, a pipeline may depend on other pipelines, datasets, linked services, etc.

Browse through each pipeline to get more insights into the associated activities.

  • Activity Name: Displays the name of the activity.
  • Type: Displays the type of activity carried out within the pipeline, such as copying data, filtering data, getting metadata, etc.
  • Called Pipeline Name: Displays the name of the pipeline called by the activity.
  • Child Activity Count: Displays the number of child activities.
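Child activity counts come from container activities that nest other activities. The Python sketch below walks a pipeline's activity tree using the JSON shapes ADF pipelines commonly use (ForEach nests children under typeProperties.activities; IfCondition under ifTrueActivities/ifFalseActivities); the sample pipeline fragment is hypothetical.

```python
def iter_activities(activities):
    """Yield each activity, recursing into container activities
    (ForEach / IfCondition) that hold child activities."""
    for act in activities:
        yield act
        tp = act.get("typeProperties", {})
        # ForEach nests children under "activities"; IfCondition under
        # "ifTrueActivities" / "ifFalseActivities".
        for key in ("activities", "ifTrueActivities", "ifFalseActivities"):
            yield from iter_activities(tp.get(key, []))

pipeline = {  # hypothetical fragment for illustration
    "activities": [
        {"name": "LoopFiles", "type": "ForEach",
         "typeProperties": {"activities": [
             {"name": "CopyOne", "type": "Copy", "typeProperties": {}},
         ]}},
        {"name": "Cleanup", "type": "Delete", "typeProperties": {}},
    ]
}

names = [a["name"] for a in iter_activities(pipeline["activities"])]
print(names)  # ['LoopFiles', 'CopyOne', 'Cleanup']
```

Counting the children of a container activity is then just `len(list(iter_activities(...)))` minus the container itself.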

Resources

This section provides detailed information about various resource types such as datasets, linked services, triggers, and data flows. Resources are manageable services or entities, for instance, databases, storage accounts, virtual networks, etc.

Datasets

This section lists all the datasets. A dataset is a collection of data used in various activities.

  • Dataset Name: Displays the name of the dataset.
  • Type: Displays the dataset type.
  • Relative File: Displays the relative file path of the dataset.
  • Linked Service Name: Displays the name of the linked service associated with the dataset.
  • Schema: Displays the associated schema name.
  • Table: Displays the associated table name.

Linked Service

This section lists all the linked services. Linked services are connection strings that contain connection details such as databases, URLs, file paths, etc., to connect to different services (cloud, legacy data warehouses, etc.).

  • Linked Service Name: Displays the name of the linked service.
  • Type: Displays the type of the linked service.
  • Relative File: Displays the relative file path of the linked service.
  • Dependency: Provides details about the resources on which the linked service depends. For instance, a linked service may depend on datasets, other linked services, etc.

Trigger

This section lists all triggers. Triggers are used to execute a pipeline.

  • Trigger Name: Displays the name of the trigger.
  • Type: Displays the type of trigger. There are three types of triggers:
    • Schedule: Executes the pipeline on a predefined schedule.
    • BlobEvents: Executes the pipeline when a new or modified blob is detected in Azure Blob Storage.
    • Tumbling window: Executes the pipeline at fixed, non-overlapping periodic intervals, which can include past-dated (backfill) windows.
  • Relative File: Displays the relative file path of the trigger.
  • Pipeline: Displays the pipeline associated with each trigger.
  • Runtime State: Specifies the runtime state of each trigger, such as started or stopped.
  • Frequency: Displays the frequency such as hour, minute, etc., at which the trigger is scheduled to execute.
  • Interval: Provides the time interval at which the trigger is scheduled to execute.
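Frequency and interval are read from the trigger's recurrence settings. Below is a minimal sketch assuming the usual ARM-template trigger shape (properties.typeProperties.recurrence); field paths should be verified against your exported template, and the sample trigger is hypothetical.

```python
def trigger_summary(trigger: dict) -> dict:
    """Extract type, runtime state, frequency, and interval from a
    trigger resource (assumed ARM-template fragment shape)."""
    props = trigger.get("properties", {})
    rec = props.get("typeProperties", {}).get("recurrence", {})
    return {
        "name": trigger.get("name"),
        "type": props.get("type"),
        "runtimeState": props.get("runtimeState"),
        "frequency": rec.get("frequency"),
        "interval": rec.get("interval"),
    }

every_4_hours = {  # hypothetical trigger resource
    "name": "tr_every_4h",
    "properties": {
        "type": "ScheduleTrigger",
        "runtimeState": "Started",
        "typeProperties": {"recurrence": {"frequency": "Hour", "interval": 4}},
    },
}
print(trigger_summary(every_4_hours))
```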

Dataflows

This section lists all the data flows. In ADF, data flow is used to visually design and perform complex data transformations within pipelines.

  • File Name: Displays the name of the file that contains the data flow.
  • Dataflow Name: Displays the name of the data flow.
  • Source Datasets: Displays the source datasets associated with each data flow.
  • Sink Datasets: Displays the sink or target datasets associated with each data flow.
  • Transformation Name: Displays the transformations associated with each data flow.
  • Transformation Count: Displays the number of transformations associated with each data flow.


Downloadable Reports

Downloadable reports let you export detailed ADF assessment reports of your source data, enabling in-depth analysis offline. To access these assessment reports, click Reports.

Types of Reports

In the Reports section, you can find two types of reports: Insights and Recommendations, and Source Inventory Analysis. Each report type offers detailed information that lets you explore your assessment results.

Insights and Recommendations

This report provides in-depth insights into the source input files.

ADF Assessment Report.xlsx: This report provides insights about the source inventory and its pipelines.

This report contains the following information:

  • Report Summary: Provides information about all the generated artifacts.
  • Volumetric Info: Presents a summary of the aggregated inventory after analyzing the source files. For instance, it provides total number of pipelines, linked services, datasets, triggers, procedures, activities, and more.
  • Pipeline Summary: Lists all the pipelines associated with the input files. It also provides information about activities, pipeline-level complexity, components, and more.
  • Pipeline Dependency: Provides details of dependencies between pipelines and their associated resources, along with the dependent resource type.
  • Trigger Details: Lists all the triggers including their frequency, interval, scheduled start time, end time, and more.
  • Linked Service Details: Lists all the linked services (contains connection metadata details such as databases, URLs, file paths, etc., to connect to different services) along with their type and dependency details.
  • Dataset Details: Lists all the datasets along with their type, associated schema, tables, linked services, and likewise.
  • Dataflow Details: Provides information about dataflows, including the source dataset, target dataset, associated transformations, and more.
  • Activity Details: Lists all the activities along with their type, procedure, source and target folders, files, used datasets, and more.
  • Activity Dependency: Provides details of dependencies between activities including the dependent activity, activity type, and dependency condition. Dependent activities vary according to the Succeeded or Failed conditions defined in the input source file.

Source Inventory Analysis

This is an intermediate report that helps in debugging failures and is used to compute the final report. It includes all the generated CSV reports. Here, you can view the folders—activity, dataflows, datasets, file_level, linkedServices, pipelines, and triggers—along with the adf_query_info.csv report.

Browse through the activity folder to access the activity_dependency.csv and activity_details.csv reports.

activity_dependency.csv: This report provides details about each activity, including its dependent activity, activity type, and other related data.

activity_details.csv: This report provides detailed information about activities, including their type, procedure, called pipeline, source and target folders, tables, and more.

Browse through the dataflows folder to access the dataflow_details.csv report.

dataflow_details.csv: This report provides detailed information about data flows, including their source and target datasets, transformations, dependencies, and other related data.

Browse through the datasets folder to access the dataset_details.csv report.

dataset_details.csv: This report provides detailed information about datasets, including type, container, linked service, and more.

Browse through the file_level folder to access the ARMTemplateForFactory_params.csv report.

ARMTemplateForFactory_params.csv: This report provides detailed information about parameters, including their type and default value.

Browse through the linkedServices folder to access the linked_service_details.csv report.

linked_service_details.csv: This report provides detailed information about linked services, including their type and dependency.

Browse through the pipelines folder to access the Pipeline Parameter.csv, pipeline_dependency.csv, and pipeline_summary.csv reports.

Pipeline Parameter.csv: This report provides detailed information about parameters used in the pipeline, including their type and default value.

pipeline_dependency.csv: This report provides information about pipelines, including their dependent resources and the types of those resources.

pipeline_summary.csv: This report provides detailed information about pipelines, including the total number of activities, their complexity, and the types of activities used, such as Delete, DatabricksSparkPython, IfCondition, ForEach, and more.

Browse through the triggers folder to access the trigger_details.csv report.

trigger_details.csv: This report provides detailed information about triggers, including their associated pipeline, frequency, interval, type, and more.

adf_query_info.csv: This report provides information about queries, including their type, complexity, parser type, impacted tables, and more.
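Because these intermediate reports are plain CSV files, they are easy to post-process. The sketch below tallies one column of a generated report; the column name complexity and the sample rows are assumptions for illustration, so check the actual header row of adf_query_info.csv.

```python
import csv
import io
from collections import Counter

def column_breakdown(csv_text: str, column: str = "complexity") -> Counter:
    """Tally rows of a generated CSV report by a given column.
    The default column name is an assumption; check the real header."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return Counter(row[column] for row in reader)

# Hypothetical sample rows:
sample = """query_id,complexity
q1,Simple
q2,Medium
q3,Simple
"""
print(column_breakdown(sample))  # Counter({'Simple': 2, 'Medium': 1})
```

The same helper works for any of the other CSV reports (for example, tallying trigger_details.csv by its type column) by passing a different column name.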


To learn more, contact our support team or write to: info@leaplogic.io

Copyright © 2025 Impetus Technologies Inc. All Rights Reserved
