Azure SQL Database Hyperscale Query Execution Logs Extraction and Source Code Prerequisites

This topic describes the prerequisites for extracting Azure SQL Database Hyperscale query execution logs and source code for assessment.

In This Topic:

  • Introduction
  • Assessment Process
  • Source Code Assessment

Introduction

LeapLogic's Assessment profiles the existing inventory, identifies workload complexity and interdependencies, and provides comprehensive, actionable recommendations for migration to a modern data platform.

Assessment checklist

Information | Format | Availability?
Database Object Count | File Export in text format | Yes/No
Database Volume | File Export in text format | Yes/No
High Data Volume Tables | File Export in text format | Yes/No
Total Number of Users | File Export in text format | Yes/No
Total Number of User Tables | File Export in text format | Yes/No
Objects | File Export in text format | Yes/No
Total I/O Usage by Database | File Export in text format | Yes/No
Top CPU and I/O Consuming Queries | File Export in text format | Yes/No
Top CPU and I/O Consuming Procedures | File Export in text format | Yes/No
Top CPU and I/O Consuming Functions | File Export in text format | Yes/No
Data for Partitioning and Bucketing | File Export in text format | Yes/No
Server Max & Avg CPU | File Export in text format | Yes/No
List of Databases Having Avg_cpu >= 70 | File Export in text format | Yes/No
Server Configuration | File Export in text format | Yes/No
Source Code Assessment | File Export in text format | Yes/No

Follow the steps in the Assessment Process and Source Code Assessment sections below to collect the required information for the assessment.


Assessment Process

All the environment-specific variables are highlighted in the document. The start and end dates should be consistent across all the queries.
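
For example, the query-log extraction scripts later in this topic declare the extraction window as two date parameters; a minimal sketch (the dates are illustrative, substitute your own window and reuse the same values in every script):

DECLARE @startdt VARCHAR(10)
DECLARE @enddt VARCHAR(10)
-- Illustrative window; keep it identical across all extraction queries
SET @startdt = '2022-01-01'
SET @enddt = '2022-12-31'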

Databases

This query provides the total number of databases. Execute it and save the results in CSV format.

select count (*) from sys.databases

Database Volume

This query provides the database volume in MBs. Execute it and save the results in CSV format.

DECLARE @Databases TABLE
(
  database_nm VARCHAR(50),
  log_write DECIMAL(8,2),
  row_size_mb DECIMAL(8,2),
  total_size_mb DECIMAL(8,2)
)
INSERT INTO @Databases
SELECT
  database_name = CAST(database_name AS VARCHAR(50))
, log_write_secs = max(avg_log_write_percent*cpu_limit)
, row_size_mb = max(storage_in_megabytes)
, total_size_mb = max(allocated_storage_in_megabytes)
FROM sys.resource_stats
WHERE sku = 'Hyperscale'
GROUP BY database_name

select * FROM @Databases

High Data Volume Tables

This query provides the result for tables with a high data volume. Execute it for all the databases that need to be migrated and save the results in CSV format.

DECLARE @UserTable TABLE
(
  database_name VARCHAR(100),
  Table_Name VARCHAR(500),
  Scheme_Name VARCHAR(50),
  Row_Count BIGINT,
  Space_Total_GB DECIMAL(10,2),
  Space_Used_GB DECIMAL(10,2),
  Space_UnUsed_GB DECIMAL(10,2)
)
INSERT INTO @UserTable
select
database_name = CAST(DB_NAME(DB_ID()) AS VARCHAR(50)),
t.NAME AS TableName,
s.Name AS SchemaName,
p.rows AS RowCounts,
CAST(ROUND(((SUM(a.total_pages) * 8) / 1024.00/1024.00), 2) AS NUMERIC(36, 2)) AS TotalSpaceGB,
CAST(ROUND(((SUM(a.used_pages) * 8) / 1024.00/1024.00), 2) AS NUMERIC(36, 2)) AS UsedSpaceGB,
CAST(ROUND(((SUM(a.total_pages) - SUM(a.used_pages)) * 8) / 1024.00/1024.00, 2) AS NUMERIC(36, 2)) AS UnusedSpaceGB
FROM
    sys.tables t
INNER JOIN
    sys.indexes i ON t.OBJECT_ID = i.object_id
INNER JOIN
    sys.partitions p ON i.object_id = p.OBJECT_ID AND i.index_id = p.index_id
INNER JOIN
    sys.allocation_units a ON p.partition_id = a.container_id
LEFT OUTER JOIN
    sys.schemas s ON t.schema_id = s.schema_id
WHERE
    t.is_ms_shipped = 0
    AND i.OBJECT_ID > 255
GROUP BY
    t.Name, s.Name, p.Rows
HAVING CAST(ROUND(((SUM(a.used_pages) * 8) / 1024.00/1024.00), 2) AS NUMERIC(36, 2)) >= 10
ORDER BY UsedSpaceGB

select * FROM @UserTable

This SQL collects tables with a used space of 10 GB or more. Execute this step for all the databases.

Note:

The filter condition can be changed to collect tables with lower volumes, as shown in the sketch below.
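
For instance, assuming you want tables of 1 GB or more instead of 10 GB, change only the HAVING clause of the query above:

HAVING CAST(ROUND(((SUM(a.used_pages) * 8) / 1024.00/1024.00), 2) AS NUMERIC(36, 2)) >= 1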

Users

This query provides the total number of users. Execute it for all the databases that need to be migrated and save the results in CSV format.

DECLARE @UserType TABLE
(
  database_name VARCHAR(50),
  UserType VARCHAR(50),
  Num INT
)
INSERT INTO @UserType
select
database_name = CAST(DB_NAME(DB_ID()) AS VARCHAR(50)),
type, count(*)
from sys.database_principals
group by type

select * FROM @UserType

User Tables

This query provides the total number of user tables. Execute it for all the databases and save the results in CSV format.

DECLARE @UserTable TABLE
(
  database_name VARCHAR(100),
  Table_Name VARCHAR(500),
  Table_Type VARCHAR(50),
  Create_Date DATETIME,
  ModifiedDate DATETIME
)
INSERT INTO @UserTable
select
database_name = CAST(DB_NAME(DB_ID()) AS VARCHAR(50)),
Name, type_desc, create_date, modify_date
from sys.tables

select * FROM @UserTable

Objects

This query provides all the objects in a database. Execute it for all the databases that need to be migrated and save the results in CSV format.

The results can be restricted to exclude system objects such as SYSTEM_TABLE, INTERNAL_TABLE, etc.

E.g., where type_desc not in ('SYSTEM_TABLE','INTERNAL_TABLE','SERVICE_QUEUE')

DECLARE @UserTable TABLE
(
  database_name VARCHAR(100),
  Obj_Name VARCHAR(500),
  obj_Type VARCHAR(50),
  Create_Date DATETIME,
  ModifiedDate DATETIME
)
INSERT INTO @UserTable
select
database_name = CAST(DB_NAME(DB_ID()) AS VARCHAR(50)),
Name, type_desc, create_date, modify_date
from sys.objects

select * FROM @UserTable

Total I/O Usage by Database

This query provides the total I/O usage by databases. Execute it and save the results in CSV format.

SELECT Name AS Database_Name
, SUM(num_of_reads) AS Number_of_Reads
, SUM(num_of_writes) AS Number_of_Writes
, (SUM(num_of_reads) + SUM(num_of_writes)) AS TotalIO
FROM sys.dm_io_virtual_file_stats(NULL, NULL) I
INNER JOIN sys.databases D ON I.database_id = D.database_id
GROUP BY name
ORDER BY TotalIO desc

Top CPU and I/O Consuming Queries

This query provides top CPU and I/O consuming queries. Execute it and save the results in CSV format.

DECLARE @startdt VARCHAR(10)
DECLARE @enddt VARCHAR(10)
-- Provide start and end date
SET @startdt = '2017-03-05'
SET @enddt = '2022-04-05'
select CONCAT('IDWWM', '~~',
    sess.original_login_name,'~~',
    AppId,'~~',
    sess.client_interface_name,'~~',
    convert(VARCHAR,qs.last_execution_time,120),'~~',
    (total_worker_time/execution_count)/1000000.0,'~~',
    (total_logical_reads+total_physical_reads+total_logical_writes)/execution_count,'~~',
    0,'~~',
    convert(VARCHAR,(qs.last_execution_time + qs.total_elapsed_time/86400000000),120),'~~',
    convert(VARCHAR,qs.last_execution_time,120),'~~',
    0,'~~',
    row_number() over (ORDER BY total_worker_time/execution_count desc),'~~',
    (total_worker_time/execution_count)/1000000.0,'~~',
    (total_logical_reads+total_physical_reads+total_logical_writes),'~~',
    tb4.ampcputime,'~~',
    tb4.TotalIOCount,'~~',
    qs.total_elapsed_time/1000000.0,'~~',
    OBJECT_SCHEMA_NAME(st.objectid,st.dbid),'~~',
    db.name,'~~',
    REPLACE(CAST(text as NVARCHAR(MAX)), CHAR(10), ' '),'~~',
    execution_count
    )
from sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.plan_handle) st
INNER JOIN
(
SELECT DISTINCT
a.last_execution_time,
SUM(total_worker_time)/1000000.0 as AMPCPUTIME,
(SUM(total_logical_reads)+sum(total_physical_reads)+sum(total_logical_writes)) AS TOTALIOCOUNT
FROM
sys.dm_exec_query_stats a
WHERE
cast(a.last_execution_time as date) BETWEEN CAST(@startdt+' 00:00:00:00' as DATETIME) AND CAST(@enddt+' 00:00:00:00' AS DATETIME)
GROUP BY a.last_execution_time
) TB4 ON TB4.last_execution_time = qs.last_execution_time
inner join sys.databases db
on (db.database_id = st.dbid)
LEFT JOIN (
SELECT DISTINCT es.program_name as AppId, es.client_interface_name, es.original_login_name, sp.dbid
FROM sys.dm_exec_sessions es inner join sys.sysprocesses sp
on es.session_id = sp.spid
) AS Sess
ON Sess.dbid = st.dbid;

Change the values of the start and end date parameters at the beginning of the script. Save the output of the above query in CSV format if you are executing it from an editor such as SSMS. If the log size is too large, use the alternative method (see below) to generate the output.

SQL_Server_query_log

Copy the attached SQL script (SQL_Server_query_log.sql) to an appropriate location. Change the start and end dates as required. Execute it using the command below from the Windows command line and share the output file.

This method can be used to execute any SQL command through the command prompt.

sqlcmd -S SQL_Server_Name -U username -P password -i ./SQL_Server_query_log.sql -o output_file.log

Top CPU and I/O Consuming Procedures

This query provides top CPU and I/O consuming procedures.

DECLARE @startdt VARCHAR(10)
DECLARE @enddt VARCHAR(10)
-- Provide start and end date
SET @startdt = '2019-03-05'
SET @enddt = '2022-04-05'
select CONCAT('IDWWM', '~~',
    sess.original_login_name,'~~',
    AppId,'~~',
    sess.client_interface_name,'~~',
    convert(VARCHAR,d.last_execution_time,120),'~~',
    (d.total_elapsed_time/d.execution_count)/1000000.0,'~~',
    (total_logical_reads+total_physical_reads+total_logical_writes)/d.execution_count,'~~',
    0,'~~',
    convert(VARCHAR,(d.last_execution_time + d.total_elapsed_time/86400000000),120),'~~',
    convert(VARCHAR,d.last_execution_time,120),'~~',
    0,'~~',
    row_number() over (ORDER BY total_worker_time/d.execution_count desc),'~~',
    (total_worker_time/execution_count)/1000000.0,'~~',
    (total_logical_reads+total_physical_reads+total_logical_writes),'~~',
    tb4.ampcputime,'~~',
    tb4.TotalIOCount,'~~',
    d.total_elapsed_time/1000000.0,'~~',
    OBJECT_SCHEMA_NAME(d.object_id,d.database_id),'~~',
    db.name,'~~',
    REPLACE(CAST(text as NVARCHAR(MAX)), CHAR(10), ' '),'~~',
    execution_count
    )
FROM sys.dm_exec_procedure_stats AS d
CROSS APPLY sys.dm_exec_sql_text(d.sql_handle) st
INNER JOIN
(
SELECT DISTINCT
a.last_execution_time,
SUM(total_worker_time)/1000000.0 as AMPCPUTIME,
(SUM(total_logical_reads)+sum(total_physical_reads)+sum(total_logical_writes)) AS TOTALIOCOUNT
FROM
sys.dm_exec_procedure_stats AS a
WHERE
cast(a.last_execution_time as date) BETWEEN CAST(@startdt+' 00:00:00:00' as DATETIME) AND CAST(@enddt+' 00:00:00:00' AS DATETIME)
GROUP BY a.last_execution_time
) TB4 ON TB4.last_execution_time = d.last_execution_time
inner join sys.databases db
on (db.database_id = d.database_id)
LEFT JOIN (
SELECT DISTINCT es.program_name as AppId, es.client_interface_name, es.original_login_name, sp.dbid
FROM sys.dm_exec_sessions es inner join sys.sysprocesses sp
on es.session_id = sp.spid
) AS Sess
ON Sess.dbid = d.database_id;

Top CPU and I/O Consuming Functions

This query provides top CPU and I/O consuming functions.

DECLARE @startdt VARCHAR(10)
DECLARE @enddt VARCHAR(10)
-- Provide start and end date
SET @startdt = '2017-03-05'
SET @enddt = '2022-04-05'
select CONCAT('IDWWM', '~~',
    sess.original_login_name,'~~',
    AppId,'~~',
    sess.client_interface_name,'~~',
    convert(VARCHAR,d.last_execution_time,120),'~~',
    (d.total_elapsed_time/d.execution_count)/1000000.0,'~~',
    (total_logical_reads+total_physical_reads+total_logical_writes)/d.execution_count,'~~',
    0,'~~',
    convert(VARCHAR,(d.last_execution_time + d.total_elapsed_time/86400000000),120),'~~',
    convert(VARCHAR,d.last_execution_time,120),'~~',
    0,'~~',
    row_number() over (ORDER BY total_worker_time/d.execution_count desc),'~~',
    (total_worker_time/execution_count)/1000000.0,'~~',
    (total_logical_reads+total_physical_reads+total_logical_writes),'~~',
    tb4.ampcputime,'~~',
    tb4.TotalIOCount,'~~',
    d.total_elapsed_time/1000000.0,'~~',
    OBJECT_SCHEMA_NAME(d.object_id,d.database_id),'~~',
    db.name,'~~',
    REPLACE(CAST(text as NVARCHAR(MAX)), CHAR(10), ' '),'~~',
    execution_count
    )
FROM sys.dm_exec_function_stats AS d
CROSS APPLY sys.dm_exec_sql_text(d.sql_handle) st
INNER JOIN
(
SELECT DISTINCT
a.last_execution_time,
SUM(total_worker_time)/1000000.0 as AMPCPUTIME,
(SUM(total_logical_reads)+sum(total_physical_reads)+sum(total_logical_writes)) AS TOTALIOCOUNT
FROM
sys.dm_exec_function_stats AS a
WHERE
cast(a.last_execution_time as date) BETWEEN CAST(@startdt+' 00:00:00:00' as DATETIME) AND CAST(@enddt+' 00:00:00:00' AS DATETIME)
GROUP BY a.last_execution_time
) TB4 ON TB4.last_execution_time = d.last_execution_time
inner join sys.databases db
on (db.database_id = d.database_id)
LEFT JOIN (
SELECT DISTINCT es.program_name as AppId, es.client_interface_name, es.original_login_name, sp.dbid
FROM sys.dm_exec_sessions es inner join sys.sysprocesses sp
on es.session_id = sp.spid
) AS Sess
ON Sess.dbid = d.database_id;

An alternative method to generate the log file for queries, procedures, or functions is to use the sqlcmd (T-SQL) CLI, as sketched below.
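
For example, a minimal sketch assuming the procedure and function queries above have been saved to files named SQL_Server_procedure_log.sql and SQL_Server_function_log.sql (these file names are illustrative), following the same sqlcmd pattern shown earlier:

sqlcmd -S SQL_Server_Name -U username -P password -i ./SQL_Server_procedure_log.sql -o procedure_log_output.log
sqlcmd -S SQL_Server_Name -U username -P password -i ./SQL_Server_function_log.sql -o function_log_output.log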

Data for Partitioning and Bucketing

This query extracts table size and column details to be utilized for partition and bucket recommendation. Execute it and save the results in CSV format.

Note:

This query provides details only for the current database, so you need to execute it for all the databases. Also, you need to generate statistics before executing this query (see the sketch below).
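
A minimal sketch for refreshing statistics, assuming the built-in sp_updatestats procedure is used (run it in each in-scope database before the extraction query):

-- Updates statistics for all user tables in the current database
EXEC sp_updatestats;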

DECLARE @UserTable TABLE
(
  database_name VARCHAR(100),
  Table_Name VARCHAR(500),
  Table_Size_Bytes DECIMAL(18,2),
  Row_Num INT,
  Column_Name VARCHAR(100),
  Unique_Val_Num INT
)
INSERT INTO @UserTable
SELECT isc.table_catalog AS DATABASE_NAME,
       t.NAME            AS Table_Name,
       Cast(Round(( Sum(a.used_pages) / 128.00 ), 2) AS NUMERIC(36, 2)) * ( 1024 * 1024 ) AS TABLE_SIZE_BYTES,
       p.rows            AS NUM_ROWS,
       isc.column_name   AS Column_Name,
       ''                AS num_unique_val
FROM   sys.tables t
       INNER JOIN sys.indexes i
               ON t.object_id = i.object_id
       INNER JOIN sys.partitions p
               ON i.object_id = p.object_id
                  AND i.index_id = p.index_id
       INNER JOIN sys.allocation_units a
               ON p.partition_id = a.container_id
       INNER JOIN sys.schemas s
               ON t.schema_id = s.schema_id
       INNER JOIN information_schema.columns AS isc
               ON t.NAME = isc.table_name
GROUP  BY t.NAME,
          s.NAME,
          p.rows,
          isc.table_catalog,
          isc.column_name

select * FROM @UserTable

Server Max & Avg CPU

This query extracts Average and Max CPU Utilization, Data IO & Memory usage. Execute it and save the results in CSV format.

SELECT
    AVG(avg_cpu_percent) AS 'Average CPU Utilization In Percent',
    MAX(avg_cpu_percent) AS 'Maximum CPU Utilization In Percent',
    AVG(avg_data_io_percent) AS 'Average Data IO In Percent',
    MAX(avg_data_io_percent) AS 'Maximum Data IO In Percent',
    AVG(avg_log_write_percent) AS 'Average Log Write I/O Throughput Utilization In Percent',
    MAX(avg_log_write_percent) AS 'Maximum Log Write I/O Throughput Utilization In Percent',
    AVG(avg_memory_usage_percent) AS 'Average Memory Usage In Percent',
    MAX(avg_memory_usage_percent) AS 'Maximum Memory Usage In Percent'
FROM sys.dm_db_resource_stats;

List of Databases Having Avg_cpu >= 70%

This query lists the databases with an average CPU utilization >= 70%. Execute it and save the results in CSV format.

Note:

The filter condition can be changed to collect databases with a lower average CPU, as shown below.
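
For instance, to include databases with an average CPU utilization of 50% or more, change only the HAVING clause of the query below:

HAVING AVG(avg_cpu_percent) >= 50;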

DECLARE @s datetime;
DECLARE @e datetime;
SET @s = DateAdd(d, -7, GetUTCDate());
SET @e = GETUTCDATE();
SELECT database_name, AVG(avg_cpu_percent) AS Average_CPU_Utilization, cpu_limit AS 'Number of Cores', GETUTCDATE() AS Date_of_extraction
FROM sys.resource_stats
WHERE sku = 'Hyperscale'
  AND start_time BETWEEN @s AND @e
GROUP BY database_name, cpu_limit
HAVING AVG(avg_cpu_percent) >= 70;

Server Configuration

This query lists details related to the server instance, for example, the service tier, instance type, and server name. Execute it and save the results in CSV format.

select D.service_objective, D.edition, CONVERT(nvarchar(50), SERVERPROPERTY('ServerName')) as server_name, X.vcores
from sys.database_service_objectives D
CROSS JOIN (select max(cpu_limit) as vcores from sys.resource_stats) X
where D.edition = 'Hyperscale'


Source Code Assessment

Provide the following active or in-scope SQL Server source code artifacts, as applicable to the migration scope.

Code Artifact | Criticality | Remarks
Orchestration Scripts (Control-M / AutoSys / Cron etc.) | Must | To identify interdependencies across scheduler scripts / jobs, queries, and dependent workloads
Procedures / Functions* | Must | To identify workload complexity, query patterns, query count etc. for effort estimation and technical debt
Views | Must | To identify view complexity, patterns, and effort estimations
Shell Scripts* | Must | To identify count, dependencies, SQL queries and PL/SQL, logic (example: email, notification etc.) and effort estimations
DDL | Must | To identify column usage, and provide recommendations on column-level lineage and query optimization on the target system
DML / SQL Files* | Must | To identify count, dependencies, SQL queries, and effort estimations

Note:

Limit: It is assumed that the orchestration script is the trigger point for every single use-case execution in the existing setup. If the customer is not comfortable sharing all the workloads, then share those workloads that are referenced or executed through the orchestration scripts. However, in such scenarios, the scope and effort estimates will be based on the given workloads.


To learn more, contact our support team or write to: info@leaplogic.io

Copyright © 2025 Impetus Technologies Inc. All Rights Reserved
