
Teradata Query Execution Logs Extraction and Source Code Prerequisites

This topic describes the prerequisites for extracting Teradata query execution logs and source code for assessment.

In This Topic:

  • Introduction
  • Assessment Process
  • Database Object DDL Extraction
  • Source Code Assessment

Introduction

LeapLogic Assessment profiles the existing inventory, identifies complexity and lineage, and provides comprehensive recommendations for migration to a modern data platform.

Assessment checklist

Information                                        | Format                     | Availability?
Teradata Query Logs                                | File Export in text format | Yes/ No
Total I/O Usage by Days                            | File Export in text format | Yes/ No
Teradata Hardware Config for Total CPU Calculation | Inline in the document     | Yes/ No
Database Object Count                              | File Export in text format | Yes/ No
Database Volume                                    | File Export in text format | Yes/ No
High Data Volume Tables                            | File Export in text format | Yes/ No
Total Number of Databases and Users                | File Export in text format | Yes/ No
Data for Partitioning/Bucketing                    | File Export in text format | Yes/ No
Source Code                                        | File Export                | Yes/ No

Follow the steps in the Assessment Process and Database Object DDL Extraction sections to collect the required information for assessment.


Assessment Process

All environment-specific variables are highlighted. The start and end dates should be consistent across all queries.

Logs

The assessment requires query execution logs in CSV format. The CSV file must contain the following columns (~~-separated), retrieved from the Dbqlogtbl and QryLogSQL tables of the PDCRInfo/DBC database.

FILLER~~UserName~~AppID~~ClientID~~StartTime~~AMPCPUTime~~TotalIOCount~~ParserCPUTime~~FirstRespTime~~FirstStepTime~~ProcID~~QueryID~~MaxAMPCPUTime~~MaxAmpIO~~totalCPU~~totalIO~~Query_Execution_Time~~OBJECTDATABASENAME~~SQLROWNO~~StatementType~~LogonSource~~QUERYBAND~~SQLTEXTINFO
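As a quick sanity check on the export (a minimal sketch, not part of the official prerequisites), you can count the ~~-separated fields per record; the file name below is an assumption, and multi-line SQL text may make the count approximate.

# Assumption: QueryLogExtract_output.txt is the exported log file; adjust the name to your output.
# Flags records whose '~~'-separated field count differs from the 23 columns listed above.
awk -F'~~' 'NF != 23 { bad++ } END { print "records with unexpected field count:", bad+0 }' QueryLogExtract_output.txt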

Two options are available for extracting query execution logs: a TPT script (Option 1) or FastExport (Option 2). The TPT script is the recommended approach.

Note:

Option 1 (TPT Export) is recommended for extracting query execution logs. Use Option 2 (FastExport) only if Option 1 cannot be used for some reason: the FastExport output also contains special characters that require an additional cleansing exercise before assessment, which adds overhead and may increase assessment time and effort.

Option 1: TPT Export (Recommended)

An effective way to extract information from the query logs is a TPT script. It uses the TPT utility to retrieve the query execution logs and generates an export for the provided start and end dates.

The typical steps are:

  • Copy the following artifact into a folder on the system where TTU (Teradata Tools and Utilities) is installed.
QueryLogExtract (Download)
  • Next, modify the specified properties in the file.
    • Update the destination path for storing the query log output.

VARCHAR DirectoryPath='/home/ec2-user/output',

    • Update the Tdpid with the real Teradata system name or IP address.

VARCHAR Tdpid='54.234.225.211',

    • Update the Teradata system’s username and password with the real credentials.

VARCHAR UserName='dbc',

VARCHAR UserPassword='dbc',

  • Lastly, execute the TPT script using the command below. Make sure to adjust the start and end dates for which the query execution logs need to be exported.

tbuild -f /path/QueryLogExtract.tpt -u "start_date='2023-07-01', end_date='2023-07-20'" | tee QueryLogExtract.tpt.log
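For repeated extractions, a small bash wrapper can keep the export window in one place and surface a failed run. This is a sketch only; the script path and dates are placeholders, and the treatment of a non-zero tbuild exit status as a failure is an assumption.

# Sketch: parameterize the export window (values below are placeholders).
START_DATE='2023-07-01'
END_DATE='2023-07-20'
TPT_SCRIPT=/path/QueryLogExtract.tpt

tbuild -f "$TPT_SCRIPT" -u "start_date='${START_DATE}', end_date='${END_DATE}'" | tee QueryLogExtract.tpt.log

# Assumption: a non-zero tbuild exit status indicates the job did not complete cleanly.
if [ "${PIPESTATUS[0]}" -ne 0 ]; then
    echo "TPT export failed; review QueryLogExtract.tpt.log" >&2
fi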

Option 2: FastExport (Not Preferred)

An alternative approach to extracting information from the DBQL logs is FastExport. However, keep in mind that FastExport may produce a larger output file.

The typical steps are:

  • Save the FastExport script below as 'TD_fastexport.fe'.

.LOGTABLE dbname.fastexp_log;
.LOGON TDPID/username,password;
DATABASE dbname;
.BEGIN EXPORT SESSIONS 10;
.EXPORT OUTFILE D:\path\Teradata_fastExp_query_log.txt
MODE RECORD FORMAT TEXT;
select cast('IDWWM'||'~~'||
       rtrim(cast(tb1.Username as char(128)))||'~~'||
       rtrim(cast(tb1.AppID as char(30)))||'~~'||
       rtrim(cast(tb1.ClientID as char(30)))||'~~'||
       rtrim(cast(tb1.StartTime as char(21)))||'~~'||
       rtrim(cast(tb1.ampcputime as char(10)))||'~~'||
       rtrim(cast(tb1.TotalIOCount as char(20)))||'~~'||
       rtrim(cast(tb1.ParserCPUTime as char(10)))||'~~'||
       rtrim(cast(tb1.firstresptime as char(21)))||'~~'||
       rtrim(cast(tb1.firststeptime as char(21)))||'~~'||
       rtrim(cast(tb1.ProcID as char(6)))||'~~'||
       rtrim(cast(tb1.QueryID as char(20)))||'~~'||
       rtrim(cast(tb1.MaxAMPCPUTime as char(10)))||'~~'||
       rtrim(cast(tb1.MaxAMPIO as char(20)))||'~~'||
       rtrim(cast(tb4.ampcputime as char(10)))||'~~'||
       rtrim(cast(tb4.TotalIOCount as char(20)))||'~~'||
       rtrim(cast(((CAST((CAST(tb1.firstresptime AS DATE) - CAST(tb1.firststeptime AS DATE)) AS DECIMAL(18,6)) * 60*60*24)
                 + ((EXTRACT(HOUR FROM tb1.firstresptime) - EXTRACT(HOUR FROM tb1.firststeptime)) * 60*60)
                 + ((EXTRACT(MINUTE FROM tb1.firstresptime) - EXTRACT(MINUTE FROM tb1.firststeptime)) * 60)
                 + (EXTRACT(SECOND FROM tb1.firstresptime) - EXTRACT(SECOND FROM tb1.firststeptime))) as char(20)))||'~~'||
       rtrim(cast(coalesce(tb3.OBJECTDATABASENAME,'Not Present') as char(500)))||'~~'||
       rtrim(cast(tb2.sqlrowno as char(2)))||'~~'||
       rtrim(cast(tb1.statementtype as char(100)))||'~~'||
       rtrim(cast(tb1.logonsource as char(100)))||'~~'||
       rtrim(cast(coalesce(tb1.queryband,'Not Present') as char(6160)))||'~~'||
       rtrim(cast(tb2.SQLTEXTINFO as char(20000)))
       as char(27000))
from
       pdcrinfo.dbqlogtbl tb1
       inner join pdcrinfo.dbqlsqltbl tb2
           on tb1.queryid = tb2.queryid
           and tb1.procid = tb2.procid
           and tb1.logdate = tb2.logdate
       left outer join
       (
           select queryid, procid, logdate, tdstats.udfconcat(trim(objectdatabasename)) as objectdatabasename
           from pdcrinfo.dbqlobjtbl
           where objecttype = 'db' and logdate between '2023/01/01' AND '2023/05/29'
           group by queryid, procid, logdate
       ) tb3
           on tb1.queryid = tb3.queryid
           and tb1.procid = tb3.procid
           and tb1.logdate = tb3.logdate
       inner join
       (
           select logdate, sum(ampcputime) ampcputime, sum(totaliocount) as totaliocount
           from pdcrinfo.dbqlogtbl
           where logdate between '2023/01/01' AND '2023/05/29'
           group by logdate
       ) tb4
           on tb4.logdate = tb1.logdate
where
       tb1.logdate between '2023/01/01' AND '2023/05/29'
       and tb1.errorcode = 0
order by tb1.procid, tb1.queryid, tb2.sqlrowno;
.END EXPORT;
.LOGOFF;

  • Modify the highlighted variables as follows:
    • Replace '2023/01/01' AND '2023/05/29' with the actual assessment start and end dates within the attached query.
    • Replace dbname with the actual database name.
    • Replace TDPID with the actual Teradata server name or IP address.
    • Replace username,password with the actual database username and password within the attached query.
    • Replace the database name 'dbc' with the actual database within the attached query.
    • Replace the OUTFILE path 'D:\path\Teradata_fastExp_query_log.txt' with the actual path and file name within the attached query.
  • Execute the FastExport script.

fexp < TD_fastexport.fe
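To keep the FastExport console output for troubleshooting, one option (a sketch only; the log file name and the error check are assumptions) is:

# Sketch: run FastExport and capture its console output for review.
fexp < TD_fastexport.fe > TD_fastexport.log 2>&1
if [ $? -ne 0 ] || grep -qi "error" TD_fastexport.log; then
    echo "FastExport reported problems; review TD_fastexport.log" >&2
fi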

Notes for Troubleshooting:

If the user does not have access to the function tdstats.udfconcat, then either:

  1. Grant the required permission to the user before running the script again:

GRANT SELECT ON TDSTATS TO "USERNAME";

  2. Or change "tdstats.udfconcat(trim(OBJECTDATABASENAME)) as OBJECTDATABASENAME" to "trim(OBJECTDATABASENAME) as OBJECTDATABASENAME".

Also grant CREATE TABLE and DROP TABLE privileges to the Teradata user so that the FastExport log table can be created:

GRANT CREATE TABLE, DROP TABLE
ON dbname
TO userid;

If the TPT script is modified on a Windows machine and uploaded to a Linux system:

  • use the dos2unix tool to convert the file from DOS line endings (carriage return + line feed) to Unix line endings (line feed).

# dos2unix QueryLogExtract.tpt

dos2unix: converting file QueryLogExtract.tpt to UNIX format …

Total I/O Usage by Days

Save the results of this query in a CSV file:

SELECT THEDATE AS RUNDATE,
       CAST(SUM(FILEACQREADS + FILEPREREADS + FILEWRITES) AS BIGINT) AS TOTALIOREADS
FROM DBC.RESUSAGESPMA
WHERE THEDATE BETWEEN '2023/01/01' AND '2023/05/29'
GROUP BY THEDATE;

Replace '2023/01/01' AND '2023/05/29' with the actual assessment start and end dates.
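The document does not prescribe how to save query results; one possible approach, assuming BTEQ is available with TTU, is to run the query from a shell heredoc and write comma-separated report output. The tdpid, credentials, width, and file names below are placeholders, and the same pattern can be reused for the other queries in this topic that ask for a CSV file.

# Sketch only: export the Total I/O Usage query to a comma-separated file via BTEQ.
bteq <<'EOF' > total_io_usage_bteq.log 2>&1
.LOGON <tdpid>/<username>,<password>;
.SET SEPARATOR ',';
.SET TITLEDASHES OFF;
.SET WIDTH 2000;
.EXPORT REPORT FILE=total_io_usage.csv;
SELECT THEDATE AS RUNDATE,
       CAST(SUM(FILEACQREADS + FILEPREREADS + FILEWRITES) AS BIGINT) AS TOTALIOREADS
FROM DBC.RESUSAGESPMA
WHERE THEDATE BETWEEN '2023/01/01' AND '2023/05/29'
GROUP BY THEDATE;
.EXPORT RESET;
.LOGOFF;
.QUIT;
EOF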

Hardware Configuration

The details of the hardware configuration are required to calculate total CPU utilization. Please provide the following details.

  • Teradata series and specs (Cores, RAM, HDD/SDD)
  • Number of Teradata nodes
  • TPerf
  • Total available AMPCPU seconds per day

Database Objects

This query provides the count of database objects in use. Save its results in a CSV file.

select databasename,tablekind,count(tablename) from dbc.tables group by databasename,tablekind order by databasename

Database Volume

This SQL collects databases with a volume of 10 GB or more. Save its results in a CSV file.

select databasename, cast(sum(currentperm)/1024/1024/1024 as decimal(18,2))
from dbc.diskspace
group by databasename
having cast(sum(currentperm)/1024/1024/1024 as decimal(18,2)) >= 10
order by 2 desc

Note:

The filter condition can be changed to collect lower volumes.

High Data Volume Tables

This query returns tables with high data volume. Save its results in a CSV file.

select
    ts.databasename,
    ts.tablename,
    rc.num_rows,
    ts.currentsize_gb,
    pv.ConstraintType,
    pv.ConstraintText
from
    (select databasename, tablename, sum(currentperm)/1024/1024/1024 as currentsize_gb
     from dbc.tablesize
     group by 1, 2
     having sum(currentperm)/1024/1024/1024 >= 10
    ) ts
    left join
    (select databasename, tablename, max(sv.rowcount) num_rows
     from dbc.statsv sv
     group by databasename, tablename
    ) rc
        on ts.databasename = rc.databasename and ts.tablename = rc.tablename
    left join DBC.PartitioningConstraintsv pv
        on ts.databasename = pv.databasename and ts.tablename = pv.tablename
order by ts.currentsize_gb desc

Note:

The filter condition can be changed to collect lower volumes.

Databases and Users

This query provides the total number of databases and users. Save its results in a CSV file.

select dbkind,count(*) from dbc.databases group by dbkind

Data for Partitioning / Bucketing

This query extracts table size and column details to be utilized for partitioning and bucketing recommendations. Save the results in a CSV file.

SELECT cols.databasename                AS DATABASE_NAME,
       cols.tablename                   AS Table_Name,
       Cast(alsp.currentperm AS BIGINT) AS TABLE_SIZE_BYTES,
       ''                               AS NUM_ROWS,
       cols.columnname                  AS Column_Name,
       ''                               AS num_unique_val
FROM   dbc.columns AS cols
       JOIN (select databasename, tablename, cast(sum(currentperm) as bigint) as currentperm
             from dbc.allspace
             group by 1, 2) alsp
         ON  cols.tablename = alsp.tablename
         AND cols.databasename = alsp.databasename
WHERE  cols.tablename <> 'All';


Database Object DDL Extraction

LeapLogic needs specific artifacts to perform an assessment. You can either copy these artifacts from your GIT repository or export them, including table DDL scripts, stored procedures, functions, macros, views, etc., using the shell script provided later in this section.

Please ensure that you run the shell script on a machine where the TTU (Teradata Tools and Utilities) software is installed.

teradata_ddl_extract (Download)

Update the variables in the attached shell script as needed.

# Teradata system ID and credentials
tpdtd=172.26.52.161
username=dbc
password=dbc

# list of databases
databases="'DS_TBL_DB', 'TESTQLV'"

The shell script creates a "scripts" folder, containing all the DDLs, in the directory where it is executed.
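A minimal usage sketch, assuming the downloaded artifact is saved as teradata_ddl_extract.sh (the actual file name may differ):

# Assumption: the artifact is saved as teradata_ddl_extract.sh on the TTU host.
chmod +x teradata_ddl_extract.sh
./teradata_ddl_extract.sh 2>&1 | tee teradata_ddl_extract.log

# The DDLs should appear under ./scripts once the script completes.
ls scripts | head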


Source Code Assessment

Provide the following active or in-scope Teradata source code artifacts, as applicable to the migration scope.

Code Artifact | Criticality | Remarks
Orchestration Scripts (Control-M / AutoSys / Cron etc.) | Must | To identify interdependencies across scheduler scripts/jobs, queries, and dependent workloads
Procedures / Functions | Must | To identify workload complexity, query patterns, query count, etc. for effort estimation and technical debt
Views | Must | To identify view complexity, patterns, and effort estimations
Shell Scripts | Must | To identify count, dependencies, SQL queries and PL/SQL, logic (example: email, notification, etc.), and effort estimations
DDL | Must | To identify column usage and provide recommendations on column-level lineage and query optimization on the target system
DML / SQL Files | Must | To identify count, dependencies, SQL queries, and effort estimations
CTL Files | Must | To identify source data types and formats, for example delimiters, JSON, etc.
BTEQ | Must | To identify count, dependencies, SQL queries and PL/SQL, transformation logic, and effort estimations
TPT, Fexp, mLoad | Must | To identify count and complexity of data ingestion scripts and effort estimation

Note:

Limit: This assumes the orchestration script is the trigger point for every use case execution in the existing setup. If the customer is not comfortable sharing all workloads, share only those workloads that are referenced or executed through the orchestration scripts. In such scenarios, the scope and effort estimates will be based on the workloads provided.
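As a convenience when sharing the artifacts (a sketch only; the per-type folder names are assumptions, not a required layout), the in-scope code can be bundled into a single archive:

# Assumption: artifacts are organized into per-type folders; adjust names to your layout.
tar -czf teradata_source_code.tar.gz \
    orchestration/ procedures/ views/ shell/ ddl/ dml/ ctl/ bteq/ tpt/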


To learn more, contact our support team or write to: info@leaplogic.io
