
Oracle Query Execution Logs Extraction and Source Code Prerequisites

This topic describes the prerequisites for extracting Oracle query execution logs and source code for assessment.

In This Topic:

  • Introduction
  • Prerequisites
  • Security Considerations and Disclaimer
  • Assessment Process
  • Database Objects
  • Source Code Assessment

Introduction

LeapLogic's Assessment profiles the existing inventory, identifies complexity, performs dependency analysis, and provides recommendations for migration to a modern data platform.

Assessment checklist

Information | Format | Availability?
Oracle Query Logs (SQL given below) | File Export in text format | Yes/No
Hardware Configuration | File Export in text format | Yes/No
Software Version | Inline in the document | Yes/No
Database Object Count | File Export in text format | Yes/No
Database Volume | File Export in text format | Yes/No
High Data Volume Tables | File Export in text format | Yes/No
Total Number of Users | File Export in text format | Yes/No
Data for Partitioning / Bucketing | File Export in text format | Yes/No
Source Code (Extract) | File Export | Yes/No

Follow the steps in the subsequent sections to collect the required information for assessment.


Prerequisites

The following prerequisites apply to the extraction of Oracle query execution logs.

  • AWR (Automatic Workload Repository) should be up and running
  • AWR retention should be greater than 1 day; for example, if retention is 30, the system retrieves 30 days of query logs
  • TOPNSQL should be set to MAXIMUM; otherwise, some SQL details may be missing
    Limitation: SQL statements that run for less than 10 seconds do not appear in the logs (Oracle default behavior)
    SQL to check the current AWR settings:

select * from dba_hist_wr_control

  • AWR settings can be modified using the command below

exec dbms_workload_repository.modify_snapshot_settings(interval => 60, retention => 43200, topnsql => 'MAXIMUM');
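The interval and retention arguments to modify_snapshot_settings are expressed in minutes. As a quick sanity check before extraction, the values can be converted to days; the sketch below is plain Python written for this document, not part of Oracle or LeapLogic tooling:

```python
# Sanity-check helper (illustrative only): Oracle stores AWR snapshot
# interval and retention in minutes; convert them to days to confirm
# the retention window covers the log period you need.
def awr_settings_in_days(interval_minutes: int, retention_minutes: int) -> dict:
    """Return AWR interval and retention expressed in days."""
    minutes_per_day = 60 * 24
    return {
        "interval_days": interval_minutes / minutes_per_day,
        "retention_days": retention_minutes / minutes_per_day,
    }

# Values from the command above: interval => 60, retention => 43200.
settings = awr_settings_in_days(60, 43200)
# retention 43200 minutes corresponds to a 30-day window.
```

With the values shown above, retention works out to 30 days, matching the 30-day example in the prerequisites.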



Security Considerations and Disclaimer

This document covers exporting only the database objects and schema metadata. LeapLogic does not require any customer data to run an assessment or code transformation.

  • The steps given in this document for exporting the required workloads do not alter or modify any data.
  • They consist primarily of SELECT queries.
  • The required metadata is fetched from the system tables, not from the tables used in the customer's environment.


Assessment Process

Use SQL Developer (preferred) to export all your database objects and logs. Follow the steps below to export the workloads from your environment.

  1. Run the respective queries from the subsequent sub-sections using SQL Developer.
  2. Right-click the result and choose Export.
  3. In the Export wizard, choose CSV as the format.
  4. Enable compression, and specify the file name and location for the export file.
  5. Click Next and then Finish to export the query result to a compressed file.

Logs Extraction

The assessment requires the query log in CSV format; query logging should be enabled on the data warehouse. The CSV file must contain the following columns (as a ~~-separated file) retrieved from the DBA_HIST* tables.

Filler~~UserName~~Program~~ClientID~~StartTime~~CPUTime~~TotalIOCount~~ParserCPUTime~~FirstRespTime~~FirstStepTime~~ProcID~~QueryID~~MaxCPUTime~~MaxIO~~totalCPU~~totalIO~~Query_Execution_Time~~SchemaNAME~~SQLTYPE~~SQLTEXTINFO~~execution_frequency
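Before handing the exported file to the assessment, it can help to confirm each line actually carries all the fields listed above. The sketch below is an illustrative validation helper (not LeapLogic tooling), assuming the 21-column, ~~-delimited layout shown:

```python
# Illustrative validator (assumption: the 21-column "~~"-delimited layout
# described above). Splits one exported log line into a column->value map
# and rejects lines with a missing or extra field.
COLUMNS = [
    "Filler", "UserName", "Program", "ClientID", "StartTime", "CPUTime",
    "TotalIOCount", "ParserCPUTime", "FirstRespTime", "FirstStepTime",
    "ProcID", "QueryID", "MaxCPUTime", "MaxIO", "totalCPU", "totalIO",
    "Query_Execution_Time", "SchemaNAME", "SQLTYPE", "SQLTEXTINFO",
    "execution_frequency",
]

def parse_log_line(line: str) -> dict:
    """Split one '~~'-delimited log line into a column->value mapping."""
    fields = line.rstrip("\n").split("~~")
    if len(fields) != len(COLUMNS):
        raise ValueError(f"expected {len(COLUMNS)} fields, got {len(fields)}")
    return dict(zip(COLUMNS, fields))
```

Note that SQLTEXTINFO may itself contain commas or newlines, which is why the export uses the ~~ delimiter rather than a plain comma.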

There are two options for query execution log extraction. The GUI-based option is preferred; if you encounter problems with it, use the PL/SQL-based option.

Option 1: GUI Based

Preferably, use a GUI tool such as SQL Developer or Toad to gather the log information from Oracle. The environment-specific values (such as the date range in the query below) must be adjusted for your environment.

The SQL query to collect the required log information is as given below.

select 'IDWWM' ||'~~'||
       username ||'~~'||
       program ||'~~'||
       client_id ||'~~'||
       to_char(sample_time,'dd-mm-yyyy hh24:mi:ss') ||'~~'||
       CPU_TIME_DELTA/1000000 ||'~~'||
       (DISK_READS_DELTA + DIRECT_WRITES_DELTA) ||'~~'||
       0 ||'~~'||
       to_char(sample_time + ELAPSED_TIME_DELTA/86400000000,'dd-mm-yyyy hh24:mi:ss') ||'~~'||
       to_char(sample_time,'dd-mm-yyyy hh24:mi:ss') ||'~~'||
       0 ||'~~'||
       a.sql_id ||'~~'||
       CPU_TIME_DELTA/1000000 ||'~~'||
       (DISK_READS_DELTA + DIRECT_WRITES_DELTA) ||'~~'||
       sum(CPU_TIME_DELTA/1000000) over (partition by trunc(sample_time)) ||'~~'||
       sum(DISK_READS_DELTA + DIRECT_WRITES_DELTA) over (partition by trunc(sample_time)) ||'~~'||
       ELAPSED_TIME_DELTA/1000000 ||'~~'||
       parsing_schema_name ||'~~'||
       v.command_name ||'~~'||
       sql_text ||'~~'||
       executions_delta
from
(
  select t.*, row_number() over (partition by snap_id, sql_id order by sample_time) as rn
  from dba_hist_active_sess_history t
  where cast(sample_time as date) between to_date('15-11-2024', 'dd-mm-yyyy')
                                      and to_date('20-11-2024', 'dd-mm-yyyy')
) a,
dba_hist_sqlstat b,
dba_hist_sqltext c,
dba_users d,
v$sqlcommand v
where a.sql_id = b.sql_id
  and a.snap_id = b.snap_id
  and b.sql_id = c.sql_id
  and d.user_id = a.user_id
  and a.rn = 1
  and c.command_type = v.command_type
order by a.sql_id;

Note:

Modify the date range (15-11-2024 to 20-11-2024) in the query above to match your extraction window.
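If the query is run repeatedly for different windows, the date literals can be filled in from a template instead of edited by hand. The snippet below is a hypothetical helper sketch (the template shown is abbreviated; substitute the full query from above):

```python
# Hypothetical helper: substitute the dd-mm-yyyy extraction window into the
# log query instead of editing the date literals manually. QUERY_TEMPLATE is
# abbreviated here -- in practice it would hold the full query from above.
QUERY_TEMPLATE = (
    "... where cast(sample_time as date) "
    "between to_date('{start}', 'dd-mm-yyyy') "
    "and to_date('{end}', 'dd-mm-yyyy') ..."
)

def with_date_range(template: str, start: str, end: str) -> str:
    """Fill dd-mm-yyyy start/end dates into the query template."""
    return template.format(start=start, end=end)

sql = with_date_range(QUERY_TEMPLATE, "15-11-2024", "20-11-2024")
```

This keeps the rest of the query byte-for-byte identical across extraction runs, so only the window changes.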

Option 2: PL/SQL Based

If the GUI-based option is not feasible, execute the attached PL/SQL block.

Download: extract_oracle_log

Note:

Replace all the variables highlighted in green in extract_oracle_log.docx as per your environment.

Hardware Configuration

The hardware configuration details are required to calculate total CPU utilization. Please provide the following details.

  • Oracle series and specs (Cores, RAM, HDD/SSD)
  • Number of nodes in the cluster (RAC)

Software Version

This query provides the software version in use. Save its results in a CSV file.

select * from v$version

Database Objects

This query provides the count of database objects in use. Save its results in a CSV file.

select owner,object_type,count(object_name) from dba_objects group by owner,object_type;
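Once the result is saved as CSV, the per-owner counts can be rolled up to a quick per-type total. The sketch below is an illustrative post-processing step (not LeapLogic tooling), assuming a headerless owner,object_type,count CSV as produced by the query above:

```python
import csv
import io

# Illustrative roll-up (assumption: headerless CSV rows of
# owner,object_type,count as exported from the query above).
def objects_per_type(csv_text: str) -> dict:
    """Total object counts per object type across all owners."""
    totals: dict = {}
    for owner, obj_type, count in csv.reader(io.StringIO(csv_text)):
        totals[obj_type] = totals.get(obj_type, 0) + int(count)
    return totals

sample = "HR,TABLE,25\nHR,VIEW,4\nSALES,TABLE,110\n"
totals = objects_per_type(sample)
# totals["TABLE"] sums the TABLE rows across both owners.
```

This gives a one-glance inventory summary to sanity-check against the assessment checklist before uploading.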

Database Volume

This query provides the database volume per schema, in GB. Save its results in a CSV file.

Note:

The filter condition can be changed for collecting lower volumes.

select owner,sum(bytes)/1024/1024/1024 from dba_segments group by owner order by 2 desc;

High Data Volume Tables

This query provides results for tables with high data volume, collecting tables with a volume of 10 GB or above. Save its results in a CSV file.

Note:

The filter condition can be changed for collecting lower volumes.

select a.owner,
       a.segment_name as table_name,
       a.tab_size_gb,
       b.partitioning_type,
       b.partition_count,
       c.num_rows
from
(
  select owner, segment_name, sum(bytes)/1024/1024/1024 tab_size_gb
  from dba_segments
  where segment_type = 'TABLE'
  group by owner, segment_name
) a
left outer join dba_part_tables b
  on a.owner = b.owner
 and a.segment_name = b.table_name
left outer join dba_tables c
  on a.owner = c.owner
 and a.segment_name = c.table_name;

Users

This query provides the total number of users. Save its results in a CSV file.

select username from dba_users

Data for Partitioning / Bucketing

This query extracts table size and column details to be utilized for partitioning and bucketing recommendations. Save the result in a CSV file.

SELECT Sys_context('userenv', 'db_name') AS DATABASE_NAME,
       DBA_TAB.table_name               AS Table_Name,
       Sum(us.bytes)                    AS TABLE_SIZE_BYTES,
       Nvl(ut.num_rows, 0)              AS NUM_ROWS,
       DBA_TAB.column_name              AS Column_Name,
       Nvl(DBA_TAB.num_distinct, 0)     AS num_unique_val
FROM   dba_tab_columns DBA_TAB
       JOIN user_segments us
         ON ( DBA_TAB.table_name = us.segment_name )
       JOIN user_tables ut
         ON ( DBA_TAB.table_name = ut.table_name )
WHERE  owner NOT IN ( 'ANONYMOUS', 'APEX_040200', 'APEX_PUBLIC_USER',
                      'APPQOSSYS', 'AUDSYS', 'BI', 'CTXSYS', 'DBSNMP',
                      'DIP', 'DVF', 'DVSYS', 'EXFSYS',
                      'FLOWS_FILES', 'GSMADMIN_INTERNAL', 'GSMCATUSER',
                      'GSMUSER', 'HR', 'IX', 'LBACSYS', 'MDDATA',
                      'MDSYS', 'OE', 'ORACLE_OCM', 'ORDDATA',
                      'ORDPLUGINS', 'ORDSYS', 'OUTLN', 'PM',
                      'SCOTT', 'SH', 'SI_INFORMTN_SCHEMA',
                      'SPATIAL_CSW_ADMIN_USR', 'SPATIAL_WFS_ADMIN_USR',
                      'SYS', 'SYSBACKUP', 'SYSDG',
                      'SYSKM', 'SYSTEM', 'WMSYS', 'XDB',
                      'SYSMAN', 'RMAN', 'RMAN_BACKUP', 'OLAPSYS',
                      'APEX_030200', 'OWBSYS' )
GROUP  BY Sys_context('userenv', 'db_name'),
          DBA_TAB.table_name,
          Nvl(DBA_TAB.num_distinct, 0),
          Nvl(ut.num_rows, 0),
          DBA_TAB.column_name;
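The num_rows and num_unique_val columns in this export are what drive partitioning versus bucketing suggestions. A common heuristic (illustrative only; this is not LeapLogic's actual recommendation logic) treats low-cardinality columns as partitioning candidates and high-cardinality columns as bucketing candidates:

```python
# Illustrative heuristic (assumption -- not LeapLogic's actual logic):
# a column whose distinct-value count is a tiny fraction of the row count
# (e.g. a date or region column) suits partitioning; a high-cardinality
# column (e.g. a customer id) suits bucketing/clustering instead.
def classify_column(num_rows: int, num_unique: int,
                    partition_max_ratio: float = 0.001) -> str:
    """Classify one column from the export as a partition or bucket candidate."""
    if num_rows == 0 or num_unique == 0:
        return "skip"  # empty table or no statistics
    ratio = num_unique / num_rows
    if ratio <= partition_max_ratio:
        return "partition-candidate"
    return "bucket-candidate"

# e.g. a daily date column on a 10M-row table: 365 distinct values.
print(classify_column(10_000_000, 365))
```

The 0.001 threshold is an arbitrary placeholder; the appropriate cutoff depends on the target platform's partition-count limits.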


Database Objects

There are two options for extracting database objects like tables, views, and procedures. The first option is to use the Oracle SQL Developer tool. The second option is to use a Java-based utility.

DDL Export – Using SQL Developer

Oracle SQL Developer is an integrated development environment for working with SQL in Oracle databases. You can download it as either an installer or zip file from the official website.

Follow these steps once SQL Developer is set up.

Connect to Oracle using SQL Developer

Connect to the server using SQL Developer by entering details such as the hostname/IP, username, password, port, and SID. Test the connection and save it.

After connecting to Oracle, navigate to Tools and select the Database Export option.

On the Source/Destination screen:

  1. Select the Oracle connection saved in the first step.
  2. Check only the Export DDL option and uncheck the Export Data option.
  3. Enable the Compression option.
  4. Specify the file path and file name.
  5. Retain all the default settings.
  6. Click Next.

On the Types to Export screen, retain all the default options and click Next.

On the Specify Objects screen:

  1. Click on More and ensure the Type is set to ALL OBJECTS.
  2. Click Next.

Finally, you will see the Export Summary. Click Finish to export the DDLs.

DDL Export – Using Java Utility

Another option is to export database objects using the Java Utility. Click here to download. This utility is compatible with both Linux and Windows environments.

Windows Environment

After downloading the utility, extract the zip file and navigate to the bin folder. Modify the environment-specific variables (paths, JAVA_HOME, and connection details) in the schema_extract.bat file.

Note:

The utility has been tested with Java 8.

set IDW_WMG_HOME=C:\Path\Oracle_code_extractor
set IDW_WMG_LOG=%IDW_WMG_HOME%\logs
set IDW_WMG_CONF=%IDW_WMG_HOME%\conf
set JAVA_HOME=C:\"Program Files"\Java\jdk1.8.0_121

%JAVA_HOME%\bin\java -Xms512m -Xmx1024m -Xmn128m -Xss256k -XX:MetaspaceSize=64m -XX:MaxMetaspaceSize=512m -XX:+UseParallelOldGC -XX:LargePageSizeInBytes=4m -XX:+PrintGCDetails -XX:+UseCompressedOops -XX:+PrintGCDateStamps -verbosegc -Xloggc:%IDW_WMG_HOME%\run\gc_schema_ex.log -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=%IDW_WMG_HOME%\run\ -cp "%IDW_WMG_HOME%\lib\*" com.impetus.idw.wmg.db.core.code.extract.EDWCodeExtractor "oracle_assessment" "%IDW_WMG_HOME%\temp\" "oracle" "SID" "HOST" "PORT" "Username" "Password" "Schema1,Schema2"

After making the required changes, double-click the schema_extract.bat file. The utility will begin extracting the database objects for the specified schema, saving them to the temp folder.

Linux Environment

After downloading the utility, extract the zip file and navigate to the bin folder. Modify the environment-specific variables (paths, JAVA_HOME, and connection details) in the schema_extract.sh file.

Note:

The utility has been tested on Java 8.

export IDW_WMG_HOME="$(cd "$(dirname "$0")/../"; pwd)"
export IDW_WMG_LOG="${IDW_WMG_HOME}/logs/"
export IDW_WMG_CONF="${IDW_WMG_HOME}/conf/"
export JAVA_HOME="/opt/jdk1.8.0_181"

${JAVA_HOME}/bin/java -Xms512m -Xmx1024m -Xmn128m -Xss256k -XX:MetaspaceSize=64m -XX:MaxMetaspaceSize=512m -XX:+UseParallelOldGC -XX:LargePageSizeInBytes=4m -XX:+PrintGCDetails -XX:+UseCompressedOops -XX:+PrintGCDateStamps -verbosegc -Xloggc:${IDW_WMG_HOME}/run/gc_schema_ex.log -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=${IDW_WMG_HOME}/run/ -cp ":${IDW_WMG_HOME}/lib/*" com.impetus.idw.wmg.db.core.code.extract.EDWCodeExtractor "oracle_assessment" "${IDW_WMG_HOME}/temp/" "oracle" "${sid}" "${IP}" "1521" "${username}" "${pwd}" "${SchemaName1},${SchemaName2}…"

After making the required changes, execute the schema_extract.sh file.

sh schema_extract.sh

The utility will begin extracting the database objects for the specified schema, saving them to the temp folder.
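After a run, it is worth confirming that the temp folder actually contains extracted files for every schema you listed. The sketch below is an illustrative check (not part of the utility), assuming only that the utility writes files under per-schema subfolders of the temp directory:

```python
import os

# Illustrative post-run check (assumption: the utility writes extracted
# objects into per-schema subfolders of the temp directory). Counts files
# grouped by top-level subfolder so a missing schema stands out.
def extracted_file_counts(temp_dir: str) -> dict:
    """Map each top-level subfolder of temp_dir to its total file count."""
    counts: dict = {}
    for root, _dirs, files in os.walk(temp_dir):
        if not files:
            continue
        key = os.path.relpath(root, temp_dir).split(os.sep)[0]
        counts[key] = counts.get(key, 0) + len(files)
    return counts
```

An empty or missing entry for a schema usually means a connection or permission problem for that schema rather than an empty schema.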


Source Code Assessment

Provide the following active or in-scope Oracle source code artifacts, as applicable to the migration scope.

Code Artifact | Criticality | Remarks
Orchestration Scripts (Control-M / AutoSys / cron etc.) | Must | To identify interdependencies across scheduler scripts/jobs, queries, and dependent workloads
Procedures / Functions | Must | To identify workload complexity, query count, effort estimation, and technical debt
Packages | Must | To identify PL/SQL count, complexity, and effort estimations
Views | Must | To identify view complexity, patterns, and effort estimations
Shell Scripts | Must | To identify count, dependencies, SQL queries, PL/SQL, logic (for example, email and notification), and effort estimations
DDL | Must | To identify column usage and provide recommendations on column-level lineage and query optimization on the target system
DML / SQL Files | Must | To identify count, dependencies, SQL queries, and effort estimations
CTL Files | Must | To identify source data types and formats, for example delimiters, JSON, etc.
Sequences | Must | To estimate efforts, for example for custom sequences
Indexes | Could | To recommend a partitioning/bucketing strategy (depends on the target of choice)

Note:

Limitation: This assumes the orchestration script is the trigger point for every use-case execution in the existing setup. If the customer is not comfortable sharing all workloads, share those workloads that are referenced or executed through the orchestration scripts. In such scenarios, however, the scope and effort estimates will be based only on the workloads provided.
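Before packaging the source code for sharing, a quick inventory by file extension helps cross-check the package against the artifact table above. The sketch below is an illustrative helper written for this document (the file names are hypothetical examples):

```python
from collections import Counter

# Illustrative inventory helper (file names below are hypothetical):
# tally the source-code package by file extension so the result can be
# cross-checked against the artifact table (DDL, DML/SQL, CTL, shell, ...).
def artifact_inventory(paths: list) -> Counter:
    """Count files per lowercase extension; files without one count as '(none)'."""
    return Counter(
        p.rsplit(".", 1)[-1].lower() if "." in p else "(none)"
        for p in paths
    )

files = ["etl/load_sales.sql", "ddl/customers.ddl", "jobs/nightly.sh",
         "loads/sales.ctl", "pkgs/billing.pks"]
inventory = artifact_inventory(files)
```

A category from the table with a zero count in the inventory is a prompt to confirm whether that artifact type is genuinely out of scope or simply missing from the package.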


To learn more, contact our support team or write to: info@leaplogic.io

Copyright © 2025 Impetus Technologies Inc. All Rights Reserved
