LeapFusion
LeapFusion is LeapLogic’s AI-driven feature designed to accelerate the modernization of EDW workloads. It automatically converts queries that are not handled by the default LeapLogic Core transformation engine into optimized, target-native equivalents. It then validates the transformed queries using other LLMs, referred to as ‘Helper Models’, and suggests corrections as required. LeapFusion streamlines complex query transformation, reducing manual effort, improving accuracy, and accelerating migration, and it ensures seamless transformation of legacy workloads onto modern cloud platforms.
LeapFusion uses two model types: Base Model and Helper Model.
- Base Model: Converts queries that are not handled by the default LeapLogic Core transformation engine to the target equivalent. It supports both offline (LeapLogic) and online (Amazon Bedrock) modes of transformation.
- Online Models (native integration with Amazon Bedrock, including Claude and other LLM models): LeapLogic provides seamless integration with Amazon Bedrock, including access to Claude and other foundation models. These are referred to as Online models within the platform, as they run natively in the cloud and are continuously updated by their providers. This integration allows you to take advantage of cutting-edge generative AI models without additional configuration or setup.
By relying on Amazon Bedrock’s managed environment, you gain flexibility in choosing from multiple models while ensuring enterprise-grade scalability, availability, and security. The native support removes friction in adoption, making it easier to leverage generative AI for a wide range of transformation and modernization needs.
- Offline Models (Open source LLMs adopted and trained on LeapLogic’s datasets and code formats): Alongside online integration, LeapLogic leverages open-source LLMs that are customized and trained on its own datasets and supported code formats. These are referred to as Offline models, offering more control and flexibility in environments where cloud-based models are not feasible or preferred.
By aligning open-source LLMs with LeapLogic’s code and transformation patterns, the platform ensures consistent quality of outputs. This setup also helps organizations maintain compliance with specific data handling requirements while still benefiting from generative AI capabilities.
- Helper Model: Validates the transformed queries generated by the Base Model and suggests corrections if needed.
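The Base Model / Helper Model interaction described above can be sketched as a simple convert-validate-correct loop. This is a minimal illustration only: the function names (`base_transform`, `helper_validate`, `leapfusion_convert`), the stubbed model logic, and the retry count are assumptions for clarity, not LeapLogic APIs.

```python
# Hypothetical sketch of the Base Model / Helper Model loop.
# Real model calls are stubbed with trivial string logic.

def base_transform(query: str) -> str:
    """Stand-in for the Base Model: convert a source query to a target equivalent."""
    # e.g. expand the Teradata 'SEL' shorthand to ANSI 'SELECT'
    return query.replace("SEL ", "SELECT ")

def helper_validate(candidate: str) -> tuple[bool, str]:
    """Stand-in for a Helper Model: validate and optionally suggest a correction."""
    if "SELECT" in candidate:
        return True, candidate
    return False, "SELECT " + candidate  # suggested correction

def leapfusion_convert(query: str, max_rounds: int = 2) -> str:
    candidate = base_transform(query)
    for _ in range(max_rounds):
        ok, suggestion = helper_validate(candidate)
        if ok:
            return suggestion
        candidate = suggestion  # apply the Helper Model's correction and re-check
    return candidate

print(leapfusion_convert("SEL * FROM sales"))  # SELECT * FROM sales
```

The loop terminates either when a Helper Model accepts the candidate or after a bounded number of correction rounds, which keeps the pipeline deterministic in cost.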
Key Features
The key features of LeapFusion include:
- Native integration with all Amazon Bedrock LLM models (referred to as Online models).
- Plug-and-Evolve architecture – Auto-syncs with new LLM releases without manual updates or engineering effort.
- Open source LLMs adopted and trained on LeapLogic’s datasets and code formats (referred to as Offline models).
- Dual-core, hybrid AI architecture that fuses deterministic pattern recognition (80–85%) with contextual generative reasoning (15–20%), compressing timelines and minimizing manual intervention.
- Situation-aware and semantic-preserving prompt injection engine.
- Intelligent cascading DQ checks and validations across multiple LLMs in a single conversion pipeline to ultimately generate the best output for LeapLogic customers.
- Intelligent recommendations for conversational interfaces, rationalizing analytics and BI workloads and reducing TCO.
- Transparency and auditability into how and why code was transformed or rewritten using AI.
- Responsible AI with session-level LLM fine-tuning only, no cross-customer learning or sharing.
LeapLogic Modernization Capabilities
LeapLogic delivers advanced, AI-driven capabilities to simplify and accelerate workload modernization, ensure consistency, and deliver highly accurate transformation outcomes.
Plug-and-Evolve architecture (Auto-syncs with new LLM releases without manual updates or engineering effort)
The Plug-and-Evolve architecture is designed to automatically align with new LLM releases, ensuring that LeapLogic users always have access to the latest advancements without needing manual intervention. This capability eliminates the operational burden of constant engineering updates or patch management, allowing teams to focus on business outcomes instead of infrastructure upkeep.
This approach also future-proofs AI-driven modernization pipelines. As new models become available, the platform automatically incorporates them, ensuring compatibility, performance improvements, and access to innovative features without disrupting existing workflows.
Dual-core and Hybrid AI Architecture
LeapLogic’s hybrid AI architecture combines deterministic pattern recognition (80–85%) with contextual generative reasoning (15–20%). This dual-core design ensures a balance between reliability and adaptability, producing outputs that are both accurate and context-aware.
The deterministic engine accelerates routine transformations with consistency, while the generative reasoning layer handles exceptions and nuanced scenarios. Together, they help compress project timelines, minimize manual intervention, and deliver more predictable modernization outcomes.
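The routing idea behind the dual-core design can be illustrated with a small sketch: deterministic rules handle the common patterns, and only queries no rule matches fall through to the generative layer. The rule table and the `llm_rewrite` stub below are assumptions for illustration, not LeapLogic internals.

```python
# Illustrative dual-core routing: deterministic rules first, generative fallback.
import re

DETERMINISTIC_RULES = [
    (re.compile(r"\bSEL\b"), "SELECT"),  # hypothetical Teradata-shorthand rule
]

def llm_rewrite(query: str) -> str:
    """Stand-in for the contextual generative reasoning layer (the 15-20% path)."""
    return f"/* LLM-rewritten */ {query}"

def convert(query: str) -> str:
    matched = False
    for pattern, replacement in DETERMINISTIC_RULES:
        if pattern.search(query):
            query = pattern.sub(replacement, query)
            matched = True
    # Fall back to generative reasoning only when no deterministic rule applied.
    return query if matched else llm_rewrite(query)

print(convert("SEL id FROM t"))        # deterministic path
print(convert("EXOTIC SYNTAX HERE"))   # generative path
```

Keeping the deterministic path first is what makes the bulk of transformations fast and repeatable, while the fallback absorbs the long tail of nuanced cases.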
Situation-aware and Semantic-Preserving Prompt Injection Engine
The platform features a prompt injection engine that is both situation-aware and semantic-preserving. This ensures that prompts are tailored to the context of the transformation task while retaining the original meaning of the code or logic being converted.
By carefully constructing prompts, the engine reduces ambiguity and enhances the precision of LLM responses. This results in higher-quality transformations and lowers the risk of unintended changes during code modernization.
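As a rough illustration of semantic-preserving prompt construction, a prompt might pin down the source and target dialects, state the invariants the rewrite must preserve, and inject situational context. The template and field names below are assumptions, not LeapLogic's actual prompt format.

```python
# Hypothetical situation-aware prompt template for a query transformation task.

PROMPT_TEMPLATE = """You are migrating {source_dialect} SQL to {target_dialect}.
Preserve the exact semantics of the original query: same columns, same
row set, same NULL handling. Do not invent tables or columns.

Context: {situation}

Original query:
{query}
"""

def build_prompt(query, source_dialect, target_dialect, situation):
    return PROMPT_TEMPLATE.format(
        query=query,
        source_dialect=source_dialect,
        target_dialect=target_dialect,
        situation=situation,
    )

print(build_prompt(
    "SELECT TOP 5 * FROM orders",
    source_dialect="Teradata",
    target_dialect="BigQuery",
    situation="TOP n is not supported on the target; prefer LIMIT.",
))
```

Spelling out the semantic invariants in the prompt itself is what constrains the LLM away from plausible-but-incorrect rewrites.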
Intelligent Cascading DQ Checks and Validations
LeapLogic applies intelligent cascading data quality (DQ) checks and validations across multiple LLMs in a single conversion pipeline. This layered approach ensures that outputs are validated at different stages, ultimately converging on the best possible result.
By distributing checks across multiple models, the platform maximizes reliability and minimizes error propagation. Customers benefit from cleaner, more trustworthy transformations that reduce the need for rework.
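The cascading-check pattern can be sketched as an ordered pipeline of validators, each of which passes a candidate through or records a failure reason; in practice each stage could be a rule-based check or an LLM judge. The specific checks below are illustrative assumptions, not LeapLogic's actual DQ suite.

```python
# Hedged sketch of a cascading validation pipeline over a transformed query.

def check_single_statement(sql: str):
    """Hypothetical check: reject multi-statement output."""
    return (";" not in sql.rstrip(";"), "multiple statements in output")

def check_no_select_star(sql: str):
    """Hypothetical check: flag SELECT * for lineage reasons."""
    return ("*" not in sql, "SELECT * loses column-level lineage")

def cascade(sql: str, checks) -> list[str]:
    """Run checks in order; collect every failure reason."""
    failures = []
    for check in checks:
        ok, reason = check(sql)
        if not ok:
            failures.append(reason)
    return failures

issues = cascade("SELECT * FROM t", [check_single_statement, check_no_select_star])
print(issues)  # ['SELECT * loses column-level lineage']
```

Collecting all failure reasons, rather than stopping at the first, is what lets a later model attempt a single correction that addresses every issue at once.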
Intelligent Recommendations for Conversational Interfaces
The system provides intelligent recommendations that extend to conversational interfaces, helping streamline analytics and BI workloads. These recommendations are designed to optimize query handling and resource utilization, which in turn lowers total cost of ownership (TCO).
By embedding AI-driven suggestions into conversations, LeapLogic helps users navigate complex workloads more efficiently. This improves both productivity and decision-making while keeping infrastructure and licensing costs in check.
Transparency and Auditability
LeapLogic ensures transparency and auditability into how AI transforms or rewrites code. Users can trace why a particular output was generated, which rules were applied, and how decisions were made during the process.
This visibility builds trust and enables teams to review, validate, or refine transformations with confidence. It also supports compliance with enterprise governance and audit requirements, making AI-driven modernization both explainable and verifiable.
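The kind of record that makes such traceability possible can be sketched as a structured audit entry capturing what changed, which engine produced it, and why. The field names and values below are assumptions for illustration, not LeapLogic's audit schema.

```python
# Illustrative audit record for one AI-driven transformation.
import json
from datetime import datetime, timezone

def audit_record(original, transformed, engine, rationale):
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "engine": engine,        # e.g. a rule id or a model id (hypothetical values)
        "original": original,
        "transformed": transformed,
        "rationale": rationale,  # why the rewrite was made
    }

rec = audit_record(
    "SEL * FROM t",
    "SELECT * FROM t",
    engine="deterministic-rule:sel-keyword",
    rationale="Teradata 'SEL' shorthand expanded to ANSI 'SELECT'.",
)
print(json.dumps(rec, indent=2))
```

Persisting one such record per transformation is what lets reviewers and auditors reconstruct, after the fact, how and why each piece of code was rewritten.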
Responsible AI Practices
Responsible AI is embedded into the platform’s design. LeapLogic applies fine-tuning only at the session level, ensuring that learning does not extend across customers or projects. This means no cross-customer sharing of data, knowledge, or transformations.
By isolating sessions in this way, the platform prioritizes privacy and confidentiality. Customers can rely on AI-powered transformations while knowing that their data and code remain fully secure and protected from external influence.
For detailed steps on transforming your workloads with LeapFusion, click here.