UELTX2: Unreal to LTX-2 Curated Video Generation
Generate cinematic placeholders, animated textures, and dynamic VFX locally inside Unreal Engine 5. A native bridge for Lightricks’ open-source LTX-2 model via ComfyUI. No cloud fees. 100% Private.


1. The Dynamic Asset Pipeline (VFX & Textures)

The Problem: Creating high-quality animated textures (Flipbooks/SubUVs) for fire, water, magic portals, or sci-fi screens requires complex simulations in Houdini or EmberGen, which take hours to set up and render.
The LTX-2 Solution:

  • Workflow: You prompt LTX-2: "Seamless looping video of green toxic smoke, top down view, 4k."

  • Result: You get a video file.

  • UE5 Integration: You automatically convert that video into a Flipbook Texture and plug it immediately into a Niagara Particle System.

  • Rationale: You can generate unique, specific VFX assets in 2 minutes inside the editor, rather than spending 4 hours in simulation software.
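
In plugin terms, the whole loop is one subsystem call. A minimal sketch, assuming UUELTX2Subsystem is exposed as an editor subsystem and that the two FString parameters of GenerateVideoSimple (see the changelog below) are the prompt and negative prompt:

```cpp
// Sketch: queue a T2V generation for a VFX flipbook source.
// Assumes UUELTX2Subsystem is an editor subsystem; the (prompt, negative prompt)
// parameter meaning of GenerateVideoSimple is an assumption from the changelog.
#include "Editor.h"
#include "UELTX2Subsystem.h"

void GenerateToxicSmokeVFX()
{
    if (UUELTX2Subsystem* LTX = GEditor->GetEditorSubsystem<UUELTX2Subsystem>())
    {
        LTX->GenerateVideoSimple(
            TEXT("Seamless looping video of green toxic smoke, top down view, 4k"),
            TEXT("blurry, watermark, text"));
    }
}
```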

2. Rapid Pre-Visualization (Animatics)

The Problem: During the "Greyboxing" or layout phase, level designers usually put static mannequins in the scene. To visualize a cutscene or a complex event (e.g., a building collapsing), animators must create a "blocking" animation, which takes days.
The LTX-2 Solution:

  • Workflow: Place a 2D Plane in the level. Select it, type "Cyberpunk building collapsing into dust," and hit Generate.

  • Result: The plane plays the generated video.

  • Rationale: Directors and level designers can visualize the timing, mood, and lighting of dynamic events with zero animator hours. It allows for "Disposable Ideation": generating 20 variations of a cutscene idea before committing to the expensive 3D production.

3. In-World Screens and Moving Backgrounds

The Problem: Creating content for TVs, holograms, or distant dynamic backdrops (like a busy city outside a window) in a game world is tedious. Rendering them in real-time 3D wastes performance (Draw Calls).
The LTX-2 Solution:

  • Workflow: Use Image-to-Video. Take a screenshot of your game assets, prompt LTX-2 to "animate traffic," or "add tv static and glitch effects."

  • Result: A video file you map to a MediaTexture.

  • Rationale: You get high-fidelity, "3D-looking" motion on a cheap 2D surface. This saves massive amounts of GPU frame time compared to rendering actual 3D cars in the distance or simulating complex UI glitches.
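
Wiring the result up is standard UE Media Framework work; the plugin's auto-import produces the FileMediaSource for you. A minimal sketch using stock engine API (the asset references are hypothetical):

```cpp
// Sketch: play a generated .mp4 (imported as a FileMediaSource) on a MediaTexture.
// Stock Media Framework API from the MediaAssets module; nothing plugin-specific.
#include "MediaPlayer.h"
#include "MediaTexture.h"
#include "FileMediaSource.h"

void PlayGeneratedBackdrop(UMediaPlayer* Player, UMediaTexture* Texture, UFileMediaSource* Source)
{
    Texture->SetMediaPlayer(Player);
    Texture->UpdateResource();   // rebuild the texture resource for the new player
    Player->SetLooping(true);    // backdrops and screens should loop seamlessly
    Player->OpenSource(Source);  // async; playback starts on open (PlayOnOpen defaults to true)
}
```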

4. IP Privacy and Cost (The "Local" Rationale)

The Point: Why LTX-2 specifically, and not Veo/Runway/Sora/Kling?

  • Security: Cloud tools like Runway or Midjourney require uploading your concept art to external servers, where an IP leak is always possible. LTX-2 runs locally; your unannounced game assets never leave your internal network.

  • Cost: No subscription fees. If you iterate 500 times on a texture, it costs you electricity, not credits.

  • Modifiability: Since LTX-2 is open weights, you can Fine-Tune (LoRA) the model on your game's specific art style. You can train it to understand what your main character looks like, ensuring the generated placeholders actually look like your game.

Summary: The "Hybrid" Workflow

The ultimate point of this integration is the Hybrid Workflow.

Instead of:

Model -> Rig -> Animate -> Render -> Import

The workflow becomes:

Block out rough scene in UE5 -> Screenshot -> LTX-2 (Image-to-Video) -> Final Polish.

You use UE5 for structure and perspective (which it excels at) and LTX-2 for detail, texture, and complex movement (which is hard to animate manually).

Additional information

🎥 The First Native LTX-2 Integration for Unreal Engine 5

Stop waiting for cloud queues and paying subscription fees for generative video. UELTX2 brings the power of Lightricks’ state-of-the-art LTX-2 (Diffusion Transformer) model directly into your Editor workflow.

This plugin acts as a native bridge between Unreal Engine 5 and your local ComfyUI instance, allowing you to generate 4K-ready video assets, moving textures, and pre-vis animatics without ever leaving the Viewport.

⚡ Use Cases

  • Dynamic Textures (I2V): Right-click any static texture in your Content Browser and use Image-to-Video to bring it to life. Turn a static photo of smoke into a looping flipbook source, or animate a TV screen texture in seconds.

  • Rapid Greyboxing: Need a cinematic of a building collapsing for a level mock-up? Generate it in 60 seconds with Text-to-Video instead of spending 3 days animating blocks.

  • VFX Elements: Generate "magical aura" or "fluid simulation" video files to plug directly into Niagara Media Plates.

🔒 Enterprise-Grade Privacy

Your prompt data and game assets never leave your local network. UELTX2 runs on localhost, making it safe for NDA-bound projects where uploading concept art to external cloud servers is prohibited.

🛠 Core Features

  • Native Editor Panel: A dockable Editor Utility Widget for streamlined prompting.

  • Context Menu Integration: Right-click Actions for "Animate this Texture" (I2V).

  • Auto-Import Pipeline: Generated .mp4 files are automatically imported as FileMediaSource assets, ready to play.

  • Hybrid Workflow: Uses UE5 for the UI and ComfyUI for the heavy lifting (VRAM management).

  • Open Architecture: Fully customizable JSON templates. Change samplers, steps, or resolutions by editing the included JSON files.

⚠️ Technical Requirements (Read Before Buying)

This plugin requires external software setup. It does not contain the AI model itself (which is 8GB+).

  1. Hardware: NVIDIA GPU with 12GB VRAM (Minimum for Quantized GGUF) or 24GB VRAM (Recommended for Full Precision).

  2. Backend: You must have ComfyUI installed locally.

  3. Model: You must download the LTX-2 weights (free via HuggingFace).

  4. OS: Windows 10/11 (Linux not supported in v1.0).

📦 What's Included

  • UELTX2 Plugin: C++ Runtime & Editor Modules.

  • Workflow Templates: Optimized .json workflows for LTX-2 Text-to-Video and Image-to-Video.

  • Documentation: Comprehensive HTML guide on setting up the local server and acquiring the model weights.

⚖️ Legal Notice

  • Plugin License: You are purchasing the UELTX2 Bridge Plugin code (Proprietary via Fab).

  • Model License: The LTX-2 Model weights are developed by Lightricks and are subject to their Open Access License. Users are responsible for adhering to the model's usage terms.

(c) Andras Gregori @ GregOrigin 2026

Hello all! Major 1.1 update released today. This Pro branch is only available on Fab.
Full Changelog and revised Readme pasted here for reference. Thank you for using UELTX2.

UELTX2 Plugin Changelog

2026-02-05

Major Architecture Overhaul

Backend Abstraction System

  • NEW: Created IUELTX2Backend interface (UELTX2Backend.h) with pure virtual methods for backend implementations
  • NEW: Created UUELTX2BackendBase base class with common functionality (HTTP helpers, delegate management)
  • NEW: Implemented UUELTX2ComfyUIBackend - Full ComfyUI implementation with:
    • REST API integration (/prompt, /history/{id}, /view, /interrupt)
    • WebSocket connection for real-time progress tracking (ws://host:port/ws)
    • Automatic reconnection with exponential backoff (1s → 30s max, 10 attempts; see the sketch after this list)
    • GGUF and SafeTensors model format detection
    • Workflow template system with placeholder replacement
  • NEW: Implemented UUELTX2SwarmUIBackend - SwarmUI implementation with:
    • Session management with auto-refresh for expired sessions
    • REST API integration (/API/GetNewSession, /API/GenerateText2Image)
    • Output download from /View/ endpoint
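
For reference, the backoff curve above in code form (names are illustrative; only the numbers come from this changelog):

```cpp
// Sketch: the reconnection policy described above (1s base, doubling, 30s cap, 10 attempts).
// Function names are illustrative, not the plugin's actual ones.
#include "Math/UnrealMathUtility.h"

bool ShouldAttemptReconnect(int32 Attempt)
{
    return Attempt < 10;
}

float NextReconnectDelaySeconds(int32 Attempt)
{
    // 1s, 2s, 4s, 8s, 16s, 30s, 30s, ...
    return FMath::Min(30.0f, FMath::Pow(2.0f, static_cast<float>(Attempt)));
}
```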

Model Management

  • NEW: Created UUELTX2ModelScanner utility class for scanning model directories
  • Supports both .gguf and .safetensors model formats
  • Returns TArray<FLTX2ModelInfo> with file size, path, and display name
  • Configurable model directories in Project Settings
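
A sketch of the directory scan using stock engine file APIs; building the FLTX2ModelInfo entries (size, path, display name) from the found paths is omitted since the struct's exact fields aren't shown here:

```cpp
// Sketch: collect candidate model files from the configured directories.
// Uses stock IFileManager; converting paths to FLTX2ModelInfo is omitted.
#include "HAL/FileManager.h"

TArray<FString> FindModelFiles(const TArray<FString>& ModelDirectories)
{
    TArray<FString> Found;
    for (const FString& Dir : ModelDirectories)
    {
        // Accumulate both supported formats (bClearFileNames = false keeps prior results)
        IFileManager::Get().FindFilesRecursive(Found, *Dir, TEXT("*.gguf"), true, false, false);
        IFileManager::Get().FindFilesRecursive(Found, *Dir, TEXT("*.safetensors"), true, false, false);
    }
    return Found;
}
```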

Type System (UELTX2Types.h)

  • NEW: Created centralized types header with all enums, structs, and delegates
  • EUELTX2BackendType - ComfyUI, SwarmUI, AutoDetect
  • EUELTX2GenerationMode - TextToVideo, ImageToVideo
  • EUELTX2Sampler - euler, euler_ancestral, dpmpp_2m, dpmpp_2m_sde, lcm, ddim
  • EUELTX2GenerationState - Idle, Connecting, Submitting, Queued, Generating, Downloading, Importing, Completed, Failed, Cancelled
  • FLTX2ModelInfo - Model file metadata
  • FLTX2GenerationParams - Full generation parameters struct
  • FLTX2ProgressInfo - Progress tracking struct
  • Multicast delegates for all events

Workflow Templates

SafeTensors Workflows

  • MODIFIED: LTX2_T2V.json - Updated with placeholder tokens
  • MODIFIED: LTX2_I2V.json - Updated with placeholder tokens

GGUF Workflows

  • NEW: LTX2T2VGGUF.json - Text-to-Video workflow using UnetLoaderGGUF, CLIPLoaderGGUF
  • NEW: LTX2I2VGGUF.json - Image-to-Video workflow for GGUF models

Placeholder Tokens

All workflows now support these placeholders:

  • {{MODEL_NAME}} - Model filename
  • {{PROMPT}} - User prompt
  • {{NEGATIVE_PROMPT}} - Negative prompt
  • {{WIDTH}}, {{HEIGHT}} - Resolution
  • {{FRAMES}}, {{FPS}} - Frame count and rate
  • {{STEPS}} - Inference steps
  • {{CFG}} - CFG scale
  • {{SEED}} - Random seed
  • {{SAMPLER}} - Sampler name
  • {{DENOISE}} - Denoise strength (I2V)
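
Token injection is plain string replacement over the template text. A minimal sketch (the plugin's actual implementation may differ, and a production version must JSON-escape user text):

```cpp
// Sketch: inject generation parameters into a workflow template.
// Illustrative only; a real implementation must JSON-escape the prompt.
FString InjectWorkflowParams(FString Template, const FString& Prompt,
                             int32 Width, int32 Height, int32 Seed)
{
    Template.ReplaceInline(TEXT("{{PROMPT}}"), *Prompt);
    Template.ReplaceInline(TEXT("{{WIDTH}}"),  *FString::FromInt(Width));
    Template.ReplaceInline(TEXT("{{HEIGHT}}"), *FString::FromInt(Height));
    Template.ReplaceInline(TEXT("{{SEED}}"),   *FString::FromInt(Seed));
    // ...same pattern for the remaining tokens ({{FRAMES}}, {{FPS}}, {{STEPS}}, etc.)
    return Template;
}
```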

Subsystem Improvements (UELTX2Subsystem)

Job Queue System

  • NEW: Sequential job queue (TArray<FLTX2GenerationParams> JobQueue)
  • ProcessNextJob() processes queue items one at a time
  • Queue position shown in progress updates
  • Failed jobs don't block the queue
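
The queue discipline is simple: one job in flight, and completion (or failure) re-arms the pump. A sketch around the changelog's JobQueue/ProcessNextJob names (bJobInFlight and SubmitToBackend are illustrative):

```cpp
// Sketch: sequential job pump. JobQueue and ProcessNextJob are the changelog's
// names; bJobInFlight and SubmitToBackend are illustrative.
#include "UELTX2Types.h"

struct FJobPumpSketch
{
    TArray<FLTX2GenerationParams> JobQueue;
    bool bJobInFlight = false;

    void ProcessNextJob()
    {
        if (bJobInFlight || JobQueue.IsEmpty())
        {
            return;
        }
        bJobInFlight = true;
        const FLTX2GenerationParams Job = JobQueue[0];
        JobQueue.RemoveAt(0);
        // The completion handler (success OR failure) clears bJobInFlight and
        // calls ProcessNextJob() again, so failed jobs never block the queue.
        SubmitToBackend(Job);
    }

    void SubmitToBackend(const FLTX2GenerationParams& Job);
};
```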

API Changes

  • NEW: GenerateVideo(const FLTX2GenerationParams& Params) - Full parameter control
  • MODIFIED: GenerateVideoSimple(FString, FString) - Uses default settings from Project Settings
  • NEW: ScanForModels() - Scans configured directories
  • NEW: SelectModel(int32 ModelIndex) - Selects model by index
  • NEW: GetAvailableModels() - Returns scanned models
  • NEW: GetSelectedModel() - Returns currently selected model
  • NEW: CachedPrompt property for UI state restoration
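
A sketch of the full-control path; the field names on FLTX2GenerationParams are assumed to mirror the workflow placeholder tokens and are not the verified struct layout:

```cpp
// Sketch: full-parameter generation. Field names are assumptions that mirror
// the placeholder tokens; check UELTX2Types.h for the real layout.
#include "Editor.h"
#include "UELTX2Subsystem.h"
#include "UELTX2Types.h"

void GenerateAnimatic()
{
    FLTX2GenerationParams Params;
    Params.Prompt = TEXT("Cyberpunk building collapsing into dust");
    Params.Width  = 768;
    Params.Height = 512;
    Params.Frames = 97;
    Params.FPS    = 24;
    Params.Steps  = 30;
    Params.Seed   = 42;

    if (UUELTX2Subsystem* LTX = GEditor->GetEditorSubsystem<UUELTX2Subsystem>())
    {
        LTX->GenerateVideo(Params);
    }
}
```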

Progress Tracking

  • Prompt ID tracking for exact output file correlation
  • Uses /history/{prompt_id} API instead of directory polling
  • Real-time step-by-step progress via WebSocket
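
The history lookup is a plain GET against the backend. A sketch with UE's HTTP module (the endpoint is ComfyUI's; the handler body is illustrative):

```cpp
// Sketch: query ComfyUI's /history/{prompt_id} endpoint to correlate outputs.
// Requires the "HTTP" module; JSON parsing of the response is omitted.
#include "CoreMinimal.h"
#include "HttpModule.h"
#include "Interfaces/IHttpRequest.h"
#include "Interfaces/IHttpResponse.h"

void QueryPromptHistory(const FString& ComfyURL, const FString& PromptId)
{
    TSharedRef<IHttpRequest, ESPMode::ThreadSafe> Request = FHttpModule::Get().CreateRequest();
    Request->SetURL(ComfyURL / TEXT("history") / PromptId); // e.g. http://127.0.0.1:8188/history/<id>
    Request->SetVerb(TEXT("GET"));
    Request->OnProcessRequestComplete().BindLambda(
        [](FHttpRequestPtr, FHttpResponsePtr Response, bool bSucceeded)
        {
            if (bSucceeded && Response.IsValid())
            {
                // Parse the JSON body here to locate this prompt's output file.
                UE_LOG(LogTemp, Log, TEXT("History: %s"), *Response->GetContentAsString());
            }
        });
    Request->ProcessRequest();
}
```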

Settings Expansion (UELTX2Settings)

Connection Settings

  • BackendType - Select ComfyUI, SwarmUI, or AutoDetect
  • ComfyURL - Server URL
  • ComfyOutputDir - Fallback output directory
  • PollingInterval - Status polling interval

Model Settings

  • ModelDirectories - Array of directories to scan for models
  • SelectedModelPath - Persisted model selection

Generation Defaults

  • DefaultWidth, DefaultHeight - Resolution defaults
  • DefaultFrames, DefaultFPS - Frame settings
  • DefaultSteps - Inference steps
  • DefaultCFGScale - CFG scale
  • DefaultSampler - Sampler selection
  • DefaultDenoise - Denoise strength for I2V

Workflow Settings

  • CustomWorkflowTemplate - Path to custom T2V workflow
  • CustomI2VWorkflowTemplate - Path to custom I2V workflow

Asset Settings

  • bCreateMaterial - Auto-create material from video
  • bCreateSequence - Auto-create Level Sequence
  • bCreateVFX - Auto-create Niagara system
  • bDeleteSourceAfterImport - Cleanup source files
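
The settings surface follows the usual UDeveloperSettings pattern, which is how it lands under Project Settings > Game. A trimmed sketch with a few representative properties (specifiers and defaults are the conventional pattern, not the plugin's verified source):

```cpp
// Sketch: shape of a config-backed settings class like UELTX2Settings.
// Trimmed to representative properties; standard UDeveloperSettings pattern.
#include "Engine/DeveloperSettings.h"
#include "UELTX2SettingsSketch.generated.h"

UCLASS(Config=Game, DefaultConfig, meta=(DisplayName="UELTX2 Generation"))
class UUELTX2SettingsSketch : public UDeveloperSettings
{
    GENERATED_BODY()

public:
    UPROPERTY(Config, EditAnywhere, Category="Connection")
    FString ComfyURL = TEXT("http://127.0.0.1:8188");

    UPROPERTY(Config, EditAnywhere, Category="Models")
    TArray<FString> ModelDirectories;

    UPROPERTY(Config, EditAnywhere, Category="Generation Defaults")
    int32 DefaultSteps = 30;

    UPROPERTY(Config, EditAnywhere, Category="Assets")
    bool bCreateMaterial = true;
};
```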

Comprehensive UI Panel (UELTX2Panel)

Connection Status

  • TxtConnectionStatus - Visual connection indicator (green/red)
  • BtnTestConnection - Manual connection test

Model Selection

  • CmbModel - Dropdown populated with scanned models
  • BtnRefreshModels - Rescan model directories
  • TxtModelInfo - Shows format and file size

Resolution Controls

  • SpinWidth / SpinHeight - 64-2048, steps of 8
  • ChkLockAspect - Optional aspect ratio lock

Frame Controls

  • SpinFrames - 1-256 frames
  • SpinFPS - 1-60 fps
  • TxtDuration - Calculated video duration display

Generation Parameters

  • SpinSteps - 1-150 inference steps
  • SpinCFG - 1.0-30.0 CFG scale
  • CmbSampler - All sampler options
  • SpinDenoise - 0.0-1.0 for I2V mode (auto-enabled when source image selected)

Seed Controls

  • SpinSeed - Manual seed input (disabled when random checked)
  • BtnRandomSeed - Generate random seed
  • ChkRandomSeed - Use random seed checkbox

Prompt Input

  • InputPrompt - Multi-line prompt input with state restoration
  • InputNegativePrompt - Optional negative prompt

Source Image (I2V)

  • BtnUseSelected - Import texture from Content Browser
  • BtnClearImage - Clear source image (switch to T2V mode)
  • ImgPreview - Preview of selected source image
  • TxtSourceImage - Source image name display

Progress Display

  • ProgressGeneration - Visual progress bar (0-100%)
  • TxtProgressPercent - Percentage text
  • TxtStepCounter - "Step X/Y" display
  • TxtQueueStatus - Queue position indicator
  • TxtStatus - Status messages

Action Buttons

  • BtnGenerate - Submit generation (disabled during generation)
  • TxtGenerateButton - Button text (changes to "Generating...")
  • BtnCancel - Cancel current job (enabled during generation)

UI State Management

  • All inputs disabled during generation
  • Automatic state restoration on panel open
  • Proper delegate binding/unbinding in NativeConstruct/NativeDestruct
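
The bind/unbind pattern is the standard UUserWidget one. A sketch, as a fragment of a class derived from the plugin's panel (handler name is illustrative; a dynamic multicast delegate would use AddDynamic/RemoveDynamic instead):

```cpp
// Sketch: delegate lifetime management in the panel. OnGenerationCompleted is
// named in this changelog; the handler and class names are illustrative.
void UUELTX2PanelSketch::NativeConstruct()
{
    Super::NativeConstruct();
    if (UUELTX2Subsystem* LTX = GEditor->GetEditorSubsystem<UUELTX2Subsystem>())
    {
        // AddUObject assumes a native multicast delegate; use AddDynamic for dynamic ones.
        LTX->OnGenerationCompleted.AddUObject(this, &UUELTX2PanelSketch::HandleGenerationCompleted);
    }
}

void UUELTX2PanelSketch::NativeDestruct()
{
    if (GEditor)
    {
        if (UUELTX2Subsystem* LTX = GEditor->GetEditorSubsystem<UUELTX2Subsystem>())
        {
            LTX->OnGenerationCompleted.RemoveAll(this); // no dangling bindings after close
        }
    }
    Super::NativeDestruct();
}
```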

Build System Changes

UELTX2.Build.cs

  • Added WebSockets module dependency for WebSocket support

UELTX2Editor.Build.cs

  • All necessary UMG/Slate dependencies already present

Bug Fixes

Compilation Fixes

  • Fixed FGenericPlatformHttp::UrlEncode - Added correct include
  • Fixed HTTP include path - Changed to GenericPlatform/GenericPlatformHttp.h
  • Fixed missing CachedPrompt member
  • Fixed GenerateVideo signature mismatch in UI code
  • Fixed OnGenerationCompleted delegate signature mismatch

Functional Fixes

  • Fixed prompt ID tracking for exact output correlation
  • Fixed model path handling for GGUF vs SafeTensors
  • Fixed WebSocket connection drops with auto-reconnection
  • Fixed concurrent generation issues with job queue
  • Fixed file correlation using /history/{prompt_id} API

Files Added

  • Source/UELTX2/Public/UELTX2Types.h
  • Source/UELTX2/Public/UELTX2Backend.h
  • Source/UELTX2/Private/UELTX2Backend.cpp
  • Source/UELTX2/Public/UELTX2ComfyUIBackend.h
  • Source/UELTX2/Private/UELTX2ComfyUIBackend.cpp
  • Source/UELTX2/Public/UELTX2SwarmUIBackend.h
  • Source/UELTX2/Private/UELTX2SwarmUIBackend.cpp
  • Source/UELTX2/Public/UELTX2ModelScanner.h
  • Source/UELTX2/Private/UELTX2ModelScanner.cpp
  • Content/Workflows/LTX2T2VGGUF.json
  • Content/Workflows/LTX2I2VGGUF.json

Files Modified

  • Source/UELTX2/Public/UELTX2Settings.h
  • Source/UELTX2/Private/UELTX2Settings.cpp
  • Source/UELTX2/Public/UELTX2Subsystem.h
  • Source/UELTX2/Private/UELTX2Subsystem.cpp
  • Source/UELTX2Editor/Public/UELTX2Panel.h (major rewrite)
  • Source/UELTX2Editor/Private/UELTX2Panel.cpp (major rewrite)
  • Source/UELTX2/UELTX2.Build.cs
  • Content/Workflows/LTX2_T2V.json
  • Content/Workflows/LTX2_I2V.json

Usage Notes

Creating the UI Widget

  1. In Unreal Editor, create a new Editor Utility Widget
  2. Set parent class to UUELTX2Panel
  3. Add all required widgets with exact names matching the BindWidget properties
  4. Required widgets (non-optional):
    • TxtConnectionStatus, BtnTestConnection
    • CmbModel, BtnRefreshModels
    • SpinWidth, SpinHeight, SpinFrames, SpinFPS
    • SpinSteps, SpinCFG, CmbSampler, SpinDenoise
    • SpinSeed, BtnRandomSeed
    • InputPrompt, BtnUseSelected, ImgPreview
    • ProgressGeneration, TxtStatus
    • BtnGenerate, BtnCancel
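
For reference, these names map to BindWidget members in the panel header. A sketch of the pattern (widget types are inferred from the Txt/Btn/Spin/Cmb prefixes and are assumptions):

```cpp
// Sketch: the BindWidget pattern that ties designer widget names to C++ members.
// Declared inside the panel class; types here are inferred, not verified.
UPROPERTY(meta = (BindWidget)) class UTextBlock* TxtConnectionStatus;
UPROPERTY(meta = (BindWidget)) class UButton* BtnGenerate;
UPROPERTY(meta = (BindWidget)) class USpinBox* SpinSteps;
UPROPERTY(meta = (BindWidget)) class UComboBoxString* CmbModel;
// Optional widgets (e.g., ChkLockAspect) would use BindWidgetOptional so their
// absence does not fail widget creation.
```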

Project Settings

Configure the plugin in Project Settings > Game > UELTX2 Generation:

  1. Set Backend Type to ComfyUI or SwarmUI
  2. Add model directories to Model Directories
  3. Set ComfyURL to your server address
  4. Adjust default generation parameters as needed

Model Support

  • GGUF models: Requires ComfyUI with GGUF nodes (UnetLoaderGGUF, CLIPLoaderGGUF)
  • SafeTensors models: Standard ComfyUI/SwarmUI support
  • Model format is auto-detected from file extension


Readme:

UELTX2: Curated Generation (LTX-2 Bridge for UE5)

UELTX2 is a native Unreal Engine 5 plugin that integrates Lightricks’ LTX-2, a state-of-the-art generative video model. It allows developers to generate 4K cinematic video assets, dynamic textures, and animatics directly inside the Unreal Editor.

This plugin operates as a "Bridge": it connects Unreal Engine (the frontend) to a local AI backend (ComfyUI or SwarmUI) that handles the heavy AI inference.


📋 Prerequisites

Hardware Requirements

LTX-2 is a heavy DiT (Diffusion Transformer) model.

  • GPU: NVIDIA RTX 3090 / 4090 (24GB VRAM) recommended for full precision.
    • Note: RTX 3060/4070 (12GB+) can run the model if using GGUF Quantization.
  • RAM: 32GB+ System RAM.
  • Storage: SSD with at least 20GB free space for models.

Software Requirements

  • Unreal Engine: 5.4 - 5.7+.
  • OS: Windows 10/11.
  • Backend: ComfyUI (Recommended) or SwarmUI.

🛠️ Phase 1: Setting up the Backend

You must have a local AI server running for this plugin to work.

1. Install ComfyUI (Recommended)

If you haven't installed it yet, the standalone portable version is recommended for Windows users to avoid Python dependency hell.

  1. Download the ComfyUI_windows_portable_nvidia.7z from the Official GitHub.
  2. Extract it to a short path, e.g., C:\ComfyUI.

2. Install The Node Manager (Crucial)

  1. Go to C:\ComfyUI\ComfyUI\custom_nodes.
  2. Open a terminal (CMD) in this folder.
  3. Run: git clone https://github.com/ltdrdata/ComfyUI-Manager.git
  4. Restart ComfyUI.

3. Install Required Custom Nodes

Unreal needs specific nodes to be present in ComfyUI to handle the JSON payloads.

  1. Open ComfyUI in your browser (http://127.0.0.1:8188).
  2. Click "Manager" in the floating menu.
  3. Click "Install Custom Nodes".
  4. Search for and install the following:
    • ComfyUI-VideoHelperSuite (Required for saving MP4s).
    • ComfyUI-GGUF (Highly recommended for LTX-2 memory optimization).
    • ComfyUI-Inspire-Pack (Optional, keeps workflows clean).

📥 Phase 2: Downloading the Models

You need the specific LTX-2 weights.

Option A: High-End GPUs (24GB+ VRAM)

  1. Download standard weights from HuggingFace Lightricks/LTX-2.
  2. Place ltx-video-2b-v0.9.safetensors in: C:\ComfyUI\ComfyUI\models\checkpoints\

Option B: Consumer GPUs (8GB - 16GB VRAM) - Recommended

  1. Download GGUF quantized weights (e.g., Q8_0 or Q5_K_M) from City96 or Unsloth.
  2. Place the .gguf file in: C:\ComfyUI\ComfyUI\models\checkpoints\

🔌 Phase 3: Plugin Configuration

Once compiled and enabled in your project:

  1. Open Project Settings > Game > UELTX2 Generation.
  2. Connection Settings:
    • Backend Type: Choose ComfyUI or SwarmUI.
    • Comfy URL: Default http://127.0.0.1:8188.
  3. Model Management:
    • Add your local model folders to Model Directories.
    • Select your preferred checkpoint from the Selected Model dropdown.
  4. Generation Defaults:
    • Set default Resolution (e.g., 768x512), FPS (24), and Sampling Steps (20-50).
  5. Automation:
    • Enable Create Material, Create Sequence, and Create Niagara to auto-generate usable assets upon video completion.

🚀 Phase 4: Usage Workflows

1. Text-to-Video (T2V)

  1. In the Level Editor, click the "LTX-2" button on the main toolbar.
  2. Enter a prompt (e.g., "Cyberpunk city, raining, neon lights").
  3. Click Generate.
  4. The result will be imported, and if configured, a Level Sequence will be created for instant drag-and-drop pre-visualization.

2. Image-to-Video (I2V) - "Living Texture"

  1. Select a texture in the Content Browser.
  2. In the LTX-2 Panel, click Set Source from Selection.
  3. Generate.
  4. The plugin creates a loopable video, a Media Player, and a Material Instance, ready to be applied to meshes.

3. VFX Generation

  1. Prompt for "Black background, isolated fire/smoke...".
  2. The plugin detects the context and creates an Additive Material and a Niagara System automatically.

🧠 Custom Workflows (Advanced)

You can customize the underlying JSON templates found in Plugins/UELTX2/Content/Workflows/. The plugin injects values into specific placeholders:

  • {{PROMPT}}, {{NEGATIVE_PROMPT}}
  • {{WIDTH}}, {{HEIGHT}}, {{FPS}}, {{FRAMES}}
  • {{SEED}}, {{STEPS}}, {{CFG}}, {{SAMPLER}}
  • {{MODEL_NAME}}, {{DENOISE}}

Use these to add ControlNets, LoRAs, or Upscalers to your pipeline.


⚠️ Troubleshooting

Q: "Connection Refused" in Output Log? A: Ensure your Backend (ComfyUI/SwarmUI) is running. Check that the port matches your Project Settings and the Backend Type is correctly selected.

Q: ComfyUI shows red nodes? A: You are missing nodes. Drag LTX2_T2V.json into ComfyUI to identify missing custom nodes (likely VideoHelperSuite).

Q: Out of Memory (OOM)? A: Use GGUF quantized models (Q5km) and ensure ComfyUI-GGUF is installed.


📄 License

The LTX-2 Model weights are subject to Lightricks' Open Access License.