GANG OF DRAGON - Action Scene | Unreal Engine 5.7 Fanmade Cinematic

Overview

This project serves as a comprehensive exploration of modern cinematic workflows within Unreal Engine. In this article, I will break down the end-to-end pipeline used to create this fan-made short, from AI-assisted character modeling to advanced rendering techniques.

[Image 1]

1. Character Modeling & MetaHuman Integration

For the main character, Ma Dong-Seok, I began by generating visual references using ChatGPT 5.2. The goal at this stage was not to create a final asset but to produce high-quality, Unreal Engine-style references that could support later 3D reconstruction.

“3D head model of Ma Dong Seok, rendered in Unreal Engine style, clay mode material”

I continued generating additional views (left, right, and three-quarter angles).

The purpose of generating multiple angles was to give AI-based 3D model generation enough visual data to interpret the facial structure accurately. For the actual 3D generation, I used Hitem3D to convert these 2D references into a base mesh, running several iterations to ensure quality.

After comparing the results, I selected the first generated head, as it provided the closest resemblance and the most stable topology for further work.

To integrate the head into Unreal Engine’s character system, I used a face-form warp approach: retopologizing the default MetaHuman head mesh to match the Ma Dong-Seok head shape, ensuring compatibility with MetaHuman rigs and facial systems.

After that, I created a new MetaHuman character and replaced the default head with the customized mesh.

Costume

For clothing adjustments, I used MetaTailor to refine the outfit directly on the character. The costume itself comes from the Tokyo Back Alley pack, which I had previously used in my Batman short film.

[Image 2]

MetaTailor allowed me to quickly adapt proportions and fit without reauthoring the entire garment in an external DCC.

Environment

The initial concept for this film came from discovering the Foggy Street environment pack by Cosmos. The atmosphere and lighting conditions of this pack immediately suggested close-range, gritty hand-to-hand combat shots.

Because of this environment’s tone, I chose Gang of Dragon as the thematic direction, as it aligns strongly with my experience in staging action-focused cinematics.

The environment pack can be found here.

Animation

For character movement and combat, I used the ActorCore – Hand-to-Hand Combat animation pack. Multiple animations were blended and rearranged to form continuous action sequences rather than isolated moves.

The key challenge here was maintaining motion continuity and believable weight transfer between animation clips.
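The crossfade idea behind that blending can be sketched in a few lines of Python. This is a generic illustration, not Unreal's actual blend-node implementation: the outgoing clip's influence eases out while the incoming clip eases in over an overlap window, which is what hides the seam between two combat moves.

```python
def crossfade(clip_a, clip_b, overlap):
    """Blend the tail of clip_a into the head of clip_b over `overlap` frames.

    Clips are lists of per-frame values (e.g. one joint's rotation angle).
    A linear weight eases clip_a out while clip_b eases in -- the basic
    mechanism behind smoothing the cut between two animation clips.
    """
    if overlap > min(len(clip_a), len(clip_b)):
        raise ValueError("overlap longer than a clip")
    blended = []
    for i in range(overlap):
        w = (i + 1) / (overlap + 1)            # ramps from ~0 to ~1
        a = clip_a[len(clip_a) - overlap + i]  # tail of the outgoing clip
        b = clip_b[i]                          # head of the incoming clip
        blended.append((1.0 - w) * a + w * b)
    return clip_a[:len(clip_a) - overlap] + blended + clip_b[overlap:]

# Two "clips" of a joint angle: one ends at 90 degrees, the next starts at 0.
seq = crossfade([0.0, 30.0, 60.0, 90.0], [0.0, 10.0, 20.0, 30.0], overlap=2)
```

In practice the same weight curve would be applied per bone (and with quaternion interpolation for rotations), but the continuity principle is identical.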

[Image 3]

A reference video explaining how to blend and connect these animations smoothly can be found here.

Camera

Camera motion was created using the CineMotion pack, primarily to introduce controlled camera shake and subtle instability during movement. This helped sell a handheld, in-the-action feeling rather than a perfectly stabilized cinematic look.
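As a rough illustration of how that handheld instability can be synthesized (a generic sketch, not how the CineMotion pack is implemented), summing a few non-repeating sine frequencies per rotation axis gives a cheap stand-in for noise-driven camera shake:

```python
import math

def handheld_shake(t, amplitude=1.0):
    """Return a (pitch, yaw) offset in degrees at time t (seconds).

    Each axis sums two incommensurate sine frequencies with phase offsets,
    so the motion never visibly repeats -- a common lightweight substitute
    for Perlin-noise-driven camera shake.
    """
    pitch = amplitude * (0.6 * math.sin(1.3 * t) + 0.3 * math.sin(3.7 * t + 1.0))
    yaw = amplitude * (0.6 * math.sin(1.1 * t + 2.0) + 0.3 * math.sin(4.3 * t))
    return pitch, yaw

# Sample two seconds of shake at 24 fps and add it to the camera rotation.
offsets = [handheld_shake(frame / 24.0) for frame in range(48)]
```

Keeping the amplitude small (well under a degree for most shots) preserves the "operator breathing" feel without making the footage read as unstable.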

Facial Capture

Facial animation was captured using a self-recorded video of my own face. The footage was then transferred into Unreal Engine using Live Link, allowing facial motion data to drive the MetaHuman facial rig.

[Image 4]

This approach provided fast iteration and acceptable emotional readability without requiring a dedicated facial capture setup.

Lighting

Lighting evaluation was done using a False Color plugin to analyze luminance distribution across each shot. This allowed me to objectively assess exposure balance before making creative lighting decisions.

The workflow involved duplicating the camera and applying a false color material to one version, then comparing it directly against the standard cinematic camera. This method helped maintain consistent lighting intensity and avoid overexposed or underlit areas across sequences.
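The idea behind false color can be sketched in a few lines: replace each pixel's brightness with a discrete band so exposure problems become obvious at a glance. The thresholds below are hypothetical, chosen for illustration rather than taken from the plugin's actual scale:

```python
def false_color_zone(luminance):
    """Map a linear luminance value (0..1) to an exposure-zone label.

    False color swaps continuous brightness for discrete bands, so crushed
    blacks and clipped whites stand out immediately. Band boundaries here
    are illustrative only.
    """
    zones = [
        (0.02, "purple (crushed blacks)"),
        (0.10, "blue (deep shadows)"),
        (0.45, "green (midtones)"),
        (0.70, "yellow (bright areas)"),
        (0.95, "orange (near clipping)"),
    ]
    for threshold, label in zones:
        if luminance < threshold:
            return label
    return "red (clipped whites)"
```

Comparing the banded view against the normal cinematic camera is exactly the duplicated-camera workflow described above: one camera shows the image, the other shows where its exposure actually sits.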

[Image 5]

Rendering

For rendering, I used Movie Render Graph instead of the traditional Movie Render Queue. I strongly recommend this approach, especially for cinematic projects.

Render Graph significantly reduces issues related to streaming pool size after long renders. More importantly, its node-based structure and render layer control provide far greater flexibility in post-production.

For compositing and final polish, I typically move the output into DaVinci Resolve.

Color Grading & Sound

Color management was handled using OCIO to ensure consistent color space across all renders. After that, I used the Dehancer plugin to achieve a more filmic color response.
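OCIO's job here is to guarantee that every render passes through the same well-defined color transforms. As a minimal, self-contained illustration of what one such transform looks like (this is the standard sRGB transfer function pair per IEC 61966-2-1, not this project's actual OCIO configuration):

```python
def srgb_to_linear(c):
    """Decode an sRGB-encoded channel value (0..1) to linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Encode a linear-light channel value (0..1) back to sRGB."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055
```

An OCIO config chains transforms like this one (plus matrix conversions between gamuts) so that a shot graded in Resolve interprets the render's pixel values exactly as Unreal wrote them.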

Sound effects and final audio adjustments were also completed in DaVinci Resolve, allowing me to keep the entire post-production pipeline centralized.

Conclusion

This project was created while I was teaching, and that context made the experience especially rewarding. Watching the cinematic evolve day by day, identifying its flaws after completion, and refining both artistic and technical decisions created a stronger sense of attachment to the work.

That feeling of building, adjusting, and understanding every part of the pipeline is something I personally value far more than pressing a single button on an AI video generation tool. Even with its imperfections, the final result feels earned, and that is what makes the process meaningful to me.


Hi @UAI-Thuan! Once the sword came out, the battle instantly felt more intense and engaging, and the camera angles really helped make the experience feel more alive.

Your main protagonist is really giving off some Sammo Hung vibes, and not just because I am a fan of him and his career. Fantastic work!


Thank you! Glad you like it :smiling_face_with_three_hearts:


@PresumptivePanda @The_M0ss_Man Hello, I’m back.