I built a custom NPR shader to achieve a hand-drawn, cel-shaded look. First, I feed a diffuse PBR texture into a basic lighting node to capture directional illumination. I then convert the lit result to RGB and quantize it into discrete bands for flat, toon-style shadows. To add crisp, semi-transparent highlights, I isolate lit regions in HSV space, generate a binary mask, and overlay a tinted lighting pass with adjustable intensity. For richer color variation, I paint detailed textures in Blender, mimicking the nuanced imperfections of 2D animation. Finally, I use a geometry node setup to draw consistent contour lines, dynamically adjusting their thickness based on the camera’s distance to the mesh, to reinforce the character’s silhouette.
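As a minimal sketch of the quantization step (Blender Python, assuming Eevee since the Shader to RGB node is Eevee-only, and an arbitrary three-band count), the lit result can be snapped into flat bands with a floor(x * n) / n chain of Math nodes:

```python
import bpy

# Minimal banding sketch: Diffuse BSDF -> Shader to RGB -> floor(x*n)/n.
# Shader to RGB only works in Eevee; the band count of 3 is an assumption.
mat = bpy.data.materials.new("ToonBands")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.clear()

diffuse = nodes.new("ShaderNodeBsdfDiffuse")
to_rgb = nodes.new("ShaderNodeShaderToRGB")        # capture directional lighting as a color
links.new(diffuse.outputs["BSDF"], to_rgb.inputs["Shader"])

bands = 3.0
mul = nodes.new("ShaderNodeMath"); mul.operation = 'MULTIPLY'
mul.inputs[1].default_value = bands
flo = nodes.new("ShaderNodeMath"); flo.operation = 'FLOOR'
div = nodes.new("ShaderNodeMath"); div.operation = 'DIVIDE'
div.inputs[1].default_value = bands
links.new(to_rgb.outputs["Color"], mul.inputs[0])  # quantize: floor(value * bands) / bands
links.new(mul.outputs["Value"], flo.inputs[0])
links.new(flo.outputs["Value"], div.inputs[0])

emit = nodes.new("ShaderNodeEmission")
out = nodes.new("ShaderNodeOutputMaterial")
links.new(div.outputs["Value"], emit.inputs["Color"])
links.new(emit.outputs["Emission"], out.inputs["Surface"])
```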
To handle dynamic lighting in complex scenes, I use a Mix node to blend multiple lighting presets, each with its own normal direction and light color, and keyframe the Mix factor to transition between them. To avoid tedious per-material adjustments, I drive the Mix factor with a single animation driver, letting me animate lighting changes across all relevant materials at once.
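A sketch of how one keyframable property can drive every material's Mix factor; the node name "LightingMix" and the scene property "lighting_mix" are illustrative assumptions, not the production names:

```python
import bpy

# One keyframed scene property drives the Mix factor in every material
# that contains the (hypothetical) "LightingMix" node.
scene = bpy.context.scene
scene["lighting_mix"] = 0.0            # keyframe this single property

for mat in bpy.data.materials:
    if not mat.use_nodes:
        continue
    mix = mat.node_tree.nodes.get("LightingMix")     # the preset-blending Mix node
    if mix is None:
        continue
    fcu = mix.inputs[0].driver_add("default_value")  # inputs[0] is the Fac/Factor socket
    var = fcu.driver.variables.new()
    var.name = "mix"
    var.targets[0].id_type = 'SCENE'
    var.targets[0].id = scene
    var.targets[0].data_path = '["lighting_mix"]'
    fcu.driver.expression = "mix"
```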
The retopologized geometry is ready for hand-painted texture creation.
With head movements, such as turns, I want the hair to respond naturally, driven by its own inertia. I’ve tested several approaches, including blend shapes, weight painting, keyframe animation, and the Wiggle2 plug-in. Blend shapes work best for complex, wind-driven motion and are ideal for shorter hair: they allow custom, exaggerated deformation, the same property that makes them a staple for facial expressions. In contrast, Wiggle2 lets me pin hair to specific regions and have its motion strictly influenced by a defined wind source. Finally, bone trails keep the hair following the head’s movement smoothly: by designating a chain of bones as a bone trail in the wiggle settings, I can streamline my workflow and adjust keyframe timing more efficiently.
I built the rest of the body animation by importing and editing Mixamo clips. Using the non-linear animation (NLA) editor, I inserted new actions into the timeline and seamlessly blended them with the original baked motion capture data. This workflow preserved the subtle nuances of the mocap while letting me tweak the gestures to achieve exactly the poses I wanted.
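A minimal sketch of this layering in Blender Python: the baked Mixamo action goes on a base NLA track, and an edit action is layered above it in COMBINE mode with blend-in/out for the seamless transition. The action names here are placeholders:

```python
import bpy

# Layer a gesture-edit action over baked Mixamo mocap via the NLA.
# "mixamo_walk" and "gesture_tweak" are hypothetical action names.
obj = bpy.context.active_object
ad = obj.animation_data or obj.animation_data_create()

base = ad.nla_tracks.new()
base.name = "MixamoBase"
base.strips.new("walk", 1, bpy.data.actions["mixamo_walk"])

layer = ad.nla_tracks.new()
layer.name = "GestureEdits"
strip = layer.strips.new("gesture", 1, bpy.data.actions["gesture_tweak"])
strip.blend_type = 'COMBINE'   # layer the tweak over the underlying mocap
strip.blend_in = 12            # frames of ease-in for a seamless blend
strip.blend_out = 12
```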
Armature animation is layered on top of the Mixamo action data; keyframes are adjusted only where needed to keep large-scale movement accurate.
Geometry node setup for uneven contour lines, built from flipped normals and an emissive black material.
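The graph in the figure isn't reproduced here, but the underlying inverted-hull idea can be sketched with a Solidify modifier standing in for the geometry nodes: an inflated shell with flipped normals, backface culling, and a black emissive material reads as a contour line.

```python
import bpy

# Inverted-hull outline sketch: a Solidify modifier as a stand-in for
# the geometry-node graph shown in the figure.
obj = bpy.context.active_object

outline = bpy.data.materials.new("OutlineBlack")
outline.use_nodes = True
nodes, links = outline.node_tree.nodes, outline.node_tree.links
nodes.clear()
emit = nodes.new("ShaderNodeEmission")
emit.inputs["Color"].default_value = (0.0, 0.0, 0.0, 1.0)
out = nodes.new("ShaderNodeOutputMaterial")
links.new(emit.outputs["Emission"], out.inputs["Surface"])
outline.use_backface_culling = True    # only the flipped back faces stay visible

obj.data.materials.append(outline)

mod = obj.modifiers.new("Outline", 'SOLIDIFY')
mod.thickness = 0.01                   # line weight; vary it for "uneven" contours
mod.offset = 1.0                       # push the shell outward
mod.use_flip_normals = True            # shell faces point inward
mod.use_rim = False
mod.material_offset = len(obj.data.materials) - 1   # shell picks up OutlineBlack
```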
Wiggle2 applies a force preset to a set of bone trails bound to the skin. Once the weight distribution and the direction of the force field are set, it generates the resulting animation automatically.
I wanted the sparks to ignite precisely as each firework burst begins to fade, so I chose Houdini over a standard geometry node setup. Houdini’s particle system makes it far easier to control spark behavior procedurally: using VEX expressions to drive attributes like velocity or age lets me fine-tune timing and motion based on input parameters such as time after birth or current speed. This approach gives me the flexibility to dial in exactly when and how the sparks emerge. The POP Group node takes the VEXpression ingroup = i@dead to separate out the selected particles that have decayed within a certain time range after birth. The Houdini output is rendered with Mantra, bloom is added in Adobe Premiere, and the result is imported into Blender as an emissive texture on the background plane for the final composite.
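A rough Python (hou) sketch of that selection step, using a POP Wrangle as a stand-in for the POP Group node; the network path, node names, and the extra 0.85 age threshold are assumptions for illustration:

```python
import hou

# Tag decayed particles into a group via a POP Wrangle, mirroring the
# ingroup = i@dead VEXpression described above. Paths/names are hypothetical.
popnet = hou.node("/obj/fireworks/popnet")

wrangle = popnet.createNode("popwrangle", "group_decayed")
wrangle.parm("snippet").set(
    "// i@dead flags particles the solver is about to remove;\n"
    "// @age/@life restricts the group to a window after birth.\n"
    "if (i@dead || @age / @life > 0.85)\n"
    "    i@group_spark_source = 1;\n"
)
```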
Another crucial element for achieving a convincing 2D animation look is the facial expression shading. 2D face lighting is intentionally simpler than its 3D counterpart—avoiding scattered, fragmented shadows—so instead of relying on diffuse-map shadows, we generate shading via signed distance field (SDF) maps that react to a specified light direction. By clamping the brightness values on the SDF map, we produce a simplified “lit” surface and derive a fake normal from the face’s UV layout, giving us clean, stylized shadows that move naturally with the character’s expressions.
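As a rough NumPy sketch of the thresholding step, assuming the usual convention for this technique (each texel of the SDF map stores the normalized light angle at which that texel falls into shadow); all names here are illustrative:

```python
import numpy as np

def face_shadow_mask(sdf, light_dir, face_forward, face_right):
    """Threshold an SDF face map against the light's horizontal angle.

    sdf: HxW float array in [0, 1], the hand-authored shadow-angle map.
    light_dir, face_forward, face_right: 2D unit vectors in the ground plane.
    Returns a binary HxW mask: 1 = lit, 0 = shadowed.
    """
    # Normalized horizontal angle between the light and the face's forward
    # axis: 0 = light from the front, 1 = light from behind.
    cos_a = float(np.dot(light_dir, face_forward))
    angle01 = np.arccos(np.clip(cos_a, -1.0, 1.0)) / np.pi

    # Mirror the map when the light comes from the other side of the face.
    lit_from_right = float(np.dot(light_dir, face_right)) > 0.0
    sample = sdf if lit_from_right else sdf[:, ::-1]

    # A hard, clamped threshold yields the flat, stylized lit region.
    return (sample > angle01).astype(np.float32)
```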
Facial expressions behave differently under PBR versus toon shading. With toon shading, expressions can be more stylized—sometimes exaggerated or not anatomically accurate—because the goal is simply to read well in the viewport. Rather than full facial rigging, blend shapes are a more efficient choice (as seen in games like Genshin Impact and Honkai: Star Rail) for dialing in precise, controllable expressions. To maximize variety, I initially create separate blend-shape targets for the eyebrows and for the rest of the face, then combine them in different ways. Each resulting expression is named clearly and organized so it’s easy to tweak during animation edits.
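A small Blender Python sketch of that combination scheme: composite expressions are defined as named mixes of brow and face targets, then keyed in one call. The shape-key and pose names are illustrative, not the production library:

```python
import bpy

# Compose named expressions from separate brow / face shape-key targets.
# Assumes the mesh already has these keys; all names are hypothetical.
face = bpy.context.active_object
keys = face.data.shape_keys.key_blocks

EXPRESSIONS = {
    "smile_soft":  {"Brow_Relax": 1.0, "Mouth_Smile": 0.6},
    "angry_shout": {"Brow_Frown": 1.0, "Mouth_Open": 0.8, "Eye_Narrow": 0.5},
}

def apply_expression(name, frame):
    """Set and keyframe every target that makes up one composite pose."""
    for key in keys:
        if key.name == "Basis":
            continue
        key.value = EXPRESSIONS[name].get(key.name, 0.0)  # zero out unused targets
        key.keyframe_insert("value", frame=frame)

apply_expression("smile_soft", frame=1)
apply_expression("angry_shout", frame=24)
```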
Shader node graph that takes in the SDF map, generated from a set of hand-painted shadow image sequences.
In my Houdini setup for a single firework, I configure the POP Network to drive both the burst and its trailing sparks. To animate the color shift over time, I insert a POP Color node and hook it up to a gradient ramp keyed to the particle’s lifespan. As the timeline advances, the gradient remaps the particles’ birth-to-death age to the ramp’s color stops, creating a smooth transition through the desired hues.
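A compact Python (hou) stand-in for that POP Color + ramp setup, written as a POP Wrangle that remaps normalized age through a color ramp channel; the path and parameter names are assumptions:

```python
import hou

# Stand-in for the POP Color node: remap normalized age (birth = 0,
# death = 1) through a color ramp. After creating the node, add the
# 'hue_ramp' spare parameter as a *color* ramp so the vector chramp()
# overload applies. Paths and names are hypothetical.
popnet = hou.node("/obj/fireworks/popnet")

color = popnet.createNode("popwrangle", "age_to_color")
color.parm("snippet").set(
    "float t = clamp(@age / @life, 0.0, 1.0);\n"
    "v@Cd = chramp('hue_ramp', t);\n"
)
```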
I examined character rigs from other games and crafted a custom library of blend shapes to meet our specific animation needs. By creating separate targets for the eyebrows and the eyelids, then merging them into composite poses, I ensure the two regions animate together seamlessly.