PROBLEM: Animations need expressive face/body parameters, but there is no animator on staff; the only source material is video reference of me performing each gesture.
WHY IT MATTERS: Per-frame face-parameter extraction is the bridge between human-recorded video and the Rive runtime. Without it, animations would be hand-keyed (slow) or generic (off-brand).
STACK: Python (RTMLib for 2D landmarks, MediaPipe-style pose model), JSON manifest, Rive runtime (Kotlin, Android)
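A minimal sketch of the extraction step: map each frame's 2D landmarks to normalized 0..1 face parameters and emit a JSON manifest the runtime can play back. The landmark indices, parameter names (`mouthOpen`, `eyeOpen`), gain constants, and manifest shape here are all placeholders, not the actual pipeline; real indices depend on the RTMLib/MediaPipe face model used, and the real params would match the Rive state-machine inputs.

```python
import json

# Placeholder landmark indices -- the real indices depend on the
# face model's landmark layout (these are NOT RTMLib's actual indices).
UPPER_LIP, LOWER_LIP = 0, 1
LEFT_EYE_TOP, LEFT_EYE_BOTTOM = 2, 3
FACE_TOP, FACE_BOTTOM = 4, 5


def _dist(a, b):
    """Euclidean distance between two (x, y) landmark points."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5


def frame_params(landmarks):
    """Convert one frame's 2D landmarks to normalized face params.

    Distances are divided by face height so the params are invariant
    to how close I stood to the camera. Gains (4.0, 10.0) are tuning
    guesses to spread typical motion over the 0..1 range.
    """
    face_h = _dist(landmarks[FACE_TOP], landmarks[FACE_BOTTOM]) or 1.0
    mouth = _dist(landmarks[UPPER_LIP], landmarks[LOWER_LIP]) / face_h
    eye = _dist(landmarks[LEFT_EYE_TOP], landmarks[LEFT_EYE_BOTTOM]) / face_h
    return {
        "mouthOpen": round(min(mouth * 4.0, 1.0), 4),
        "eyeOpen": round(min(eye * 10.0, 1.0), 4),
    }


def write_manifest(frames, fps, path):
    """frames: list of per-frame landmark lists -> JSON manifest on disk."""
    manifest = {"fps": fps, "frames": [frame_params(lms) for lms in frames]}
    with open(path, "w") as f:
        json.dump(manifest, f, indent=2)
    return manifest
```

On the Kotlin side, the runtime would read `frames[i]` at `fps` and feed each value into the matching Rive state-machine number input per frame.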