BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Seoul
X-LIC-LOCATION:Asia/Seoul
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:KST
DTSTART:18871231T000000
RDATE:19881009T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20230103T035309Z
LOCATION:Room 325-AB\, Level 3\, West Wing
DTSTART;TZID=Asia/Seoul:20221207T140000
DTEND;TZID=Asia/Seoul:20221207T153000
UID:siggraphasia_SIGGRAPH Asia 2022_sess162_papers_191@linklings.com
SUMMARY:Video-driven Neural Physically-based Facial Asset for Production
DESCRIPTION:Technical Communications, Technical Papers\n\nVideo-driven Neural Physically-based Facial Asset for Production\n\nZhang, Zeng, Zhang, Lin, Cao...\n\nProduction-level workflows for producing convincing 3D dynamic human faces have long relied on an assortment of labor-intensive tools for geometry and texture generation, motion capture and rigging, and expression synthesis. Recent neural approaches automate individual components, but the corresponding latent representations cannot provide artists with the explicit controls of conventional tools. In this paper, we present a new learning-based, video-driven approach for generating dynamic facial geometries with high-quality physically-based assets. Its two key components are well-structured latent spaces, obtained through dense temporal sampling from videos, and explicit facial expression controls that regulate those latent spaces. For data collection, we construct a hybrid multiview-photometric capture stage coupled with ultra-fast video cameras to obtain raw 3D facial assets. We then model facial expression, geometry, and physically-based textures using separate VAEs, imposing a global multi-layer perceptron (MLP) based expression mapping across the latent spaces of the respective networks to preserve the characteristics of each attribute while maintaining explicit control over facial geometry and texture generation. We also introduce the idea of modeling the delta information as wrinkle maps for the physically-based textures in our texture VAE, achieving high-quality 4K rendering of dynamic textures. We demonstrate our approach in high-fidelity performer-specific facial capture and in cross-identity facial motion transfer and retargeting. In addition, our multi-VAE-based neural asset, along with its fast adaptation schemes, can be deployed to handle in-the-wild videos. Furthermore, we demonstrate the utility of our explicit facial disentangling strategy through various physically-based editing results, such as geometry and material editing and wrinkle transfer, with high realism. Comprehensive experiments show that our technique provides higher accuracy and visual fidelity than previous video-driven facial reconstruction and animation methods.\n\nRegistration Category: FULL ACCESS, ON-DEMAND ACCESS\n\nLanguage: ENGLISH\n\nFormat: IN-PERSON, ON-DEMAND
URL:https://sa2022.siggraph.org/en/full-program/?id=papers_191&sess=sess162
END:VEVENT
END:VCALENDAR
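
The abstract above describes a multi-VAE architecture: separate VAEs for facial expression, geometry, and physically-based (wrinkle) textures, tied together by a global MLP that maps the expression latent code into the other latent spaces. Below is a minimal PyTorch sketch of that wiring only; every dimension, layer size, and name (VAE, expr_to_geom, drive, ...) is a hypothetical placeholder, since the text here does not specify the paper's actual networks, losses, or training procedure.

    # A minimal sketch, assuming PyTorch; all dimensions and names are
    # hypothetical placeholders, not the paper's actual configuration.
    import torch
    import torch.nn as nn

    class VAE(nn.Module):
        """Plain fully connected VAE used for each facial attribute."""
        def __init__(self, in_dim, latent_dim, hidden=256):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, 2 * latent_dim))
            self.dec = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, in_dim))

        def encode(self, x):
            mu, logvar = self.enc(x).chunk(2, dim=-1)
            # Reparameterization trick: sample z while staying differentiable.
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
            return z, mu, logvar

    # One VAE per attribute (input sizes are invented for illustration):
    expr_vae = VAE(in_dim=64, latent_dim=16)      # expression parameters
    geom_vae = VAE(in_dim=15000, latent_dim=128)  # flattened vertex offsets
    tex_vae  = VAE(in_dim=4096, latent_dim=128)   # wrinkle (delta) texture maps

    # Global MLP-based expression mapping across the latent spaces: the
    # expression code drives the geometry and texture latents, which is
    # what gives artists explicit expression-level control.
    expr_to_geom = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 128))
    expr_to_tex  = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 128))

    def drive(expression):
        """Decode geometry and wrinkle maps from an expression vector."""
        z_e, _, _ = expr_vae.encode(expression)
        geometry = geom_vae.dec(expr_to_geom(z_e))
        wrinkles = tex_vae.dec(expr_to_tex(z_e))
        return geometry, wrinkles

    geometry, wrinkles = drive(torch.randn(1, 64))

Keeping one small mapping MLP per target latent space mirrors the abstract's framing: each VAE preserves the characteristics of its own attribute, while the shared expression code regulates all of them.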