BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Seoul
X-LIC-LOCATION:Asia/Seoul
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:KST
DTSTART:18871231T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20230103T035312Z
LOCATION:Room 325-AB\, Level 3\, West Wing
DTSTART;TZID=Asia/Seoul:20221209T103000
DTEND;TZID=Asia/Seoul:20221209T120000
UID:siggraphasia_SIGGRAPH Asia 2022_sess177_papers_561@linklings.com
SUMMARY:Capturing and Animation of Body and Clothing from Monocular Video
DESCRIPTION:Technical Papers\n\nCapturing and Animation of Body and Clothing from Monocular Video\n\nFeng, Yang, Pollefeys, Black, Bolkart\n\nWhile recent work has shown progress on extracting clothed 3D human avatars from a single image, video, or a set of 3D scans, several limitations remain. Most methods use a holistic representation to jointly model the body and clothing, which means that the clothing and body cannot be separated for applications like virtual try-on. Other methods separately model the body and clothing, but they require training from a large set of 3D clothed human meshes obtained from 3D/4D scanners or physics simulations. Our insight is that the body and clothing have different modeling requirements. While the body is well represented by a mesh-based parametric 3D model, implicit representations and neural radiance fields are better suited to capturing the large variety in shape and appearance present in clothing. Building on this insight, we propose SCARF (Segmented Clothed Avatar Radiance Field), a hybrid model combining a mesh-based body with a neural radiance field. Integrating the mesh into the volumetric rendering in combination with a differentiable rasterizer enables us to optimize SCARF directly from monocular videos, without any 3D supervision. The hybrid modeling enables SCARF to (i) animate the clothed body avatar by changing body poses (including hand articulation and facial expressions), (ii) synthesize novel views of the avatar, and (iii) transfer clothing between avatars in virtual try-on applications. We demonstrate that SCARF reconstructs clothing with higher visual quality than existing methods, that the clothing deforms with changing body pose and body shape, and that clothing can be successfully transferred between avatars of different subjects.\n\nRegistration Category: FULL ACCESS, ON-DEMAND ACCESS\n\nLanguage: ENGLISH\n\nFormat: IN-PERSON, ON-DEMAND
URL:https://sa2022.siggraph.org/en/full-program/?id=papers_561&sess=sess177
END:VEVENT
END:VCALENDAR