BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Linklings LLC//NONSGML Linklings//EN
BEGIN:VTIMEZONE
TZID:Asia/Seoul
X-LIC-LOCATION:Asia/Seoul
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:KST
DTSTART:18871231T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20230103T035306Z
LOCATION:Auditorium\, Level 5\, West Wing
DTSTART;TZID=Asia/Seoul:20221206T100000
DTEND;TZID=Asia/Seoul:20221206T120000
UID:siggraphasia_SIGGRAPH Asia 2022_sess153_papers_239@linklings.com
SUMMARY:Dressing Avatars: Deep Photorealistic Appearance for Physically Si
 mulated Clothing
DESCRIPTION:Technical Papers\n\nDressing Avatars: Deep Photorealistic Appe
 arance for Physically Simulated Clothing\n\nXiang, Bagautdinov, Stuyck, Pr
 ada, Romero...\n\nDespite recent progress in developing animatable full-bo
 dy avatars, realistic modeling of clothing - one of the core aspects of hu
 man self-expression - remains an open challenge. State-of-the-art physical
  simulation methods can generate realistically behaving clothing geometry
  at interactive rates. Modeling photorealistic appearance, however, usually
  requires physically-based rendering which is too expensive for interactiv
 e applications. On the other hand, data-driven deep appearance models are
  capable of efficiently producing realistic appearance, but struggle at syn
 thesizing geometry of highly dynamic clothing and handling challenging bod
 y-clothing configurations. To this end, we introduce pose-driven avatars w
 ith explicit modeling of clothing that exhibit both photorealistic appeara
 nce learned from real-world data and realistic clothing dynamics. The key
  idea is to introduce a neural clothing appearance model that operates on t
 op of explicit geometry: at training time we use high-fidelity tracking, w
 hereas at animation time we rely on physically simulated geometry. Our cor
 e contribution is a physically-inspired appearance network, capable of gen
 erating photorealistic appearance with view-dependent and dynamic shadowin
 g effects even for unseen body-clothing configurations. We conduct a thoro
 ugh evaluation of our model and demonstrate diverse animation results on s
 everal subjects and different types of clothing. Unlike previous work on p
 hotorealistic full-body avatars, our approach can produce much richer dyna
 mics and more realistic deformations even for many examples of loose cloth
 ing. We also demonstrate that our formulation naturally allows clothing to
  be used with avatars of different people while staying fully animatable,
  thus enabling, for the first time, photorealistic avatars with novel cloth
 ing.\n\nRegistration Category: FULL ACCESS, EXPERIENCE PLUS ACCESS, EXPERI
 ENCE ACCESS, TRADE EXHIBITOR\n\nLanguage: ENGLISH\n\nFormat: IN-PERSON
URL:https://sa2022.siggraph.org/en/full-program/?id=papers_239&sess=sess15
 3
END:VEVENT
END:VCALENDAR