BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Seoul
X-LIC-LOCATION:Asia/Seoul
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:KST
DTSTART:19881009T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20230103T035311Z
LOCATION:Room 325-AB\, Level 3\, West Wing
DTSTART;TZID=Asia/Seoul:20221208T140000
DTEND;TZID=Asia/Seoul:20221208T153000
UID:siggraphasia_SIGGRAPH Asia 2022_sess169_papers_220@linklings.com
SUMMARY:Human Performance Modeling and Rendering via Neural Animated Mesh
DESCRIPTION:Technical Communications\, Technical Papers\n\nHuman Perf
 ormance Modeling and Rendering via Neural Animated Mesh\n\nZhao\, Ji
 ang\, Yao\, Zhang\, Wang...\n\nWe have recently seen tremendous prog
 ress in neural approaches to photo-real human modeling and rendering
 \, yet it remains challenging to integrate them into existing mesh-b
 ased pipelines for downstream applications. In this paper\, we prese
 nt a comprehensive neural approach for high-quality reconstruction
 \, compression\, and rendering of human performances from dense mult
 i-view videos. Our core intuition is to bridge the traditional anima
 ted-mesh workflow with a new class of highly efficient neural techni
 ques. We first introduce a neural surface reconstructor for high-qua
 lity surface generation in minutes. It marries implicit volumetric r
 endering of the truncated signed distance field (TSDF) with multi-re
 solution hash encoding. We further propose a hybrid neural tracker t
 o generate animated meshes\, combining explicit non-rigid tracking w
 ith implicit dynamic deformation in a self-supervised framework. Th
 e former provides coarse warping back into the canonical space\, wh
 ile the latter further predicts displacements using the 4D hash enc
 oding\, as in our reconstructor. We then discuss rendering schemes f
 or the resulting animated meshes\, ranging from dynamic texturing to l
 umigraph rendering under various bandwidth settings. To strike a bal
 ance between quality and bandwidth\, we propose a hierarchical solut
 ion that first renders six virtual views covering the performer and t
 hen performs occlusion-aware neural texture blending. We demonstrat
 e the efficacy of our approach in a variety of mesh-based applicatio
 ns and photo-realistic free-view experiences on various platforms\, e
 .g.\, inserting virtual human performances into real environments th
 rough mobile AR or immersively watching talent shows with VR headset
 s.\n\nRegistration Category: FULL ACCESS\, ON-DEMAND ACCESS\n\nLangu
 age: ENGLISH\n\nFormat: IN-PERSON\, ON-DEMAND
URL:https://sa2022.siggraph.org/en/full-program/?id=papers_220&sess=sess169
END:VEVENT
END:VCALENDAR