BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Seoul
X-LIC-LOCATION:Asia/Seoul
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:KST
DTSTART:18871231T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20230103T035307Z
LOCATION:Auditorium\, Level 5\, West Wing
DTSTART;TZID=Asia/Seoul:20221206T100000
DTEND;TZID=Asia/Seoul:20221206T120000
UID:siggraphasia_SIGGRAPH Asia 2022_sess153_papers_220@linklings.com
SUMMARY:Human Performance Modeling and Rendering via Neural Animated Mesh
DESCRIPTION:Technical Papers\n\nHuman Performance Modeling and Rendering via Neural Animated Mesh\n\nZhao, Jiang, Yao, Zhang, Wang...\n\nWe have recently seen tremendous progress in the neural advances for photo-real human modeling and rendering. But it's still challenging to integrate them into an existing mesh-based pipeline for downstream applications. In this paper, we present a comprehensive neural approach for high-quality reconstruction, compression, and rendering of human performances from dense multi-view videos. Our core intuition is to bridge the traditional animated mesh workflow with a new class of highly efficient neural techniques. We first introduce a neural surface reconstructor for high-quality surface generation in minutes. It marries the implicit volumetric rendering of the truncated signed distance field (TSDF) with multi-resolution hash encoding. We further propose a hybrid neural tracker to generate animated meshes, which combines explicit non-rigid tracking with implicit dynamic deformation in a self-supervised framework. The former provides the coarse warping back into the canonical space, while the latter implicit one further predicts the displacements using the 4D hash encoding as in our reconstructor. Then, we discuss the rendering schemes using the obtained animated meshes, ranging from dynamic texturing to lumigraph rendering under various bandwidth settings. To strike an intricate balance between quality and bandwidth, we propose a hierarchical solution by first rendering 6 virtual views covering the performer and then conducting occlusion-aware neural texture blending. We demonstrate the efficacy of our approach in a variety of mesh-based applications and photo-realistic free-view experiences on various platforms, i.e., inserting virtual human performances into real environments through mobile AR or immersively watching talent shows with VR headsets.\n\nRegistration Category: FULL ACCESS, EXPERIENCE PLUS ACCESS, EXPERIENCE ACCESS, TRADE EXHIBITOR\n\nLanguage: ENGLISH\n\nFormat: IN-PERSON
URL:https://sa2022.siggraph.org/en/full-program/?id=papers_220&sess=sess153
END:VEVENT
END:VCALENDAR