BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Seoul
X-LIC-LOCATION:Asia/Seoul
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:KST
DTSTART:19881009T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20230103T035347Z
LOCATION:Room 324\, Level 3\, West Wing
DTSTART;TZID=Asia/Seoul:20221207T110000
DTEND;TZID=Asia/Seoul:20221207T123000
UID:siggraphasia_SIGGRAPH Asia 2022_sess158@linklings.com
SUMMARY:Acquisition
DESCRIPTION:Technical Papers\n\nThe presentations will be followed by a 30
-min Interactive Discussion Session at Room 325-CD.\n\nThe Technical Paper
s program is the premier international forum for disseminating new scholar
ly work in computer graphics and interactive techniques. Technical Papers
are published as a special issue of ACM Transactions on Graphics. In addit
ion to papers selected by the SIGGRAPH Asia 2022 Technical Papers Jury, th
e conference presents papers that have been published in ACM Transactions
on Graphics during the past year. Accepted papers adhere to the highest sc
ientific standards.\n\nDeepMVSHair: Deep Hair Modeling from Sparse Views\n
\nKuang, Chen, Fu, Zhou, Zheng\n\nWe present DeepMVSHair, the first deep l
earning-based method for multi-view hair strand reconstruction. The key co
mponent of our pipeline is HairMVSNet, a differentiable neural architectur
e which represents a spatial hair structure as a continuous 3D hair growin
g direction field implicitly. Specific...\n\n---------------------\nAsynch
ronous Collaborative Autoscanning with Mode Switching for Multi-Robot Scen
e Reconstruction\n\nGuo, Li, Xia, Hu, Liu\n\nWhen conducting autonomous sc
anning for the online reconstruction of unknown indoor environments, robot
s have to be competent at exploring the scene structure and reconstructing
objects with high quality. Our key observation is that different tasks de
mand specialized scanning properties of robots: r...\n\n------------------
---\nPattern-Based Cloth Registration and Sparse-View Animation\n\nHalimi,
Stuyck, Xiang, Bagautdinov, Wen...\n\nWe propose a novel multi-view camer
a pipeline for the reconstruction and registration of dynamic clothing. Ou
r proposed method relies on a specifically designed pattern that allows fo
r precise video tracking in each camera view. We triangulate the tracke
d points and register the cloth surface in a ...\n\n---------------------\
nReconstructing Personalized Semantic Facial NeRF Models From Monocular Vi
deo\n\nGao, Zhong, Xiang, Hong, Guo...\n\nWe present a novel semantic mode
l for the human head defined with a neural radiance field. The 3D-consiste
nt head model consists of a set of disentangled and interpretable bases, an
d can be driven by low-dimensional expression coefficients. Thanks to the powerf
ul representation ability of neural radiance f...\n\n---------------------
\nAffordable Spectral Measurements of Translucent Materials\n\nIser, Ritti
g, Nogué, Nindel, Wilkie\n\nWe present a spectral measurement approach for
the bulk optical properties of translucent materials using only low-cost
components. We focus on the translucent inks used in full-color 3D printin
g, and develop a technique with a high spectral resolution, which is impor
tant for accurate color reproduc...\n\n---------------------\nLearning Rec
onstructability for Drone Aerial Path Planning\n\nLiu, Lin, Hu, Xie, Fu...
\n\nWe introduce the first learning-based reconstructability predictor to
improve view and path planning for large-scale 3D urban scene acquisition
using unmanned drones. In contrast to previous heuristic approaches, our m
ethod learns a model that explicitly predicts how well a 3D urban scene wi
ll be re...\n\n\nRegistration Category: FULL ACCESS, ON-DEMAND ACCESS\n\nL
anguage: ENGLISH\n\nFormat: IN-PERSON, ON-DEMAND
END:VEVENT
END:VCALENDAR