BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Seoul
X-LIC-LOCATION:Asia/Seoul
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:KST
DTSTART:18871231T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20230103T035346Z
LOCATION:Room 324\, Level 3\, West Wing
DTSTART;TZID=Asia/Seoul:20221206T140000
DTEND;TZID=Asia/Seoul:20221206T153000
UID:siggraphasia_SIGGRAPH Asia 2022_sess154@linklings.com
SUMMARY:Character Animation
DESCRIPTION:Technical Communications\, Technical Papers\n\nThe presentations will be followed by a 30-min Interactive Discussion Session at Room 325-CD.\n\nThe Technical Papers program is the premier international forum for disseminating new scholarly work in computer graphics and interactive techniques. Technical Papers are published as a special issue of ACM Transactions on Graphics. In addition to papers selected by the SIGGRAPH Asia 2022 Technical Papers Jury\, the conference presents papers that have been published in ACM Transactions on Graphics during the past year. Accepted papers adhere to the highest scientific standards.\n\nThe Technical Communications program is a premier forum for presenting the latest developments and research still in progress. Leading international experts in academia and industry present work that showcases actual implementations of research ideas\, works at the crossroads of computer graphics with computer vision\, machine learning\, HCI\, VR\, CAD\, visualization\, and many others.\n\nMotion In-betweening via Two-stage Transformers\n\nQin\, Zheng\, Zhou\n\nWe present a deep learning-based framework to synthesize motion in-betweening in a two-stage manner. Given some context frames and a target frame\, the system can generate plausible transitions with variable lengths in a non-autoregressive fashion.
  The framework consists of two Transformer Encoder-ba...\n\n---------------------\nTransformer Inertial Poser: Real-time Human Motion Reconstruction from Sparse IMUs with Simultaneous Terrain Generation\n\nJiang\, Ye\, Gopinath\, Won\, Winkler...\n\nReal-time human motion reconstruction from a sparse set of (e.g. six) wearable IMUs provides a non-intrusive and economic approach to motion capture. Without the ability to acquire position information directly from IMUs\, recent works took data-driven approaches that utilize large human motion datas...\n\n---------------------\nControlVAE: Model-Based Learning of Generative Controllers for Physics-Based Characters\n\nYao\, Song\, Chen\, Liu\n\nIn this paper\, we introduce ControlVAE\, a novel model-based framework for learning generative motion control policies based on variational autoencoders (VAE). Our framework can learn a rich and flexible latent representation of skills and a skill-conditioned generative control policy from a diverse...\n\n---------------------\nMoRig: Motion-Aware Rigging of Character Meshes from Point Clouds\n\nXu\, Zhou\, Yi\, Kalogerakis\n\nWe present MoRig\, a method that automatically rigs character meshes driven by single-view point cloud streams capturing the motion of performing characters. Our method is also able to animate the 3D meshes according to the captured point cloud motion. At the heart of our approach lies a deep neural ...\n\n---------------------\nQuestSim: Human Motion Tracking from Sparse Sensors with Simulated Avatars\n\nWinkler\, Won\, Ye\n\nReal-time tracking of human body motion is crucial for interactive and\nimmersive experiences in AR/VR. However\, very limited sensor data about\nthe body is available from standalone wearable devices such as HMDs (Head\nMounted Devices) or AR glasses.
  In this work\, we present a reinforcement\nlearning f...\n\n---------------------\nLearning Virtual Chimeras by Dynamic Motion Reassembly\n\nLEE\, Lee\, Lee\n\nThe Chimera is a mythological hybrid creature composed of different animal parts. The chimera’s movements are highly dependent on the spatial and temporal alignments of its composing parts. In this paper\, we present a novel algorithm that creates and animates chimeras by dynamically reassembli...\n\n---------------------\nSMPL-IK: Learned Morphology-Aware Inverse Kinematics for AI Driven Artistic Workflows\n\nVoleti\, Oreshkin\, Bocquelet\, Harvey\, Ménard...\n\nOur approach unlocks novel artistic workflows using advanced AI tooling. An animator grabs one of the many pictures available on the web and uses it to initialize an editable 3D scene.\n\n\nRegistration Category: FULL ACCESS\, ON-DEMAND ACCESS\n\nLanguage: ENGLISH\n\nFormat: IN-PERSON\, ON-DEMAND
END:VEVENT
END:VCALENDAR