BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Seoul
X-LIC-LOCATION:Asia/Seoul
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:KST
DTSTART:18871231T000000
RDATE:19881009T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20230103T035347Z
LOCATION:Room 325-AB\, Level 3\, West Wing
DTSTART;TZID=Asia/Seoul:20221207T140000
DTEND;TZID=Asia/Seoul:20221207T153000
UID:siggraphasia_SIGGRAPH Asia 2022_sess162@linklings.com
SUMMARY:Face\, Speech\, and Gesture
DESCRIPTION:Technical Communications, Technical Papers\n\nThe presentations will be followed by a 30-min Interactive Discussion Session at Room 325-CD.\n\nThe Technical Papers program is the premier international forum for disseminating new scholarly work in computer graphics and interactive techniques. Technical Papers are published as a special issue of ACM Transactions on Graphics. In addition to papers selected by the SIGGRAPH Asia 2022 Technical Papers Jury, the conference presents papers that have been published in ACM Transactions on Graphics during the past year. Accepted papers adhere to the highest scientific standards.\n\nThe Technical Communications program is a premier forum for presenting the latest developments and research still in progress. Leading international experts in academia and industry present work that showcases actual implementations of research ideas, works at the crossroads of computer graphics with computer vision, machine learning, HCI, VR, CAD, visualization, and many others.\n\nAnimatomy: an Animator-centric, Anatomically Inspired System for 3D Facial Modeling, Animation and Transfer\n\nChoi, Eom, Mouscadet, Cullingford, Ma...\n\nWe present Animatomy, a novel anatomic+animator centric representation of the human face. Present FACS-based systems are plagued with problems of face muscle separation, coverage, opposition, and redundancy. We, therefore, propose a collection of muscle fiber curves as an anatomic basis, whose contr...\n\n---------------------\nPADL: Language-Directed Physics-Based Character Control\n\nJuravsky, Guo, Fidler, Peng\n\nDeveloping systems that can synthesize natural and life-like motions for simulated characters has long been a focus for computer animation. But in order for these systems to be useful for downstream applications, they need to not only produce high-quality motions, but must also provide an accessible an...\n\n---------------------\nMasked Lip-Sync Prediction by Audio-Visual Contextual Exploitation in Transformers\n\nSun, Zhou, Wang, Wu, Hong...\n\nPrevious studies have explored generating accurately lip-synced talking faces for arbitrary targets given audio conditions. However, most of them deform or generate the whole facial area, leading to non-realistic results. In this work, we delve into the formulation of altering only the mouth shapes ...\n\n---------------------\nRhythmic Gesticulator: Rhythm-Aware Co-Speech Gesture Synthesis with Hierarchical Neural Embeddings\n\nAo, Gao, Lou, Chen, Liu\n\nAutomatic synthesis of realistic co-speech gestures is an increasingly important yet challenging task in artificial embodied agent creation. Previous systems mainly focus on generating gestures in an end-to-end manner, which leads to difficulties in mining the clear rhythm and semantics due to the c...\n\n---------------------\nTowards Virtual Humans without Gender Stereotyped Visual Features\n\nAraujo, Schaffer, Costa, Musse\n\nIn assessing gender bias towards a simulated baby, our research indicates that simply reporting gender may be sufficient to create a perception of gender that affects the participant's emotional response.\n\n---------------------\nVOCAL: Vowel and Consonant Layering for Expressive Animator-Centric Singing Animation\n\nPan, Landreth, Fiume, Singh\n\nSinging and speaking are two fundamental forms of human communication. From a modeling perspective, however, speaking can be seen as a subset of singing. We present VOCAL, a system that automatically generates expressive, animator-centric lower-face animation from singing audio input. Articulatory ph...\n\n---------------------\nVideo-driven Neural Physically-based Facial Asset for Production\n\nZhang, Zeng, Zhang, Lin, Cao...\n\nProduction-level workflows for producing convincing 3D dynamic human faces have long relied on an assortment of labor-intensive tools for geometry and texture generation, motion capture and rigging, and expression synthesis. Recent neural approaches automate individual components but the correspondi...\n\n\nRegistration Category: FULL ACCESS, ON-DEMAND ACCESS\n\nLanguage: ENGLISH\n\nFormat: IN-PERSON, ON-DEMAND
END:VEVENT
END:VCALENDAR