BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Seoul
X-LIC-LOCATION:Asia/Seoul
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:KST
DTSTART:18871231T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20230103T035348Z
LOCATION:Room 324\, Level 3\, West Wing
DTSTART;TZID=Asia/Seoul:20221208T153000
DTEND;TZID=Asia/Seoul:20221208T170000
UID:siggraphasia_SIGGRAPH Asia 2022_sess166@linklings.com
SUMMARY:Image Editing and Manipulation
DESCRIPTION:Technical Communications, Technical Papers\n\nThe presentations will be followed by a 30-min Interactive Discussion Session at Room 325-CD.\n\nThe Technical Papers program is the premier international forum for disseminating new scholarly work in computer graphics and interactive techniques. Technical Papers are published as a special issue of ACM Transactions on Graphics. In addition to papers selected by the SIGGRAPH Asia 2022 Technical Papers Jury, the conference presents papers that have been published in ACM Transactions on Graphics during the past year. Accepted papers adhere to the highest scientific standards.\n\nThe Technical Communications program is a premier forum for presenting the latest developments and research still in progress. Leading international experts in academia and industry present work that showcases actual implementations of research ideas, works at the crossroads of computer graphics with computer vision, machine learning, HCI, VR, CAD, visualization, and many others.\n\nNeRFFaceEditing: Disentangled Face Editing in Neural Radiance Fields\n\nJiang, Chen, Liu, Fu, Gao\n\nRecent methods for synthesizing 3D-aware face images have achieved rapid development thanks to neural radiance fields, allowing for high quality and fast inference speed. However, existing solutions for editing facial geometry and appearance independently usually require retraining and are not optim...\n\n---------------------\nWater Simulation and Rendering from a Still Photograph\n\nSugimoto, He, Liao, Sander\n\nWe propose an approach to simulate and render realistic water animation from a single still input photograph. We first segment the water surface, estimate rendering parameters, and compute water reflection textures with a combination of neural networks and traditional optimization techniques. Then w...\n\n---------------------\nVideoReTalking: Audio-based Lip Synchronization for Talking Head Video Editing In the Wild\n\nCheng, Cun, Zhang, Xia, Yin...\n\nWe present VideoReTalking, a new system to edit the faces of a real-world talking head video according to an input audio, producing a high-quality and lip-syncing output video even with a different emotion. Our system disentangles this objective into three sequential tasks: (1) face video generation ...\n\n---------------------\nNeural Photo-Finishing\n\nTseng, Zhang, Jebe, Zhang, Xia...\n\nImage processing pipelines are ubiquitous, and we rely on them either directly, by filtering or adjusting an image post-capture, or indirectly, as the image signal processing pipeline (ISP) on broadly deployed camera systems. Used by artists, photographers, system engineers, and for downstream vision tas...\n\n---------------------\nProduction-Ready Face Re-Aging for Visual Effects\n\nZoss, Chandran, Sifakis, Gross, Gotardo...\n\nPhotorealistic digital re-aging of faces in video is becoming increasingly common in entertainment and advertising. But the predominant 2D painting workflow often requires frame-by-frame manual work that can take days to accomplish, even by skilled artists. Although research on facial image re-agi...\n\n---------------------\nStitch it in Time: GAN-Based Facial Editing of Real Videos\n\nTzaban, Mokady, Gal, Bermano, Cohen-Or\n\nThe ability of Generative Adversarial Networks to encode rich semantics within their latent space has been widely adopted for facial image editing. However, replicating their success with videos has proven challenging. Applying StyleGAN editing over real videos introduces two main challenges: (i) St...\n\n---------------------\nTraining-Free Neural Matte Extraction for Visual Effects\n\nElcott, Lewis, Kanazawa, Bregler\n\nA deep neural network-based alpha matte extraction approach that requires no training data, so it is well-suited for VFX, where one-of-a-kind subjects appear so briefly that gathering training data is pointless.\n\n\nRegistration Category: FULL ACCESS, ON-DEMAND ACCESS\n\nLanguage: ENGLISH\n\nFormat: IN-PERSON, ON-DEMAND
END:VEVENT
END:VCALENDAR