BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Seoul
X-LIC-LOCATION:Asia/Seoul
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:KST
DTSTART:18871231T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20230103T035346Z
LOCATION:Room 325-AB\, Level 3\, West Wing
DTSTART;TZID=Asia/Seoul:20221206T153000
DTEND;TZID=Asia/Seoul:20221206T170000
UID:siggraphasia_SIGGRAPH Asia 2022_sess157@linklings.com
SUMMARY:Image Generation
DESCRIPTION:Technical Communications, Technical Papers\n\n
 The presentations will be followed by a 30-min Interactive Discussion Session at Room 325-CD.\n\n
 The Technical Papers program is the premier international forum for disseminating new scholarly work in computer graphics and interactive techniques. Technical Papers are published as a special issue of ACM Transactions on Graphics. In addition to papers selected by the SIGGRAPH Asia 2022 Technical Papers Jury, the conference presents papers that have been published in ACM Transactions on Graphics during the past year. Accepted papers adhere to the highest scientific standards.\n\n
 The Technical Communications program is a premier forum for presenting the latest developments and research still in progress. Leading international experts in academia and industry present work that showcases actual implementations of research ideas, works at the crossroads of computer graphics with computer vision, machine learning, HCI, VR, CAD, visualization, and many others.\n\n
 Text2Light: Zero-shot Text-driven HDR Panorama Generation\n\nChen, Wang, Liu\n\n
 High-quality HDRIs (High Dynamic Range Images), typically HDR panoramas, are one of the most popular ways to create photorealistic lighting and 360-degree reflections of 3D scenes in graphics. Given the difficulty of capturing HDRIs, a versatile and controllable generative model is highly desired, w...\n\n
 ---------------------\nOVERPAINT: Automatic Multi-Layer Stencil Generation without Bridges\n\nFukushima, Qi, Shen, Igarashi\n\n
 We propose a novel method to generate a sequence of bridge-free stencil sheets from an input image. Our method decomposes a given image into color layers, and constructs level maps.\n\n
 ---------------------\nMake Your Own Sprites: Aliasing-Aware and Cell-Controllable Pixelization\n\nWu, Chai, Zhao, Deng, Liu...\n\n
 Pixel art is a unique art style with the appearance of low resolution images. In this paper, we propose a data-driven pixelization method that can produce sharp and crisp cell effects with controllable cell size. Our approach overcomes the limitation of existing learning-based methods in cell size c...\n\n
 ---------------------\nDr.3D: Adapting 3D GANs to Artistic Drawings\n\nJin, Ryu, Kim, Baek, Cho\n\n
 While 3D GANs have recently demonstrated the high-quality synthesis of multi-view consistent images and 3D shapes, they are mainly restricted to photo-realistic human portraits. This paper aims to extend 3D GANs to a different, but meaningful visual form: artistic portrait drawings. However, extendi...\n\n
 ---------------------\nPopStage: The Generation of Stage Cross-Editing Video based on Spatio-Temporal Matching\n\nLee, Yoo, Cho, Kim, Im...\n\n
 StageMix is a mixed video that is created by concatenating the segments from various performance videos of an identical song in a visually smooth manner by matching the main subject's silhouette presented in the frame. We introduce PopStage, which allows users to generate a StageMix automatically. P...\n\n
 ---------------------\nSprite-from-Sprite: Cartoon Animation Decomposition with Self-supervised Sprite Estimation\n\nZhang, Wong, Liu\n\n
 We present an approach to decompose cartoon animation videos into a set of "sprites", the basic units of digital cartoons that depict the contents and transforms of each animated object. The sprites in real-world cartoons are unique: artists may draw arbitrary sprite animations for expressiven...\n\n
 ---------------------\nDynaGAN: Dynamic Few-shot Adaptation of GANs to Multiple Domains\n\nKim, Kang, Kim, Baek, Cho\n\n
 Few-shot domain adaptation to multiple domains aims to learn a complex image distribution across multiple domains from a few training images. A naive solution here is to train a separate model for each domain using few-shot domain adaptation methods. Unfortunately, this approach mandates linearly-sc...\n\n\n
 Registration Category: FULL ACCESS, ON-DEMAND ACCESS\n\nLanguage: ENGLISH\n\nFormat: IN-PERSON, ON-DEMAND
END:VEVENT
END:VCALENDAR