BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Seoul
X-LIC-LOCATION:Asia/Seoul
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:KST
DTSTART:18871231T000000
RDATE:19881009T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20230103T035306Z
LOCATION:Auditorium\, Level 5\, West Wing
DTSTART;TZID=Asia/Seoul:20221206T100000
DTEND;TZID=Asia/Seoul:20221206T120000
UID:siggraphasia_SIGGRAPH Asia 2022_sess153_papers_487@linklings.com
SUMMARY:DynaGAN: Dynamic Few-shot Adaptation of GANs to Multiple Domains
DESCRIPTION:Technical Papers\n\nDynaGAN: Dynamic Few-shot Adaptation of GANs to Multiple Domains\n\nKim, Kang, Kim, Baek, Cho\n\nFew-shot domain adaptation to multiple domains aims to learn a complex image distribution across multiple domains from a few training images. A naive solution here is to train a separate model for each domain using few-shot domain adaptation methods. Unfortunately, this approach mandates linearly-scaled computational resources both in memory and computation time and, more importantly, such separate models cannot exploit the shared knowledge between target domains. In this paper, we propose DynaGAN, a novel few-shot domain-adaptation method for multiple target domains. DynaGAN has an adaptation module, which is a hyper-network that dynamically adapts a pretrained GAN model into the multiple target domains. Hence, we can fully exploit the shared knowledge across target domains and avoid the linearly-scaled computational requirements. As it is still computationally challenging to adapt a large-size GAN model, we design our adaptation module to be lightweight using the rank-1 tensor decomposition. Lastly, we propose a contrastive-adaptation loss suitable for multi-domain few-shot adaptation. We validate the effectiveness of our method through extensive qualitative and quantitative evaluations.\n\nRegistration Category: FULL ACCESS, EXPERIENCE PLUS ACCESS, EXPERIENCE ACCESS, TRADE EXHIBITOR\n\nLanguage: ENGLISH\n\nFormat: IN-PERSON
URL:https://sa2022.siggraph.org/en/full-program/?id=papers_487&sess=sess153
END:VEVENT
END:VCALENDAR
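
The VEVENT above is a standard RFC 5545 calendar entry, so any iCalendar parser can consume it. What follows is a minimal, illustrative sketch of reading the event with the third-party Python icalendar package; the filename and variable names are assumptions made for the example and are not part of the Linklings export.

# Illustrative sketch only: assumes the third-party `icalendar` package is
# installed (pip install icalendar) and that the calendar text above has been
# saved to a hypothetical file named "siggraph_asia_2022_papers_487.ics".
from icalendar import Calendar

with open("siggraph_asia_2022_papers_487.ics", "rb") as f:
    cal = Calendar.from_ical(f.read())

# Walk only the VEVENT components and print the fields used in the export above.
for event in cal.walk("VEVENT"):
    print("Summary :", event.get("SUMMARY"))
    print("Location:", event.get("LOCATION"))
    print("Starts  :", event.decoded("DTSTART"))  # tz-aware datetime (Asia/Seoul)
    print("Ends    :", event.decoded("DTEND"))
    print("URL     :", event.get("URL"))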