BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Seoul
X-LIC-LOCATION:Asia/Seoul
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:KST
DTSTART:18871231T000000
DTSTART:19881009T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20230103T035307Z
LOCATION:Auditorium\, Level 5\, West Wing
DTSTART;TZID=Asia/Seoul:20221206T100000
DTEND;TZID=Asia/Seoul:20221206T120000
UID:siggraphasia_SIGGRAPH Asia 2022_sess153_papers_321@linklings.com
SUMMARY:Learning to Generate 3D Shapes from a Single Example
DESCRIPTION:Technical Papers\n\nLearning to Generate 3D Shapes from a Single Example\n\nWu, Zheng\n\nExisting generative models for 3D shapes are typically trained on a large 3D dataset, often of a specific object category. In this paper, we investigate the deep generative model that learns from only a single reference 3D shape. Specifically, we present a multi-scale GAN-based model designed to capture the input shape's geometric features across a range of spatial scales. To avoid large memory and computational cost induced by operating on the 3D volume, we build our generator atop the tri-plane hybrid representation, which requires only 2D convolutions. We train our generative model on a voxel pyramid of the reference shape, without the need of any external supervision or manual annotation. Once trained, our model can generate diverse and high-quality 3D shapes possibly of different sizes and aspect ratios. The resulting shapes present variations across different scales, and at the same time retain the global structure of the reference shape. Through extensive evaluation, both qualitative and quantitative, we demonstrate that our model can generate 3D shapes of various types.\n\nRegistration Category: FULL ACCESS, EXPERIENCE PLUS ACCESS, EXPERIENCE ACCESS, TRADE EXHIBITOR\n\nLanguage: ENGLISH\n\nFormat: IN-PERSON
URL:https://sa2022.siggraph.org/en/full-program/?id=papers_321&sess=sess153
END:VEVENT
END:VCALENDAR