BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Seoul
X-LIC-LOCATION:Asia/Seoul
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:KST
DTSTART:18871231T000000
END:STANDARD
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:KST
DTSTART:19881009T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20230103T035306Z
LOCATION:Auditorium\, Level 5\, West Wing
DTSTART;TZID=Asia/Seoul:20221206T100000
DTEND;TZID=Asia/Seoul:20221206T120000
UID:siggraphasia_SIGGRAPH Asia 2022_sess153_papers_273@linklings.com
SUMMARY:CLIP-Mesh: Generating textured meshes from text using pretrained image-text models
DESCRIPTION:Technical Papers\n\nCLIP-Mesh: Generating textured meshes from text using pretrained image-text models\n\nMohammad Khalid, Xie, Belilovsky, Popa\n\nWe present a technique for zero-shot generation of a 3D model using only a target text prompt. Without any 3D supervision, our method deforms the control shape of a limit subdivided surface along with its texture map and normal map to obtain a 3D asset that corresponds to the input text prompt and can be easily deployed into games or modeling applications. We rely only on a pre-trained CLIP model that compares the input text prompt with differentiably rendered images of our 3D model. While previous works have focused on stylization or required training of generative models, we perform optimization on mesh parameters directly to generate shape, texture, or both. To constrain the optimization to produce plausible meshes and textures, we introduce a number of techniques using image augmentations and the use of a pretrained prior that generates CLIP image embeddings given a text embedding.\n\nRegistration Category: FULL ACCESS, EXPERIENCE PLUS ACCESS, EXPERIENCE ACCESS, TRADE EXHIBITOR\n\nLanguage: ENGLISH\n\nFormat: IN-PERSON
URL:https://sa2022.siggraph.org/en/full-program/?id=papers_273&sess=sess153
END:VEVENT
END:VCALENDAR