BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Seoul
X-LIC-LOCATION:Asia/Seoul
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:KST
DTSTART:19881009T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20230103T035306Z
LOCATION:Auditorium\, Level 5\, West Wing
DTSTART;TZID=Asia/Seoul:20221206T100000
DTEND;TZID=Asia/Seoul:20221206T120000
UID:siggraphasia_SIGGRAPH Asia 2022_sess153_papers_112@linklings.com
SUMMARY:Efficient Neural Radiance Fields for Interactive Free-viewpoint Video
DESCRIPTION:Technical Papers\n\nEfficient Neural Radiance Fields for Interactive Free-viewpoint Video\n\nLin, Peng, Xu, Yan, Shuai...\n\nThis paper aims to tackle the challenge of efficiently producing interactive free-viewpoint videos.\nSome recent works equip neural radiance fields with image encoders, enabling them to generalize across scenes. When processing dynamic scenes, they can simply treat each video frame as an individual scene and perform novel view synthesis to generate free-viewpoint videos. However, their rendering process is slow and cannot support interactive applications.\nA major factor is that they sample many points in empty space when inferring radiance fields.\nWe propose a novel scene representation, called ENeRF, for the fast creation of interactive free-viewpoint videos. Specifically, given multi-view images at one frame, we first build a cascade cost volume to predict the coarse geometry of the scene. The coarse geometry allows us to sample only a few points near the scene surface, thereby significantly improving the rendering speed. This process is fully differentiable, enabling us to jointly learn the depth prediction and radiance field networks from RGB images. Experiments show that our approach exhibits competitive performance on the DTU, NeRF Synthetic, Real Forward-facing, ZJU-MoCap, and DynamicCap datasets while being at least 60 times faster than previous generalizable radiance field methods. We demonstrate the capability of our method to synthesize novel views of human performers in real time. The code is available at https://zju3dv.github.io/enerf/.\n\nRegistration Category: FULL ACCESS, EXPERIENCE PLUS ACCESS, EXPERIENCE ACCESS, TRADE EXHIBITOR\n\nLanguage: ENGLISH\n\nFormat: IN-PERSON
URL:https://sa2022.siggraph.org/en/full-program/?id=papers_112&sess=sess153
END:VEVENT
END:VCALENDAR