BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Seoul
X-LIC-LOCATION:Asia/Seoul
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:KST
DTSTART:19881009T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20230103T035312Z
LOCATION:Room 324\, Level 3\, West Wing
DTSTART;TZID=Asia/Seoul:20221209T090000
DTEND;TZID=Asia/Seoul:20221209T103000
UID:siggraphasia_SIGGRAPH Asia 2022_sess172_papers_112@linklings.com
SUMMARY:Efficient Neural Radiance Fields for Interactive Free-viewpoint Video
DESCRIPTION:Technical Communications, Technical Papers\n\nEfficient Neural Radiance Fields for Interactive Free-viewpoint Video\n\nLin, Peng, Xu, Yan, Shuai...\n\nThis paper aims to tackle the challenge of efficiently producing interactive free-viewpoint videos. \nSome recent works equip neural radiance fields with image encoders, enabling them to generalize across scenes. When processing dynamic scenes, they can simply treat each video frame as an individual scene and perform novel view synthesis to generate free-viewpoint videos. However, their rendering process is slow and cannot support interactive applications. \nA major factor is that they sample lots of points in empty space when inferring radiance fields. \nWe propose a novel scene representation, called ENeRF, for the fast creation of interactive free-viewpoint videos. Specifically, given multi-view images at one frame, we first build the cascade cost volume to predict the coarse geometry of the scene. The coarse geometry allows us to sample few points near the scene surface, thereby significantly improving the rendering speed. This process is fully differentiable, enabling us to jointly learn the depth prediction and radiance field networks from RGB images. Experiments show that our approach exhibits competitive performance on the DTU, NeRF Synthetic, Real Forward-facing, ZJU-MoCap, and DynamicCap datasets while being at least 60 times faster than previous generalizable radiance field methods. We demonstrate the capability of our method to synthesize novel views of human performers in real-time. The code is available at https://zju3dv.github.io/enerf/.\n\nRegistration Category: FULL ACCESS, ON-DEMAND ACCESS\n\nLanguage: ENGLISH\n\nFormat: IN-PERSON, ON-DEMAND
URL:https://sa2022.siggraph.org/en/full-program/?id=papers_112&sess=sess172
END:VEVENT
END:VCALENDAR
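
The DESCRIPTION above summarizes the core ENeRF idea: a cascade cost volume predicts coarse scene geometry, and ray points are then sampled only near the predicted surface instead of throughout empty space. The sketch below is an illustration of that sampling idea only, not the authors' implementation (the official code is at https://zju3dv.github.io/enerf/); the function names, the per-ray coarse depths, and the sampling interval are all assumed for the example.

# Minimal sketch of depth-guided ray sampling vs. uniform sampling.
# Assumed inputs: a coarse depth per ray (e.g. from a cost-volume depth network)
# and a half-width "interval" around that depth. Names are illustrative.
import numpy as np

def uniform_samples(near, far, n_samples):
    """Baseline: n_samples depths spread uniformly over [near, far] for every ray."""
    t = np.linspace(0.0, 1.0, n_samples)
    return near[:, None] * (1.0 - t) + far[:, None] * t  # shape (n_rays, n_samples)

def depth_guided_samples(coarse_depth, interval, n_samples):
    """Few depths inside [depth - interval, depth + interval] around the coarse surface."""
    t = np.linspace(-1.0, 1.0, n_samples)
    return coarse_depth[:, None] + interval[:, None] * t  # shape (n_rays, n_samples)

if __name__ == "__main__":
    n_rays = 4
    near = np.full(n_rays, 2.0)
    far = np.full(n_rays, 6.0)
    coarse_depth = np.array([3.1, 4.2, 5.0, 2.8])   # assumed per-ray depth predictions
    interval = np.full(n_rays, 0.1)                  # assumed half-width of sampling range

    dense = uniform_samples(near, far, 128)                    # many samples, mostly empty space
    sparse = depth_guided_samples(coarse_depth, interval, 8)   # few samples near the surface
    print(dense.shape, sparse.shape)                           # (4, 128) (4, 8)

Concentrating the samples near the predicted surface is what allows far fewer network queries per ray, which is the source of the speedup the abstract describes.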