BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Seoul
X-LIC-LOCATION:Asia/Seoul
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:KST
DTSTART:19881009T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20230103T035307Z
LOCATION:Room 324\, Level 3\, West Wing
DTSTART;TZID=Asia/Seoul:20221206T153000
DTEND;TZID=Asia/Seoul:20221206T170000
UID:siggraphasia_SIGGRAPH Asia 2022_sess155_papers_409@linklings.com
SUMMARY:Learning-based Inverse Rendering of Complex Indoor Scenes with Differentiable Monte Carlo Raytracing
DESCRIPTION:Technical Communications, Technical Papers\n\nLearning-based Inverse Rendering of Complex Indoor Scenes with Differentiable Monte Carlo Raytracing\n\nZhu, Luan, Huo, Lin, Zhong...\n\nIndoor scenes typically exhibit complex, spatially-varying appearance from global illumination, making inverse rendering a challenging ill-posed problem. This work presents an end-to-end, learning-based inverse rendering framework incorporating differentiable Monte Carlo raytracing with importance sampling. The framework takes a single image as input to jointly recover the underlying geometry, spatially-varying lighting, and photorealistic materials. Specifically, we introduce a physically-based differentiable rendering layer with screen-space ray tracing, resulting in more realistic specular reflections that match the input photo. In addition, we create a large-scale, photorealistic indoor scene dataset with significantly richer details like complex furniture and dedicated decorations. Further, we design a novel out-of-view lighting network with uncertainty-aware refinement, leveraging hypernetwork-based neural radiance fields to predict lighting outside the view of the input photo. Through extensive evaluations on common benchmark datasets, we demonstrate superior inverse rendering quality of our method compared to state-of-the-art baselines, enabling various applications such as complex object insertion and material editing with high fidelity. Code and data will be made available at https://jingsenzhu.github.io/invrend.\n\nRegistration Category: FULL ACCESS, ON-DEMAND ACCESS\n\nLanguage: ENGLISH\n\nFormat: IN-PERSON, ON-DEMAND
URL:https://sa2022.siggraph.org/en/full-program/?id=papers_409&sess=sess155
END:VEVENT
END:VCALENDAR