BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Seoul
X-LIC-LOCATION:Asia/Seoul
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:KST
DTSTART:19881009T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20230103T035307Z
LOCATION:Auditorium\, Level 5\, West Wing
DTSTART;TZID=Asia/Seoul:20221206T100000
DTEND;TZID=Asia/Seoul:20221206T120000
UID:siggraphasia_SIGGRAPH Asia 2022_sess153_papers_578@linklings.com
SUMMARY:QuadStream: A Quad-Based Scene Streaming Architecture for Novel Viewpoint Reconstruction
DESCRIPTION:Technical Papers\n\nQuadStream: A Quad-Based Scene Streaming Architecture for Novel Viewpoint Reconstruction\n\nHladky, Stengel, Vining, Kerbl, Seidel...\n\nCloud rendering is attractive when targeting thin client devices such as phones or VR/AR headsets, or any situation where a high-end GPU is not available due to thermal or power constraints. However, it introduces the challenge of streaming rendered data over a network in a manner that is robust to latency and potential dropouts. Current approaches range from streaming transmitted video and correcting it on the client---which fails in the presence of disocclusion events---to solutions where the server sends geometry and all rendering is performed on the client. To balance the competing goals of disocclusion robustness and minimal client workload, we introduce QuadStream, a new streaming technique that reduces motion-to-photon latency by allowing clients to render novel views on the fly and is robust against disocclusions. Our key idea is to transmit an approximate geometric scene representation to the client which is independent of the source geometry and can render both the current view frame and nearby adjacent views. Motivated by traditional macroblock approaches to video codec design, we decompose the scene seen from positions in a view cell into a series of view-aligned quads from multiple views, or QuadProxies.
By operating on a rasterized G-Buffer, our approach is independent of the representation used for the scene itself. Our technical contributions are an efficient parallel quad generation, merging, and packing strategy for proxy views that cover potential client movement in a scene; a packing and encoding strategy allowing masked quads with depth information to be transmitted as a frame-coherent stream; and an efficient rendering approach that takes advantage of modern hardware capabilities to turn our QuadStream representation into complete novel views on thin clients. According to our experiments, our approach achieves superior quality compared both to streaming methods that rely on simple video data and to geometry-based streaming.\n\nRegistration Category: FULL ACCESS, EXPERIENCE PLUS ACCESS, EXPERIENCE ACCESS, TRADE EXHIBITOR\n\nLanguage: ENGLISH\n\nFormat: IN-PERSON
URL:https://sa2022.siggraph.org/en/full-program/?id=papers_578&sess=sess153
END:VEVENT
END:VCALENDAR