BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Seoul
X-LIC-LOCATION:Asia/Seoul
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:KST
DTSTART:18871231T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20230103T035311Z
LOCATION:Room 325-AB\, Level 3\, West Wing
DTSTART;TZID=Asia/Seoul:20221208T140000
DTEND;TZID=Asia/Seoul:20221208T153000
UID:siggraphasia_SIGGRAPH Asia 2022_sess169_papers_578@linklings.com
SUMMARY:QuadStream: A Quad-Based Scene Streaming Architecture for Novel Viewpoint Reconstruction
DESCRIPTION:Technical Communications\, Technical Papers\n\nQuadStream: A Quad-Based Scene Streaming Architecture for Novel Viewpoint Reconstruction\n\nHladky\, Stengel\, Vining\, Kerbl\, Seidel...\n\nCloud rendering is attractive when targeting thin client devices such as phones or VR/AR headsets\, or any situation where a high-end GPU is not available due to thermal or power constraints. However\, it introduces the challenge of streaming rendered data over a network in a manner that is robust to latency and potential dropouts. Current approaches range from streaming transmitted video and correcting it on the client---which fails in the presence of disocclusion events---to solutions where the server sends geometry and all rendering is performed on the client. To balance the competing goals of disocclusion robustness and minimal client workload\, we introduce QuadStream\, a new streaming technique that reduces motion-to-photon latency by allowing clients to render novel views on the fly and is robust against disocclusions. Our key idea is to transmit an approximate geometric scene representation to the client which is independent of the source geometry and can render both the current view frame and nearby adjacent views. Motivated by traditional macroblock approaches to video codec design\, we decompose the scene seen from positions in a view cell into a series of view-aligned quads from multiple views\, or QuadProxies. By operating on a rasterized G-Buffer\, our approach is independent of the representation used for the scene itself. Our technical contributions are an efficient parallel quad generation\, merging\, and packing strategy for proxy views that cover potential client movement in a scene\; a packing and encoding strategy allowing masked quads with depth information to be transmitted as a frame coherent stream\; and an efficient rendering approach that takes advantage of modern hardware capabilities to turn our QuadStream representation into complete novel views on thin clients. According to our experiments\, our approach achieves superior quality compared both to streaming methods that rely on simple video data and to geometry-based streaming.\n\nRegistration Category: FULL ACCESS\, ON-DEMAND ACCESS\n\nLanguage: ENGLISH\n\nFormat: IN-PERSON\, ON-DEMAND
URL:https://sa2022.siggraph.org/en/full-program/?id=papers_578&sess=sess169
END:VEVENT
END:VCALENDAR
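
The file above is standard RFC 5545 iCalendar, so any conforming parser can read it. As a quick illustration, here is a minimal sketch of loading the event in Python with the third-party icalendar package; the package choice and the filename sa2022_papers_578.ics are assumptions for the example, not part of the original export.

# Minimal sketch: parse the VEVENT above with the third-party `icalendar`
# package (pip install icalendar). The filename is hypothetical.
from icalendar import Calendar

with open("sa2022_papers_578.ics", "rb") as f:
    cal = Calendar.from_ical(f.read())

for event in cal.walk("VEVENT"):
    print(event.get("SUMMARY"))      # QuadStream: A Quad-Based Scene ...
    print(event.decoded("DTSTART"))  # 2022-12-08 14:00:00+09:00 (Asia/Seoul)
    print(event.get("LOCATION"))
    print(event.get("URL"))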