BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Seoul
X-LIC-LOCATION:Asia/Seoul
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:KST
DTSTART:18871231T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20230103T035307Z
LOCATION:Auditorium\, Level 5\, West Wing
DTSTART;TZID=Asia/Seoul:20221206T100000
DTEND;TZID=Asia/Seoul:20221206T120000
UID:siggraphasia_SIGGRAPH Asia 2022_sess153_papers_170@linklings.com
SUMMARY:UmeTrack: Unified multi-view end-to-end hand tracking for VR
DESCRIPTION:Technical Papers\n\nUmeTrack: Unified multi-view end-to-end hand tracking for VR\n\nHan, Wu, Zhang, Liu, Zhang...\n\nReal-time tracking of 3D hand pose in world space is a challenging problem and plays an important role in VR interaction. Existing work in this space is limited to either producing root-relative (versus world-space) 3D pose or relying on multiple stages, such as generating heatmaps and kinematic optimization, to obtain 3D pose. Moreover, the typical VR scenario, which involves multi-view tracking from wide field-of-view (FOV) cameras, is seldom addressed by these methods. In this paper, we present a unified end-to-end differentiable framework for multi-view, multi-frame hand tracking that directly predicts 3D hand pose in world space. We demonstrate the benefits of end-to-end differentiability by extending our framework with downstream tasks such as jitter reduction and pinch prediction. To demonstrate the efficacy of our model, we further present a new large-scale egocentric hand pose dataset that consists of both real and synthetic data. Experiments show that our system handles various challenging interactive motions and has been successfully applied to real-time VR applications.\n\nRegistration Category: FULL ACCESS, EXPERIENCE PLUS ACCESS, EXPERIENCE ACCESS, TRADE EXHIBITOR\n\nLanguage: ENGLISH\n\nFormat: IN-PERSON
URL:https://sa2022.siggraph.org/en/full-program/?id=papers_170&sess=sess153
END:VEVENT
END:VCALENDAR
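
A minimal sketch of consuming this VEVENT programmatically, assuming the calendar above is saved as event.ics and the third-party Python icalendar package is installed (pip install icalendar); the filename and print labels are illustrative, not part of the feed. Parsing with from_ical also unescapes the \, and \n sequences in LOCATION and DESCRIPTION.

from icalendar import Calendar

# Read the raw bytes; Calendar.from_ical handles line folding and escaping.
with open("event.ics", "rb") as f:
    cal = Calendar.from_ical(f.read())

# walk("VEVENT") yields every event component in the calendar (here, one).
for event in cal.walk("VEVENT"):
    print("Summary: ", str(event.get("SUMMARY")))
    print("Location:", str(event.get("LOCATION")))
    # .dt is a timezone-aware datetime, since DTSTART/DTEND carry TZID=Asia/Seoul.
    print("Starts:  ", event.get("DTSTART").dt)
    print("Ends:    ", event.get("DTEND").dt)
    print("URL:     ", str(event.get("URL")))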