BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Seoul
X-LIC-LOCATION:Asia/Seoul
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:KST
DTSTART:18871231T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20230103T035312Z
LOCATION:Room 325-AB\, Level 3\, West Wing
DTSTART;TZID=Asia/Seoul:20221209T140000
DTEND;TZID=Asia/Seoul:20221209T153000
UID:siggraphasia_SIGGRAPH Asia 2022_sess178_papers_170@linklings.com
SUMMARY:UmeTrack: Unified multi-view end-to-end hand tracking for VR
DESCRIPTION:Technical Communications, Technical Papers\n\nUmeTrack: Unified multi-view end-to-end hand tracking for VR\n\nHan, Wu, Zhang, Liu, Zhang...\n\nReal-time tracking of 3D hand pose in world space is a challenging problem\nand plays an important role in VR interaction. Existing work in this space is\nlimited to either producing root-relative (versus world space) 3D pose or relying\non multiple stages such as generating heatmaps and kinematic optimization\nto obtain 3D pose. Moreover, the typical VR scenario, which involves multiview\ntracking from wide field of view (FOV) cameras, is seldom addressed by\nthese methods. In this paper, we present a unified end-to-end differentiable\nframework for multi-view, multi-frame hand tracking that directly predicts\n3D hand pose in world space. We demonstrate the benefits of end-to-end\ndifferentiability by extending our framework with downstream tasks such\nas jitter reduction and pinch prediction. To demonstrate the efficacy of our\nmodel, we further present a new large-scale egocentric hand pose dataset that\nconsists of both real and synthetic data. Experiments show that our system\nhandles various challenging interactive motions, and has been successfully\napplied to real-time VR applications.\n\nRegistration Category: FULL ACCESS, ON-DEMAND ACCESS\n\nLanguage: ENGLISH\n\nFormat: IN-PERSON, ON-DEMAND
URL:https://sa2022.siggraph.org/en/full-program/?id=papers_170&sess=sess178
END:VEVENT
END:VCALENDAR