BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Seoul
X-LIC-LOCATION:Asia/Seoul
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:KST
DTSTART:18871231T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20230103T035312Z
LOCATION:Room 325-AB\, Level 3\, West Wing
DTSTART;TZID=Asia/Seoul:20221209T103000
DTEND;TZID=Asia/Seoul:20221209T120000
UID:siggraphasia_SIGGRAPH Asia 2022_sess177_papers_379@linklings.com
SUMMARY:Reconstructing Hand-Held Objects from Monocular Video
DESCRIPTION:Technical Papers\n\nReconstructing Hand-Held Objects from Monocular Video\n\nHuang, Ji, He, Sun, He...\n\nThis paper presents an approach that reconstructs a hand-held object from a monocular video. In contrast to many recent methods that directly predict object geometry by a trained network, the proposed approach does not require any learned prior about the object and is able to recover more accurate and detailed object geometry. The key idea is that the hand motion naturally provides multiple views of the object and the motion can be reliably estimated by a hand pose tracker. Then, the object geometry can be recovered by solving a multi-view reconstruction problem. We devise an implicit neural representation-based method to solve the reconstruction problem and address the issues of imprecise hand pose estimation, relative hand-object motion, and insufficient geometry optimization for small objects. We also provide a newly collected dataset with 3D ground truth to validate the proposed approach.\n\nRegistration Category: FULL ACCESS, ON-DEMAND ACCESS\n\nLanguage: ENGLISH\n\nFormat: IN-PERSON, ON-DEMAND
URL:https://sa2022.siggraph.org/en/full-program/?id=papers_379&sess=sess177
END:VEVENT
END:VCALENDAR