BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Seoul
X-LIC-LOCATION:Asia/Seoul
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:KST
DTSTART:18871231T000000
DTSTART:19881009T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20230103T035307Z
LOCATION:Auditorium\, Level 5\, West Wing
DTSTART;TZID=Asia/Seoul:20221206T100000
DTEND;TZID=Asia/Seoul:20221206T120000
UID:siggraphasia_SIGGRAPH Asia 2022_sess153_papers_584@linklings.com
SUMMARY:Scene Synthesis from Human Motion
DESCRIPTION:Technical Papers\n\nScene Synthesis from Human Motion\n\nYe, Wang, Li, Park, Liu...\n\nLarge-scale capture of human motion with diverse, complex scenes, while immensely useful, is often considered prohibitively costly. Meanwhile, human motion alone contains rich information about the scene they reside in and interact with. For example, a sitting human suggests the existence of a chair, and their leg position further implies the chair’s pose. In this paper, we propose to synthesize diverse, semantically reasonable, and physically plausible scenes based on human motion. Our framework, Scene Synthesis from HUMan MotiON (SUMMON), includes two steps. It first uses ContactFormer, our newly introduced contact predictor, to obtain temporally consistent contact labels from human motion. Based on these predictions, SUMMON then chooses interacting objects and optimizes physical plausibility losses; it further populates the scene with objects that do not interact with humans. Experimental results demonstrate that SUMMON synthesizes feasible, plausible, and diverse scenes and has the potential to generate extensive human-scene interaction data for the community.\n\nRegistration Category: FULL ACCESS, EXPERIENCE PLUS ACCESS, EXPERIENCE ACCESS, TRADE EXHIBITOR\n\nLanguage: ENGLISH\n\nFormat: IN-PERSON
URL:https://sa2022.siggraph.org/en/full-program/?id=papers_584&sess=sess153
END:VEVENT
END:VCALENDAR