BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Seoul
X-LIC-LOCATION:Asia/Seoul
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:KST
DTSTART:18871231T000000
DTSTART:19881009T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20230103T035311Z
LOCATION:Room 325-AB\, Level 3\, West Wing
DTSTART;TZID=Asia/Seoul:20221208T110000
DTEND;TZID=Asia/Seoul:20221208T123000
UID:siggraphasia_SIGGRAPH Asia 2022_sess168_papers_584@linklings.com
SUMMARY:Scene Synthesis from Human Motion
DESCRIPTION:Technical Papers\n\nScene Synthesis from Human Motion\n\nYe, Wang, Li, Park, Liu...\n\nLarge-scale capture of human motion with diverse, complex scenes, while immensely useful, is often considered prohibitively costly. Meanwhile, human motion alone contains rich information about the scene they reside in and interact with. For example, a sitting human suggests the existence of a chair, and their leg position further implies the chair’s pose. In this paper, we propose to synthesize diverse, semantically reasonable, and physically plausible scenes based on human motion. Our framework, Scene Synthesis from HUMan MotiON (SUMMON), includes two steps. It first uses ContactFormer, our newly introduced contact predictor, to obtain temporally consistent contact labels from human motion. Based on these predictions, SUMMON then chooses interacting objects and optimizes physical plausibility losses; it further populates the scene with objects that do not interact with humans. Experimental results demonstrate that SUMMON synthesizes feasible, plausible, and diverse scenes and has the potential to generate extensive human-scene interaction data for the community.\n\nRegistration Category: FULL ACCESS, ON-DEMAND ACCESS\n\nLanguage: ENGLISH\n\nFormat: IN-PERSON, ON-DEMAND
URL:https://sa2022.siggraph.org/en/full-program/?id=papers_584&sess=sess168
END:VEVENT
END:VCALENDAR