BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Seoul
X-LIC-LOCATION:Asia/Seoul
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:KST
DTSTART:18871231T000000
DTSTART:19881009T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20230103T035307Z
LOCATION:Auditorium\, Level 5\, West Wing
DTSTART;TZID=Asia/Seoul:20221206T100000
DTEND;TZID=Asia/Seoul:20221206T120000
UID:siggraphasia_SIGGRAPH Asia 2022_sess153_papers_234@linklings.com
SUMMARY:Motion In-betweening via Two-stage Transformers
DESCRIPTION:Technical Papers\n\nMotion In-betweening via Two-stage Transformers\n\nQin, Zheng, Zhou\n\nWe present a deep learning-based framework to synthesize motion in-betweening in a two-stage manner. Given some context frames and a target frame, the system can generate plausible transitions of variable length in a non-autoregressive fashion. The framework consists of two Transformer Encoder-based networks operating in two stages: in the first stage, a Context Transformer generates rough transitions based on the context, and in the second stage, a Detail Transformer refines the motion details. Compared to existing Transformer-based methods, which either use a complete Transformer Encoder-Decoder architecture or additional 1D convolutions to generate motion transitions, our framework achieves superior performance with fewer trainable parameters by leveraging only the Transformer Encoder and a masked self-attention mechanism. To enhance the generalization of our Transformer-based framework, we further introduce Keyframe Positional Encoding and Learned Relative Positional Encoding, making our method robust when synthesizing transitions longer than the maximum transition length seen during training. Our framework is also artist-friendly, supporting full and partial pose constraints within the transition and giving artists fine control over the synthesized results. We benchmark our framework on the LAFAN1 dataset, and experiments show that our method outperforms the current state-of-the-art methods by a large margin (an average of 16% for normal-length sequences and 55% for excessive-length sequences). Our method trains faster than the RNN-based method and achieves a four-fold speedup during inference. We implement our framework as a production-ready tool inside an animation authoring software and conduct a pilot study to validate the practical value of our method.\n\nRegistration Category: FULL ACCESS, EXPERIENCE PLUS ACCESS, EXPERIENCE ACCESS, TRADE EXHIBITOR\n\nLanguage: ENGLISH\n\nFormat: IN-PERSON
URL:https://sa2022.siggraph.org/en/full-program/?id=papers_234&sess=sess153
END:VEVENT
END:VCALENDAR