BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Seoul
X-LIC-LOCATION:Asia/Seoul
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:KST
DTSTART:18871231T000000
DTSTART:19881009T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20230103T035307Z
LOCATION:Room 324\, Level 3\, West Wing
DTSTART;TZID=Asia/Seoul:20221206T140000
DTEND;TZID=Asia/Seoul:20221206T153000
UID:siggraphasia_SIGGRAPH Asia 2022_sess154_papers_234@linklings.com
SUMMARY:Motion In-betweening via Two-stage Transformers
DESCRIPTION:Technical Communications, Technical Papers\n\nMotion In-betweening via Two-stage Transformers\n\nQin, Zheng, Zhou\n\nWe present a deep learning-based framework to synthesize motion in-betweening in a two-stage manner. Given some context frames and a target frame, the system can generate plausible transitions with variable lengths in a non-autoregressive fashion. The framework consists of two Transformer Encoder-based networks operating in two stages: in the first stage a Context Transformer is designed to generate rough transitions based on the context, and in the second stage a Detail Transformer is employed to refine motion details. Compared to existing Transformer-based methods, which either use a complete Transformer Encoder-Decoder architecture or additional 1D convolutions to generate motion transitions, our framework achieves superior performance with fewer trainable parameters by leveraging only the Transformer Encoder and a masked self-attention mechanism. To enhance the generalization of our Transformer-based framework, we further introduce Keyframe Positional Encoding and Learned Relative Positional Encoding to make our method robust when synthesizing transitions longer than the maximum transition length seen during training. Our framework is also artist-friendly, supporting full and partial pose constraints within the transition and giving artists fine control over the synthesized results. We benchmark our framework on the LAFAN1 dataset, and experiments show that our method outperforms the current state-of-the-art methods by a large margin (an average of 16% for normal-length sequences and 55% for excessive-length sequences). Our method trains faster than the RNN-based method and achieves a fourfold speedup during inference. We implement our framework as a production-ready tool inside an animation authoring application and conduct a pilot study to validate the practical value of our method.\n\nRegistration Category: FULL ACCESS, ON-DEMAND ACCESS\n\nLanguage: ENGLISH\n\nFormat: IN-PERSON, ON-DEMAND
URL:https://sa2022.siggraph.org/en/full-program/?id=papers_234&sess=sess154
END:VEVENT
END:VCALENDAR