BEGIN:VCALENDAR
VERSION:2.0
PRODID:Linklings LLC
BEGIN:VTIMEZONE
TZID:Asia/Seoul
X-LIC-LOCATION:Asia/Seoul
BEGIN:STANDARD
TZOFFSETFROM:+0900
TZOFFSETTO:+0900
TZNAME:KST
DTSTART:18871231T000000
DTSTART:19881009T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20230103T035347Z
LOCATION:Exhibition Hall 2\, Level 1\, West Wing
DTSTART;TZID=Asia/Seoul:20221208T100000
DTEND;TZID=Asia/Seoul:20221208T180000
UID:siggraphasia_SIGGRAPH Asia 2022_sess189@linklings.com
SUMMARY:Posters Gallery
DESCRIPTION:Poster\n\nThe Posters program is an interactive forum for innovative ideas that are not yet fully polished, high-impact practical contributions, and behind-the-scenes views of new commercial and artistic work, as well as solutions that help solve challenging problems. It is a cooperative setting where students, researchers, artists, enthusiasts, and industry veterans come together to present their research, art, and ideas to the global CG industry and encourage feedback on recently completed work or tentative new approaches.\n\nIt's Me: VR-based Journaling for Improved Cognitive Self-Regulation\n\nWang, Pai, Minamizawa\n\nA VR-based journal where users can record and review past selves in the virtual environment with visualized emotions. It offers a safe setting to support better self-awareness and self-regulation.\n\n---------------------\nEye on the Ball: The effect of visual cues on virtual throwing\n\nYamac, O'Sullivan\n\nWe present an experiment to investigate how visual information affects the way participants perform virtual throws at different distances. A virtual ball was thrown in VR with varying visual feedback.\n\n---------------------\nLanguage-driven Diversified Image Retargeting\n\nWang, Huang, Tang, Dong, Lee\n\nThis study proposes a reinforcement learning framework for diversified image retargeting guided by text. The experimental findings reveal that LDIR achieves stable and accurate results guided by text.\n\n---------------------\nRepresentation of FRP material damage in 3DCG\n\nKomori, Ishikawa\n\nIn this research, we devised a method of expressing cracks in FRP materials and implemented it using Unity, a video game development tool.\n\n---------------------\nMethod of Creating Video Content that Causes the Sensation of Falling\n\nIwasaki, Sakamoto\n\nWe investigate the effect of spatial frequency and the peripheral visual field area to create video content that causes the sensation of falling. Video content with a greater sensation of falling was produced.\n\n---------------------\nAutomatic Deformation-based animation of 3D mesh\n\nMuraleedharan, Gowda, Shanthappa Vandrotti\n\nThis work introduces a novel method for automating object animation from a 3D mesh, used in various applications such as AR/VR, gaming, etc. The 3D mesh could be a generic inflated object generated from an image or sketch.
The method describes a medial axis based control point estimation to perfo...\n\n---------------------\nRibbon Font Neural Style Transfer for OpenType-SVG Font\n\nHuang, Hsieh\n\nThis paper presents a machine learning-based method for generating colored OpenType-SVG fonts which can be used in Photoshop, Illustrator, and Inkscape.\n\n---------------------\nA High Frame Rate Affordable Nystagmus Detection Method with Smartphones Used in Outpatient Clinic\n\nYang, Hsieh, Lin, Sun, Ouhyoung\n\nWe propose a new semi-automatic nystagmus diagnostic procedure for outpatient clinic use, using the high-speed video recording feature of a smartphone and the OpenCV library to detect and classify nystagmus.\n\n---------------------\nVisual Simulation of Tire Smoke\n\nTamagawa, Ishikawa\n\nIn this study, we propose a CG method for representing tire smoke based on physical simulation. \nWe have confirmed that our method can reproduce the phenomenon of tire smoke.\n\n---------------------\nFused BVH to Ray Trace Level of Detail Meshes\n\nKulkarni, Ikeda, Harada\n\nThe method proposes a heuristic to find such subtrees in non-Base LOD BVHs and an algorithm to find fusion or insertion points for these subtrees in the Base LOD BVH.\n\n---------------------\nCognition-aware automatic viewpoint selection in scenes with crowds of objects\n\nSakai, Sawahata, Miyashita, Komine\n\nWe present an algorithm that automatically produces perceptually good views in 3DCG scenes consisting of many objects, predicting the scores of views rendered from given viewpoints.\n\n---------------------\nRecursive Rendering of 2D Images for Accurate Pose Estimation in a 3D Mesh Map\n\nHanaoka, Suwanwimolkul, Komorita\n\nWe propose a method to accurately estimate the pose by recursively rendering 2D images from a generic 3D city mesh map and updating the pose.\n\n---------------------\nImproving Co-speech gesture rule-map generation via wild pose matching with gesture units\n\nAli, Hwang\n\nA method to generate co-speech text-to-gesture mapping for 3D digital humans using poses and transcripts from public videos and mapping noisy poses to MoCap gestures.\n\n---------------------\nEffects of Font Type and Weight on Reading in VR\n\nKobayashi, Kanari, Sato\n\nWe examined the effects of font type and weight on reading in VR. Our results showed a tendency for Antigothic to be more readable and less fatiguing than Mincho and Gothic.\n\n---------------------\nOptimal Composition Recommendation for Portrait Photography\n\nSong, Pan, Wu, Dong\n\nWe propose a new composition recommendation algorithm to help users find appropriate poses and search for the optimal position and size of the human subject within a given scene.\n\n---------------------\nProcedural Modeling of Crystal Clusters\n\nKita, Tsukii, Tsuru\n\nWe propose procedural modeling of crystal clusters.
Based on crystallography, we model single crystals and then distribute single crystals on a base rock using a hierarchical sampling approach.\n\n---------------------\nAnime-Like Motion Transfer with Optimal Viewpoints\n\nKoroku, Fujishiro\n\nWe propose a method that respects production-site techniques called “nakawari” to convert mocap data into anime-like motions by extracting poses and viewpoints more suitable for animating on threes than uniform downsampling methods.\n\n---------------------\nCohand VR: Towards A Shareable Immersive Experience via Wearable Gesture Interface between VR Audiences and External Audiences\n\nZhao, Yan, Shen\n\nWe superimpose the virtual world onto the physical environment and propose a shareable VR experience between VR audiences and external audiences via a wearable gesture interface.\n\n---------------------\nNeural Bidirectional Texture Function Compression and Rendering\n\nQuartesan, Pereira Santos\n\nComplex surface materials can be represented and rendered using bidirectional texture functions (BTF). In this work, we propose two changes that improve the state of the art in using neural networks for BTF.\n\n---------------------\nA Non-Associated MSCC Model for Simulating Structured and Destructured Clays\n\nJing, Li, Zhu\n\nWe present a non-associated Modified Structured Cam Clay model for simulating structured and destructured clays. Our method generates visually plausible results while allowing volume preservation.\n\n---------------------\nTemporal and Spatial Distortion for VR Rhythmic Skill Training\n\nMatsumoto, Wu, Koike\n\nIn many sports, rhythmic skills are considered important. In this paper, we take juggling as an example and propose a VR system that simplifies the acquisition of a sense of rhythm. The proposed system uses temporal and spatial distortion and other functions to assist the training. A pilot study is ...\n\n---------------------\nAdjusting Level of Abstraction for Stylized Image Composition\n\nHashimoto, Dobashi\n\nThis paper proposes a method to stylize a background image based on the level of detail in the foreground illustration by estimating the parameters of stylization.\n\n---------------------\nUsing Rhythm Game to train Rhythmic Motion in Sports\n\nKatsuyama, Wu, Koike\n\nWe propose a ski training system using a rhythm game. It incorporates a variety of feedback in addition to music, and we will confirm its effectiveness in a pilot study.\n\n---------------------\nCodeless Content Creator System: Anyone Can Make Their Own Mixed Reality Content Without Relying on Software Developer Tools\n\nKim, Lee, Park, Song, Jung\n\nWe propose the “Codeless Content Creator System”, which helps novice users rapidly and conveniently create MR content using MR devices rather than relying on complicated software development tools such as Unity.\n\n---------------------\nRealistic Rendering Tool for Pseudo-Structural Coloring with Multi-Color Extrusion of FFF 3D Printing\n\nEguchi, Nagura, Tanaka\n\nWe propose a method to generate 3D print-like colors and shapes from G-code for 3D printing.
This allows color transitions to be checked while designing and creating multi-color art.\n\n---------------------\nAccelerated and Optimized Search of Imperceptible Color Vibration for Embedding Information into LCD images\n\nHattori, Hiraki\n\nWe accelerate the search for color pairs with invisible color vibrations using matrix operations, and investigate the amount of information that can be embedded using color vibrations for nine colors.\n\n---------------------\nColor LightField: Estimation of View-point Dependent Color Dispersion Pattern in Waveguide Display\n\nOoi, Dingliana\n\nA two-step method to estimate the color dispersion pattern at any given viewpoint in the eyebox of an optical see-through near-eye display by capturing a 4D light field and then estimating the eye position that matches the viewpoint.\n\n---------------------\nComputer Generated Hologram Optimization for Lens Aberration\n\nNakamura, Yamamoto, Ochiai\n\nWe propose a lens aberration correction method for holographic displays via a light wave propagation simulation and optimization algorithm.\n\n---------------------\nCross-platforming "School life metaverse" user experience\n\nKato, Nakano, Horibe, Takemasa, Yamazaki...\n\nIn "Metaver-School", Cross-platforming "School life metaverse" user experience development (UXDev), we investigated how many avatars could be rendered on current Head Mounted Displays (HMDs). As a benchmark, with REALITY avatars running on Quest 2, up to 23 avatars could be rendered at more than 60 FPS, ho...\n\n---------------------\nHanging Print: Plastic Extrusion for Catenary Weaving in Mid Air\n\nKinoshita, Tanaka\n\nWe present Hanging Print, a framework to design and fabricate shapes by weaving catenaries (i.e. hanging curves), extruding plastic filaments directly in mid air, and we demonstrate our works.\n\n---------------------\nTexSR: Image Super-resolution for High-Quality Texture Mapping\n\nNah, Kim\n\nIn this poster, we introduce an image super-resolution technique for high-quality texture mapping.\n\n---------------------\nTranscendental Avatar: Experiencing Bioresponsive Avatar of the Self for Improved Cognition\n\nSkiers, Suen Pai, Minamizawa\n\nThe Transcendental Avatar is a virtual reality system with a self-avatar that reacts to the user's physiological state, such as their heart rate and electrodermal activity.\n\n---------------------\nAIP: Adversarial Interaction Priors for Multi-Agent Physics-based Character Control\n\nYounes, Kijak, Kulpa, Malinowski, Multon\n\nWe simulate the interactions between multiple physics-based characters using short unlabeled clips. We introduce Adversarial Interaction Priors, a multi-agent generative adversarial imitation learning technique extending recent single-character deep reinforcement learning.\n\n---------------------\nPupillary oscillation induced by pseudo-isochromatic stimuli for objective color vision test\n\nNakanishi, Kinzuka, Sato, Nakauchi, Minami\n\nThis study proposes an objective color vision test based on the pupillary response induced by pseudo-isochromatic flicker stimuli.
Results show that the amplitude of pupillary oscillation differs depending on the color difference.\n\n---------------------\nA Study on Sonification Method of Simulator-Based Ski Training for People with Visual Impairment\n\nMiura, Kuribayashi, Wu, Koike, Morishima\n\nWe explore two types of sonification feedback to enable people with visual impairment to train skiing using a ski simulator, based on interviews with blind skiers and their guides.\n\n---------------------\nCombining Augmented and Virtual Reality Experiences for Immersive Fire Drills\n\nKang, Lee, Choi\n\nCombining the advantages of AR and VR, we propose a more immersive and effective fire training system, and introduce techniques and methods for configuring this system.\n\n---------------------\nMMGrip: A Handheld Multimodal Haptic Device Combining Vibration, Impact, and Shear for Realistic Expression of Contact\n\nKim, Lee, Choi\n\nWe introduce MMGrip, a handheld multimodal haptic device that simultaneously presents vibration, impact, and shear for realistic and immersive haptic feedback on virtual collision events.\n\n---------------------\nRobust Vectorized Surface Reconstruction with 2D-3D Joint Optimization\n\nWang, Xu, Tang, Li, Mao...\n\nWe propose a robust pipeline to create vectorized models from LiDAR point clouds without the assumption of watertight polygonal surfaces.\n\n---------------------\nPalette-based Image Search with Color Weights\n\nKita, Kawasaki, Saito\n\nWe propose a novel image search system based on color palettes. By querying color palettes, users can search for inspiring images, which is useful for design exploration.\n\n---------------------\nInternal-External Boundary Attentions for Transparent Object Segmentation\n\nHan, Lee\n\nWe propose a new internal-external boundary attention module in which internal and external boundary features are separately recognized.\n\n---------------------\nReal-Time Facial Animation Generation on Face Mask\n\nHAN, Kim, Hwang\n\nThis study aims to address the communication difficulties caused by wearing a mask and provide a strategy for aiding in understanding the speaker’s speech through facial animation.\n\n---------------------\nEfficient Drone Exploration in Real Unknown Environments\n\nXie, Jung, Chen\n\nWe propose an autonomous drone exploration system with a lightweight and low-latency saliency prediction model to explore unknown environments. The experiments show the efficiency and feasibility of the system.\n\n---------------------\nRobust and Efficient Structure-from-Motion Method for Ambiguous Large-Scale Indoor Scene\n\nYu, Chen\n\nWe propose a method to reconstruct ambiguous large-scale indoor scenes. To the best of our knowledge, none of the existing works can successfully reconstruct such scenes.\n\n---------------------\nTime-Dependent Machine Learning for Volumetric Simulation\n\nGiraud-Carrier, Holladay, Egbert\n\nAn application of the ODE-net framework to volumetric simulation sequences.
Examples of its use in retiming are presented, and the potential for additional applications is also discussed.\n\n---------------------\nArtist-directed Modeling of Competitively Growing Corals\n\nHoriuchi, Cao, Kominami, Umezawa, Dou...\n\nThis paper presents a procedural modeling method for coral groups considering the territorial conflict between different species.\n\n---------------------\nSustainable VFX - A Pipeline and Rendering Challenge?\n\nSchubert, Löffler, Schober, Freitag, Helzle...\n\nThis report presents insights into the offline pipeline of VFX creature shots. We discuss the demands of modern VFX pipelines and the shortcomings of real-time render solutions, as well as energy budgets and quality aspects.\n\n---------------------\nInvestigating the Effects of Synchronized Visuo-Tactile Stimuli for Inducing Kinesthetic Illusion in Observational Learning of Whole-Body Movements\n\nFukumoto, Mitsuno, Nakayama, Itatani, Jogan...\n\nWe confirmed that synchronization of visuo-tactile stimuli can induce kinesthetic illusion in the observational learning of whole-body movement, and that the synchronization influences kinesthetic illusion mediated by body ownership.\n\n---------------------\nNo-code Digital Human for Conversational Behavior\n\nKim, Kim, Ali, Hwang\n\nFlow Human is a no-code system that automatically generates the conversational behavior of digital humans from the conversation flow. This flow can be generated using the dialogue authoring tool we developed.\n\n---------------------\nMEMformer: Transformer-based 3D Human Motion Estimation from MoCap Markers\n\nLuan, Jiang, Diao, Wang, Xiao\n\nWe propose a real-time end-to-end method to estimate 3D human motion from MoCap markers with a transformer-like architecture that does not depend on knowledge of marker labels.\n\n---------------------\nA Novel Solution to Manufacturing Multi-Color Medical Preoperative Models with Transparent Shells\n\nYang, Xiang, Zhao, Zhao, Wu\n\nA medical preoperative model manufacturing solution that combines FDM and DLP 3D printing technologies to ensure high quality while reducing costs to a very low level.\n\n---------------------\nInfiniteShader: Color Changeable 3D Printed Objects using Bi-Stable Thermochromic Materials\n\nUmetsu, Punpongsanon, Hiraki\n\nWe propose a method to control the color and pattern on the surface of 3D printed objects using bi-stable thermochromic materials and laser thermal projection.\n\n---------------------\nAdaptive real-time interactive rendering of gigantic multi-resolution models\n\nLi\n\nWe present a scheme (both mesh preprocessing and real-time rendering) that avoids LOD popping and/or cracks between level parts and can render gigantic scanned models at interactive frame rates.\n\n---------------------\nPrometheus: A mobile telepresence system connecting the 1st person and 3rd person perspectives continuously\n\nKimura, Rekimoto\n\nWe propose a telepresence method, Prometheus. It generates first- and third-person views in real time from pre-scanned geometry and a monocular camera image.
The user can switch between these views freely.\n\n---------------------\nColor Animated Full-parallax High-definition Computer-generated Hologram\n\nKoiso, Nonaka, Kobayashi, Matsushima\n\nWe propose a crosstalk suppression method for FPHD-CGH animation and have developed a prototype CGH display system that achieves both a large screen and a wide viewing angle.\n\n---------------------\nMetric-KNN is All You Need\n\nAnvekar, Tabib, Hegde, Mudengudi\n\nMetric-KNN (M-KNN) is a topology-aware nearest-neighbor search algorithm. Topology-aware learning enables deep-learning models to imitate human cognition. M-KNN lays the foundation for a wide range of 3D deep-learning models toward topology-aware learning.\n\n---------------------\nMotion In-betweening for Physically Simulated Characters\n\nGopinath, Joo, Won\n\nWe present a motion in-betweening framework to generate high-quality, physically plausible character animation when given temporally sparse keyframes as soft animation constraints.\n\n\nRegistration Category: FULL ACCESS, EXPERIENCE PLUS ACCESS, EXPERIENCE ACCESS, TRADE EXHIBITOR\n\nLanguage: ENGLISH\n\nFormat: IN-PERSON
END:VEVENT
END:VCALENDAR