VR devices with hand controllers are increasingly used as simple motion capture devices for operating 3DCG avatars. When a user wears an avatar whose appearance differs from their real body, they often want to input poses (postures) suited to that avatar. However, user skill and equipment limitations make it difficult to make the avatar take the intended pose. In particular, some motion capture setups cannot measure elbow or waist motion and must interpolate it from the positions and orientations of the hands and head, so the intended pose is hard to achieve.
In this research, we propose a system that helps users input their intended poses by continuously transitioning between a pre-configured target pose and the user's input posture, with the transition controlled by how far the trigger on the hand-held controller is pressed. With this system, users can take target poses easily and accurately while manipulating an avatar in virtual space.
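The core idea of the trigger-controlled transition can be sketched as a per-joint interpolation between the user's input pose and the target pose, weighted by the trigger value in [0, 1]. The following is a minimal illustrative sketch, not the system's actual implementation: joint names, the dictionary-based pose representation, and the `slerp`/`blend_pose` helpers are all assumptions for the example, and joint rotations are blended with quaternion spherical linear interpolation, a common choice for this kind of pose blending.

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions q0 and q1,
    with t in [0, 1]. Quaternions are (w, x, y, z) numpy arrays."""
    q0 = q0 / np.linalg.norm(q0)
    q1 = q1 / np.linalg.norm(q1)
    dot = np.dot(q0, q1)
    if dot < 0.0:          # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:       # nearly parallel: fall back to normalized lerp
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def blend_pose(input_pose, target_pose, trigger):
    """Blend each joint rotation of the user's input pose toward the
    pre-configured target pose. `trigger` is the controller trigger
    value in [0, 1]: 0 keeps the input pose, 1 reaches the target pose."""
    return {joint: slerp(input_pose[joint], target_pose[joint], trigger)
            for joint in input_pose}
```

Because the blend weight tracks the analog trigger continuously, releasing the trigger smoothly returns the avatar to the user's own posture, while a full press snaps it to the target pose.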

System Overview
