AIST Digital Human Research Center
Kevin Fan is a researcher in HCI, VR, AR, augmented human, and ubiquitous computing. He is fascinated by the notion of reality and is dedicated to augmenting humans beyond the physical medium by exploring perception, cross-modality, embodiment, and interaction. Recently, Kevin has become a machine learning enthusiast and spends his time reading deep learning papers and studying neural network architectures. In his leisure time, Kevin enjoys sports and cooking.
Ph.D. in Media Design
Graduate School of Media Design, Keio University
Master in Media Design
Graduate School of Media Design, Keio University
Bachelor of Applied Science in Computer Engineering
University of British Columbia
My research focus is in the areas of VR/AR in HCI. In particular, I am fascinated by the notion of reality, which is uniquely built by our conscious experience through our sensory feedback and our embodiment of these senses.
My research vision and goal is to bridge our perception of reality across multiple spatial and temporal dimensions, across bodily differences between people, and across physical and virtual mediums, thereby breaking the bounds of the human self and physical reality to augment human perception and enhance human-to-human and human-to-world interaction.
We present a multi-embodiment interface aimed at assisting human-centered ergonomics design, where the design process is traditionally hindered by the need to recruit diverse users or by reliance on disembodied simulations when designing for most groups of the population. The multi-embodiment solution is to actively embody the user in the design and evaluation process in virtual reality while simultaneously superimposing additional simulated virtual bodies on the user's own body. The superimposed body acts as the design target and enables simultaneous anthropometric ergonomics evaluation for both the user's self and the target. Both virtual bodies, self and target, are generated with digital human modeling from statistical data; the self-body is animated through motion capture, while the target body is moved using a weighted inverse kinematics approach with end effectors on the hands and feet. We conducted user studies evaluating human ergonomics design in five virtual reality scenarios, comparing multi-embodiment with single embodiment. Similar evaluations were then conducted in the physical environment to explore the post-VR influence of the different virtual experiences.
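The target body's pose is driven by inverse kinematics from end effectors on the hands and feet. As a rough illustration of how an IK solver pulls an end effector toward a goal, here is a minimal cyclic coordinate descent (CCD) solver for a planar joint chain; this is a toy stand-in, not the weighted solver used in the actual system, and all lengths and targets below are made up.

```python
import math

def ccd_ik(lengths, target, iters=100):
    """Cyclic coordinate descent IK for a planar chain anchored at the
    origin: rotate each joint in turn so the end effector swings toward
    the target. Illustrative only; the paper uses a weighted IK method."""
    angles = [0.0] * len(lengths)

    def forward(angles):
        # Positions of every joint plus the end effector.
        pts, x, y, a = [(0.0, 0.0)], 0.0, 0.0, 0.0
        for ang, length in zip(angles, lengths):
            a += ang
            x += length * math.cos(a)
            y += length * math.sin(a)
            pts.append((x, y))
        return pts

    for _ in range(iters):
        for i in reversed(range(len(lengths))):
            pts = forward(angles)
            jx, jy = pts[i]          # joint being adjusted
            ex, ey = pts[-1]         # current end effector
            cur = math.atan2(ey - jy, ex - jx)
            des = math.atan2(target[1] - jy, target[0] - jx)
            angles[i] += des - cur   # rotate joint toward the target
    return forward(angles)[-1]
```

For a reachable target, e.g. `ccd_ik([1.0, 1.0], (1.2, 0.8))`, the returned end-effector position converges to the target after a few sweeps.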
The emergence of head-mounted displays (HMDs) has enabled us to experience virtual environments immersively. At the same time, omnidirectional cameras, which capture real-life environments in all 360-degree directions as still images or video, are also gaining attention. Using HMDs, we can view these captured omnidirectional images immersively, as though we were actually "being there". However, as a requirement for immersion, our view of omnidirectional images in an HMD is usually presented in first-person view and limited by our natural field of view (FOV): we see only the fraction of the environment we are facing, while the rest of the 360-degree environment is hidden from view. This is even more problematic in telexistence situations, where the scene is live, so setting a default facing direction for the HMD is impractical. We can often observe people wearing HMDs turning their heads frantically, trying to locate interesting occurrences in the omnidirectional environment they are viewing.
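To put the FOV limitation in numbers, a flat approximation of how much of an equirectangular 360-degree frame fits inside an HMD viewport can be computed directly; the 90-degree figures below are hypothetical, not the specs of any particular HMD.

```python
def visible_fraction(h_fov_deg, v_fov_deg):
    """Rough fraction of an equirectangular 360-degree frame visible in
    an HMD viewport (flat approximation, illustrative FOV values)."""
    return (h_fov_deg / 360.0) * (v_fov_deg / 180.0)

# A 90 x 90 degree viewport covers only 0.25 * 0.5 = 12.5% of the frame,
# leaving 87.5% of the environment hidden from the wearer at any moment.
```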
Emerging media technologies such as 3D film and head-mounted displays (HMDs) call for new types of spatial interaction. Here we describe and evaluate AnyOrbit: a novel orbital navigation technique that enables flexible and intuitive 3D spatial navigation in virtual environments (VEs). Unlike existing orbital methods, we exploit toroidal rather than spherical orbital surfaces, which allow independent control of orbital curvature in vertical and horizontal directions. This control enables intuitive and smooth orbital navigation between any desired orbital centers and between any vantage points within VEs. AnyOrbit leverages our proprioceptive sense of rotation to enable navigation in VEs without inconvenient external motion trackers. In user studies, we demonstrate that within a sports spectating context, the technique allows smooth shifts in perspective at a rate comparable to broadcast sport, is fast to learn, and is without excessive simulator sickness in most users. The technique is widely applicable to gaming, computer-aided-design (CAD), data visualisation, and telepresence.
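The key geometric idea is that a torus, unlike a sphere, has two independent radii, so horizontal and vertical orbital curvature can be set separately. A minimal sketch of placing a camera on such a toroidal surface follows; the axis convention and radii are illustrative assumptions, not AnyOrbit's actual parametrization.

```python
def torus_camera(u, v, major_r, minor_r, center=(0.0, 0.0, 0.0)):
    """Camera position on a toroidal orbital surface. u sweeps the
    horizontal (major) circle and v the vertical (minor) circle, so the
    two curvatures are controlled independently -- the property the
    toroidal approach exploits over spherical orbits."""
    import math
    cx, cy, cz = center
    ring = major_r + minor_r * math.cos(v)   # radius of the horizontal ring at height v
    x = cx + ring * math.cos(u)
    z = cz + ring * math.sin(u)
    y = cy + minor_r * math.sin(v)           # vertical offset from the minor circle
    return (x, y, z)
```

Holding `v` fixed while varying `u` yields a horizontal orbit of one curvature; holding `u` fixed while varying `v` yields a vertical orbit of a different, independently chosen curvature.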
Electrosmog is the electromagnetic radiation emitted by wireless technology such as Wi-Fi hotspots or cellular towers, and it poses a potential hazard to humans. Electrosmog is invisible, so we rely on detectors that report its level as warnings such as numbers. Our system estimates the electrosmog level from the number of nearby Wi-Fi networks and from the connected cellular towers and their signal strengths, and presents it intuitively by blurring the vision of users wearing a head-mounted display (HMD). The HMD displays the user's augmented surroundings in real time with blurriness, as though the electrosmog actually clouded the environment. For the demonstration, participants can walk around wearing a video see-through HMD and observe their vision gradually blur as they approach our prepared dense wireless network.
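The core mapping is from detected wireless density to a blur strength. A minimal sketch of such a mapping is below; the weighting, gain, and cap are made-up illustration values, not the calibration used in the actual system.

```python
def blur_radius(n_wifi, n_towers, max_radius=25.0):
    """Map detected wireless density to a blur radius in pixels.
    Towers are weighted heavier than Wi-Fi networks, and the radius is
    clamped so the wearer's vision stays usable. All constants are
    placeholder assumptions for illustration."""
    score = n_wifi + 2 * n_towers
    return min(max_radius, score * 1.5)
```

As the wearer approaches a dense wireless area, `n_wifi` rises and the blur applied to the see-through video grows until it saturates at the cap.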
We present SpiderVision, a wearable device that extends the human field of view to augment a user’s awareness of things happening behind one’s back. SpiderVision leverages a front and back camera to enable users to focus on the front view while employing intelligent interface techniques to cue the user about activity in the back view. The extended back view is only blended in when the scene captured by the back camera is analyzed to be dynamically changing, e.g. due to object movement. We explore factors that affect the blended extension, such as view abstraction and blending area. We contribute results of a user study that explore 1) whether users can perceive the extended field of view effectively, and 2) whether the extended field of view is considered a distraction. Quantitative analysis of the users’ performance and qualitative observations of how users perceive the visual augmentation are described.
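The blending decision hinges on detecting whether the back-camera scene is dynamically changing. A common way to do this is frame differencing; the sketch below uses mean absolute pixel difference between consecutive frames to set a blend strength, with threshold and gain as placeholder tuning constants rather than SpiderVision's actual parameters.

```python
def blend_alpha(prev_frame, cur_frame, threshold=10, gain=0.02):
    """Decide how strongly the back view is blended into the front view,
    based on the mean absolute pixel difference between consecutive
    back-camera frames (flat lists of grayscale values). Constants are
    illustrative assumptions, not the system's tuned values."""
    diff = sum(abs(a - b) for a, b in zip(prev_frame, cur_frame)) / len(cur_frame)
    if diff < threshold:
        return 0.0                          # static scene: keep back view hidden
    return min(1.0, (diff - threshold) * gain)  # more motion, stronger blend
```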
Cuddly is a mobile phone application that enchants soft objects to enhance our interaction with them. Cuddly uses the mobile phone's camera and flashlight (LED) to detect the surrounding brightness captured by the camera. When one integrates Cuddly with a soft object and compresses the object, the brightness level captured by the camera decreases. Using this change in brightness, we can implement diverse entertainment applications with the various functions embedded in a mobile phone, such as animation, sound, and Bluetooth communication. For example, we created a boxing game by connecting two devices through Bluetooth, with one device inserted into a soft object and the other acting as a screen.
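The sensing principle reduces to thresholding the camera's average brightness: inside the soft object, the flashlight-lit view darkens when squeezed. A minimal sketch, assuming a hypothetical fixed threshold (a real app would calibrate per object):

```python
def is_compressed(pixels, threshold=40):
    """Detect a squeeze from camera brightness inside a soft object.
    `pixels` is a flat list of grayscale values; the threshold is an
    illustrative guess, not Cuddly's calibrated value."""
    avg = sum(pixels) / len(pixels)
    return avg < threshold   # darker than threshold means compressed
```

In the boxing-game example, the device inside the soft object would run this check each frame and send a "punch" event over Bluetooth when the state flips to compressed.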