Study on the effects of a virtual avatar's gaze on orienting users' attention

Jiangxue Ning*, Kexuan Zhang*, Angela Song

*These authors contributed equally

Hypothesis:
Does a gaze cue that moves with the camera or one that is fixed in the world (i.e., egocentric moving vs. allocentric fixed) direct people's attention better? Theoretically, we hypothesize that the effect is driven by motion-defined reference frames. To interpret the results as an effect of a motion-defined reference frame, rather than merely of 'moving with us' vs. 'fixed in the world,' the design must separate three conditions: 'moving with the camera,' 'moving with a face,' and 'fixed in the world.'

Motivation: Artificial intelligence is increasingly depicted via virtual avatars, yet we have not established the most engaging way to interact with them. In the new context of VR and AR, is users' attention oriented differently when designers embed an avatar in the 3D scene versus making it part of the HUD overlaid upon that scene? Our goal is to make the most effective use of the virtual avatar's gaze to orient users' attention. We are therefore running an experiment in which observers see faces that are either stationary with respect to their visual field (i.e., moving with their head) or stationary with respect to the 3D scene, and we measure whether these conditions differently affect subjects' performance on attention tasks directed at either in-scene or HUD information. Specifically, we predict that when an avatar is embedded in the scene, its gaze cues will be more effective at directing attention to other within-scene elements, and that when the avatar is affixed to the HUD, its gaze cues will be more effective at directing attention to other HUD elements.
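
To make the anchoring manipulation concrete, the sketch below shows one way the two conditions could be realized in a per-frame update, assuming 4x4 homogeneous pose matrices from an HMD tracker. This is a minimal illustration, not the study's actual implementation; the names head_pose, avatar_offset, and world_anchor are hypothetical.

import numpy as np
from itertools import product

def avatar_world_pose(head_pose: np.ndarray,
                      avatar_offset: np.ndarray,
                      world_anchor: np.ndarray,
                      head_locked: bool) -> np.ndarray:
    """Compute the avatar's world-space pose for the current frame.

    head_pose     -- 4x4 head-in-world transform from the HMD tracker
    avatar_offset -- 4x4 fixed avatar pose in head coordinates
                     (HUD / head-locked condition)
    world_anchor  -- 4x4 fixed avatar pose in world coordinates
                     (in-scene / world-fixed condition)
    """
    if head_locked:
        # HUD condition: the avatar rides along with the head, so it stays
        # stationary with respect to the observer's visual field.
        return head_pose @ avatar_offset
    # In-scene condition: the avatar stays put in the 3D scene as the head
    # moves, so it is stationary with respect to the world.
    return world_anchor

# Hypothetical 2x2 design: avatar anchoring crossed with the location of
# the gaze-cued target (HUD element vs. in-scene element).
conditions = list(product(["head_locked", "world_fixed"],
                          ["hud_target", "scene_target"]))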

