In:
PLOS Computational Biology, Public Library of Science (PLoS), Vol. 18, No. 6 (6 June 2022), p. e1009485
Abstract:
Vision provides the most important sensory information for spatial navigation. Recent technical advances offer new options for conducting more naturalistic experiments in virtual reality (VR) while additionally gathering data on viewing behavior with eye tracking. Here, we propose a method to quantify characteristics of visual behavior by using graph-theoretical measures to abstract eye tracking data recorded in a 3D virtual urban environment. The analysis is based on eye tracking data from 20 participants, who freely explored the virtual city Seahaven for 90 minutes wearing an immersive VR headset with a built-in eye tracker. To extract what participants looked at, we defined “gaze” events, from which we created gaze graphs. On these, we applied graph-theoretical measures to reveal the underlying structure of visual attention. Applying graph partitioning, we found that our virtual environment could be treated as one coherent city. To investigate the importance of houses in the city, we applied the node degree centrality measure. Our results revealed that 10 houses had a node degree that consistently exceeded the mean node degree of all other houses by more than two standard deviations. The importance of these houses was supported by the hierarchy index, which showed a clear hierarchical structure of the gaze graphs. As these high-node-degree houses fulfilled several characteristics of landmarks, we named them “gaze-graph-defined landmarks”. Applying the rich club coefficient, we found that these gaze-graph-defined landmarks were preferentially connected to each other and that participants spent the majority of the experiment time in areas where at least two of those houses were visible. Our findings not only provide new experimental evidence for the development of spatial knowledge, but also establish a new methodology to identify and assess the function of landmarks in spatial navigation based on eye tracking data.
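The graph measures named in the abstract (node degree centrality and the rich club coefficient) can be illustrated on a small toy "gaze graph". The sketch below uses networkx; the house names and edges are entirely hypothetical and are not the paper's data — it only shows, under those assumptions, how high-degree nodes and their interconnection density would be computed.

```python
import networkx as nx

# Hypothetical toy gaze graph: nodes are houses, and an edge connects two
# houses that were gazed at in direct succession (illustrative names only).
G = nx.Graph()
G.add_edges_from([
    ("house_A", "house_B"), ("house_A", "house_C"),
    ("house_A", "house_D"), ("house_B", "house_C"),
    ("house_B", "house_D"), ("house_C", "house_D"),
    ("house_D", "house_E"), ("house_E", "house_F"),
])

# Node degree: how many other houses each house is connected to.
degrees = dict(G.degree())
mean_deg = sum(degrees.values()) / len(degrees)

# Candidate "landmark" houses: degree well above the mean (the paper uses
# a two-standard-deviation criterion; a plain threshold suffices here).
landmarks = [n for n, d in degrees.items() if d > mean_deg]

# Rich club coefficient phi(k): density of edges among nodes of degree > k.
# A high phi for large k means high-degree houses connect preferentially
# to each other.
rc = nx.rich_club_coefficient(G, normalized=False)

print("degrees:", degrees)
print("landmarks:", landmarks)
print("rich club:", rc)
```

In this toy graph the four densely interlinked houses (A–D) all exceed the mean degree and form a fully connected subgraph, so the rich club coefficient for k = 2 reaches its maximum of 1.0.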
Type of Medium:
Online Resource
ISSN:
1553-7358
DOI:
10.1371/journal.pcbi.1009485
Language:
English
Publisher:
Public Library of Science (PLoS)
Publication Date:
2022
ZDB ID:
2193340-6