Combining neuroscience and virtual reality to investigate how we perceive the world: the case of vision and touch
Starting at birth, humans integrate information from several sensory modalities to form a representation of the environment, such as when a baby explores, manipulates, and interacts with objects. Among these processes, the integration of vision and touch is perhaps one of the most fundamental, as touch information (such as body-relative size, shape, texture, material, temperature, and weight) can easily be linked to the visual image, thereby grounding later vision-only recognition. Previous research on such integration has mainly focused on low-level object properties (such as curvature or surface granularity), so little is known about how humans actually form a high-level, multisensory representation of objects. Here, I will review research from our lab that investigates how the human brain processes shape using input from vision and touch. Using parametrically defined, 3D-printed shapes, we were able to show that touch is just as good as vision at shape processing. We next conducted a series of imaging experiments (using anatomical, functional, and white-matter analyses) charting the brain networks that process this shape representation; the results suggest a common, multisensory representation of shape. Finally, I will present our current line of research, which uses virtual reality tools to dissociate vision and touch while humans experience shape.
**********************************************************
Monday, 22 October 2018, 2:30 p.m.
Sala Lauree, Department of Psychology, Building U6 (3rd floor)
All those interested are invited to attend.
For information:
Prof. Emanuela Bricolo