Hirohito M. Kondo
Network Neuroscience, 1–22. Published: 02 December 2024.
Abstract
Sustained attention is essential for daily life and can be directed to information from different perceptual modalities, including audition and vision. Recently, cognitive neuroscience has aimed to identify neural predictors of behavior that generalize across datasets. Prior work has shown strong generalization of models trained to predict individual differences in sustained attention performance from patterns of fMRI functional connectivity. However, it is an open question whether predictions of sustained attention are specific to the perceptual modality in which they are trained. In the current study, we test whether connectome-based models predict performance on attention tasks performed in different modalities. We first show that a predefined network trained to predict adults' visual sustained attention performance generalizes to predict auditory sustained attention performance in three independent datasets (N1 = 29, N2 = 60, N3 = 17). Next, we train new network models to predict performance on visual and auditory attention tasks separately. We find that functional networks are largely modality general, with both model-unique and shared model features predicting sustained attention performance in independent datasets regardless of task modality. Results support the supposition that visual and auditory sustained attention rely on shared neural mechanisms and demonstrate robust generalizability of whole-brain functional network models of sustained attention.

Author Summary

While previous work has demonstrated the external validity of functional connectivity-based networks for predicting cognitive and attentional performance, tests of generalization across visual and auditory perceptual modalities have been limited. The current study demonstrates robust prediction of sustained attention performance regardless of the perceptual modality in which models are trained or tested. Results demonstrate that connectivity-based models may generalize broadly, capturing variance in sustained attention performance that is agnostic to the perceptual modality of model training.
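The approach the abstract describes, connectome-based prediction of behavior from fMRI functional connectivity, typically follows a standard recipe: select edges whose strength correlates with performance across training subjects, collapse the selected edges into a single network-strength score per subject, and fit a linear model that can then be applied to independent datasets. Below is a minimal sketch of that recipe; the p-value threshold, variable names, and simple linear fit are illustrative assumptions for exposition, not the authors' exact pipeline.

```python
# Minimal sketch of connectome-based predictive modeling (CPM).
# fc: (n_subjects, n_edges) functional connectivity values;
# behavior: (n_subjects,) task performance scores.
import numpy as np
from scipy.stats import pearsonr

def train_cpm(fc, behavior, p_thresh=0.01):
    """Select behavior-correlated edges and fit a linear model
    on each subject's summed strength of selected edges."""
    n_edges = fc.shape[1]
    r = np.empty(n_edges)
    p = np.empty(n_edges)
    for e in range(n_edges):
        r[e], p[e] = pearsonr(fc[:, e], behavior)
    # Keep edges whose strength reliably tracks performance.
    pos_mask = (p < p_thresh) & (r > 0)
    neg_mask = (p < p_thresh) & (r < 0)
    # One summary score per subject: positive minus negative network strength.
    summary = fc[:, pos_mask].sum(axis=1) - fc[:, neg_mask].sum(axis=1)
    slope, intercept = np.polyfit(summary, behavior, 1)
    return pos_mask, neg_mask, slope, intercept

def predict_cpm(fc, pos_mask, neg_mask, slope, intercept):
    """Apply a trained model to an independent dataset."""
    summary = fc[:, pos_mask].sum(axis=1) - fc[:, neg_mask].sum(axis=1)
    return slope * summary + intercept
```

Under this sketch, the cross-modality test reported in the abstract would amount to training on connectivity measured during a visual task and correlating `predict_cpm`'s output with performance on an auditory task in a held-out dataset (e.g., with a rank correlation), and vice versa.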