Masanori Shimono
Network Neuroscience 1–22.
Published: 13 January 2025
Abstract
Our brains operate as a complex network of interconnected neurons. To understand this network architecture more deeply, it is essential to extract simple rules from its intricate structure. This study aimed to compress and simplify the architecture, focusing on interpreting patterns of functional connectivity in 2.5 hr of electrical activity recorded from a large number of neurons in acutely sliced mouse brains. Here, we combined two distinct methods: automatic compression and network analysis. First, for automatic compression, we trained an artificial neural network named NNE (neural network embedding), which reduced the connectivity to features represented by only 13% of the original neuron count. Second, to decipher the topology, we focused on the variability among the compressed features and compared them with 15 distinct network metrics. In particular, we introduced two new metrics, termed the indirect-adjacent degree and the neighbor hub ratio. Our results demonstrated that these new metrics could explain approximately 40%–45% of the features. This finding highlights the critical role of NNE in facilitating the development of innovative metrics, because some of the features extracted by NNE were not captured by existing network metrics.

Author Summary
Neural network embedding can compress large connectivity matrices and has led to new metrics such as the indirect-adjacent degree and the neighbor hub ratio.
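The core idea of the compression step can be illustrated with a much simpler stand-in. The sketch below is not the authors' NNE (which is a trained artificial neural network); it uses a truncated SVD as a linear "autoencoder" on a synthetic connectivity matrix, compressing each neuron's connectivity profile to roughly 13% of the original neuron count, as in the abstract. The matrix size and data are hypothetical.

```python
import numpy as np

# Illustrative sketch only: a linear analogue of neural network embedding.
# We compress a synthetic functional-connectivity matrix so that each
# neuron's connectivity profile is described by ~13% as many features
# as there are neurons, then measure how well it reconstructs.

rng = np.random.default_rng(0)
n_neurons = 100                      # hypothetical network size
n_features = int(0.13 * n_neurons)   # ~13% compression, as in the abstract

# Synthetic symmetric connectivity matrix (stand-in for measured data)
W = rng.random((n_neurons, n_neurons))
W = (W + W.T) / 2

# Encode: project connectivity profiles onto the top n_features
# right singular vectors; decode: map back to the full space.
U, s, Vt = np.linalg.svd(W)
encode = Vt[:n_features]             # (n_features, n_neurons)
features = W @ encode.T              # compressed representation
W_hat = features @ encode            # reconstruction

err = np.linalg.norm(W - W_hat) / np.linalg.norm(W)
print(f"compressed {n_neurons} neurons -> {n_features} features, "
      f"relative reconstruction error {err:.2f}")
```

A trained nonlinear autoencoder, as in the study, can capture structure that this linear projection misses; the sketch only shows the shape of the compression problem.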
Includes: Supplementary data