ArXiv Domain 2025-10-11
Data source: ArXiv Domain
LLM Domain Papers
1. Atlas-free Brain Network Transformer
Current atlas-based approaches to brain network analysis rely heavily on standardized anatomical or connectivity-driven brain atlases. However, these fixed atlases often introduce significant limitations, such as spatial misalignment across individuals, functional heterogeneity within predefined regions, and atlas-selection biases, collectively undermining the reliability and interpretability of the derived brain network ...
ArXiv Domain 2025-10-15
Data source: ArXiv Domain
LLM Domain Papers
1. Lost in the Middle: An Emergent Property from Information Retrieval Demands in LLMs
The performance of Large Language Models (LLMs) often degrades when crucial information is in the middle of a long context, a “lost-in-the-middle” phenomenon that mirrors the primacy and recency effects in human memory. We propose that this behavior is not simply a flaw indicative of information loss but an adaptation to different information retrieval demands during pre-tra ...
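Position-dependent retrieval like this is typically measured by sweeping a key fact through a long filler context and recording accuracy at each depth. A minimal sketch of such a probe builder follows; `build_probe` and its arguments are hypothetical illustrations, not the paper's exact protocol.

```python
def build_probe(filler_sentences, fact, depth):
    """Insert `fact` among filler sentences at a fractional depth in [0, 1].

    Sweeping `depth` from 0 (start of context) to 1 (end) lets one plot
    retrieval accuracy against the fact's position in the prompt.
    """
    if not 0.0 <= depth <= 1.0:
        raise ValueError("depth must be in [0, 1]")
    idx = round(depth * len(filler_sentences))
    return " ".join(filler_sentences[:idx] + [fact] + filler_sentences[idx:])
```

Each probe would then be sent to the model with a question about the fact, and per-depth accuracy plotted to reveal the U-shaped primacy/recency curve.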
ArXiv Domain 2025-10-17
Data source: ArXiv Domain
LLM Domain Papers
1. Scaling Vision Transformers for Functional MRI with Flat Maps
A key question for adapting modern deep learning architectures to functional MRI (fMRI) is how to represent the data for model input. To bridge the modality gap between fMRI and natural images, we transform the 4D volumetric fMRI data into videos of 2D fMRI activity flat maps. We train Vision Transformers on 2.3K hours of fMRI flat map videos from the Human Connectome Project using the spatiotemp ...
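The core data transformation described here, 4D volumetric fMRI to a video of 2D frames, can be sketched with a lookup-table mapping. In this toy version, `flat_index` is a hypothetical stand-in for a precomputed cortical flattening; the actual pipeline derives flat maps from surface reconstruction, not a raw index lookup.

```python
import numpy as np

def volume_to_flatmap_video(vol4d, flat_index):
    """Map a 4D fMRI volume (X, Y, Z, T) to a (T, H, W) video of 2D frames.

    `flat_index` is an (H, W) array of flattened voxel indices standing in
    for a precomputed voxel-to-pixel flattening.
    """
    T = vol4d.shape[-1]
    flat = vol4d.reshape(-1, T)        # (X*Y*Z, T) voxel time series
    video = flat[flat_index.ravel()]   # gather mapped voxels: (H*W, T)
    return video.reshape(*flat_index.shape, T).transpose(2, 0, 1)
```

The resulting (T, H, W) array has the frame layout a video-pretrained Vision Transformer expects, which is the point of the modality bridge.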
ArXiv Domain 2025-10-24
Data source: ArXiv Domain
LLM Domain Papers
1. Analyzing Memory Effects in Large Language Models through the lens of Cognitive Psychology
Memory, a fundamental component of human cognition, exhibits adaptive yet fallible characteristics, as illustrated by Schacter’s memory “sins”. These cognitive phenomena have been studied extensively in psychology and neuroscience, but the extent to which artificial systems, specifically Large Language Models (LLMs), emulate these cognitive phenomena remains underexplor ...
ArXiv Domain 2025-10-25
Data source: ArXiv Domain
LLM Domain Papers
1. On sources to variabilities of simple cells in the primary visual cortex: A principled theory for the interaction between geometric image transformations and receptive field responses
This paper gives an overview of a theory for modelling the interaction between geometric image transformations and receptive field responses for a visual observer that views objects and spatio-temporal events in the environment. This treatment is developed over combinations of ...
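Principled receptive-field theories of this kind commonly model simple cells as Gaussian derivatives in scale space. A minimal 1D sketch, assuming a first-order Gaussian-derivative kernel (an illustrative choice, not the paper's full model):

```python
import numpy as np

def gaussian_derivative_response(signal, sigma):
    """Response of a first-order Gaussian-derivative receptive field at scale sigma.

    Convolving with the derivative of a Gaussian yields the derivative of
    the smoothed signal, so a step edge produces a peak at the edge location.
    """
    radius = int(4 * sigma)
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
    dg = -x / sigma**2 * g  # d/dx of the Gaussian kernel
    return np.convolve(signal, dg, mode="same")
```

Varying `sigma` traces out a family of responses across scales, which is the kind of variability under geometric transformations (here, spatial scaling) that the theory aims to characterize.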