ArXiv Domain 2025-10-11
Created 2019-06-18 | AI
Data source: ArXiv Domain. LLM Domain Papers: 1. Atlas-free Brain Network Transformer. Current atlas-based approaches to brain network analysis rely heavily on standardized anatomical or connectivity-driven brain atlases. However, these fixed atlases often introduce significant limitations, such as spatial misalignment across individuals, functional heterogeneity within predefined regions, and atlas-selection biases, collectively undermining the reliability and interpretability of the derived brain network ...
ArXiv Domain 2025-10-12
Created 2019-06-18 | AI
Data source: ArXiv Domain. LLM Domain Papers: 1. Atlas-free Brain Network Transformer. Current atlas-based approaches to brain network analysis rely heavily on standardized anatomical or connectivity-driven brain atlases. However, these fixed atlases often introduce significant limitations, such as spatial misalignment across individuals, functional heterogeneity within predefined regions, and atlas-selection biases, collectively undermining the reliability and interpretability of the derived brain network ...
ArXiv Domain 2025-10-14
Created 2019-06-18 | AI
Data source: ArXiv Domain. LLM Domain Papers: 1. Atlas-free Brain Network Transformer. Current atlas-based approaches to brain network analysis rely heavily on standardized anatomical or connectivity-driven brain atlases. However, these fixed atlases often introduce significant limitations, such as spatial misalignment across individuals, functional heterogeneity within predefined regions, and atlas-selection biases, collectively undermining the reliability and interpretability of the derived brain network ...
ArXiv Domain 2025-10-15
Created 2019-06-18 | AI
Data source: ArXiv Domain. LLM Domain Papers: 1. Lost in the Middle: An Emergent Property from Information Retrieval Demands in LLMs. The performance of Large Language Models (LLMs) often degrades when crucial information is in the middle of a long context, a “lost-in-the-middle” phenomenon that mirrors the primacy and recency effects in human memory. We propose that this behavior is not simply a flaw indicative of information loss but an adaptation to different information retrieval demands during pre-training ...
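The “lost-in-the-middle” effect summarized in the entry above is straightforward to probe empirically: hide a single fact at different depths of a long filler context and measure how often a model retrieves it. Below is a minimal, self-contained Python sketch of such a probe, not taken from the paper; the filler text, the depth grid, and the ask_model callable are illustrative assumptions, and any real LLM call can be plugged in for the dummy stand-in.

```python
import random

def build_prompt(needle: str, depth: float, n_filler: int = 200) -> str:
    """Insert the needle sentence at a relative depth (0.0 = start, 1.0 = end)
    of a long run of filler sentences, then append a retrieval question."""
    filler = [f"Background sentence {i} with no useful content." for i in range(n_filler)]
    pos = int(depth * len(filler))
    context = filler[:pos] + [needle] + filler[pos:]
    return "\n".join(context) + "\n\nQuestion: What is the secret code? Answer:"

def positional_accuracy(ask_model, depths=(0.0, 0.25, 0.5, 0.75, 1.0), trials=20):
    """ask_model(prompt) -> str is any LLM call supplied by the caller (hypothetical here).
    Returns retrieval accuracy for each insertion depth."""
    results = {}
    for depth in depths:
        hits = 0
        for _ in range(trials):
            code = str(random.randint(1000, 9999))
            prompt = build_prompt(f"The secret code is {code}.", depth)
            hits += code in ask_model(prompt)
        results[depth] = hits / trials
    return results

if __name__ == "__main__":
    # Trivial stand-in "model" that echoes its prompt; replace with a real LLM call.
    def echo_model(prompt: str) -> str:
        return prompt

    print(positional_accuracy(echo_model, trials=5))
```

A model exhibiting the effect would show accuracy dipping at intermediate depths (around 0.5) relative to the start and end of the context.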
ArXiv Domain 2025-10-16
Created 2019-06-18 | AI
Data source: ArXiv Domain. LLM Domain Papers: 1. Lost in the Middle: An Emergent Property from Information Retrieval Demands in LLMs. The performance of Large Language Models (LLMs) often degrades when crucial information is in the middle of a long context, a “lost-in-the-middle” phenomenon that mirrors the primacy and recency effects in human memory. We propose that this behavior is not simply a flaw indicative of information loss but an adaptation to different information retrieval demands during pre-training ...
ArXiv Domain 2025-10-17
Created 2019-06-18 | AI
Data source: ArXiv Domain. LLM Domain Papers: 1. Scaling Vision Transformers for Functional MRI with Flat Maps. A key question for adapting modern deep learning architectures to functional MRI (fMRI) is how to represent the data for model input. To bridge the modality gap between fMRI and natural images, we transform the 4D volumetric fMRI data into videos of 2D fMRI activity flat maps. We train Vision Transformers on 2.3K hours of fMRI flat map videos from the Human Connectome Project using the spatiotemporal ...
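The flat-map representation described in the entry above amounts to resampling each fMRI time point onto a fixed 2D image grid so that a standard video Vision Transformer can consume it. The NumPy sketch below illustrates only that reshaping step under a simplifying assumption: a precomputed pixel-to-voxel lookup table stands in for the paper's actual cortical-surface flat mapping, and all array shapes are placeholders.

```python
import numpy as np

def fmri_to_flatmap_video(volume_4d: np.ndarray,
                          pixel_to_voxel: np.ndarray) -> np.ndarray:
    """Convert a 4D fMRI array of shape (X, Y, Z, T) into a video of 2D
    flat-map frames of shape (T, H, W) using a precomputed lookup table.

    pixel_to_voxel has shape (H, W, 3); each entry holds the (x, y, z)
    voxel index whose time series is painted onto that flat-map pixel.
    """
    x_idx, y_idx, z_idx = (pixel_to_voxel[..., i] for i in range(3))
    # Fancy-index once to pull each pixel's full time series, then move
    # the time axis to the front so frames are ordered (T, H, W).
    frames = volume_4d[x_idx, y_idx, z_idx, :]   # (H, W, T)
    return np.moveaxis(frames, -1, 0)            # (T, H, W)

if __name__ == "__main__":
    # Random placeholder data; the shapes only illustrate the idea.
    vol = np.random.rand(64, 64, 40, 10)                       # (X, Y, Z, T)
    lut = np.stack([np.random.randint(0, s, size=(128, 128))   # (H, W, 3)
                    for s in vol.shape[:3]], axis=-1)
    video = fmri_to_flatmap_video(vol, lut)
    print(video.shape)  # (10, 128, 128)
```

Each resulting (T, H, W) video could then be patchified and fed to a spatiotemporal ViT in the same way as a natural-image video.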
ArXiv Domain 2025-10-19
Created 2019-06-18 | AI
Data source: ArXiv Domain. LLM Domain Papers: 1. Scaling Vision Transformers for Functional MRI with Flat Maps. A key question for adapting modern deep learning architectures to functional MRI (fMRI) is how to represent the data for model input. To bridge the modality gap between fMRI and natural images, we transform the 4D volumetric fMRI data into videos of 2D fMRI activity flat maps. We train Vision Transformers on 2.3K hours of fMRI flat map videos from the Human Connectome Project using the spatiotemporal ...
ArXiv Domain 2025-10-18
Created 2019-06-18 | AI
Data source: ArXiv Domain. LLM Domain Papers: 1. Scaling Vision Transformers for Functional MRI with Flat Maps. A key question for adapting modern deep learning architectures to functional MRI (fMRI) is how to represent the data for model input. To bridge the modality gap between fMRI and natural images, we transform the 4D volumetric fMRI data into videos of 2D fMRI activity flat maps. We train Vision Transformers on 2.3K hours of fMRI flat map videos from the Human Connectome Project using the spatiotemporal ...
ArXiv Domain 2025-10-20
Created 2019-06-18 | AI
Data source: ArXiv Domain. LLM Domain Papers: 1. Scaling Vision Transformers for Functional MRI with Flat Maps. A key question for adapting modern deep learning architectures to functional MRI (fMRI) is how to represent the data for model input. To bridge the modality gap between fMRI and natural images, we transform the 4D volumetric fMRI data into videos of 2D fMRI activity flat maps. We train Vision Transformers on 2.3K hours of fMRI flat map videos from the Human Connectome Project using the spatiotemporal ...
ArXiv Domain 2025-10-21
Created 2019-06-18 | AI
Data source: ArXiv Domain. LLM Domain Papers: 1. Scaling Vision Transformers for Functional MRI with Flat Maps. A key question for adapting modern deep learning architectures to functional MRI (fMRI) is how to represent the data for model input. To bridge the modality gap between fMRI and natural images, we transform the 4D volumetric fMRI data into videos of 2D fMRI activity flat maps. We train Vision Transformers on 2.3K hours of fMRI flat map videos from the Human Connectome Project using the spatiotemporal ...
ArXiv Domain 2025-10-22
Created 2019-06-18 | AI
Data source: ArXiv Domain. LLM Domain Papers: 1. Scaling Vision Transformers for Functional MRI with Flat Maps. A key question for adapting modern deep learning architectures to functional MRI (fMRI) is how to represent the data for model input. To bridge the modality gap between fMRI and natural images, we transform the 4D volumetric fMRI data into videos of 2D fMRI activity flat maps. We train Vision Transformers on 2.3K hours of fMRI flat map videos from the Human Connectome Project using the spatiotemporal ...
ArXiv Domain 2025-10-23
Created 2019-06-18 | AI
Data source: ArXiv Domain. LLM Domain Papers: 1. Scaling Vision Transformers for Functional MRI with Flat Maps. A key question for adapting modern deep learning architectures to functional MRI (fMRI) is how to represent the data for model input. To bridge the modality gap between fMRI and natural images, we transform the 4D volumetric fMRI data into videos of 2D fMRI activity flat maps. We train Vision Transformers on 2.3K hours of fMRI flat map videos from the Human Connectome Project using the spatiotemporal ...
ArXiv Domain 2025-10-24
Created 2019-06-18 | AI
Data source: ArXiv Domain. LLM Domain Papers: 1. Analyzing Memory Effects in Large Language Models through the lens of Cognitive Psychology. Memory, a fundamental component of human cognition, exhibits adaptive yet fallible characteristics, as illustrated by Schacter's memory “sins”. These cognitive phenomena have been studied extensively in psychology and neuroscience, but the extent to which artificial systems, specifically Large Language Models (LLMs), emulate these cognitive phenomena remains underexplored ...
ArXiv Domain 2025-10-25
Created 2019-06-18 | AI
Data source: ArXiv Domain. LLM Domain Papers: 1. On sources to variabilities of simple cells in the primary visual cortex: A principled theory for the interaction between geometric image transformations and receptive field responses. This paper gives an overview of a theory for modelling the interaction between geometric image transformations and receptive field responses for a visual observer that views objects and spatio-temporal events in the environment. This treatment is developed over combinations of ...
ArXiv Domain 2025-10-26
Created 2019-06-18 | AI
Data source: ArXiv Domain. LLM Domain Papers: 1. On sources to variabilities of simple cells in the primary visual cortex: A principled theory for the interaction between geometric image transformations and receptive field responses. This paper gives an overview of a theory for modelling the interaction between geometric image transformations and receptive field responses for a visual observer that views objects and spatio-temporal events in the environment. This treatment is developed over combinations of ...