Articles: 879 · Tags: 25 · Categories: 16

37.2° Blog
ArXiv Domain 2025-11-01
Created 2019-06-18 | AI
Data source: ArXiv Domain. LLM Domain Papers: 1. Brain-IT: Image Reconstruction from fMRI via Brain-Interaction Transformer. Reconstructing images seen by people from their fMRI brain recordings provides a non-invasive window into the human brain. Despite recent progress enabled by diffusion models, current methods often lack faithfulness to the actual seen images. We present “Brain-IT”, a brain-inspired approach that addresses this challenge through a Brain Interaction Transformer (BIT), allowing effectiv ...
ArXiv Domain 2025-12-31
Created 2019-06-18 | AI
Data source: ArXiv Domain. LLM Domain Papers: 1. Training AI Co-Scientists Using Rubric Rewards. AI co-scientists are emerging as a tool to assist human researchers in achieving their research goals. A crucial feature of these AI co-scientists is the ability to generate a research plan given a set of aims and constraints. The plan may be used by researchers for brainstorming, or may even be implemented after further refinement. However, language models currently struggle to generate research plans that fol ...
ArXiv Domain 2025-11-04
Created 2019-06-18 | AI
Data source: ArXiv Domain. LLM Domain Papers: 1. A Sensing Whole Brain Zebrafish Foundation Model for Neuron Dynamics and Behavior. Neural dynamics underlie behaviors from memory to sleep, yet identifying mechanisms for higher-order phenomena (e.g., social interaction) is experimentally challenging. Existing whole-brain models often fail to scale to single-neuron resolution, omit behavioral readouts, or rely on PCA/conv pipelines that miss long-range, non-linear interactions. We introduce a sparse-attentio ...
ArXiv Domain 2025-11-03
Created 2019-06-18 | AI
Data source: ArXiv Domain. LLM Domain Papers: 1. Brain-IT: Image Reconstruction from fMRI via Brain-Interaction Transformer. Reconstructing images seen by people from their fMRI brain recordings provides a non-invasive window into the human brain. Despite recent progress enabled by diffusion models, current methods often lack faithfulness to the actual seen images. We present “Brain-IT”, a brain-inspired approach that addresses this challenge through a Brain Interaction Transformer (BIT), allowing effectiv ...
ArXiv Domain 2025-11-05
Created 2019-06-18 | AI
Data source: ArXiv Domain. LLM Domain Papers: 1. A Sensing Whole Brain Zebrafish Foundation Model for Neuron Dynamics and Behavior. Neural dynamics underlie behaviors from memory to sleep, yet identifying mechanisms for higher-order phenomena (e.g., social interaction) is experimentally challenging. Existing whole-brain models often fail to scale to single-neuron resolution, omit behavioral readouts, or rely on PCA/conv pipelines that miss long-range, non-linear interactions. We introduce a sparse-attentio ...
ArXiv Domain 2025-11-06
Created 2019-06-18 | AI
Data source: ArXiv Domain. LLM Domain Papers: 1. The Physical Basis of Prediction: World Model Formation in Neural Organoids via an LLM-Generated Curriculum. The capacity of an embodied agent to understand, predict, and interact with its environment is fundamentally contingent on an internal world model. This paper introduces a novel framework for investigating the formation and adaptation of such world models within a biological substrate: human neural organoids. We present a curriculum of three scalable, ...
ArXiv Domain 2025-11-07
Created 2019-06-18 | AI
Data source: ArXiv Domain. LLM Domain Papers: 1. Fast weight programming and linear transformers: from machine learning to neurobiology. Recent advances in artificial neural networks for machine learning, and language modeling in particular, have established a family of recurrent neural network (RNN) architectures that, unlike conventional RNNs with vector-form hidden states, use two-dimensional (2D) matrix-form hidden states. Such 2D-state RNNs, known as Fast Weight Programmers (FWPs), can be interpreted ...
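The excerpt above describes RNNs whose hidden state is a 2D matrix rather than a vector. A minimal sketch of one such recurrent step, assuming the standard fast-weight / linear-attention formulation (a rank-1 outer-product write followed by a matrix-vector read; the truncated excerpt does not confirm the paper uses exactly this variant):

```python
import numpy as np

def fast_weight_step(W, k, v, q):
    """One fast-weight step: W is the matrix-form hidden state (d_v x d_k)."""
    W = W + np.outer(v, k)   # write: rank-1 outer-product update of the state
    y = W @ q                # read: query the associative memory
    return W, y

d_k, d_v = 4, 3
W = np.zeros((d_v, d_k))
k = np.array([1.0, 0.0, 0.0, 0.0])   # unit key
v = np.array([2.0, 1.0, -1.0])      # value to store
W, y = fast_weight_step(W, k, v, q=k)
print(y)  # querying with the stored key retrieves the stored value v
```

With an orthonormal key, the read exactly recovers the stored value; with overlapping keys, retrievals interfere, which is why practical FWP variants add decay or delta-rule corrections.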
ArXiv Domain 2025-11-09
Created 2019-06-18 | AI
Data source: ArXiv Domain. LLM Domain Papers: 1. CBraMod: A Criss-Cross Brain Foundation Model for EEG Decoding. Electroencephalography (EEG) is a non-invasive technique to measure and record brain electrical activity, widely used in various BCI and healthcare applications. Early EEG decoding methods rely on supervised learning, limited by specific tasks and datasets, hindering model performance and generalizability. With the success of large language models, there is a growing body of studies focusing on ...
ArXiv Domain 2025-11-08
Created 2019-06-18 | AI
Data source: ArXiv Domain. LLM Domain Papers: 1. CBraMod: A Criss-Cross Brain Foundation Model for EEG Decoding. Electroencephalography (EEG) is a non-invasive technique to measure and record brain electrical activity, widely used in various BCI and healthcare applications. Early EEG decoding methods rely on supervised learning, limited by specific tasks and datasets, hindering model performance and generalizability. With the success of large language models, there is a growing body of studies focusing on ...
ArXiv Domain 2025-11-10
Created 2019-06-18 | AI
Data source: ArXiv Domain. LLM Domain Papers: 1. CBraMod: A Criss-Cross Brain Foundation Model for EEG Decoding. Electroencephalography (EEG) is a non-invasive technique to measure and record brain electrical activity, widely used in various BCI and healthcare applications. Early EEG decoding methods rely on supervised learning, limited by specific tasks and datasets, hindering model performance and generalizability. With the success of large language models, there is a growing body of studies focusing on ...
ArXiv Domain 2025-11-11
Created 2019-06-18 | AI
Data source: ArXiv Domain. LLM Domain Papers: 1. CBraMod: A Criss-Cross Brain Foundation Model for EEG Decoding. Electroencephalography (EEG) is a non-invasive technique to measure and record brain electrical activity, widely used in various BCI and healthcare applications. Early EEG decoding methods rely on supervised learning, limited by specific tasks and datasets, hindering model performance and generalizability. With the success of large language models, there is a growing body of studies focusing on ...
ArXiv Domain 2025-11-13
Created 2019-06-18 | AI
Data source: ArXiv Domain. LLM Domain Papers: 1. On the Shape of Brainscores for Large Language Models (LLMs). With the rise of Large Language Models (LLMs), the novel metric “Brainscore” emerged as a means to evaluate the functional similarity between LLMs and human brain/neural systems. Our efforts were dedicated to mining the meaning of the novel score by constructing topological features derived from both human fMRI data involving 190 subjects, and 39 LLMs plus their untrained counterparts. Subsequentl ...
ArXiv Domain 2025-11-14
Created 2019-06-18 | AI
Data source: ArXiv Domain. LLM Domain Papers: 1. On the Shape of Brainscores for Large Language Models (LLMs). With the rise of Large Language Models (LLMs), the novel metric “Brainscore” emerged as a means to evaluate the functional similarity between LLMs and human brain/neural systems. Our efforts were dedicated to mining the meaning of the novel score by constructing topological features derived from both human fMRI data involving 190 subjects, and 39 LLMs plus their untrained counterparts. Subsequentl ...
ArXiv Domain 2025-11-12
Created 2019-06-18 | AI
Data source: ArXiv Domain. LLM Domain Papers: 1. On the Shape of Brainscores for Large Language Models (LLMs). With the rise of Large Language Models (LLMs), the novel metric “Brainscore” emerged as a means to evaluate the functional similarity between LLMs and human brain/neural systems. Our efforts were dedicated to mining the meaning of the novel score by constructing topological features derived from both human fMRI data involving 190 subjects, and 39 LLMs plus their untrained counterparts. Subsequentl ...
ArXiv Domain 2025-11-15
Created 2019-06-18 | AI
Data source: ArXiv Domain. LLM Domain Papers: 1. ParoQuant: Pairwise Rotation Quantization for Efficient Reasoning LLM Inference. Weight-only post-training quantization (PTQ) compresses the weights of Large Language Models (LLMs) into low-precision representations to reduce memory footprint and accelerate inference. However, the presence of outliers in weights and activations often leads to large quantization errors and severe accuracy degradation, especially in recent reasoning LLMs where errors accumulat ...
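The outlier problem the excerpt mentions is easy to demonstrate with a generic symmetric per-row quantizer (a baseline sketch, not ParoQuant's actual pairwise-rotation method): one large weight inflates the row's scale, so all other weights in the row lose precision.

```python
import numpy as np

def quantize_rows(W, bits=4):
    """Symmetric per-row quantization: round each row to signed ints."""
    qmax = 2 ** (bits - 1) - 1                       # e.g. 7 for 4-bit
    scale = np.abs(W).max(axis=1, keepdims=True) / qmax
    Q = np.clip(np.round(W / scale), -qmax - 1, qmax)
    return Q, scale

def dequantize(Q, scale):
    return Q * scale

W_clean = np.array([[0.1, -0.2, 0.15, 0.05]])
W_outlier = np.array([[0.1, -0.2, 0.15, 8.0]])       # one large outlier

for name, w in [("no outlier", W_clean), ("outlier", W_outlier)]:
    Q, s = quantize_rows(w)
    err = np.abs(dequantize(Q, s) - w).mean()
    print(f"{name}: mean abs error = {err:.4f}")
```

The outlier row dequantizes with a much larger mean error, since its scale is set by the single 8.0 entry. Rotation- and transform-based PTQ methods aim to redistribute such outliers before quantizing.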
©2023 - 2026 By Firefly