ArXiv Domain 2025-09-27
Data source: ArXiv Domain
LLM Domain Papers

1. From Prediction to Understanding: Will AI Foundation Models Transform Brain Science?

Generative pretraining (the “GPT” in ChatGPT) enables language models to learn from vast amounts of internet text without human supervision. This approach has driven breakthroughs across AI by allowing deep neural networks to learn from massive, unstructured datasets. We use the term foundation models to refer to large pretrained systems that can be adapted to a wide range ...
HuggingFace Papers 2025-08-21
Data source: HuggingFace Papers
Latest Papers

1. Chain-of-Agents: End-to-End Agent Foundation Models via Multi-Agent Distillation and Agentic RL

Recent advances in large language models (LLMs) and multi-agent systems have demonstrated remarkable capabilities in complex problem-solving tasks such as deep research, vibe coding, and mathematical reasoning. However, most existing multi-agent systems are built upon manual prompt/workflow engineering with sophisticated agent frameworks, making them computatio ...