HuggingFace Papers 2025-08-06
Data source: HuggingFace Papers
Latest Papers
1. Qwen-Image Technical Report
We present Qwen-Image, an image generation foundation model in the Qwen series that achieves significant advances in complex text rendering and precise image editing. To address the challenges of complex text rendering, we design a comprehensive data pipeline that includes large-scale data collection, filtering, annotation, synthesis, and balancing. Moreover, we adopt a progressive training strategy that starts with non-text-to ...
HuggingFace Papers 2025-08-07
Latest Papers
1. Seed Diffusion: A Large-Scale Diffusion Language Model with High-Speed Inference
We present Seed Diffusion Preview, a large-scale language model based on discrete-state diffusion, offering remarkably fast inference speed. Thanks to non-sequential, parallel generation, discrete diffusion models provide a notable speedup to mitigate the inherent latency of token-by-token decoding, as demonstrated recently (e.g., Mercury Coder, Gemini Diffusion). Seed Diffus ...
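The non-sequential, parallel generation that the abstract credits for the speedup can be illustrated with a toy masked-diffusion sampler. Everything below (the dummy confidence model, the commit size `k`, the fixed target sequence) is a hypothetical illustration of the decoding pattern, not Seed Diffusion's actual sampler:

```python
import numpy as np

# Toy parallel unmasking a la discrete diffusion: a dummy "model" scores
# every position at once; each step commits the k most confident masked
# tokens simultaneously, so decoding takes far fewer steps than
# token-by-token left-to-right generation.
rng = np.random.default_rng(0)
target = np.array([3, 1, 4, 1, 5, 9, 2, 6])   # pretend ground-truth tokens
MASK = -1
seq = np.full_like(target, MASK)

def dummy_model(seq):
    """Return (predicted token, confidence) for every position at once."""
    conf = rng.uniform(0.5, 1.0, size=seq.shape)  # fake confidences
    return target.copy(), conf

steps = 0
k = 3  # tokens committed per parallel step
while (seq == MASK).any():
    pred, conf = dummy_model(seq)
    masked = np.flatnonzero(seq == MASK)
    # commit the k most confident masked positions in parallel
    commit = masked[np.argsort(-conf[masked])[:k]]
    seq[commit] = pred[commit]
    steps += 1

print(steps)  # 3 parallel steps instead of 8 sequential ones
```

With 8 positions and k=3 the loop finishes in ceil(8/3) = 3 steps; the real model trades off k against quality rather than fixing it.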
HuggingFace Papers 2025-08-08
Latest Papers
1. On the Generalization of SFT: A Reinforcement Learning Perspective with Reward Rectification
We present a simple yet theoretically motivated improvement to Supervised Fine-Tuning (SFT) for Large Language Models (LLMs), addressing its limited generalization compared to reinforcement learning (RL). Through mathematical analysis, we reveal that standard SFT gradients implicitly encode a problematic reward structure that may severely restrict the generaliza ...
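The "implicit reward" claim can be checked on a single softmax token: the gradient of the usual SFT loss -log p(y) equals the gradient of -p(y) scaled by 1/p(y), i.e. a policy-gradient-style update whose implicit reward 1/p(y) blows up on low-probability targets. A minimal numpy check (toy single-token setup, not the paper's code):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
z = rng.normal(size=5)   # logits over a 5-token vocabulary
y = 2                    # the supervised target token
p = softmax(z)
onehot = np.eye(5)[y]

# Analytic gradient of the SFT loss -log p[y] w.r.t. the logits:
grad_sft = p - onehot

# Analytic gradient of the raw-probability objective -p[y]:
#   d(-p[y])/dz = p[y] * (p - onehot)
grad_prob = p[y] * (p - onehot)

# The SFT gradient is exactly (1 / p[y]) times grad_prob: a
# policy-gradient update with implicit reward 1/p[y].
print(np.allclose(grad_sft, grad_prob / p[y]))  # True
```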
HuggingFace Papers 2025-08-11
Latest Papers
1. On the Generalization of SFT: A Reinforcement Learning Perspective with Reward Rectification (abstract identical to the 2025-08-08 digest)
HuggingFace Papers 2025-08-12
Latest Papers
1. GLM-4.5: Agentic, Reasoning, and Coding (ARC) Foundation Models
We present GLM-4.5, an open-source Mixture-of-Experts (MoE) large language model with 355B total parameters and 32B activated parameters, featuring a hybrid reasoning method that supports both thinking and direct-response modes. Through multi-stage training on 23T tokens and comprehensive post-training with expert model iteration and reinforcement learning, GLM-4.5 achieves strong performance ...
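The gap between 355B total and 32B activated parameters comes from MoE routing: each token is processed by only k of E experts, so roughly k/E of the expert weights are active per token. A minimal top-k routing sketch (the dimensions, expert count, and k below are illustrative numbers, not GLM-4.5's actual configuration):

```python
import numpy as np

rng = np.random.default_rng(0)
d, E, k = 16, 8, 2
experts = rng.normal(size=(E, d, d)) * 0.02   # E expert weight matrices
router = rng.normal(size=(d, E)) * 0.02       # linear router

def moe_layer(x):
    logits = x @ router                        # one routing score per expert
    topk = np.argsort(-logits)[:k]             # indices of the chosen experts
    w = np.exp(logits[topk]); w /= w.sum()     # softmax over the top-k only
    # only the k selected experts' weights touch this token
    y = sum(wi * (x @ experts[i]) for wi, i in zip(w, topk))
    return y, topk

x = rng.normal(size=d)
y, used = moe_layer(x)
print(used)  # only k=2 of the E=8 experts ran for this token
```

Total parameters scale with E, while per-token compute and "activated" parameters scale with k, which is why a 355B-parameter model can run with 32B active.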
HuggingFace Papers 2025-08-13
Latest Papers
1. ReasonRank: Empowering Passage Ranking with Strong Reasoning Ability
Large Language Model (LLM) based listwise ranking has shown superior performance in many passage ranking tasks. With the development of Large Reasoning Models, many studies have demonstrated that step-by-step reasoning during test-time helps improve listwise ranking performance. However, due to the scarcity of reasoning-intensive training data, existing rerankers perform poorly in many c ...
HuggingFace Papers 2025-08-14
Latest Papers
1. WebWatcher: Breaking New Frontier of Vision-Language Deep Research Agent
Web agents such as Deep Research have demonstrated superhuman cognitive abilities, capable of solving highly challenging information-seeking problems. However, most research remains primarily text-centric, overlooking visual information in the real world. This makes multimodal Deep Research highly challenging, as such agents require much stronger reasoning abilities in perception, lo ...
HuggingFace Papers 2025-08-16
Latest Papers
1. We-Math 2.0: A Versatile MathBook System for Incentivizing Visual Mathematical Reasoning
Multimodal Large Language Models (MLLMs) have demonstrated impressive capabilities across various tasks, but still struggle with complex mathematical reasoning. Existing research primarily focuses on dataset construction and method optimization, often overlooking two critical aspects: comprehensive knowledge-driven design and model-centric data space modeling. In this ...
HuggingFace Papers 2025-08-17
Latest Papers
1. We-Math 2.0: A Versatile MathBook System for Incentivizing Visual Mathematical Reasoning (abstract identical to the 2025-08-16 digest)
HuggingFace Papers 2025-08-18
Latest Papers
1. We-Math 2.0: A Versatile MathBook System for Incentivizing Visual Mathematical Reasoning (abstract identical to the 2025-08-16 digest)
HuggingFace Papers 2025-08-19
Latest Papers
1. SSRL: Self-Search Reinforcement Learning
We investigate the potential of large language models (LLMs) to serve as efficient simulators for agentic search tasks in reinforcement learning (RL), thereby reducing dependence on costly interactions with external search engines. To this end, we first quantify the intrinsic search capability of LLMs via structured prompting and repeated sampling, which we term Self-Search. Our results reveal that LLMs exhibit str ...
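The "repeated sampling" probe for intrinsic knowledge can be sketched as a best-of-n loop with a majority vote. The stubbed `fake_model` below is a stand-in for a prompted LLM, and the voting rule is one simple choice of aggregator, not the paper's actual setup:

```python
import random

random.seed(0)

def fake_model(question):
    # stand-in for a sampled LLM answer: right 70% of the time
    return "Paris" if random.random() < 0.7 else "Lyon"

def self_search(question, n=16):
    """Draw n samples and keep the majority answer (self-consistency)."""
    samples = [fake_model(question) for _ in range(n)]
    return max(set(samples), key=samples.count)

answer = self_search("Capital of France?")
print(answer)  # "Paris" with high probability, no external search engine used
```

The point of the probe is that repeated sampling plus a cheap aggregator can extract knowledge the model already holds, making an external search engine unnecessary for RL rollouts.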
HuggingFace Papers 2025-08-20
Latest Papers
1. Ovis2.5 Technical Report
We present Ovis2.5, a successor to Ovis2 designed for native-resolution visual perception and strong multimodal reasoning. Ovis2.5 integrates a native-resolution vision transformer that processes images at their native, variable resolutions, avoiding the degradation from fixed-resolution tiling and preserving both fine detail and global layout, which is crucial for visually dense content like complex charts. To strengthen reasoning, we tr ...
HuggingFace Papers 2025-08-22
Latest Papers
1. DuPO: Enabling Reliable LLM Self-Verification via Dual Preference Optimization
We present DuPO, a dual learning-based preference optimization framework that generates annotation-free feedback via a generalized duality. DuPO addresses two key limitations: Reinforcement Learning with Verifiable Rewards (RLVR)'s reliance on costly labels and applicability restricted to verifiable tasks, and traditional dual learning's restriction to strictly dual task pairs ...
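The dual-feedback idea can be illustrated with a toy primal/dual task pair: the primal task maps an input to an output, a dual task reconstructs part of the input from that output, and reconstruction accuracy serves as an annotation-free reward. The equation-solving example below is a hypothetical illustration of the pattern, not the paper's tasks:

```python
def primal(a, b):
    """Candidate 'model' answer for the primal task: solve a * x = b."""
    return b / a

def dual(a, x):
    """Dual task: reconstruct b from the primal output x."""
    return a * x

def dual_reward(a, b, x):
    """Annotation-free reward: does the dual task recover the input b?"""
    return 1.0 if abs(dual(a, x) - b) < 1e-9 else 0.0

print(dual_reward(3, 12, primal(3, 12)))  # 1.0: self-verified answer
print(dual_reward(3, 12, 5.0))            # 0.0: wrong answer rejected
```

No gold label for x is ever consulted; the reward comes entirely from the round trip, which is what lets such feedback apply beyond tasks with verifiable answers.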
HuggingFace Papers 2025-08-23
Latest Papers
1. Intern-S1: A Scientific Multimodal Foundation Model
In recent years, a plethora of open-source foundation models have emerged, achieving remarkable progress in widely studied fields, with performance close to that of closed-source models. However, in high-value but more challenging scientific professional fields, these fields either still rely on expert models, or the progress of general foundation models lags significantly compared to tho ...
HuggingFace Papers 2025-08-24
Latest Papers
1. Intern-S1: A Scientific Multimodal Foundation Model (abstract identical to the 2025-08-23 digest)