Cursor Composer 2 Confirmed to Be Kimi K2.5 with RL Applied | GeekNews

xguru
2026.03.21
News · by 권준호
#AI #Licensing #LLM #Open Source #RL

Key Points

  1. Cursor Composer 2 was discovered to internally use Moonshot's Kimi K2.5 model with reinforcement learning (RL) applied, leading to accusations of re-branding and potential license violations.
  2. Moonshot subsequently confirmed an official partnership with Cursor, clarifying that the integration of Kimi K2.5 was a legitimate collaboration rather than unauthorized use.
  3. The incident fueled broader discussion in the AI community about model transparency, license compliance, and the shift in commercial AI products from base-model origins toward workflow efficiency and user experience.

Composer 2, the AI coding model that ships with the Cursor IDE, faced controversy regarding its underlying large language model (LLM). It was discovered that Composer 2 internally utilizes a version of Moonshot's Kimi K2.5 model, specifically identified as kimi-k2p5-rl.

The discovery was made through a technical exploit involving the manipulation of the OpenAI base URL. Users configured a server to dump requests originating from Cursor Composer 2 by modifying the base_url parameter, a technique previously employed to analyze GPT-4 caching behavior. This revealed that Composer 2's API requests included the path accounts/anysphere/models/kimi-k2p5-rl-0317-s515-fast. While Composer 1.5 had blocked such requests, Composer 2 did not, allowing the identification. Following this public revelation, Cursor promptly patched the vulnerability within hours.
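The request-dumping technique described above can be sketched as follows. This is a minimal illustration of the general idea, not the investigators' actual tooling: an OpenAI-compatible client is pointed at a local server whose only job is to print each request's path and body, which is where a model identifier like kimi-k2p5-rl would appear. The port, handler names, and the trivial JSON reply are all assumptions; a real endpoint would return a proper chat-completion object.

```python
# Sketch: a local server that dumps any request an OpenAI-compatible
# client sends to it, revealing the request path and model identifier.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

captured = []  # (path, body) pairs seen by the dump server


class DumpHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length).decode("utf-8", errors="replace")
        captured.append((self.path, body))
        print("PATH:", self.path)  # a model ID shows up in the path or body
        print("BODY:", body)
        # Reply with minimal JSON so the client does not hang on the request.
        payload = json.dumps({"ok": True}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, fmt, *args):
        pass  # silence the default per-request access log


def start_dump_server():
    """Start the dump server on an ephemeral port; return that port."""
    server = HTTPServer(("127.0.0.1", 0), DumpHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server.server_address[1]
```

Setting a client's base_url to `http://127.0.0.1:<port>/v1` then routes every API call through this server, exposing exactly what the application sends upstream.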

The model identifier kimi-k2p5-rl suggests that Cursor Composer 2 leverages the Kimi K2.5 base model with Reinforcement Learning (RL) applied. Cursor later clarified that only approximately 25% of the final model's computation is derived from the base model, with the remaining 75% attributed to their own training, which includes continued pretraining and RL. This strategic approach, common in the industry, involves licensing and fine-tuning existing powerful models rather than training foundational models from scratch.

This revelation ignited discussions within the AI community regarding licensing, transparency, and re-branding practices. Initial concerns focused on Moonshot's Kimi K2.5 model, which operates under a modified MIT License. This license reportedly stipulates that if the model is used at a certain scale (e.g., 100 million monthly active users or $20 million in monthly revenue), explicit attribution ("Kimi K2.5") must be displayed in the user interface. Critics initially accused Cursor of re-branding a third-party open-source model as their own without proper attribution, potentially violating the license. There was also speculation about whether the model might be based on GLM-5, which uses a standard MIT license without such explicit attribution requirements.

However, the situation was clarified when Moonshot officially confirmed its partnership with Cursor, stating that Cursor's use of Kimi K2.5 was not unauthorized but part of an official collaboration. This partnership involves Cursor accessing Kimi K2.5 via the FireworksAI_HQ platform for inference.

Community reactions were mixed. Some users criticized Cursor for undermining the spirit of open source and commercially exploiting community contributions. Others took a more pragmatic view, arguing that for most users, performance, coding speed, and workflow efficiency matter more than the specific model's origin.

The incident underscored a growing trend: the competitive advantage in AI services is shifting from the foundational model itself to the integrated workflow and user experience. Although Cursor is a fork of VSCode built on open-source LLMs, its "moat" is perceived as its ability to leverage user data (patterns, acceptance rates, feedback) for fine-tuning, particularly excelling in features like tab-completion. The rapid security patch also highlighted the responsiveness of Cursor's engineering team. Ultimately, the episode serves as a significant case study on the complexities of AI model integration, license compliance, and transparency in a rapidly evolving commercial landscape.