[Exclusive] Deepseek, Performance Equivalent at 1/9th Scale... SK Group

서울경제 (Seoul Economic Daily)
2025.05.25
by Anonymous
#AI #LLM #RAG #SKT #A.X

Key Points

  1. SK Telecom (SKT) is launching A.X 4.1, an AI model that achieves performance comparable to Deepseek R1 with roughly one-ninth the parameters (72B vs. 671B) and processes Korean 1.5 times more efficiently than GPT-4o.
  2. This cost-efficient inference model offers a breakthrough for Korean companies facing limited investment capital and GPU shortages, enabling specialized applications in complex fields.
  3. SKT's release signals intensified competition in the efficient AI model market, with other major Korean firms such as Naver and Kakao developing similar technologies, further solidifying SK Group's AI value chain.

SK Telecom (SKT) has announced the development of A.X 4.1, an inference-optimized artificial intelligence (AI) model that achieves performance comparable to larger global models with significantly fewer parameters. This strategic move aims to address the high investment costs and GPU supply shortages prevalent in the Korean AI industry.

A.X 4.1 has 72 billion parameters, a drastic reduction from Deepseek R1's 671 billion, yet it achieves a Massive Multitask Language Understanding (MMLU) score of 87.3 points, closely matching R1's 90.8 points. This demonstrates a substantial leap in efficiency: near-equivalent inference capability at approximately one-ninth the model size.

Furthermore, A.X 4.1 exhibits superior Korean language processing compared to OpenAI's GPT-4o, a non-inference-optimized model: it encodes Korean text in 1.5 times fewer tokens, which translates into a 34% reduction in associated processing costs. Its MMLU score is also reported to be on par with GPT-4o's. The core methodology is to achieve high performance with reduced computational resources by optimizing for inference tasks, minimizing the need for massive parameter counts, extensive token processing, and large GPU fleets for deployment and operation.
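The headline figures above can be sanity-checked with back-of-the-envelope arithmetic. The sketch below uses only the numbers reported in the article; the assumption that processing cost scales linearly with token count is a simplification for illustration, not a claim about SKT's actual pricing model.

```python
# Back-of-the-envelope check of the figures reported in the article.

deepseek_r1_params = 671e9  # Deepseek R1 parameter count (671B)
ax41_params = 72e9          # SKT A.X 4.1 parameter count (72B)

# Model-size ratio: roughly 9.3x, i.e. about "one-ninth the scale".
scale_ratio = deepseek_r1_params / ax41_params
print(f"Scale ratio: {scale_ratio:.1f}x")  # -> Scale ratio: 9.3x

# Korean token efficiency: A.X 4.1 reportedly needs 1.5x fewer tokens
# than GPT-4o for the same Korean text. Assuming cost is proportional
# to token count, the implied saving is:
token_efficiency = 1.5
cost_reduction = 1 - 1 / token_efficiency
print(f"Implied cost reduction: {cost_reduction:.0%}")  # -> ~33%
```

The implied ~33% saving lines up closely with the 34% cost reduction the article reports, suggesting the figure is derived mainly from the token-efficiency gap.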

The pursuit of such efficient inference models is critical for Korean companies, which face limitations in investment capital for developing trillion-parameter-scale Large Language Models (LLMs) and ongoing difficulties in securing the necessary high-end GPUs. Inference models offer a viable solution by improving cost-efficiency and enabling specialized applications in complex domains such as mathematical calculations, scientific research, coding, and manufacturing, which traditionally required significantly larger models.

SKT's A.X 4.1, previously integrated primarily into its AI call agent A.dot, is now positioned for broader commercialization with external clients. This development comes as the "Deepseek Shock" has highlighted the importance of cost-effective inference models. Other major Korean players are also making strides in this area: LG AI Research unveiled EXAONE Deep in March; Naver plans to release an inference-optimized version of HyperCLOVA X next month (already showing GPT-4o-level performance with a 90.1 SimpleQA score); and Kakao is developing its own inference model for a potential release in the first half of the year. Korean AI startups such as Liner and Upstage are also contributing to this competitive landscape. Globally, Google (Gemini 2.5 Pro with Deep Think), Anthropic (Claude Opus 4), and Xiaomi (MiMo, with 7 billion parameters) are likewise advancing their inference and autonomous AI capabilities.

This launch is a key accelerant for SK Group's broader AI value chain initiative. Led by SKT's AI models and data center infrastructure, the strategy integrates SK Hynix for semiconductor supply (e.g., HBM4 to Nvidia) and SK AX (formerly SK C&C) for AI solution services, aiming to establish a self-sufficient AI ecosystem. SKT's future plans include monetizing A.dot, launching its North American version "Aster," and building an AI data center with 60,000 GPUs by year-end.