Ant Group open-sources Ling-2.6-1T model, focusing on token efficiency and real-world tasks

  • Ant Group officially open-sources its trillion-parameter flagship model Ling-2.6-1T, prioritizing complex task execution and token efficiency.
  • The model achieves leading open-source performance across multiple execution benchmarks, aiming to accelerate AI deployment in enterprise systems and agent workflows.
(Image credit: Ant Group)

Ant Group has open-sourced its trillion-parameter general-purpose flagship model Ling-2.6-1T, marking another core strategic move in the company's push into enterprise AI applications.

The model, released last week, was officially open-sourced on Thursday, the company announced.

Rather than solely pursuing massive parameter scales or benchmark scores, the model is dedicated to tackling complex tasks and execution stability in real-world production environments.

As large models are gradually integrated into actual business systems, industry focus is rapidly shifting toward computing cost control and the reliability of multi-step workflows, the announcement said.

Through foundational innovations in its hybrid linear architecture, Ling-2.6-1T significantly reduces token overhead during inference while maintaining the performance ceiling of a trillion-parameter model.

It uses a more efficient inference mechanism to deliver results directly, reducing enterprise deployment costs at equivalent levels of intelligence.

In high-frequency scenarios such as AI agents, code generation, and office automation, models must maintain continuous control over complex instructions, external tools, and intermediate states.

Ling-2.6-1T has been heavily trained on composite tasks, reaching leading open-source results across multiple mainstream execution benchmarks.

It remains stable under multiple constraints, efficiently transforming unstructured business information into actionable outputs.

The trillion-parameter model covers the full engineering workflow, from code generation to bug fixing, and is highly compatible with mainstream agent development frameworks.

It can not only quickly generate high-quality front-end code and web prototypes but also provide nuanced and highly controllable text in multilingual content creation.

For enterprise knowledge management, it can accurately extract key information from massive documents and clarify complex entity relationships, according to the announcement.

Ant Group had already open-sourced the Ling-2.6-flash enterprise instruction model on Wednesday, which emphasizes high inference efficiency and low operating costs.

To allow more global developers to evaluate the real-world performance of the trillion-parameter flagship model, the company announced a one-week extension of free API access on related testing platforms, the statement said.

Through this open-source initiative, Ant Group hopes to accelerate the commercialization of agent workflows and boost delivery efficiency across all-scenario tasks.
