LG CNS deploys RNGD for cloud inference
Summary
- FuriosaAI and LG CNS are collaborating to provide high-performance, low-cost AI cloud services for businesses.
- LG CNS is using Furiosa’s RNGD chips to power its AgenticWorks platform, which automates complex business tasks.
- RNGD chips provide 2.25x better performance-per-watt than traditional hardware, allowing for 3.5x more data processing within the same power limits.
- The companies will optimize LG's EXAONE models and launch cloud-based AI computing services to help startups scale without buying expensive hardware.
Autonomous AI agents are transforming enterprise workflows, but their widespread adoption faces a significant hardware bottleneck. Unlike standard chatbots, agentic workflows involve continuous loops of reasoning and action, creating a massive demand for sustained, high-throughput inference. To meet this demand sustainably, LG CNS is integrating FuriosaAI’s RNGD inference accelerator into its AgenticWorks platform, a full-stack, modular solution for designing, deploying, and managing autonomous AI agents.
By selecting RNGD to power its AI cloud infrastructure, LG CNS moves beyond the power-hungry constraints of traditional GPUs, enabling enterprise customers to scale agentic services with significantly improved energy efficiency.
Solving the agentic TCO challenge
Agentic AI requires more than just raw compute; it requires efficiency. Because RNGD is built on our Tensor Contraction Processor (TCP) architecture, it is uniquely suited for the high-density requirements of agentic services.
Recent performance benchmarks established by LG AI Research highlight the technical advantages:
- Efficiency: RNGD achieves 2.25x better LLM inference performance-per-watt than traditional GPUs.
- Throughput: Within the same power envelope, RNGD-powered racks generate 3.5x more tokens for EXAONE models compared to GPU-based racks.
These efficiencies allow for higher-density compute, fitting more servers into the same physical footprint and air-cooled power envelope. For enterprises, this translates directly to a significantly lower total cost of ownership (TCO).
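The rack-level math behind these figures can be sketched with a back-of-envelope calculation. All numbers below are hypothetical illustrations, not vendor specifications; the chip TDPs and per-chip throughputs are chosen only so that the performance-per-watt ratio matches the reported 2.25x. Note that packing chips under a fixed power budget reproduces roughly the perf-per-watt ratio; the reported 3.5x rack-level gain for EXAONE additionally reflects model-specific optimization beyond raw perf-per-watt.

```python
# Back-of-envelope: rack throughput under a fixed power budget.
# All figures are illustrative assumptions, not vendor data.

RACK_POWER_BUDGET_W = 15_000  # assumed air-cooled rack power limit

# (TDP in watts, per-chip throughput in tokens/s) -- hypothetical values,
# chosen so the perf-per-watt ratio works out to the reported 2.25x.
gpu = {"tdp_w": 700, "tokens_per_s": 1_400}   # 2.0 tokens/s per watt
rngd = {"tdp_w": 150, "tokens_per_s": 675}    # 4.5 tokens/s per watt

def rack_throughput(chip: dict, budget_w: int) -> int:
    """Total tokens/s for as many chips as fit within the power budget."""
    n_chips = budget_w // chip["tdp_w"]
    return n_chips * chip["tokens_per_s"]

gpu_tps = rack_throughput(gpu, RACK_POWER_BUDGET_W)    # 21 chips -> 29,400 t/s
rngd_tps = rack_throughput(rngd, RACK_POWER_BUDGET_W)  # 100 chips -> 67,500 t/s

# Packing math alone yields ~2.3x here; the reported 3.5x for EXAONE
# additionally reflects workload-specific optimization.
print(f"GPU rack: {gpu_tps:,} t/s, RNGD rack: {rngd_tps:,} t/s "
      f"({rngd_tps / gpu_tps:.2f}x)")
```

Lower per-chip power also means more accelerators fit in the same physical footprint without liquid cooling, which is where the density and TCO advantages come from.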
Expanding access to AI compute-as-a-service
The collaboration also introduces "AI compute-as-a-service," optimized specifically for LG’s K-EXAONE models. This offering lets enterprises and startups access high-performance inference environments through the cloud without the capital expense of purchasing and maintaining hardware.
"Through cooperation with FuriosaAI, we will secure NPU-based AI infrastructure technology and experts so customers can use Agentic AI more efficiently,” said LG CNS Vice President Tae Hoon Kim. “In collaboration with LG AI Research, we will support the advancement of national AI models and contribute to the development of the domestic AI industry."
Sovereign AI and public sector deployment
A critical component of this partnership is addressing data sovereignty. For Korea’s public and government sectors, the collaboration provides an end-to-end AI stack that can operate on-premise or in hybrid environments.
By utilizing localized RNGD infrastructure, organizations can run advanced AI models while ensuring sensitive data remains within controlled environments. This approach avoids the high costs and extreme power demands of conventional GPU hardware, which often necessitate expensive liquid-cooling retrofits.
The blueprint for global inference
RNGD is now in mass production with TSMC, with the first 4,000 units delivered in January. LG CNS has already demonstrated productivity gains using AgenticWorks for corporate recruiting, software development, and CRM.
This partnership is a definitive step in proving that advanced agentic AI can be deployed sustainably and at scale. By removing the primary power and cost bottlenecks hindering adoption, this collaboration serves as a blueprint for how organizations worldwide will deploy high-performance inference compute without compromise.