Experience RENEGADE Summit 2026

News
May 13, 2026

Written by

The Furiosa Team

Agentic AI is driving near-infinite demand for tokens, exposing critical inefficiencies of legacy compute architectures.

At our inaugural RENEGADE Summit, we demonstrated RNGD (pronounced "renegade"), our AI engine built for the agentic era. Showcasing the culmination of our hardware and software innovation across the RNGD product line, we proved that the next era of AI infrastructure is not just a distant roadmap. It is a deployable reality, today.

Watch the keynote below.

The Inference Era

June Paik, Co-Founder and Chief Executive Officer

The economics and physics of AI have irreversibly changed. We are moving at breakneck speed toward the agentic AI era, driven by two powerful forces: frontier AI models advancing rapidly through scaling laws, and breakthroughs in agentic engineering that harness those models to achieve capabilities once unimaginable.

Agentic AI workloads demand a staggering volume of tokens, and they place a different class of stress on data center infrastructure. Efficiency, measured as true performance per unit of energy and translated into the lowest possible total cost of ownership (TCO), becomes even more critical for making both our businesses and the broader AI ecosystem sustainable.
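
To make the link between energy efficiency and TCO concrete, here is a minimal back-of-the-envelope sketch. Every number in it is a hypothetical placeholder rather than a measured figure; the point is simply that joules per token translate directly into dollars per million tokens.

```python
# Back-of-the-envelope sketch: how performance per energy flows into the
# energy line of TCO. Every number here is a hypothetical placeholder,
# not a measured RNGD or competitor figure.

def energy_cost_per_million_tokens(tokens_per_sec: float,
                                   board_power_watts: float,
                                   usd_per_kwh: float = 0.10) -> float:
    """Electricity cost (USD) to generate one million tokens."""
    joules_per_token = board_power_watts / tokens_per_sec
    kwh_per_million_tokens = joules_per_token * 1_000_000 / 3_600_000  # J -> kWh
    return kwh_per_million_tokens * usd_per_kwh

# Two hypothetical accelerators serving the same model at the same speed:
print(energy_cost_per_million_tokens(2_000, board_power_watts=700))  # ~$0.0097
print(energy_cost_per_million_tokens(2_000, board_power_watts=180))  # ~$0.0025
```

At data center scale, this per-token arithmetic compounds across thousands of accelerators, which is why performance per energy dominates inference TCO.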

In this keynote, June Paik outlines why AI data centers are rapidly transitioning toward inference-centric operations, and why meeting the near-infinite token demand of frontier AI requires a radically new architecture spanning chips, software, and systems.

Validating this shift, industry leaders from LG AI Research, Samsung SDS, LG U+, Upstage, and MegazoneCloud joined June on the main stage. Together, they detailed the future of sovereign AI infrastructure, cloud-scale inference services, and production AI systems powered by RNGD. (To dive deeper into these partner announcements, check out our event recap blog.)

The Tensor Contraction Processor (TCP) Architecture

Hanjoon Kim, Co-Founder and Chief Technology Officer

Solving the inference bottleneck is a full-stack systems challenge spanning hardware, software, and algorithms, and it must begin with a silicon chip designed natively for inference.

In this session, Hanjoon Kim walks through the architectural principles behind RNGD and Furiosa's Tensor Contraction Processor (TCP). He explains why we chose tensor contraction as the core abstraction layer between hardware and software, and how this architectural decision enables more efficient data reuse, drastically reduces memory movement, and allows for compiler-driven global optimization that adapts alongside rapidly evolving AI models.
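
As a rough illustration of what treating tensor contraction as the core abstraction means, consider how common operators reduce to a single contraction primitive. The sketch below uses plain NumPy einsum notation as a stand-in; it is not Furiosa's TCL, compiler IR, or hardware interface.

```python
# Tensor contraction as one abstraction: several core operators expressed
# with a single einsum primitive. Plain NumPy for illustration only; this
# is not Furiosa's TCL, compiler IR, or hardware interface.
import numpy as np

x = np.random.randn(8, 64)           # activations: (batch, in_features)
w = np.random.randn(64, 128)         # weights:     (in_features, out_features)

# A dense layer is a contraction over the shared axis 'k'.
y = np.einsum("bk,kn->bn", x, w)     # shape (8, 128)

# Batched attention scores contract queries and keys over head_dim 'd'.
q = np.random.randn(8, 4, 16, 32)    # (batch, heads, seq, head_dim)
k = np.random.randn(8, 4, 16, 32)
scores = np.einsum("bhqd,bhkd->bhqk", q, k)  # shape (8, 4, 16, 16)

# Because both operators share the same contraction structure, a compiler
# can reason uniformly about which axes are reused and schedule data
# movement globally, rather than treating each kernel as a black box.
```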

Rather than constraining developers with the fixed model assumptions of yesterday, our TCP is engineered to be the sustainable computing foundation that scales alongside the agentic workloads of today and tomorrow.

The Software Stack for Modern AI Inference

Jeehoon Kang, Chief Research Officer

Hardware performance is meaningless if the software stack is locked in the past. For years, the industry has debated the "CUDA moat." But legacy software stacks designed for traditional GPU-centric training workloads are fundamentally rigid and increasingly struggle to keep pace with the rapid architectural shifts of modern reasoning models.

In this session, Jeehoon Kang introduces the software systems powering RNGD, including our advanced compiler stack, the Tensor Contraction Language (TCL), and the Furiosa Virtual ISA. Together, these technologies represent a major leap forward in programming AI chips beyond CUDA.

Furiosa's SDK was co-designed from the beginning with our TCP architecture so that developers can quickly adopt new AI models and new optimization techniques on RNGD. This generality is crucial for keeping up with the incredible pace of innovation in AI.
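
As a sketch of what that adoption path can look like from an application's point of view, suppose the model is served behind an OpenAI-compatible HTTP endpoint, a common pattern for modern inference stacks. The endpoint URL and model name below are hypothetical placeholders, not official Furiosa values.

```python
# Minimal client sketch, assuming the model is served behind an
# OpenAI-compatible HTTP endpoint. The base_url and model name are
# hypothetical placeholders, not official Furiosa values.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # hypothetical local endpoint
    api_key="EMPTY",                      # local servers typically ignore this
)

response = client.chat.completions.create(
    model="Llama-3.1-8B-Instruct",        # hypothetical deployed model
    messages=[{"role": "user", "content": "Explain tensor contraction briefly."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```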

The Milestone: From Vision to Global Mass Production

RENEGADE Summit 2026 was more than a technology showcase; it was a declaration of readiness.

This event marked RNGD's official transition into mass production, representing the culmination of Furiosa's mission to build and deliver the world's most advanced inference chip for the data center.

At Furiosa, we are doing more than challenging the status quo. We are building the engine that makes AI more efficient, accessible, and sustainable – at global scale.

Relive the excitement and key moments with our official highlight reel.
