Blog

- Why we’re joining the UEC: The future of LLM inference is multi-chip (News)
- Is Furiosa’s chip architecture actually innovative? Or just a fancy systolic array? (Technical Updates)
- Gartner report mentions FuriosaAI as Sample Vendor, highlights need for power-efficient AI chips (News)
- Tech giants begin RNGD sampling (News)
- Implementing HBM3 in RNGD: Why it’s tricky, why it’s important, how we did it (Technical Updates)
- Hot Chips 2024 recap: The global unveiling of RNGD (Our Viewpoints)
- FuriosaAI unveils RNGD at Hot Chips 2024 (News)
- Why should you care about RNGD? (Our Viewpoints)
- Tensor Contraction Processor: The first future-proof AI chip architecture (Technical Updates)
- RNGD preview: The world’s most efficient AI chip for LLM inference (Technical Updates)
- World’s first NPU Hackathon for Vision Applications with Furiosa’s Gen 1 Vision NPU (News)
- How ePopSoft, maker of Korea’s most popular English instruction app, uses Furiosa’s Gen 1 Vision NPU (Technical Updates)
- Q&A: ASUS on AI server trends, FuriosaAI partnership and more (Our Viewpoints)
- A new global survey of businesses’ AI infrastructure plans, conducted by FuriosaAI, ClearML and AIIA (News)
- Dudaji uses FuriosaAI’s Gen 1 Vision NPU to monitor workplace safety (News)
- Press Release: FuriosaAI ends 2024 on a high note: Llama 3.1 performance, SDK release, leadership expansion (News)