Cerebras plans humongous AI supercomputer in India backed by UAE
(2026/02/20)
- Reference: 1771615936
- News link: https://www.theregister.co.uk/2026/02/20/india_ai_supercomputer_cerebras_uae/
Nvidia rival Cerebras Systems' dinner plate-sized accelerators will power a new supercomputing cluster in India capable of 8 exaFLOPS of AI compute.
The installation, [1]announced in New Delhi during the AI Impact Summit this week, is part of a collaboration between the United Arab Emirates' Mohamed Bin Zayed University of AI (MBZUAI) and India's Center for Development of Advanced Computing (C-DAC).
The system itself will be deployed by the UAE's AI crown jewel, technology company G42, in a bid to bolster the nation's sovereign compute capacity. If G42 sounds familiar, that's because the UAE-based cloud provider and AI model dev is one of Cerebras' largest backers, having previously financed the chip startup's Condor Galaxy deployment effort at an estimated cost of $900 million.
G42 has sought to carve out a niche by helping other nations build sovereign AI models trained in their native languages. Late last year, the cloud provider released NANDA 87B, an 87 billion parameter open weights model trained in Hindi and English.
"Sovereign AI infrastructure is becoming essential for national competitiveness," G42 India CEO Manu Jain said in a canned statement. "This project brings that capability to India at a national scale, enabling local researchers, innovators, and enterprises to become AI-native while maintaining full data sovereignty and security."
While the system will be deployed by G42, the company says it'll be operated under India-defined governance frameworks and that all data will remain within the nation's borders. Once operational, the supercomputer will be made available to Indian universities, startups, and small and mid-sized businesses.
Cerebras tells El Reg the super will be powered by its [6]WSE-3 wafer-scale accelerators. A little back-of-the-napkin math suggests the machine will feature 64 of them, each capable of churning out 125 petaFLOPS of highly sparse 16-bit floating point performance.
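That napkin math, using only the figures above, can be sketched as follows:

```python
# Back-of-the-napkin check: how many WSE-3 chips does 8 exaFLOPS imply?
# Both figures come from the article: 8 exaFLOPS total, 125 petaFLOPS
# of highly sparse FP16 per WSE-3 chip.
TARGET_EXAFLOPS = 8
PETAFLOPS_PER_WSE3 = 125

chips = (TARGET_EXAFLOPS * 1000) / PETAFLOPS_PER_WSE3  # 1 exaFLOP = 1,000 petaFLOPS
print(chips)  # 64.0
```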
These massive chips are unusual in that they don't rely on the pricey high-bandwidth memory (HBM) found in Nvidia and AMD GPUs, and instead use lightning-fast on-chip SRAM. Each of Cerebras' 23 kW CS-3 systems features 44 GB of SRAM good for 21 petabytes a second of memory bandwidth. That's roughly 1,000x faster than the HBM4 on Nvidia's newly announced Rubin GPUs.
Originally designed for AI training, the chip's speedy SRAM has made it particularly potent for memory-bound AI inference workloads. Artificial Analysis [7]reports Cerebras can serve gpt-oss 120b High at roughly 2,853 tokens per second per user, more than 3x faster than the next fastest GPU-based inference provider. This no doubt played into OpenAI's [8]decision to begin deploying select models on Cerebras' hardware last month.
[9]As memory shortage persists, vendor price quotes are not long remembered
[10]India's top telco tackles AI with $110 billion build plan and proven fast market dominance playbook
[11]Indian think tank finds strong hiring for the kind of jobs AI puts at risk
[12]Indian conglomerate Adani plans very slow $100 billion AI datacenter build
Today’s announcement comes just days after AMD and Nvidia unveiled several large-scale deployments in India powered by their respective compute platforms. As part of a partnership with Tata Consultancy Services, AMD will [13]deploy 200 MW of its next-gen Helios racks powered by its MI455X accelerators.
Indian cloud service provider Yotta said it [14]planned to field a cluster of 20,000 Blackwell Ultra GPUs. Bit barn builder Larsen & Toubro also laid out plans for a giga-scale AI datacenter network, beginning with a 30 MW facility in Chennai and a 40 MW site in Mumbai. Among its first tenants will be E2E Networks’ deployment of B200 accelerators. ®
[1] https://www.g42.ai/resources/news/uae-deploy-8-exaflop-supercomputer-india-strengthen-local-sovereign-ai-infrastructure
[6] https://www.theregister.com/2024/03/13/cerebras_claims_to_have_revived/
[7] https://artificialanalysis.ai/models/gpt-oss-120b/providers
[8] https://www.theregister.com/2026/02/12/openai_model_cerebras/
[9] https://www.theregister.com/2026/02/18/memory_shortage_persists_vendor_change_terms/
[10] https://www.theregister.com/2026/02/20/jio_ai_plans_india_summit/
[11] https://www.theregister.com/2026/02/19/ai_impact_tech_jobs_india/
[12] https://www.theregister.com/2026/02/18/india_ai_summit_adani_datacenters/
[13] https://www.amd.com/en/newsroom/press-releases/2026-2-15-amd-and-tcs-to-bring-state-of-the-art-helios-rac.html
[14] https://blogs.nvidia.com/blog/india-ai-mission-infrastructure-models/