
Daniel Nishball
Articles
-
4 weeks ago | semianalysis.com | Dylan Patel | Daniel Nishball | Myron Xie | Wega Chu
Huawei is making waves with its new AI accelerator and rack-scale architecture. Meet China’s newest and most powerful domestic solution, the CloudMatrix 384, built using the Ascend 910C. This solution competes directly with the GB200 NVL72, and in some metrics it is more advanced than Nvidia’s rack-scale solution. The engineering advantage lies at the system level, not just the chip level, with innovation across the accelerator, networking, optics, and software layers.
-
Feb 13, 2025 | semianalysis.com | Dylan Patel | Jeremie Eliahou Ontiveros | Daniel Nishball | Reyk Knuhtsen
Cluster deployments are an order of magnitude larger in scale, with Gigawatt-scale datacenters coming online at full capacity much faster than most believe. As such, there are considerable design changes that datacenter developers planning future sites must consider. We previously covered the electrical systems of datacenters and how the rise of Generative AI is impacting datacenter design and equipment suppliers.
-
Jan 15, 2025 | semianalysis.com | Dylan Patel | Daniel Nishball | Myron Xie | AJ Kourabi
The US government lobbed the largest salvo yet in the new technology cold war with its Framework for Artificial Intelligence Diffusion. These new export restrictions are unprecedented in scope and scale, with many calling the effort overzealous or misguided. At its core, the regulation is aimed at preventing China from accessing the AI compute needed to build frontier models.
-
Dec 10, 2024 | semianalysis.com | Dylan Patel | Daniel Nishball | AJ Kourabi
There has been an increasing amount of fear, uncertainty, and doubt (FUD) regarding AI scaling laws. A cavalcade of part-time AI industry prognosticators has latched onto any bearish narrative it can find, declaring the end of the scaling laws that have driven the rapid improvement in Large Language Model (LLM) capabilities over the last few years.
-
Dec 3, 2024 | semianalysis.com | Dylan Patel | Daniel Nishball | Reyk Knuhtsen
Amazon is currently conducting one of the largest build-outs of AI clusters globally, deploying a considerable number of Hopper and Blackwell GPUs. In addition to the massive capex invested in Nvidia-based clusters, AWS is also investing many billions of dollars of capex into Trainium2 AI clusters. AWS is currently deploying a cluster with 200k+ Trainium2 chips for Anthropic, called “Project Rainier”.