
Articles
-
1 week ago |
servethehome.com | Cliff Robinson
AMD held a briefing last week on the AMD Ryzen Threadripper 9000 series parts as well as the AMD Radeon AI Pro R9700 series GPUs. We can now share some of the updates on the upcoming parts that we expect to arrive this summer.
New AMD Ryzen Threadripper 9000 Series
The AMD Ryzen Threadripper 9000 series and AMD Ryzen Threadripper Pro 9000 series extend AMD’s workstation lead over Intel’s solutions by updating platforms to Zen 5 with higher memory frequencies.
-
1 week ago |
servethehome.com | Cliff Robinson
The PCI-SIG released the PCIe 7.0 specifications for next-generation interconnects. We had this one on the list a few days ago, but AMD’s event ended up being too big. As a result, we are playing some Father’s Day catch-up (also, happy Father’s Day to all the dads of STH!)
PCI-SIG Releases PCIe 7.0 Specifications and Optical PCIe 6.4 and 7.0
The new generation is set to deliver 128.0 GT/s in raw bit rate. That will mean up to 512 GB/s of bidirectional bandwidth via a PCIe Gen7 x16 slot.
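For readers keeping score, the 512 GB/s figure follows directly from the per-lane rate. Here is a minimal back-of-the-envelope sketch in Python, treating each GT/s as roughly one usable Gb/s and ignoring FLIT/encoding overhead:

```python
# Rough sanity check of the PCIe 7.0 x16 bandwidth figure.
# Assumption: 1 GT/s carries roughly 1 Gb/s (FLIT/encoding overhead ignored).

raw_rate_gt_s = 128.0      # per-lane raw bit rate for PCIe 7.0
lanes = 16                 # x16 slot

per_direction_gb_s = raw_rate_gt_s * lanes / 8   # Gb/s -> GB/s
bidirectional_gb_s = per_direction_gb_s * 2

print(f"~{per_direction_gb_s:.0f} GB/s per direction")   # ~256 GB/s
print(f"~{bidirectional_gb_s:.0f} GB/s bidirectional")   # ~512 GB/s
```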
-
2 weeks ago |
servethehome.com | Cliff Robinson
Google Cloud announced today the preview availability of its G4 VMs. These VMs are neat for a few reasons, but perhaps the biggest is the hardware. As GPU instances, these new machines combine two high-end AMD CPUs, eight NVIDIA GPUs, fast networking, and plenty of memory.
Google has New NVIDIA RTX Pro 6000 8-GPU Instances with AMD EPYC 9005 CPUs
Unfortunately, Google did not provide pictures of the hardware. Luckily, this is STH, so we have seen or used a lot of the components.
-
2 weeks ago |
servethehome.com | Cliff Robinson
Today, Micron announced that it has started shipping next-generation high-bandwidth memory, HBM4. HBM is a key technology enabling today’s AI accelerators and HPC processors. Using HBM trades capacity and serviceability for speed, and that is why a new generation is a big deal.
Micron Begins Shipping HBM4 Memory for Next-Gen AI
HBM4 features a 2048-bit interface and now delivers up to 2.0 TB/s of bandwidth per memory stack. Remember, there are often several HBM stacks on a modern accelerator.
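Those two numbers imply a per-pin signaling rate. A quick sketch of that arithmetic (our own derivation from the quoted figures, not a Micron-published per-pin number):

```python
# Implied per-pin data rate for an HBM4 stack, derived from the quoted figures.
# Note: this is derived arithmetic, not an official per-pin specification.

stack_bandwidth_tb_s = 2.0   # quoted bandwidth per stack, TB/s
interface_width_bits = 2048  # HBM4 interface width per stack

per_pin_gb_s = stack_bandwidth_tb_s * 1000 * 8 / interface_width_bits
print(f"~{per_pin_gb_s:.1f} Gb/s per pin")  # ~7.8 Gb/s
```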
-
3 weeks ago |
servethehome.com | Cliff Robinson
MLPerf Training 5.0 results are out with a few new architectures. This edition was still mostly NVIDIA-dominated, which makes sense given NVIDIA’s market share. Google had a Trillium system submitted. AMD had some Llama runs using the Instinct MI300X and MI325X, which is exciting.
MLPerf Training v5.0 is Out
We often call MLPerf NVIDIA’s MLPerf because they have dominated since the benchmark suite was created and heavily influence the vast majority of submissions across benchmarks.