Dylan Patel

San Francisco Bay

Founder and Chief Analyst at SemiAnalysis

SemiAnalysis: Boutique AI & Semiconductor Research and Consulting. DMs are open for consulting, quotes, or to talk shop.

Articles

  • 5 days ago | semianalysis.com | Dylan Patel | Kimbo Chen

    In our AI Scaling Laws article from late last year, we discussed how multiple stacks of AI scaling laws have continued to drive the AI industry forward, enabling greater than Moore’s Law growth in model capabilities as well as a commensurately rapid reduction in unit token costs. These scaling laws are driven by training and inference optimizations and innovations, but advancements in compute capabilities transcending Moore’s Law have also played a critical role.

  • 2 weeks ago | semianalysis.com | Dylan Patel | Daniel Nishball | Wega Chu | Ivan Chiam

    For the past six months, AMD has been in a wartime stance. They have been working hard and working smart toward their goal of being competitive with Nvidia. At its Advancing AI 2025 event, AMD launched the MI350X/MI355X GPUs, which could be competitive with Nvidia’s HGX B200 solutions for inference of small to medium LLMs on a performance-per-TCO basis.

  • 2 weeks ago | semianalysis.com | Tanj Bennett | Dylan Patel

    Standard Ethernet initially lost significant market share to Nvidia’s InfiniBand in the early days of the GenAI boom. Since then, Ethernet has started clawing back market share, largely driven by cost, the various deficiencies of InfiniBand, and the ability to add more features and customization on top of Ethernet.

  • 2 weeks ago | semianalysis.com | Dylan Patel | AJ Kourabi

    The test-time scaling paradigm is thriving. Reasoning models continue to rapidly improve, becoming both more effective and more affordable. Evaluations measuring real-world software engineering tasks, like SWE-Bench, are seeing higher scores at lower cost. Below is a chart showing how models are getting both cheaper and better. Reinforcement learning (RL) is the reason for this progress.

  • 1 month ago | semianalysis.com | Dylan Patel | Daniel Nishball | Ivan Chiam

    It has long been claimed that AMD’s AI servers can achieve better inference performance per total cost of ownership (TCO) than Nvidia’s. We have spent the past six months investigating and validating this claim through a comprehensive analysis and benchmarking of inference solutions offered by both Nvidia and AMD. We expected to arrive at a simple answer, but instead the results were far more nuanced and surprising to us.

Contact details

Socials & Sites

X (formerly Twitter)

Followers: 48K
Tweets: 11K
DMs Open: Yes