
Tobias Mann
Systems Editor at The Register
Systems Editor @TheRegister / @SitPub — hiker, animal lover, photographer, blogger, and tech journo. I'm over on Mastodon now at @[email protected]
Articles
- 1 week ago | theregister.com | Tobias Mann
World War Fee: Meta's AI ambitions are going to cost more than expected thanks to increased competition and — who could have seen this coming? — the Trump administration's obsession with tariffs, which is driving up the price tag of key components. This revelation came amid Zuckercorp's Q1 earnings call this week when CFO Susan Li warned investors its infrastructure investments could end up costing Meta as much as $72 billion in 2025.
- 1 week ago | theregister.com | Tobias Mann
UK authorities on Wednesday arrested three individuals in connection with a multi-million-pound bribery probe tied to the construction of a Microsoft datacenter in the Netherlands. The arrests came after more than 70 investigators from Britain's Serious Fraud Office (SFO) raided five properties across London, Kent, Surrey, and Somerset. Meanwhile, Monaco authorities - with help from the SFO - searched a suspect's residence in the principality.
- 2 weeks ago | theregister.com | Tobias Mann
Comment: Anthropic has urged the White House to further tighten so-called AI diffusion rules – which are already set to hurt Nvidia and co by limiting or blocking the sale of higher-end GPUs and accelerators outside the US and a select few allies from mid-May. On Wednesday, the San Francisco-based chatbot maker said in a briefing note these looming export controls won't go far enough to stem the flow of smuggled chips fueling China's continued advancements in artificial intelligence.
- 2 weeks ago | theregister.com | Tobias Mann
Direct Connect: Intel has revealed a pair of variants of its long-awaited 18A process node to make it better suited for, one, manufacturing mass-market processors and, two, complex multi-die semiconductors for – of course – AI. First teased in mid-2021, the 2nm-ish 18A is set to finally enter volume production later this year with the launch of Intel's Panther Lake client processor family.
- 3 weeks ago | lxer.com | Tobias Mann
Hands On: You can spin up a chatbot with Llama.cpp or Ollama in minutes, but scaling large language models to handle real workloads – think multiple users, uptime guarantees, and not blowing your GPU budget – is a very different beast. A model might run comfortably on your PC in less than 4 GB of memory, yet deploying it in a production environment to handle numerous concurrent requests can require 40 GB of GPU memory or more.
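For readers curious what the "spin up a chatbot in minutes" half of that piece looks like in practice, here is a minimal sketch that sends a single chat turn to a local Ollama server over its HTTP API. It assumes Ollama is running on its default port (11434) and that a model has already been pulled; the "llama3" model name is illustrative, not taken from the article.

```python
import json
import urllib.request

# Assumes a local Ollama server on its default port (11434) and a model
# already pulled, e.g. with: ollama pull llama3  (model name is illustrative)
OLLAMA_URL = "http://localhost:11434/api/chat"


def ask(prompt: str, model: str = "llama3") -> str:
    """Send one chat turn to the local Ollama server and return the reply text."""
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # request a single JSON response rather than a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["message"]["content"]


if __name__ == "__main__":
    print(ask("In one sentence, what is an inference server?"))
```

This single-user, single-request pattern is exactly the part that doesn't scale: serving many concurrent users typically means a dedicated inference server with request batching and far more GPU memory, which is the gap the hands-on article digs into.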
X (formerly Twitter)
- Followers: 1K
- Tweets: 3K
- DMs Open: Yes

20 minutes to #GTC keynote kick off. https://t.co/UtENfrmcP8

Here we go again. #GTC25 https://t.co/xb79Qj6OWN

DeepSeek-R1-beating perf in a 32B package? El Reg digs its claws into Alibaba's QwQ. We slogged through hyperparameter hell so you don't have to. My latest adventure in AI now live on @TheRegister https://t.co/KfIvVFyMG4