Articles

  • 5 days ago | magazine.sebastianraschka.com | Sebastian Raschka

    A lot has happened this month, especially with the releases of new flagship models like GPT-4.5 and Llama 4. But you might have noticed that reactions to these releases were relatively muted. Why? One reason could be that GPT-4.5 and Llama 4 remain conventional models, which means they were trained without explicit reinforcement learning for reasoning. Meanwhile, competitors such as xAI and Anthropic have added more reasoning capabilities and features into their models.

  • 3 weeks ago | ai-supremacy.com | Ethan Mollick | Sebastian Raschka | Timothy Lee

    This list is not exhaustive but features many of the top AI newsletters on the internet today. 📚 Some of the factors used in the ranking are: total paid subscriptions (the main criterion), engagement relative to list size, ⭐ highlighting emerging writers (also check out the “rising” leaderboards in tech in the app), 💭 and simply my own personal judgment (gained while browsing Substack’s Tech and Biz leaderboards and analyzing who is reading whom, which is a huge Venn diagram in and of itself).

  • 1 month ago | magazine.sebastianraschka.com | Sebastian Raschka

    Improving the reasoning abilities of large language models (LLMs) has become one of the hottest topics in 2025, and for good reason. Stronger reasoning skills allow LLMs to tackle more complex problems, making them more capable across a wide range of tasks users care about. In the last few weeks, researchers have shared a large number of new strategies to improve reasoning, including scaling inference-time compute, reinforcement learning, supervised fine-tuning, and distillation.

  • 2 months ago | magazine.sebastianraschka.com | Sebastian Raschka

    This article describes the four main approaches to building reasoning models, or how we can enhance LLMs with reasoning capabilities. I hope this provides valuable insights and helps you navigate the rapidly evolving literature and hype surrounding this topic. In 2024, the LLM field saw increasing specialization. Beyond pre-training and fine-tuning, we witnessed the rise of specialized applications, from RAGs to code assistants.

  • 2 months ago | sebastianraschka.com | Sebastian Raschka

    In this article, I will describe the four main approaches to building reasoning models, or how we can enhance LLMs with reasoning capabilities. I hope this provides valuable insights and helps you navigate the rapidly evolving literature and hype surrounding this topic. In 2024, the LLM field saw increasing specialization. Beyond pre-training and fine-tuning, we witnessed the rise of specialized applications, from RAGs to code assistants.
