Articles

  • 2 weeks ago | ignorance.ai | Charlie Guo | Nathan Lambert

    Meta's Llama 4 launch should have been a triumphant moment for open-weight AI. Instead, it's become a showcase of how technical excellence can be undermined by rushed execution and communication missteps (and why taking shortcuts may be the natural response to increasing competitive pressures). In the days since Meta (unexpectedly) released its newest AI model family over the weekend, the company's narrative has been at odds with the community's reception.

  • 1 month ago | exponentialview.co | Azeem Azhar | Nathan Lambert | Joachim Klement

    Hi, Here’s your Monday round-up of data driving conversations this week — all in less than 250 words. A Chinese discount. Baidu’s new multimodal foundation model, ERNIE 4.5, outperforms OpenAI’s GPT-4.5 on several benchmarks while being 99% cheaper. An open alt. The Allen Institute released OLMo 2 32B, the first fully open-source GPT-4-class model. Solar capacity.

  • Dec 10, 2024 | chinatalk.media | Jordan Schneider |Nathan Lambert

    ChinaTalk is hiring for a dedicated China AI lab analyst. Chinese fluency and a technical background are strongly preferred. Apply here! We’ve got a new show up on the podcast feed with the author of the Interconnects Substack, talking through the biggest AI stories of this year and next. Listen in on Apple Podcasts or Spotify. Today we’re running a guest piece from Ray Wang, a Washington-based analyst.

  • Oct 16, 2024 | interconnects.ai | Nathan Lambert

    Other than the pace of progress on individual evaluations being extremely high, the landscape of evaluating leading language models has not changed substantially in the last year. The biggest change is the required level of detail in reporting results, where more information must be communicated to contextualize the capabilities of your models. The new class of models that OpenAI's o1 heralds adds a new axis to this: evaluation-time compute.

  • Sep 9, 2024 | interconnects.ai | Nathan Lambert

    The central point of AI regulation and policy over the last few years, everything from the Biden Executive Order to California’s SB 1047 bill, has been model size. The most common tool for AI enforcement proportional to model size has been thresholds that kick in once an AI system uses more than a certain amount of compute (or money) to be trained. The use of thresholds for regulation is the subject of substantial pushback and is likely fading in relevance as a result.
