Articles

  • Feb 3, 2025 | infoq.com | Apoorva Joshi | Srini Penchikala

    In this podcast, Apoorva Joshi, Senior AI Developer Advocate at MongoDB, discusses how to evaluate software applications that use large language models (LLMs) and how to improve the performance of LLM-based applications.

  • Nov 18, 2024 | lxer.com | Bruce Gain | BC Gain | Apoorva Joshi | Emilio Salvador

    SALT LAKE CITY — SUSE now offers what it describes as a comprehensive observability platform for environments running Rancher for Kubernetes, whether on AWS, Azure, Google Cloud, on premises, or elsewhere. With SUSE Cloud Observability, the company seeks to provide observability functionality comparable to that of industry heavyweights, including Datadog, Grafana, and Honeycomb.

  • Nov 17, 2024 | lxer.com | Jack Wallen | Loraine Lawson | David Eastman | Apoorva Joshi

    Not even five years ago, if someone had said to me that Fedora was a Linux distribution anyone could use, I would have smirked and pointed them toward Ubuntu, elementary OS, Zorin OS, or Linux Mint. Fedora has long served as Red Hat's proving ground for the next version of Red Hat Enterprise Linux. The distro was chock full of great features but still pretty buggy for everyday use.

  • Oct 2, 2024 | mongodb.social | Apoorva Joshi

    In the past year or so, there has been an uptick in natural language experiences in user-facing applications, ranging from retrieval-augmented generation (RAG) chatbots and agents to natural language search. With natural language becoming the new query language, keyword-based full-text search might not always retrieve the best results due to the length, paraphrasing, ambiguity, and vagueness of natural language queries. (A minimal retrieval sketch illustrating this gap appears after this list.)

  • Jun 24, 2024 | mongodb.social | Apoorva Joshi

    If you have ever deployed machine learning models in production, you know that evaluation is an important part of the process. Evaluation is how you pick the right model for your use case, ensure that your model’s performance translates from prototype to production, and catch performance regressions. While evaluating Generative AI applications (also referred to as LLM applications) might look a little different, the same tenets for evaluation still apply. (A minimal evaluation sketch appears after this list.)
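
The Oct 2 piece contrasts keyword-based full-text search with embedding-based retrieval for paraphrased natural language queries. Below is a minimal sketch of that contrast; the corpus, the query, and the model choice (all-MiniLM-L6-v2 via sentence-transformers) are illustrative assumptions, not taken from the article.

```python
# Minimal sketch: why paraphrased natural-language queries favour embedding-based
# retrieval over keyword matching. Corpus, query, and model are illustrative only.
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

docs = [
    "How to return a purchased item and get a refund",
    "Shipping times for international orders",
    "Resetting your account password",
]
query = "I want my money back for something I bought"  # paraphrase with few shared keywords

# Keyword baseline: score by raw token overlap between query and document.
def keyword_score(q: str, d: str) -> int:
    return len(set(q.lower().split()) & set(d.lower().split()))

# Embedding-based retrieval: score by cosine similarity of normalized dense vectors.
model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = model.encode(docs, normalize_embeddings=True)
query_vec = model.encode([query], normalize_embeddings=True)[0]
cosine = doc_vecs @ query_vec

for doc, kw, cos in zip(docs, (keyword_score(query, d) for d in docs), cosine):
    print(f"keyword={kw}  cosine={cos:.2f}  {doc}")
# The refund document scores highest on cosine similarity even though it shares
# almost no keywords with the query, which is the gap the article describes.
```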
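
The Jun 24 piece argues that the same evaluation discipline used for traditional ML models applies to LLM applications. The sketch below shows the bare minimum such a harness involves: a fixed evaluation set, a scoring function, and an aggregate score. The `generate` callable and the exact-match metric are hypothetical stand-ins; real evaluations typically use richer metrics such as relevance or faithfulness.

```python
# Minimal sketch of an evaluation harness for an LLM application.
# `generate` is a hypothetical stand-in for the application under test;
# exact match is only the simplest possible metric.
from typing import Callable

eval_set = [
    {"prompt": "What is the capital of France?", "expected": "Paris"},
    {"prompt": "2 + 2 = ?", "expected": "4"},
]

def exact_match(prediction: str, expected: str) -> bool:
    return prediction.strip().lower() == expected.strip().lower()

def evaluate(generate: Callable[[str], str]) -> float:
    """Run the application over the eval set and return the fraction of matches."""
    hits = sum(exact_match(generate(case["prompt"]), case["expected"]) for case in eval_set)
    return hits / len(eval_set)

if __name__ == "__main__":
    # A trivial canned "application" so the sketch runs end to end.
    canned = {"What is the capital of France?": "Paris", "2 + 2 = ?": "4"}
    print(f"accuracy: {evaluate(lambda p: canned.get(p, '')):.2f}")
```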
