
Harsha Nori
Articles
-
Nov 27, 2024 |
microsoft.com | Eric J Horvitz | Harsha Nori | Naoto Usuyama
Frontier language models are advancing rapidly, paving the way for boosts in the accuracy and reliability of generalist models and making them highly effective in specialized domains. As part of our ongoing exploration of foundation model capabilities, we developed Medprompt last year, a novel approach to maximizing model performance on specialized domains and tasks without fine-tuning.
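As reported in the Medprompt work, the recipe composes dynamic few-shot example selection, chain-of-thought prompting, and choice-shuffle ensembling rather than any fine-tuning. Below is a minimal sketch of that structure for a multiple-choice question; the `embed` and `complete` helpers are hypothetical placeholders for an embedding model and a completion API, not calls to any specific library.

```python
# Sketch of a Medprompt-style pipeline: kNN few-shot selection over embeddings,
# chain-of-thought exemplars, and choice-shuffle ensembling with majority vote.
import random
from collections import Counter

import numpy as np


def embed(text: str) -> np.ndarray:
    """Hypothetical embedding call; swap in a real embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(64)


def complete(prompt: str) -> str:
    """Hypothetical LLM call; swap in a real chat/completion API."""
    return "Answer: A"


def select_few_shot(question, exemplars, k=5):
    """Pick the k training exemplars closest to the question in embedding space."""
    q = embed(question)
    sims = [float(q @ embed(ex["question"])) for ex in exemplars]
    top = np.argsort(sims)[-k:]
    return [exemplars[i] for i in top]


def medprompt_answer(question, options, exemplars, n_ensemble=5):
    """Answer a multiple-choice question with few-shot CoT prompts and shuffled choices."""
    shots = select_few_shot(question, exemplars)
    votes = Counter()
    for _ in range(n_ensemble):
        shuffled = random.sample(options, len(options))
        letters = "ABCDE"[: len(shuffled)]
        prompt = "\n\n".join(
            f"Q: {ex['question']}\nReasoning: {ex['cot']}\nAnswer: {ex['answer']}"
            for ex in shots
        )
        prompt += f"\n\nQ: {question}\n" + "\n".join(
            f"{l}. {opt}" for l, opt in zip(letters, shuffled)
        ) + "\nThink step by step, then give 'Answer: <letter>'."
        reply = complete(prompt)
        letter = reply.rsplit("Answer:", 1)[-1].strip()[:1]
        if letter in letters:
            # Vote on the option text, not the letter, since letters change per shuffle.
            votes[shuffled[letters.index(letter)]] += 1
    return votes.most_common(1)[0][0] if votes else None
```

Shuffling the answer choices across ensemble members reduces sensitivity to option ordering, and voting on the option text keeps the tally consistent across shuffles.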
-
Oct 6, 2024 |
arxiv.org | Julien Siems | Harsha Nori | David Salinas | Arber Zela
arXiv:2410.04560 [cs.LG]. Subjects: Machine Learning (cs.LG); Machine Learning (stat.ML). https://doi.org/10.48550/arXiv.2410.04560. Submitted by Arber Zela, Sun, 6 Oct 2024 17:28:20 UTC (5,578 KB).
-
Dec 12, 2023 |
microsoft.com | Eric J Horvitz | Harsha Nori | Yin Tat Lee | Brenda Potts
We’re seeing exciting capabilities of frontier foundation models, including intriguing powers of abstraction, generalization, and composition across numerous areas of knowledge and expertise. Even seasoned AI researchers have been impressed with the ability to steer the models with straightforward, zero-shot prompts. Beyond basic, out-of-the-box prompting, we’ve been exploring new prompting strategies, showcased in our Medprompt work, to evoke the powers of specialists.