
Reza Ghanadan
Featured in:
amazon.science
Articles
Oct 21, 2024 | amazon.science | Prasoon Goyal | Michael Johnston | Reza Ghanadan | Javier Chiyah-Garcia
In-context learning (ICL) has enabled large language models (LLMs) to excel as general-purpose models in zero- and few-shot task settings. However, because LLMs are often not trained on the downstream tasks, they lack crucial contextual knowledge about the data distributions, which limits their task adaptability. This paper explores using data priors to automatically customize prompts in ICL.
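The abstract names the idea but not the mechanism. As a rough illustration only, the Python sketch below injects one simple kind of data prior (here, the label distribution of a demonstration set, a hypothetical choice not taken from the paper) into a few-shot prompt:

    from collections import Counter

    def build_prompt(train_examples, query, label_space):
        # Hypothetical sketch: prepend a data prior (the label
        # distribution observed in the demonstrations) to a
        # standard few-shot ICL prompt. The paper's actual prompt
        # format is not described in this abstract.
        counts = Counter(label for _, label in train_examples)
        total = sum(counts.values())
        prior = ", ".join(
            f"{lbl}: {counts[lbl] / total:.0%}" for lbl in label_space
        )

        lines = [f"Label distribution in the data: {prior}.", ""]
        for text, label in train_examples:  # few-shot demonstrations
            lines.append(f"Input: {text}\nLabel: {label}")
        lines.append(f"Input: {query}\nLabel:")  # query to complete
        return "\n".join(lines)

    demos = [
        ("great movie", "positive"),
        ("dull plot", "negative"),
        ("loved it", "positive"),
    ]
    print(build_prompt(demos, "not worth watching", ["positive", "negative"]))

The resulting prompt states the prior in plain language before the demonstrations, so the LLM can condition its prediction on the distribution as well as the examples; how the paper actually encodes such priors may differ.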