
Articles
-
1 week ago | aws.amazon.com | Ishan Singh | Neeraj Lamba
In the landscape of generative AI, organizations are increasingly adopting a structured approach to deploy their AI applications, mirroring traditional software development practices. This approach typically involves separate development and production environments, each with its own AWS account, to create logical separation, enhance security, and streamline workflows.
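To illustrate that account-level separation, here is a minimal sketch (not taken from the article) of a deployment script that assumes an IAM role in the target account before creating an Amazon Bedrock client there. The role ARNs, account IDs, session name, and Region are placeholder assumptions.

```python
import boto3

def bedrock_client_for_account(role_arn: str, region: str = "us-east-1"):
    """Assume a cross-account role and return a Bedrock client scoped to that account."""
    sts = boto3.client("sts")
    creds = sts.assume_role(
        RoleArn=role_arn,              # placeholder role in the target account
        RoleSessionName="bedrock-deploy",
    )["Credentials"]
    return boto3.client(
        "bedrock",
        region_name=region,
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )

# Development and production stay logically separated in their own AWS accounts;
# the same script targets either environment by switching the role it assumes.
dev_bedrock = bedrock_client_for_account("arn:aws:iam::111111111111:role/BedrockDevDeploy")
prod_bedrock = bedrock_client_for_account("arn:aws:iam::222222222222:role/BedrockProdDeploy")
```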
-
2 weeks ago | ieducation.co.za | Ishan Singh
The 79th anniversary of Victory in Europe Day in May – a pivotal moment marking the end of World War II in Europe – presents an opportunity to reflect on the transformative power of architecture. Architecture is more than the art of designing buildings; it shapes our communities and embodies our collective aspirations.
-
1 month ago | aws.amazon.com | Shreyas Vathul Subramanian | Adewale Akinfaderin | Ishan Singh | Jesse Manders
With Amazon Bedrock Evaluations, you can evaluate foundation models (FMs) and Retrieval Augmented Generation (RAG) systems, whether they are hosted on Amazon Bedrock (including Amazon Bedrock Knowledge Bases) or elsewhere, such as multicloud or on-premises deployments. We recently announced the general availability of the large language model (LLM)-as-a-judge technique in model evaluation and the new RAG evaluation tool, which is also powered by an LLM-as-a-judge behind the scenes.
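For readers unfamiliar with the pattern, the following is a minimal, hand-rolled sketch of the LLM-as-a-judge idea using the Bedrock Converse API. It is not the managed Amazon Bedrock Evaluations workflow the article covers; the judge model ID and prompt wording are assumptions.

```python
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

def judge(question: str, answer: str, reference: str) -> str:
    """Ask a judge model to score a candidate answer against a reference answer."""
    prompt = (
        "You are an impartial judge. Rate the answer from 1 to 5 for correctness "
        "against the reference, and reply with the score only.\n"
        f"Question: {question}\nAnswer: {answer}\nReference: {reference}"
    )
    response = bedrock_runtime.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumed judge model
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return response["output"]["message"]["content"][0]["text"]
```

The managed evaluation jobs wrap this kind of judging with curated metrics, datasets, and reporting, which is what distinguishes them from a one-off script like the sketch above.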
-
1 month ago | aws.amazon.com | Yanyan Zhang | Ishan Singh | David Yan | Shreeya Sharma
Amazon Bedrock Model Distillation is generally available, and it addresses the fundamental challenge many organizations face when deploying generative AI: how to maintain high performance while reducing costs and latency. This technique transfers knowledge from larger, more capable foundation models (FMs) that act as teachers to smaller, more efficient models (students), creating specialized models that excel at specific tasks.
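As background on the technique itself, here is a generic knowledge-distillation loss in PyTorch. It only illustrates the teacher-to-student transfer described above; Amazon Bedrock Model Distillation is a managed feature, so you do not write this loss yourself.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    """Blend a soft KL term against the teacher with the usual hard-label loss."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    kd = F.kl_div(soft_student, soft_targets, reduction="batchmean") * temperature**2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce
```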
-
2 months ago | aws.amazon.com | Adewale Akinfaderin | Ishan Singh | Jesse Manders | Shreyas Vathul Subramanian
Organizations deploying generative AI applications need robust ways to evaluate their performance and reliability. When we launched LLM-as-a-judge (LLMaJ) and Retrieval Augmented Generation (RAG) evaluation capabilities in public preview at AWS re:Invent 2024, customers used them to assess their foundation models (FMs) and generative AI applications, but asked for more flexibility beyond Amazon Bedrock models and knowledge bases.