
Tamer H.M. Soliman
Articles
-
Apr 1, 2024 | amazon.science | Larry Hardesty | Sean O'Neill | Tamer H.M. Soliman
In August 2024, at the 10th International Conference on Quantum Information and Quantum Control, John Preskill, the Richard P. Feynman Professor of Theoretical Physics at the California Institute of Technology and an Amazon Scholar, will receive the John Stewart Bell Prize for Research on Fundamental Issues in Quantum Mechanics and Their Applications.
-
Apr 1, 2024 | amazon.science | Sean O'Neill | Tamer H.M. Soliman
Amazon operates some of the most advanced fulfillment center warehouses in the world. Reading that opening line, it’s a fair bet you are picturing conveyor belts loaded with packages, or flat little robots carrying big yellow pods full of products around. But Amazon is also pushing the frontiers of human-centered engineering — a.k.a. ergonomics — to create the safest, healthiest warehouses in the world for its employees.
-
Mar 18, 2024 | amazon.science | Abdul Fatir Ansari | Lorenzo Stella | Tamer H.M. Soliman
Time series forecasting is essential for decision making across industries such as retail, energy, finance, and health care. However, developing accurate machine-learning-based forecasting models has traditionally required substantial dataset-specific tuning and model customization. In a paper we have just posted to arXiv, we present Chronos, a family of pretrained time series models based on language model architectures.
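As a rough illustration of how a language model architecture can be applied to numeric data, here is a minimal sketch of the tokenization scheme the Chronos paper describes: mean-scale a series, then quantize the scaled values into a fixed vocabulary of bins. The bin count and value range below are illustrative stand-ins, not the paper's exact settings.

```python
import numpy as np

def tokenize_series(values, num_bins=100, low=-5.0, high=5.0):
    """Chronos-style tokenization sketch: mean-scale a series, then
    quantize it into a fixed vocabulary so a language model can treat
    forecasting as next-token prediction. Bin count and range are
    illustrative, not the paper's exact settings."""
    scale = np.mean(np.abs(values)) or 1.0   # mean scaling; guard against all-zero input
    scaled = values / scale
    edges = np.linspace(low, high, num_bins - 1)  # uniform bin boundaries
    tokens = np.digitize(scaled, edges)           # token ids in [0, num_bins - 1]
    return tokens, scale

def detokenize(tokens, scale, num_bins=100, low=-5.0, high=5.0):
    """Map token ids back to approximate real values via bin centers."""
    centers = np.linspace(low, high, num_bins)
    return centers[tokens] * scale

series = np.array([10.0, 12.0, 11.5, 13.0, 12.5])
tokens, scale = tokenize_series(series)
print(tokens)                     # discrete "words" a T5-style model would consume
print(detokenize(tokens, scale))  # approximate reconstruction of the series
```

Mapping continuous values onto a fixed vocabulary is what lets an off-the-shelf language model architecture, and its existing training pipeline, be reused for forecasting with little modification.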
-
Jan 19, 2024 | amazon.science | Wenyi Wu | Qi Li | Tamer H.M. Soliman
Vision-language models, which map images and text to a common representational space, have demonstrated remarkable performance on a wide range of multimodal AI tasks. But they’re typically trained on text-image pairs: each text input is associated with a single image. This limits the models’ applicability.
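To make the "common representational space" concrete, here is a minimal sketch using CLIP via the Hugging Face transformers library. CLIP is a stand-in for illustration, not the model from this paper; the checkpoint name is a real public one, while the placeholder image and captions are invented for the example.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# CLIP stands in for the shared text-image embedding space described above;
# it is not the model from the paper.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.new("RGB", (224, 224), color="gray")  # placeholder image
texts = ["a photo of a cat", "a diagram of a warehouse robot"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Image and texts land in the same embedding space, so relevance reduces
# to a scaled dot product between their normalized embeddings.
probs = outputs.logits_per_image.softmax(dim=-1)
print(probs)  # relative match of the single image against each caption
```

Note the structure: one image scored against each text. That one-to-one pairing at training time is exactly the limitation the excerpt flags.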
-
Jan 17, 2024 | amazon.science | Xiangkun Hu | Dongyu Ru | Tamer H.M. Soliman
For all their remarkable abilities, large language models (LLMs) have an Achilles' heel: their tendency to hallucinate, or make assertions that sound plausible but are factually inaccurate. Sometimes these hallucinations can be quite subtle: an LLM might, for instance, make an assertion that's mostly accurate but gets a date wrong by just a year or two.
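As a toy illustration of how subtle such errors can be, and that they are still mechanically checkable against a reference, the sketch below flags years asserted in a claim that a reference text never mentions. This is a deliberately naive heuristic, not the detection method from the paper.

```python
import re

def unsupported_years(claim: str, reference: str) -> set[str]:
    """Flag four-digit years asserted in a claim that never appear in the
    reference text. A toy illustration of catching subtle date
    hallucinations -- not the paper's detection method."""
    def years(text: str) -> set[str]:
        return set(re.findall(r"\b(?:19|20)\d{2}\b", text))
    return years(claim) - years(reference)

reference = "The transistor was invented at Bell Labs in 1947."
claim = "The transistor was invented at Bell Labs in 1948."  # plausible, but off by a year
print(unsupported_years(claim, reference))  # {'1948'}: subtle, yet checkable
```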