-
Oct 16, 2024 | pubs.acs.org | Shuai Tang | Hao Wang | Huihui Zhang | Mingyu Zhang
-
Jun 13, 2024 | amazon.science | Shasha Li | Ming Du | Arnab Dhua | Shuai Tang
Vision-language transformer models play a pivotal role in e-commerce product search. When product description (e.g., product title) and product image pairs are used to train such models, the description often contains non-visual-descriptive text attributes, which makes visual-textual alignment challenging. We introduce MultiModal Learning with online Token Pruning (MML-TP).
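The snippet does not describe MML-TP's pruning criterion, so the following is only a minimal sketch of online text-token pruning, assuming a hypothetical score given by the attention mass each text token receives from the image tokens; all names and shapes are illustrative, not the paper's API.

```python
import torch

def prune_text_tokens(text_tokens, attn_scores, keep_ratio=0.5):
    """Keep the text tokens most attended to by image tokens.

    text_tokens: (B, T, D) text token embeddings
    attn_scores: (B, T) attention mass each text token receives from the image
    """
    _, T, D = text_tokens.shape
    k = max(1, int(T * keep_ratio))
    top = attn_scores.topk(k, dim=1).indices      # (B, k) indices of kept tokens
    idx = top.unsqueeze(-1).expand(-1, -1, D)     # broadcast over feature dim
    return text_tokens.gather(1, idx)             # (B, k, D)

# Toy usage: prune half of 8 text tokens per example.
tokens = torch.randn(2, 8, 16)
scores = torch.rand(2, 8)
pruned = prune_text_tokens(tokens, scores)        # shape (2, 4, 16)
```

Pruning online, inside the training loop, lets the kept token set change as the model's attention sharpens, rather than fixing a text filter up front.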
-
Jun 13, 2024 | amazon.science | Shuai Tang | Zhiwei Wu | Sergul Aydore | Michael Kearns
Recently, diffusion models have become popular tools for image synthesis due to their high-quality outputs. However, like other large models, they may leak private information about their training data. Here, we demonstrate a privacy vulnerability of diffusion models through a membership inference (MI) attack, which aims to identify whether a target example belongs to the training set when given the trained diffusion model.
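The abstract does not spell out the attack's mechanics; a common baseline for MI against diffusion models scores a candidate example by its denoising error and declares low-error examples likely members. A rough sketch under that assumption, with a DDPM-style forward step and a `model(x_t, t)` that predicts the added noise; these conventions are assumptions, not the paper's exact procedure.

```python
import torch

@torch.no_grad()
def diffusion_mi_score(model, x0, timesteps, alphas_cumprod):
    """Membership score = mean denoising error on the target example.

    Lower error suggests the example was seen during training.
    """
    errs = []
    for t in timesteps:
        a_bar = alphas_cumprod[t]
        noise = torch.randn_like(x0)
        x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise  # noise the example
        pred = model(x_t, t)                                  # predicted noise
        errs.append(torch.mean((pred - noise) ** 2).item())
    return sum(errs) / len(errs)

# Toy usage with a dummy noise predictor.
dummy = lambda x_t, t: torch.zeros_like(x_t)
x0 = torch.randn(1, 3, 8, 8)
a_bars = torch.linspace(0.99, 0.1, 10)
score = diffusion_mi_score(dummy, x0, timesteps=[0, 5, 9], alphas_cumprod=a_bars)
```

The attacker then thresholds the score, calibrating the threshold on examples known to be outside the training set.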
-
Jan 19, 2024 | amazon.science | Shuai Tang | Sergul Aydore | Michael Kearns | Aaron Thomas Roth
We revisit the problem of differentially private squared-error linear regression. We observe that existing state-of-the-art methods are sensitive to the choice of hyperparameters, including the "clipping threshold," which cannot be set optimally in a data-independent way. We give a new algorithm for private linear regression based on gradient boosting.
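The snippet does not give the algorithm, but one way to combine boosting with differential privacy for squared-error regression is to run many rounds in which per-example gradients are clipped, aggregated, and perturbed with Gaussian noise. The sketch below is a hypothetical, numpy-only illustration; the noise scale is a free parameter with no privacy accounting, and it is not the paper's exact method.

```python
import numpy as np

def dp_boosted_linear_regression(X, y, rounds=20, lr=0.1,
                                 clip=1.0, noise_scale=1.0, seed=0):
    """Boosting rounds of clipped, noised gradient aggregation (illustrative)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(rounds):
        residual = X @ w - y                            # (n,)
        grads = residual[:, None] * X                   # per-example grads (n, d)
        norms = np.linalg.norm(grads, axis=1, keepdims=True)
        grads = grads / np.maximum(1.0, norms / clip)   # clip each gradient
        g = grads.sum(axis=0) + rng.normal(0.0, noise_scale * clip, size=d)
        w -= lr * g / n                                 # boosted update
    return w
```

The intent of such a scheme is to spread the privacy budget over many weak updates so that the result is less brittle to any single clipping choice, which is the sensitivity the abstract calls out.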
-
Dec 13, 2023 | amazon.science | Zhiwei Wu | Shuai Tang | Sergul Aydore | Michael Kearns
Recently, diffusion models have demonstrated great potential for image synthesis due to their ability to generate high-quality synthetic data. However, when applied to sensitive data, privacy concerns have been raised about these models. In this paper, we evaluate the privacy risks of diffusion models through a membership inference (MI) attack, which aims to identify whether a target example is in the training set when given the trained diffusion model.
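A standard way to evaluate such an attack, regardless of its internals, is the true-positive rate at a strict false-positive budget. A small generic helper (not from the paper), assuming the convention that lower scores indicate members:

```python
import numpy as np

def tpr_at_fpr(member_scores, nonmember_scores, target_fpr=0.01):
    """True-positive rate of an MI attack at a fixed false-positive rate."""
    # Threshold below which only `target_fpr` of non-members fall.
    thresh = np.quantile(np.asarray(nonmember_scores), target_fpr)
    return float(np.mean(np.asarray(member_scores) < thresh))
```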
-
Dec 12, 2023 | amazon.science | Martin Lopez | Shuai Tang | Michael Kearns | Jamie Morgenstern
Machine learning (ML) has been strategic to Amazon since its early years. We are pioneers in areas such as recommendation engines, product search, e-commerce fraud detection, and large-scale optimization of fulfillment center operations. The Generative AI team helps AWS customers accelerate their use of generative AI to solve business and operational challenges and promote innovation in their organizations.
-
Nov 16, 2023 | amazon.science | Michael Kearns | Aaron Thomas Roth | Sergul Aydore | Shuai Tang
Aaron Roth is the Henry Salvatori Professor of Computer and Cognitive Science at the University of Pennsylvania and an Amazon Scholar. His research focuses on the algorithmic foundations of data privacy, algorithmic fairness, game theory, learning theory, and machine learning. Together with Cynthia Dwork, he is the author of the book The Algorithmic Foundations of Differential Privacy; together with Michael Kearns, he is the author of The Ethical Algorithm.
-
Jul 7, 2023 | arxiv.org | Zhiwei Steven Wu | Shuai Tang | Michael Kearns | Jamie Morgenstern
Membership inference attacks are designed to determine, using black-box access to trained models, whether a particular example was used in training. Membership inference can be formalized as a hypothesis testing problem.
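Under the hypothesis-testing view, the attacker compares how likely the observed score is under a "member" model versus a "non-member" model and applies a likelihood-ratio test. A toy sketch assuming Gaussian score distributions; the distributional form and threshold are illustrative assumptions, not the paper's construction.

```python
from scipy.stats import norm

def lrt_member(score, mu_in, sigma_in, mu_out, sigma_out, tau=1.0):
    """Likelihood-ratio test: H1 = member N(mu_in, sigma_in),
    H0 = non-member N(mu_out, sigma_out)."""
    ratio = norm.pdf(score, mu_in, sigma_in) / norm.pdf(score, mu_out, sigma_out)
    return ratio > tau  # declare "member" when the ratio exceeds tau

# Toy usage: members tend to have lower loss than non-members.
print(lrt_member(0.4, mu_in=0.5, sigma_in=0.1, mu_out=1.0, sigma_out=0.2))
```

By the Neyman-Pearson lemma, thresholding this ratio is the most powerful test at a given false-positive rate, which is why the hypothesis-testing framing is useful for analyzing attacks.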
-
Jun 22, 2023 | dovepress.com | Yaodan Bi | Yingchao Zhu | Shuai Tang
Journal of Pain Research, Volume 2023:16, Pages 4317-4328. DOI: https://doi.org/10.2147/JPR.S424086. Received 22 June 2023; accepted for publication 23 November 2023; published 18 December 2023. Authors: Yaodan Bi, Yingchao Zhu, Shuai Tang, Department of...