
Ariel Kwiatkowski
Articles
-
Aug 13, 2024 | lesswrong.com | Steven Byrnes | Ariel Kwiatkowski
This is a polemic responding to the ten arguments post. I'm not a regular LW poster, but I'm an AI researcher and mild AI-worrier. I believe that AI progress, and the risks associated with it, is one of the most important things for humanity to figure out right now. And yet, in most discussions about x-risk, I find myself unaligned with either side.
-
May 2, 2024 | lesswrong.com | Ariel Kwiatkowski
I'm a daily user of ChatGPT, sometimes supplementing it with Claude, and the occasional local model for some experiments. I try to squeeze LLMs into agent-shaped bodies, but it doesn't really work. I also have a PhD, which would typically make me an expert in the field of AI, but the field is so busy and dynamic that it's hard to really state what an "expert" even is.
-
Mar 7, 2024 | lesswrong.com | Nathan Helm-Burger | Ariel Kwiatkowski
This is a republication of a previous post after the earlier version went through heavy editing, updates, and changes. The text has been expanded, and content has been moved around, added, and deleted. Major updates for those who read the previous draft: Added - 1.0 SOTA AI, 5.1 Zeroth Order Forecasting, 5.2 First Order Forecasting, 4.4 Scaling Hypotheses, Appendices 1, 2, 3. Expanded - 2.0 Foundation Models all sections, 3.1. Capabilities vs.
-
Apr 24, 2023 | lesswrong.com | Steven Byrnes | Quintin Pope | Jon Garcia | Ariel Kwiatkowski
Crossposted from my personal blog. Epistemic status: Pretty speculative, but there is a surprising amount of circumstantial evidence. I have been increasingly thinking about NN representations and slowly coming to the conclusion that they are (almost) completely secretly linear inside.