
M. Y. Zuo
Articles
-
Apr 4, 2024 |
lesswrong.com | Gerald M. Monroe | Victor Ashioya | M. Y. Zuo | Nora Belrose
Summary: The moderators appear to be soft-banning users with 'rate-limits' and without feedback. A careful review of each banned user reveals that it is common to be banned despite earnestly attempting to contribute to the site. Some of the most intelligent banned users hold mainstream rather than EA views on AI. Note how the punishment lengths are all the same; I think it was a mass ban-wave of 3-week bans. Gears to ascension was here but is no longer; I guess she convinced them it was a mistake.
-
Jan 28, 2024 |
lesswrong.com | Nathan Helm-Burger | Carl Feynman | M. Y. Zuo
Inspired by Milan Cvitkovic’s article, Things You’re Allowed to Do. Going to the dentist can be uncomfortable. Some amount of this is unavoidable. Yet most dentists and staff care a lot about patient comfort. Tell them what you need, and you may very well get it! The hardest part is figuring out what’s on the menu. Below are some items that I’ve discovered.
-
Jan 2, 2024 |
lesswrong.com | Joe Carlsmith | Charlie Steiner | M. Y. Zuo | Nathan Young
(Cross-posted from my website. Audio version here, or search "Joe Carlsmith Audio" on your podcast app. This is the first essay in a series that I’m calling “Otherness and control in the age of AGI.” See here for more about the series as a whole.) When species meet: The most succinct argument for AI risk, in my opinion, is the “second species” argument. Basically, it goes like this. Premise 1: AGIs would be like a second advanced species on earth, more powerful than humans. Conclusion: That’s scary.
-
Dec 30, 2023 |
lesswrong.com | Ben Pace | Light Star | M. Y. Zuo
Or “How I got my hyperanalytical friends to chill out and vibe on ideas for 5 minutes before testing them to destruction.” Sometimes talking with my friends is like intellectual combat, which is great. I am glad I have such strong cognitive warriors on my side. But not all ideas are ready for intellectual combat. If I don’t get my friends on board with this, some of them will crush an idea before it gets a chance to develop, which feels awful and can kill off promising avenues of investigation.
-
Nov 27, 2023 |
lesswrong.com | Steven Byrnes | Zach Stein-Perlman | M. Y. Zuo | Thane Ruthenis
Summary: You can’t optimise an allocation of resources if you don’t know what the current one is. Existing maps of alignment research are mostly too old to guide you, and the field has nearly no ratchet: no common knowledge of what everyone is doing and why, what is abandoned and why, what is renamed, what relates to what, what is going on. This post is mostly just a big index: a link-dump for as many currently active AI safety agendas as we could find. But even a link-dump is plenty subjective.