
Tamsin Leake
Articles
-
Aug 31, 2024 | lesswrong.com | Tamsin Leake
Malignancy in the prior seems like a strong crux of the goal-design part of alignment to me.
-
May 2, 2024 | lesswrong.com | Tamsin Leake
Basically all ideas/insights/research about AI are potentially exfohazardous. At least, it's pretty hard to know when some idea/insight/research will actually make things better; especially in a world where building an aligned superintelligence (let's call this work "alignment") is quite a bit harder than building any superintelligence (let's call this work "capabilities"), and a lot more people are trying to do the latter than the former, with a lot more material resources.
-
Apr 23, 2024 | lesswrong.com | Tamsin Leake
If you're reading this, it's possible you just found yourself switched to the Enriched tab. Congratulations! You were randomly assigned to a group of users automatically switched to the new posts list. The Enriched posts list is 50% the same algorithm as Latest and 50% posts selected for you by an ML algorithm based on your post interaction history. The sparkle icon next to the post title marks which posts were the result of personalized recommendations.
-
Mar 14, 2024 | lesswrong.com | Euan McLean | Seth Herd | Siao Si Looi | Tamsin Leake
Doing a PhD is a strong option for getting great at developing and evaluating research ideas. These skills are necessary to become an AI safety research lead, one of the key talent bottlenecks in AI safety, and are helpful in a variety of other roles. By contrast, my impression is that many individuals who currently aim to become research leads pursue options like independent research or engineering-focused positions instead of doing a PhD.
-
Dec 5, 2023 | lesswrong.com | Tamsin Leake | Thane Ruthenis | Nathan Helm-Burger | Roman Leventov
Except China doesn't want to die any more than the US does, and there isn't, in principle, any reason the Chinese government can't be convinced of the seriousness of the danger. They would believe in the reality of an asteroid on a collision course with Earth if shown the evidence, and the AGI threat is no less real. The belief currently held by many people is that future AI can be controlled.