
Nathan Helm-Burger
Articles
-
Nov 18, 2024 |
lesswrong.com | Nathan Helm-Burger
Holden proposed the idea of if-then planning for AI safety. I think this is potentially a very good idea, depending on the implementation details. I've heard criticisms that the If-Then style of planning is inherently reactive rather than proactive. I think this is not necessarily true, and want to make a case for the value of proactive if-then planning. Reactive: First, let's look at reactive if-then planning.
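To make the "reactive" flavor concrete, here is a minimal sketch of if-then commitments as pre-committed responses that fire only after an evaluation crosses a threshold. This is an illustration under my own assumptions, not the post's or Holden's actual framework; the eval names, thresholds, and responses are hypothetical.

```python
# Minimal sketch of reactive if-then planning: pre-committed responses that
# trigger only once an evaluation result crosses a threshold.
# All eval names, thresholds, and responses below are hypothetical examples.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class IfThenCommitment:
    description: str                              # human-readable trigger statement
    trigger: Callable[[Dict[str, float]], bool]   # predicate over eval results
    response: str                                 # pre-committed action if it fires


commitments = [
    IfThenCommitment(
        description="Model exceeds 50% on a hypothetical autonomy eval",
        trigger=lambda evals: evals.get("autonomy_eval", 0.0) > 0.5,
        response="Pause further scaling and require an external audit",
    ),
    IfThenCommitment(
        description="Model exceeds 20% on a hypothetical bio-uplift eval",
        trigger=lambda evals: evals.get("bio_uplift_eval", 0.0) > 0.2,
        response="Restrict deployment to vetted researchers",
    ),
]


def check_commitments(eval_results: Dict[str, float]) -> List[str]:
    """Return the responses whose triggers fire for these eval results (reactive use)."""
    return [c.response for c in commitments if c.trigger(eval_results)]


if __name__ == "__main__":
    # Reactive: run after an evaluation round and act on whatever fired.
    print(check_commitments({"autonomy_eval": 0.6, "bio_uplift_eval": 0.1}))
```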
-
Nov 2, 2024 |
lesswrong.com | Nathan Helm-Burger
In the three-and-a-bit years since I left my successful career as a data scientist and machine learning engineer, I've been applying to various AI alignment/safety positions and getting turned down. Job application rejections generally don't come with any feedback, so I'm left rather in the dark about what the reasons might be. I was a highly competent and valuable employee at my previous jobs, and my managers and the C-suite of those companies would happily confirm this if asked.
-
Oct 28, 2024 |
lesswrong.com | Nathan Helm-Burger
"Each one of us, and also us as the current implementation of humanity are going to be replaced. Persistence in current form is impossible. It's impossible in biology; every species will either die out or it will change and adapt, in which case it is again not the same species.
-
Oct 8, 2024 |
lesswrong.com | Peter McCluskey | Nathan Helm-Burger | Roger Dearnaley | StartAtTheEnd
How can we make many humans who are very good at solving difficult problems? Summary (table of made-up numbers): I made up the made-up numbers in this table of made-up numbers; therefore, the numbers in this table of made-up numbers are made-up numbers. Call to action: If you have a shitload of money, there are some projects you can give money to that would make supergenius humans on demand happen faster.
-
Sep 21, 2024 |
lesswrong.com | Nathan Helm-Burger | Hector Perez Arenas
Background: For background on YouCongress.com, see this post by Hector Perez Arenas. I love this general concept, and have a lot of ideas for how this implementation could be expanded. I'm hoping that writing out some of my ideas might inspire someone to jump in and contribute code. I would myself if I didn't feel full to the brim trying to work on AI alignment / control / safety ideas.