-
Nov 14, 2024 |
lesswrong.com | Eric Neyman
[Cross-posted from my blog. I think LessWrong readers would find the discussion of to be the most interesting section of this post.] I spent most of my election day -- 3pm to 11pm Pacific time -- trading on Manifold Markets. That went about as well as it could have gone. I doubled the money I was trading with, jumping to 10th place on Manifold's all-time leaderboard. Spending my time trading instead of just nervously watching results come in also spared me emotionally.
-
Nov 14, 2024 |
ericneyman.wordpress.com | Eric Neyman
I spent most of my election day — 3pm to 11pm Pacific time — trading on Manifold Markets. That went about as well as it could have gone. I doubled the money I was trading with, jumping to 10th place on Manifold’s all-time leaderboard. Spending my time trading instead of just nervously watching results come in also spared me emotionally. It’s been a week now, and people seem to be in a mood for learning lessons, for grand takeaways. There is, of course, virtue in learning the right lessons.
-
Oct 7, 2024 |
lesswrong.com | Eric Neyman
Last week, ARC released a paper called Towards a Law of Iterated Expectations for Heuristic Estimators, which follows up on previous work on formalizing the presumption of independence. Most of the work described here was done in 2023. A brief table of contents for this post: what heuristic estimators are (one example and three analogies); why they might be useful (three potential applications); and a law of iterated expectations for heuristic estimation (the technical meat of the paper).
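(An aside, not from the post itself: the classical law of iterated expectations that the title alludes to says that, for random variables X and Y,

\[ \mathbb{E}\bigl[\, \mathbb{E}[X \mid Y] \,\bigr] = \mathbb{E}[X], \]

i.e., averaging a conditional estimate over the information it conditions on recovers the unconditional estimate. Roughly speaking, the paper asks for an analogous property for heuristic estimators: supplying the estimator with additional arguments should not change its estimate in a predictable direction.)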
-
Oct 2, 2024 |
arxiv.org | Jacob K. Hilton | Eric Neyman | Mark Xu | Andrea Lincoln
-
Sep 27, 2024 |
lesswrong.com | Eric Neyman
It seems that China may be going through a recession. It's hard to tell because we can't really trust government data, but my vague impression (which you shouldn't trust very much) is that China is in a pretty bad economic position. Source: Financial Times (though see here for caveats). What implications (if any) does this have for AGI development in China?
-
May 27, 2024 |
ericneyman.wordpress.com | Eric Neyman
Yesterday, I had a coronectomy: the top halves of my bottom wisdom teeth were surgically removed. It was my first time being sedated, and I didn’t know what to expect. While I was unconscious during the surgery, the hour after surgery turned out to be a fascinating experience, because I was completely lucid but had almost zero short-term memory. My girlfriend, who had kindly agreed to accompany me to the surgery, was with me during that hour.
-
May 9, 2024 |
lesswrong.com | Eric Neyman
In March I posted a very short description of my PhD thesis, Algorithmic Bayesian Epistemology, on LessWrong. I've now written a more in-depth summary for my blog, Unexpected Values. Here's the full post:

***

In January, I defended my PhD thesis. My thesis is called Algorithmic Bayesian Epistemology, and it’s about predicting the future. In many ways, the last five years of my life have been unpredictable.
-
May 3, 2024 |
lesswrong.com | Eric Neyman
Yesterday, I had a coronectomy: the top halves of my bottom wisdom teeth were surgically removed. It was my first time being sedated, and I didn’t know what to expect. While I was unconscious during the surgery, the hour after surgery turned out to be a fascinating experience, because I was completely lucid but had almost zero short-term memory. My girlfriend, who had kindly agreed to accompany me to the surgery, was with me during that hour.
-
Apr 25, 2024 |
lesswrong.com | Eric Neyman
I think that people who work on AI alignment (including me) have generally not put enough thought into the question of whether a world where we build an aligned AI is better by their values than a world where we build an unaligned AI. I'd be interested in hearing people's answers to this question. Or, if you want more specific questions: By your values, do you think a misaligned AI creates a world that "rounds to zero", or still has substantial positive value?
-
Mar 16, 2024 |
lesswrong.com | Eric Neyman
In January, I defended my PhD thesis, which I called Algorithmic Bayesian Epistemology. From the preface: For me as for most students, college was a time of exploration. I took many classes, read many academic and non-academic works, and tried my hand at a few research projects. Early in graduate school, I noticed a strong commonality among the questions that I had found particularly fascinating: most of them involved reasoning about knowledge, information, or uncertainty under constraints.