Rafael Harth

Featured in:

Articles

  • Dec 19, 2023 | lesswrong.com | Tracing Woodgrains | Paul Crowley | David Hornbein | Rafael Harth

    Picture a scene: the New York Times is releasing an article on Effective Altruism (EA) with an express goal to dig up every piece of negative information they can find. They contact Émile Torres, David Gerard, and Timnit Gebru, collect evidence about Sam Bankman-Fried, the OpenAI board blowup, and Pasek's Doom, start calling Astral Codex Ten (ACX) readers to ask them about rumors they'd heard about affinity between Effective Altruists, neoreactionaries, and something called TESCREAL.

  • Dec 18, 2023 | lesswrong.com | Steven Byrnes | Rafael Harth

    Part of the Valence series. Here in the final post of the Valence series, I will discuss how valence might shed light on three phenomena in mental health and personality: depression, mania, and narcissistic personality disorder. Section 5.2 gives some context: What kind of relationship do we expect a priori between algorithm-level mental components like “valence”, versus observable mental health syndromes and personality disorders?

  • Dec 15, 2023 | lesswrong.com | Steven Byrnes | Rafael Harth

    Great post! I love the whole valence sequence! I agree with the central thesis 4.2 that social status is defined by Person X's perception of the positive feelings Person Y has towards Person Z. I also agree with the elaborations in 4.3 and 4.4. I have a tangent question about 4.3 and 4.6 that I comment on separately. But I think your innate status-drive claim of 4.5 doesn't hold, and the convergent learning (plus other factors) is the more likely explanation.

  • Jul 2, 2023 | lesswrong.com | Steven Byrnes | Rafael Harth | Charlie Steiner | Richard Kennaway

    I have a simple, yet unusual, explanation for the difference between camp #1 and camp #2: we have different experiences of consciousness. Believing that everyone has our kind of consciousness, of course we talk past each other. I've noticed that in conversations about qualia, I'm always in the position of Mr. Boldface in the example dialog: I don't think there is anything that needs to be explained, and I'm puzzled that nobody can tell me what qualia are using sensible words.

  • Apr 4, 2023 | lesswrong.com | Steven Byrnes | Rafael Harth | Thane Ruthenis | Nathan Helm-Burger

    It has become common on LW to refer to "giant inscrutable matrices" as a problem with modern deep-learning systems. To clarify: deep learning models are trained by creating giant blocks of random numbers -- blocks with dimensions like 4096 x 512 x 1024 -- and incrementally adjusting the values of these numbers with stochastic gradient descent (or some variant thereof). In raw form, these giant blocks of numbers are of course completely unintelligible.
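    The training loop described in the excerpt can be sketched in a few lines. This is a hedged, miniature illustration, not the excerpt's actual setup: the weight block here is 4 x 5 rather than 4096 x 512 x 1024, the model and loss are invented for demonstration, and plain gradient descent stands in for a stochastic variant.

    ```python
    import numpy as np

    # Start with a block of random numbers (a miniature "giant
    # inscrutable matrix") and incrementally adjust its values
    # with gradient descent, as the excerpt describes.
    rng = np.random.default_rng(0)
    W = rng.normal(size=(4, 5))   # randomly initialized weight block
    x = rng.normal(size=5)        # one toy input example
    y_target = 1.0                # desired scalar output
    lr = 0.01                     # learning rate

    for _ in range(500):
        y = (W @ x).sum()         # forward pass of a toy linear model
        # Gradient of the squared-error loss (y - y_target)^2 w.r.t. W:
        # dL/dW[i][j] = 2 * (y - y_target) * x[j]
        grad = 2 * (y - y_target) * np.ones((4, 1)) @ x[None, :]
        W -= lr * grad            # the incremental adjustment step

    print((W @ x).sum())          # converges toward y_target
    ```

    The point survives at any scale: the final values of `W` fit the objective, but reading the raw numbers tells you nothing about *how*, which is what makes the full-size blocks "inscrutable."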
