Articles

  • 2 weeks ago | lesswrong.com | Katja Grace | Adam Kaufman | Knight Lee | Logan Riggs

    I'm very skeptical that a one-year pause would meaningfully reduce this 4% risk. This skepticism arises partly because I doubt much productive safety research would actually happen during such a pause. In my view, effective safety research depends heavily on an active feedback loop between technological development and broader real-world applications and integration, and pausing the technology would essentially interrupt this feedback loop.

  • Sep 4, 2024 | lesswrong.com | Katja Grace

    Recently, Nathan Young and I wrote about arguments for AI risk and put them on the AI Impacts wiki. In the process, we ran a casual little survey of the American public regarding how they feel about the arguments, initially (if I recall) just because we were curious whether the arguments we found least compelling would also fail to compel a wide variety of people. The results were very confusing, so we ended up thinking more about this than initially intended and running four iterations total.

  • Jun 27, 2024 | lesswrong.com | Katja Grace

    A general thing I hadn’t noticed about debts until lately: whenever Bob owes Alice, Alice has reason to look after Bob, to the extent that doing so increases the chance he satisfies the debt. Yet at the same time, Bob has an incentive for Alice to disappear, insofar as it would relieve him. These might be tiny incentives, and may not overwhelm, for instance, Bob’s many reasons for not wanting Alice to disappear. But the bigger the debt, the more relevant the incentives.

  • Jun 23, 2024 | lesswrong.com | Katja Grace

    For those of you who enjoy learning things by listening in on numerous slightly different conversations about them, and who also want to learn more about this AI survey I led: three more podcasts on the topic, and also other topics. The AGI Show: audio, video (other topics include: my own thoughts about the future of AI and my path into AI forecasting). Consistently Candid: audio (other topics include: whether we should slow down AI progress, the best arguments for and against existential...

  • Jun 9, 2024 | lesswrong.com | Katja Grace

    It’s interesting to me how chill people sometimes are about the non-extinction future AI scenarios. Like, there seem to be opinions around along the lines of “pshaw, it might ruin your little sources of ‘meaning’, Luddite, but we have always had change and as long as the machines are pretty near the mark on rewiring your brain it will make everything amazing”.
