
Adele Lopez


Articles

  • Mar 10, 2024 | lesswrong.com | Adele Lopez | Alexander Gietelink Oldenziel | Nate Showell

    Truth values in classical logic have more than one interpretation. In 0th Person Logic, the truth values are interpreted as True and False. In 1st Person Logic, the truth values are interpreted as Here and Absent relative to the current reasoner. Importantly, these are both useful modes of reasoning that can coexist in a logical embedded agent.

  • Feb 11, 2024 | lesswrong.com | Adele Lopez | Charlie Steiner

    There's nothing stopping the AI from developing its own world model (or if there is, it's not intelligent enough to be much more useful than whatever process created your starting world model). This will allow it to model itself in more detail than you were able to put in, and to optimize its own workings as is instrumentally convergent. This will result in an intelligence explosion due to recursive self-improvement.

  • Jan 27, 2024 | lesswrong.com | Adele Lopez | Seth Herd

    It's important to remember that the culture we grew up in is deeply nihilistic at its core. People expect Moloch, assume Moloch as a given, even defer to Moloch. If you read enough about business and international affairs (not news articles, those don't count, not for international affairs at least, I don't know about business), and then read about dath ilan, it becomes clear that our world is ruled by Moloch cultists who nihilistically optimized for career advancement.

  • Dec 12, 2023 | lesswrong.com | Seth Herd | Adele Lopez | Scott Alexander | William C Stanford

    TL;DR version: In the course of my life, there have been a handful of times I discovered an idea that changed the way I thought about where our species is headed. The first occurred when I picked up Nick Bostrom’s book “Superintelligence” and realized that AI would utterly transform the world. The second was when I learned about embryo selection and how it could change future generations.

  • Jul 8, 2023 | lesswrong.com | Mateusz Bagiński | Adele Lopez | Roman Leventov | Ben Pace

    Summary: AGI isn't super likely to come super soon. People should be working on stuff that saves humanity in worlds where AGI comes in 20 or 50 years, in addition to stuff that saves humanity in worlds where AGI comes in the next 10 years. Thanks to Alexander Gietelink Oldenziel, Abram Demski, Daniel Kokotajlo, Cleo Nardo, Alex Zhu, and Sam Eisenstat for related conversations.

