
Mateusz Bagiński

Featured in:

Articles

  • May 31, 2024 | lesswrong.com | Mateusz Bagiński

    It is written in More Dakka: If something is a good idea, you need a reason to not try doing more of it. Taken at face value, it implies the contrapositive: If something is a bad idea, you need a reason to not try doing less of it. Labels/concepts, such as More Dakka, Inadequate Equilibria, etc., point to a puzzling phenomenon.

  • Mar 20, 2024 | lesswrong.com | Mateusz Bagiński

    I'm especially interested in examples of more or less psychologically healthy and otherwise (neuro)typical people having very weird desires/values that we would characterize as intrinsic in the sense of being wanted for their own sake, even if we could explain their development as linked to a more typical human drive. But I'm also somewhat interested in examples of very out-of-distribution desires/values in very [otherwise psychologically out-of-distribution] people.

  • Feb 26, 2024 | lesswrong.com | Vojta Kovarik | Mateusz Bagiński

    Assuming that there is an "alignment homework" to be done, I am tempted to answer something like: AI can do our homework for us, but only if we are already in a position where we could solve that homework even without AI. An important disclaimer is that perhaps there is no "alignment homework" that needs to get done ("alignment by default", "AGI being impossible", etc.).

  • Dec 31, 2023 | lesswrong.com | Vanessa Kosoy | Roger Dearnaley | Roman Leventov | Mateusz Bagiński

    I call "alignment strategy" the high-level approach to solving the technical problem. For example, value learning is one strategy, while delegating alignment research to AI is another. I call "alignment metastrategy" the high-level approach to converging on solving the technical problem in a manner which is timely and effective. (Examples will follow.)In a previous article, I summarized my criticism of prosaic alignment. However, my analysis of the associated metastrategy was too sloppy.

  • Dec 25, 2023 | lesswrong.com | Dave Orr | Mateusz Bagiński | Vojta Kovarik

    Given how fast AI is advancing and all the uncertainty associated with that (unemployment, potential international conflict, x-risk, etc.), do you think it's a good idea to have a baby now? What factors would you take into account (e.g. age)? Today I saw a tweet by Eliezer Yudkowsky that made me think about this: "When was the last human being born who'd ever grow into being employable at intellectual labor? 2016?

