
Vojta Kovarik

Articles

  • Jun 14, 2024 | lesswrong.com | Vojta Kovarik

    I am looking for examples of theories that we now know to be correct, but that would have been unfalsifiable in a slightly different context --- e.g., in the past, or in hypothetical scenarios. (Unsurprisingly, this is motivated by the unfalsifiability of some claims around AI X-risk.)

  • Apr 3, 2024 | lesswrong.com | Vojta Kovarik

    I think there is an important lack of clarity and shared understanding regarding how people intend to use AI-Safety-via-Debate-style approaches. So I think it would be helpful if there were some people --- who either (i) work on Debate or (ii) believe that Debate is promising --- who could give their answers to the following three questions: What is, according to you, the purpose of AI Debate? What problem is it supposed to be solving? How do you intend AI Debate to be used?

  • Feb 26, 2024 | lesswrong.com | Vojta Kovarik | Mateusz Bagiński

    Assuming that there is an "alignment homework" to be done, I am tempted to answer something like: AI can do our homework for us, but only if we are already in a position where we could solve that homework even without AI. An important disclaimer is that perhaps there is no "alignment homework" that needs to get done ("alignment by default", "AGI being impossible", etc).

  • Dec 25, 2023 | lesswrong.com | Dave Orr | Mateusz Bagiński | Vojta Kovarik

    Given how fast AI is advancing and all the uncertainty associated with that (unemployment, potential international conflict, x-risk, etc.), do you think it's a good idea to have a baby now? What factors would you take into account (e.g. age)? Today I saw a tweet by Eliezer Yudkowsky that made me think about this: "When was the last human being born who'd ever grow into being employable at intellectual labor? 2016?"

  • Nov 7, 2023 | lesswrong.com | Vojta Kovarik | Roman Leventov

    The box inversion hypothesis is a proposed correspondence between problems with AI systems studied in approaches like agent foundations, and problems with AI ecosystems studied in views on AI safety that expect multipolar, complex worlds, such as CAIS. This is an updated and improved introduction to the idea. Cartoon explanation: In the classic "superintelligence in a box" picture, we worry about an increasingly powerful AGI, which we imagine as contained in a box.
