Lukas Finnveden

Featured in:

Articles

  • 1 week ago | 80000hours.org | Robert Wiblin | Lukas Finnveden

    Once we get to a world where it is technologically possible to replace those researchers with AI systems — which could just be fully obedient, instruction-following AI systems — then you could feasibly have a situation where there’s just one person at the top of the organisation that gives a command: “This is how I want the next AI system to be developed.” And then this army of loyal, obedient AIs will do all of the technical work.

  • Aug 23, 2024 | lesswrong.com | Lukas Finnveden

    This post gives my personal take on “AI for epistemics” and how important it might be to work on. Some background context: AI capabilities are advancing rapidly and I think it’s important to think ahead and prepare for the possible development of AI that could automate almost all economically relevant tasks that humans can do. That kind of AI would have a huge impact on key epistemic processes in our society.

  • Jan 24, 2024 | lesswrong.com | Feeping Creature | Seth Herd | Lukas Finnveden | Rohin Shah

    I agree it helps to run experiments at small scales first, but I'd be pretty surprised if that helped to the point of enabling a 30x speedup — that means that the AI labor allows you to get a 30x improvement in compute needed beyond what would be done by default by humans (though the 30x can include e.g. improving utilization; it's not limited just to making individual experiments take less time).

  • Jan 4, 2024 | lesswrong.com | Lukas Finnveden | Roger Dearnaley

    This is a series of posts with lists of projects that it could be valuable for someone to work on. The unifying theme is that they are projects that: Would be especially valuable if transformative AI is coming in the next 10 years or so. Are not primarily about controlling AI or aligning AI to human intentions. Most of the projects would be valuable even if we were guaranteed to get aligned AI. Some of the projects would be especially valuable if we were inevitably going to get misaligned AI.

  • Oct 25, 2023 | lesswrong.com | Steven Byrnes | Thomas Kwa | Lukas Finnveden | Rob Bensinger

    AI used to be a science. In the old days (back when AI didn't work very well), people were attempting to develop a working theory of cognition. Those scientists didn’t succeed, and those days are behind us. For most people working in AI today and dividing up their work hours between tasks, gone is the ambition to understand minds.
