
Robert Kralisch
Articles
-
Jan 21, 2025 |
lesswrong.com | Linda Linsefors | Robert Kralisch
We still need more funding to be able to run another edition. Our fundraiser has raised $6k so far and ends on February 1st; if it doesn't reach the $15k minimum by then, it will fail. We need proactive donors. If we don't get funded this time, there is a good chance we will move on to different work in AI Safety and to new commitments. This would make it much harder to reassemble the team to run future AISCs, even if the funding situation improves.
-
Oct 26, 2024 |
lesswrong.com | Robert Kralisch | Linda Linsefors
We are pleased to announce that the 10th edition of the AI Safety Camp is now entering the team member application phase! We again have a wide range of projects this year, so check them out to see if you or someone you know might be interested in applying to join one of them. You can find all of the projects and the application form on our website, or apply directly here. The deadline for team member applications is November 17th (Sunday).
-
Jul 24, 2024 |
lesswrong.com | Robert Kralisch
A while ago, I published a conceptual intro to a cognitive architecture I am working on, the "Prop-room and Stage Cognitive Architecture" (or PSCA for short). I recently had a stimulating research chat about it with Claude and wanted to try sharing it, since a dialogue format is a potentially easy way for me to share more of my writing and thinking.
-
Apr 28, 2024 |
lesswrong.com | Robert Kralisch
This is a post on a novel cognitive architecture I have been thinking about for a while now: first as a conceptual playground for concretising some of my agent foundations ideas, and lately as a project that approaches the Alignment Problem directly. The idea is a sort of AI-Seed approach: an inherently interpretable yet generally intelligent architecture on which to implement a notion of "minimal alignment" before figuring out how to scale it.
-
Apr 28, 2024 |
lesswrong.com | Robert Kralisch
I have thought quite a bit about LLMs and the Simulator framing around them, and it seems pretty obvious to me that "they are simulators" is a good explanatory/predictive frame for the behavior of current LLM (+multimodal) systems. It provides intuitive answers for why the difficulties around capability assessment for LLMs exist, and outlines somewhat coherent explanations of their out-of-distribution behaviors (e.g. Bing Sydney).