
David Lorell
Articles
-
Jan 24, 2025 | lesswrong.com | David Lorell
The Cake

Imagine that I want to bake a chocolate cake, and my sole goal in my entire lightcone and extended mathematical universe is to bake that cake. I care about nothing else. If the oven ends up a molten pile of metal ten minutes after the cake is done, if the leftover eggs are shattered and the leftover milk spilled, that’s fine. Baking that cake is my terminal goal. In the process of baking the cake, I check my fridge and cupboard for ingredients.
-
Oct 14, 2024 | lesswrong.com | David Lorell
Suppose two Bayesian agents are presented with the same spreadsheet - IID samples of data in each row, a feature in each column. Each agent develops a generative model of the data distribution. We'll assume the two converge to the same predictive distribution, but may have different generative models containing different latent variables.
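As a concrete illustration of this setup (a minimal sketch of my own, not from the post; the mixture weights and coin biases are arbitrary assumptions chosen so the marginals agree):

```python
# Two generative models of a single binary feature X. Model A posits a
# latent variable Z; Model B posits none. Both assign identical
# probabilities to every observable outcome. All numbers are illustrative.

def model_a_predictive(x: int) -> float:
    """Mixture model: latent Z ~ Bernoulli(0.5) selects a coin bias for X."""
    p_z = {0: 0.5, 1: 0.5}            # prior over the latent variable
    p_x1_given_z = {0: 0.3, 1: 0.9}   # P(X=1 | Z=z) for each component
    # Marginalize out the latent: P(X=1) = sum_z P(Z=z) * P(X=1 | Z=z) = 0.6
    p1 = sum(p_z[z] * p_x1_given_z[z] for z in (0, 1))
    return p1 if x == 1 else 1.0 - p1

def model_b_predictive(x: int) -> float:
    """Latent-free model: X ~ Bernoulli(0.6), with no hidden variable at all."""
    p1 = 0.6
    return p1 if x == 1 else 1.0 - p1

# The two models agree on every observable probability, so no amount of
# IID data can distinguish them - yet their latent variables differ.
assert all(abs(model_a_predictive(x) - model_b_predictive(x)) < 1e-12 for x in (0, 1))
```

Since the two predictive distributions agree exactly, the data alone cannot favor one latent structure over the other.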
-
Oct 9, 2024 | lesswrong.com | David Lorell
Imagine a TV showing a video of a bizarre, unfamiliar object - let’s call it a squirgle. The video was computer generated by a one-time piece of code, so there's no "real squirgle" somewhere else in the world which the video is showing. Nonetheless, there's still some substantive sense in which the squirgle on screen is "a thing" - even though I can only ever see it through the screen, I can still notice that the same squirgle is shown at one time and another time.
-
Sep 19, 2024 | lesswrong.com | David Lorell
Background: “Learning” vs “Learning About”

Adaptive systems, reinforcement “learners”, etc., “learn” in the sense that their behavior adapts to their environment. Bayesian reasoners, human scientists, etc., “learn” in the sense that they have some symbolic representation of the environment, and they update those symbols over time to (hopefully) better match the environment (i.e. make the map better match the territory). These two kinds of “learning” are not synonymous.
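As a toy contrast between the two senses (my own sketch; the environment, names, and update rules here are assumptions, not from the post):

```python
# "Learning" as behavioral adaptation vs. "learning about" as map-updating,
# in a toy environment where action 1 pays off with hidden probability 0.7.
import random

random.seed(0)
TRUE_BIAS = 0.7  # hidden environment parameter: P(reward | action = 1)

# Sense 1 - "learning": a learning automaton (linear reward-inaction rule).
# Its action propensity adapts to the environment, but it holds no
# representation of TRUE_BIAS; there is nothing its state is "about".
propensity = 0.5
for _ in range(2000):
    action = 1 if random.random() < propensity else 0
    reward = 1 if (action == 1 and random.random() < TRUE_BIAS) else 0
    if reward:
        propensity += 0.05 * (action - propensity)  # reinforce the rewarded action

# Sense 2 - "learning about": a Bayesian reasoner with an explicit symbolic
# map - Beta(a, b) beliefs over TRUE_BIAS - updated to better match the territory.
a, b = 1.0, 1.0
for _ in range(2000):
    outcome = 1 if random.random() < TRUE_BIAS else 0
    a, b = a + outcome, b + (1 - outcome)

print(f"adapted action propensity: {propensity:.2f}")        # a behavior, not a belief
print(f"posterior mean for TRUE_BIAS: {a / (a + b):.2f}")    # a belief about the world
```

The first agent ends up acting well without representing anything; the second ends up with an accurate map it need never act on.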
-
Aug 22, 2024 | lesswrong.com | David Lorell
Meta: This post is a relatively rough dump of some recent research thoughts; it’s not one of our more polished posts, in terms of either clarity or rigor. You’ve been warned. The Interoperable Semantics post and the Solomonoff Inductor Walks Into A Bar post each tackled the question of how different agents in the same world can coordinate on an ontology, so that language can work at all given only a handful of example usages of each word (similar to e.g. children learning new words).