Ege Erdil

Featured in: arxiv.org

Articles

  • Mar 18, 2024 | lesswrong.com | Ege Erdil

    A new paper by Finlayson et al. describes how to exploit the softmax bottleneck in large language models to infer the model dimension of closed-source LLMs served to the public via an API. I'll briefly explain the method they use to achieve this and provide a toy model of the phenomenon, though the full paper has many practical details I will elide in the interest of simplicity. I recommend reading the whole paper if this post sounds interesting to you.
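
    The core of the method: an LLM's next-token logits are computed as W·h, where h is the final hidden state of dimension d and W is the fixed unembedding matrix, so every output the model can produce lies in a d-dimensional linear subspace of logit space. Collect enough full output vectors and their numerical rank reveals d. Below is a minimal toy sketch of that rank argument (all sizes and names are illustrative, not from the paper, which recovers full outputs from limited API responses and handles several practical complications):

    ```python
    import numpy as np

    # Toy softmax bottleneck: logits = W @ h, with a fixed (vocab_size x d)
    # unembedding matrix W and a per-query hidden state h.
    rng = np.random.default_rng(0)
    vocab_size, d, n_queries = 1000, 64, 200   # illustrative sizes

    W = rng.normal(size=(vocab_size, d))       # "unembedding" matrix
    H = rng.normal(size=(d, n_queries))        # hidden states, one per query
    logits = W @ H                             # each column is one full logit vector

    # The stacked logit vectors have numerical rank d, not vocab_size,
    # which is what lets an observer infer the model dimension.
    print(np.linalg.matrix_rank(logits))       # prints 64
    ```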

  • Jan 29, 2024 | lesswrong.com | Steven Byrnes |Ege Erdil |Gerald M. Monroe |Tao Lin

    Don't global clock speeds have to go down as die area goes up, due to the speed-of-light constraint? Yes: if you used one die with one clock domain, they would. Modern chips don't. For instance, if you made a die with 1e15 MAC units and the area scaled linearly, you would be looking at a die ~2e9 times larger than the H100's, which is about 1000 mm^2.
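
    As a quick back-of-the-envelope check of that scaling claim (the MAC count per H100-sized die below is my own inferred assumption, chosen to make the stated ratio come out, not a figure from the thread):

    ```python
    # Back-of-the-envelope check of the die-area scaling in the comment.
    h100_area_mm2 = 1_000   # approximate H100 die area, from the comment
    macs_per_h100 = 5e5     # assumed MAC units per H100-sized die (my assumption)
    target_macs = 1e15      # hypothetical die from the comment

    scale = target_macs / macs_per_h100
    print(f"area scale factor: {scale:.0e}")                   # ~2e9, as stated
    print(f"implied die area:  {scale * h100_area_mm2:.0e} mm^2")
    ```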

  • Nov 10, 2023 | lesswrong.com | Daniel Kokotajlo |Ajeya Cotra |Ege Erdil |Zach Stein-Perlman

    Here is a summary of the discussion so far: Daniel made an argument against Hofstadter's law for trend extrapolation, and we discussed its validity for a bit. A key crux/observation that has come up is that neither Ege nor Ajeya expects a massive increase in transfer learning ability in the next few years. For Ege this matters a lot, because it is one of his top reasons for thinking AI will not speed up the economy and AI development all that much.

  • Nov 1, 2023 | lesswrong.com | Matthew Barnett |Adam Scholl |Daniel Kokotajlo |Ege Erdil

    A common theme implicit in many AI risk stories has been that broader society will either fail to anticipate the risks of AI until it is too late, or do little to address those risks in a serious manner. In my opinion, there are now clear signs that this assumption is false, and that society will address AI with something approaching both the attention and diligence it deserves. For example, one clear sign is Joe Biden's recent executive order on AI safety.

  • Apr 26, 2023 | lesswrong.com | Steven Byrnes |Alexander Gietelink Oldenziel |Eliezer Yudkowsky |Ege Erdil

    This is to announce a $250 prize for refuting, or otherwise in-depth reviewing, Jacob Cannell's technical claims concerning thermodynamic & physical limits on computation, and his claim of the biological efficiency of the brain, in his post Brain Efficiency: Much More Than You Wanted To Know. I've been quite impressed by Jake's analysis ever since it came out, and I have been puzzled why there has been so little discussion of it, since, if true, it seems quite important.
