
Stephen McAleese
Articles
-
Oct 12, 2024 |
lesswrong.com | Stephen McAleese
Geoffrey Hinton is a famous AI researcher who is often referred to as the "godfather of AI" because of his foundational work on neural networks and deep learning from the 1980s until today. Arguably his most significant contribution to the field of AI was the 1986 introduction of the backpropagation algorithm for neural network training, which is still widely used to train neural networks today.
-
Nov 21, 2023 |
lesswrong.com | Eli Tyre | Stephen McAleese | Ben Pace | Oliver Sourbut
I've seen/heard a bunch of people in the LW-o-sphere saying that the OpenAI corporate drama this past weekend was clearly bad. And I'm not really sure why people think that? To me, seems like a pretty clearly positive outcome overall. I'm curious why in the world people are unhappy about it (people in the LW-sphere, that is, obviously I can see why e.g. AI accelerationists would be unhappy about it). And I also want to lay out my models. Here's the high-gloss version of my take.
-
Oct 13, 2023 |
lesswrong.com | Nicholas Kross | Stephen McAleese | Alex K. Chen | Nathan Helm-Burger
EDIT: The full post is now up. Oh boy do I have a response for you. I think it may be possible to significantly enhance adult intelligence through gene editing. The basic idea goes something like this: there are about 20,000 genetic variants that influence fluid intelligence. Most of the variance among humans is determined by the number of IQ-decreasing minor alleles someone has.
-
Jul 12, 2023 |
lesswrong.com | Stephen McAleese |Nicholas Kross |Roman Leventov
Note: this post was updated on 2 January 2024 to reflect all available data from 2023. AI safety is a field concerned with preventing negative outcomes from AI systems and ensuring that AI is beneficial to humanity. The field does research on problems such as the AI alignment problem, which is the problem of designing AI systems that follow user intentions and behave in a desirable and beneficial way.