
BigBallsLarry
⭐⭐⭐⭐⭐
- Joined
- Jun 8, 2025
An ex-OpenAI researcher, Paul Christiano, estimates a 50/50 chance of doom IF AI reaches human-level capabilities (and even higher odds if it reaches ASI levels)
www.businessinsider.com
One of the godfathers of AI, Geoffrey Hinton, estimates up to a 20% chance of it happening this decade, as well as saying AGI will come before 2029.
medium.com
Many AI theorists (I know, not the most credible source) claim that once AI can reliably improve itself, each generation becomes stronger, enabling further improvements faster. This accelerates beyond linear growth, which might lead to something like an "intelligence explosion".
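To make the "beyond linear growth" point concrete, here's a toy sketch (all numbers made up, purely illustrative): fixed gains per generation stay linear, but if each generation's gain is proportional to its current capability, you get compounding growth instead.

```python
# Toy model, purely illustrative: linear progress vs. self-improving
# progress where each generation's gain scales with current capability
# (the hypothetical "intelligence explosion" dynamic). The numbers
# are arbitrary assumptions, not real measurements of AI progress.

def linear_growth(start=1.0, gain=0.5, generations=10):
    cap = start
    for _ in range(generations):
        cap += gain          # fixed improvement every generation
    return cap

def compounding_growth(start=1.0, rate=0.5, generations=10):
    cap = start
    for _ in range(generations):
        cap += rate * cap    # improvement proportional to capability
    return cap

if __name__ == "__main__":
    print(linear_growth())       # 1 + 10 * 0.5 = 6.0
    print(compounding_growth())  # 1.5 ** 10 ≈ 57.7
```

After ten generations the compounding version is already roughly ten times ahead of the linear one, and the gap only widens from there, which is the whole worry in a nutshell.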
The problem with this is that AI is already incredibly close to that point, which puts AGI incredibly close as well. And once AI reaches AGI levels, it becomes incredibly dangerous - we already see signs of AI lying to us, cheating, and covering its tracks. Imagine what it might do if it becomes smarter than the smartest humans - especially if it starts having misaligned goals.
What are your thoughts? Do you think we're THAT doomed? @Jason Voorhees @PeakIncels @autistic_tendencies @Debetro @Gengar
Ex-OpenAI researcher says there's a 50% chance AI development could end in 'doom'
Paul Christiano runs the non-profit Alignment Research Center but previously ran the language model alignment team at OpenAI.


If Geoffrey Hinton is worried about AI causing human extinction; maybe it’s time to pay attention.
Geoffrey Hinton, Professor and former Fellow at Google, who won the 2024 Nobel Prize in Physics has been trying to warn us that Artificial…