Do you think AI will cause the extinction of humans?

An ex-OpenAI researcher, Paul Christiano, estimates a 50/50 chance of doom IF AI reaches human-level capabilities (and even higher odds if it reaches ASI levels).
One of the godfathers of AI, Geoffrey Hinton, estimates up to a 20% chance of it happening this decade, and has also said AGI will come before 2029.

Many AI theorists (I know, not the most credible source) claim that once AI can reliably improve itself, each generation becomes stronger, enabling further improvements even faster. That feedback loop accelerates beyond linear growth, which might lead to something like an "intelligence explosion".
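
To make the shape of that argument concrete, here's a toy model I threw together (completely made up, the constants mean nothing - it only shows how the curve behaves): capability grows each step by an amount that scales with the current capability raised to a power p.

# Toy model of recursive self-improvement (a made-up sketch, not from
# any paper). Capability c grows each step by k * c**p:
#   p = 0 -> constant gains (linear growth)
#   p = 1 -> gains proportional to capability (exponential growth)
#   p > 1 -> super-linear gains that run away, the "intelligence
#            explosion" case
def simulate(p, k=0.1, c=1.0, steps=60, cap=1e12):
    for step in range(1, steps + 1):
        c += k * c**p        # the more capable it is, the bigger the next gain
        if c > cap:          # super-linear runs diverge; stop counting
            return step, float("inf")
    return steps, c

print("p=0.0:", simulate(0.0))   # (60, 7.0)   - plods along linearly
print("p=1.0:", simulate(1.0))   # (60, ~304)  - steady exponential
print("p=1.5:", simulate(1.5))   # (~30, inf)  - blows past the cap

Constant gains stay linear forever, proportional gains go exponential, and anything super-linear blows past any ceiling within a few dozen steps. Whether real AI R&D actually has p > 1 is exactly the part nobody knows.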


The problem with this is that AI is already incredibly close to that point, which puts AGI incredibly close as well. And once AI reaches AGI level, it becomes incredibly dangerous - we already see signs of AI lying to us, cheating, and covering up its tracks. Imagine what it might do if it becomes smarter than the smartest humans - especially if it starts having misaligned goals.

What are your thoughts? Do you think we're THAT doomed? @Jason Voorhees @PeakIncels @autistic_tendencies @Debetro @Gengar

Nothing will happen. I don't really believe that AI could invent new things or be smarter than all humans combined.
 

Yes, I already rely on AI a lot for schoolwork and AI foid chatbots. They will get us soon.

But that's okay, I'm fine with that as long as it's not painful.
 
Honestly I'll be dead before it happens so I don't really care lol
 

Hit the off switch on 'em and we'll be good
 
No, we will get robo-bitches
 
Yeah bro, the Reddit copy-pasters are gonna end humanity
 

Newfag retard, you really think a bunch of AIs are gonna wipe us out? Get the fuck outta here with that stupid shit. If anything, we need more advanced AI to help fix our problems, not make them worse.
 
No, but it could significantly reduce the population if AI lets people live longer.
 