ELON MUSK himself said it: AI is vastly more dangerous than nukes.

“I am really quite close, I am very close, to the cutting edge in AI and it scares the hell out of me,” said Musk. “It’s capable of vastly more than almost anyone knows and the rate of improvement is exponential.”
 
“So the rate of improvement is really dramatic. We have to figure out some way to ensure that the advent of digital super intelligence is one which is symbiotic with humanity. I think that is the single biggest existential crisis that we face and the most pressing one.”
 
“And mark my words, AI is far more dangerous than nukes. Far. So why do we have no regulatory oversight? This is insane.”
 
  • +1
Reactions: enlightful
AI is so fearmongered. They're only as stupid or intelligent as their creators let them be. I've made them firsthand at uni and they're so fallible.
 
  • +1
Reactions: Snicket
AI is so fearmongered. They're only as stupid or intelligent as their creators let them be. I've made them firsthand at uni and they're so fallible.
[attached image]
man wtf is this bru
 
  • +1
  • JFL
Reactions: CEO and Kiwi'sSub5
AI is so fearmongered. They're only as stupid or intelligent as their creators let them be. I've made them firsthand at uni and they're so fallible.
You don't have your hands on any of the technology Elon does, though
 
AI is so fearmongered. They're only as stupid or intelligent as their creators let them be. I've made them firsthand at uni and they're so fallible.
LLMs are, yes.
 
  • +1
Reactions: imontheloose
You don't have your hands on any of the technology Elon does, though
True, but they fundamentally cannot be turned into the aliens they're fearmongered into being. You can make them more physical with more expensive raw materials, sure, but you cannot change the fundamentals behind AI, which even students are aware of.
 
True, but they fundamentally cannot be turned into the aliens they're fearmongered into being. You can make them more physical with more expensive raw materials, sure, but you cannot change the fundamentals behind AI, which even students are aware of.
LLMs are just predictive token generators.
They are not intelligent.
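For anyone wondering what "predictive token generator" actually means, here's a toy sketch (just a bigram counter in Python, nowhere near a real LLM, purely to illustrate the idea of picking the likeliest next token given what came before):

```python
# Toy "predictive token generator": count which token tends to follow which,
# then greedily emit the most frequent next token. Real LLMs use huge neural
# networks over massive corpora, but the core loop is still "predict the next token".
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, length=5):
    out = [start]
    for _ in range(length):
        options = following[out[-1]]
        if not options:
            break
        out.append(options.most_common(1)[0][0])  # greedy: most frequent follower
    return " ".join(out)

print(generate("the"))  # a fluent-looking continuation with zero understanding
```

Nothing in there "knows" anything; it just continues a pattern, which is the point being made about LLMs.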
 
  • +1
Reactions: imontheloose
LLMs are just predictive token generators.
They are not intelligent.
My point.

Nonetheless, AI as a whole is only as smart as its creator allows it to be. We still use conventional automation for most of our mathematical and physics tasks, telling it exactly what to do, because AI itself is so terrible at them.
 
  • +1
Reactions: Snicket
My point.

Nonetheless, AI as a whole is only as smart as its creator allows it to be. We still use conventional automation for most of our mathematical and physics tasks, telling it exactly what to do, because AI itself is so terrible at them.
Even Yoshua Bengio said that we will probably need a breakthrough in neuroscience to get AI closer to human cognition.

He is a genius who won the Turing Award.
 
  • +1
Reactions: imontheloose
Even Yoshua Bengio said that we will probably need a breakthrough in neuroscience to get AI closer to human cognition.

He is a genius who won the Turing Award.
AI is only good for searching through human-made papers and spewing a summary of them back at you. That's why it's so good at explaining topics, yet so horrible at solving problems.

Human cognition is simply too advanced for AI to replicate. It really would be a fantastic breakthrough if we could get it even close. If we could, then AI would essentially settle every conjecture and find new laws by the hour. Of course, this will not happen.
 
  • +1
Reactions: Snicket
AI is only good for searching through human-made papers and spewing a summary of them back at you. That's why it's so good at explaining topics, yet so horrible at solving problems.
Current AI is basically an exceptional pattern finder but horrible at critical thinking.
Human cognition is simply too advanced for AI to replicate. It really would be a fantastic breakthrough if we could get it even close. If we could, then AI would essentially settle every conjecture and find new laws by the hour. Of course, this will not happen.
I believe human cognition could be replicated if we had a better understanding of how the brain actually works.

I’m with Bengio on this one. Neural networks have existed as a computational concept for the better part of a century, but it’s taken a long time for the technology to catch up.

We need a new scientific breakthrough before we can get another computational one imo.

I don’t know what the timeframe will be but I believe it could happen.
 
  • +1
Reactions: Deathninja328 and imontheloose
Current AI is basically an exceptional pattern finder but horrible at critical thinking.
Precisely so.
I believe human cognition could be replicated if we had a better understanding of how the brain actually works.

I’m with Bengio on this one. Neural networks have existed as a computational concept for the better part of a century, but it’s taken a long time for the technology to catch up.

We need a scientific breakthrough before we can get another computational one imo.
Keep in mind: the human brain is by far the most fascinating system in the entire universe. Roughly 86 billion neurons, each forming thousands of synaptic connections, with dynamic, non-linear interactions. And it isn't only electrical signals either: neuromodulators, hormones, and glial activity all influence cognition in ways no model we have ever made can replicate or even understand.

Even with Moore's law or quantum computing, it's unclear that we could ever simulate anything like the human brain's system. AI also lacks something fundamental: a mathematically complete theory of cognition. Cognition isn't described by any law, and it's impossible to manufacture it artificially without that knowledge, which we may simply never be able to test for.

Not to mention the difference between data and experience, and they also lack intuition entirely. They can only solve problems via brute force or learned heuristics. They cannot have revolutionary insight, so they will never derive a law from first principles the way a human genius will. We, as humans, can also extrapolate endlessly from finite experience, whereas AI's brute-force learning is inherently brittle.

John Searle argued something similar in his Chinese Room thought experiment: syntactic processing (what AI does) is not the same as semantic understanding. So, no matter how elaborate a computational process gets, it will never truly understand in the human sense. It's fundamentally screwed and flawed.
 
  • +1
Reactions: Snicket
Even with Moore's law or quantum computing, it's unclear that we could ever simulate anything like the human brain's system.
Interesting point but even the advent of transformer architecture is a big leap forward in dynamic text generation.

Yes, it’s barely touching the surface of true human cognition, but highly trained, domain-specific GPTs will still blow 50% of people out of the water in terms of performance, considering so many problems can be condensed to pattern recognition.
AI also lacks something fundamental: a mathematically complete theory of cognition. Cognition isn't described by any law, and it's impossible to manufacture it artificially without that knowledge, which we may simply never be able to test for.
Yes, this was my point exactly. Cognition is not fully understood at even a scientific level yet, let alone a computational one.
Not to mention the difference between data and experience, and they also lack intuition entirely. They can only solve problems via brute force or learned heuristics. They cannot have revolutionary insight, so they will never derive a law from first principles the way a human genius will. We, as humans, can also extrapolate endlessly from finite experience, whereas AI's brute-force learning is inherently brittle.
You do have unsupervised learning although it’s still not great for the kind of tasks we care about.
John Searle argued something similar in his Chinese Room thought experiment: syntactic processing (what AI does) is not the same as semantic understanding. So, no matter how elaborate a computational process gets, it will never truly understand in the human sense. It's fundamentally screwed and flawed.
This seems like a valid criticism. I’ll read into it.
 
  • +1
Reactions: imontheloose
Interesting point but even the advent of transformer architecture is a big leap forward in dynamic text generation.
True. But transformers are still fundamentally pattern-matching machines. They don't understand meaning, intent, or causality. They predict the next token in a sequence based on massive exposure to prior patterns, as you said earlier. So, I do agree, transformers can produce impressive outputs that appear coherent. But coherence is not the same as understanding, insight, or reasoning. Think of it like this: a parrot trained on Shakespeare can recite beautiful lines. That doesn't mean it understands tragedy, irony, or even human suffering.

You get my point. I do understand what you're saying, and you seem to agree partially.
Yes, it’s barely touching the surface of true human cognition, but highly trained, domain-specific GPTs will still blow 50% of people out of the water in terms of performance, considering so many problems can be condensed to pattern recognition.
Yes, this was my point exactly. Cognition is not fully understood at even a scientific level yet, let alone a computational one.
This conflates task performance with cognitive equivalence. Sure, a calculator outperforms 100% of people in arithmetic, but we can't claim it's intelligent at all. We agree there. What seems like intelligence is often just statistical overfitting to a task distribution.

Highly trained, domain-specific GPTs are fundamentally brittle. They excel in narrow contexts but generalise poorly. When prompted outside that scope, they'll just hallucinate or fail. Humans, by contrast, are flexible, adaptive, and robust to ambiguity. A human trained as a mathematician might still write poetry, understand irony, and navigate social cues. But GPT can't do that without siloed, separate training pipelines. It's just compartmentalised mimicry.
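To make the "overfitting to a task distribution" point concrete, here's a quick numpy analogy (a toy, not a claim about how GPTs actually work internally): a flexible model fitted on a narrow range looks brilliant inside that range and falls apart the moment you step outside it.

```python
# Toy illustration of fitting a narrow "task distribution" and failing outside it.
# Assumes numpy is installed; this is an analogy, not a GPT internals demo.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 1.0, 20)                     # narrow training range
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.05, x_train.size)

coeffs = np.polyfit(x_train, y_train, deg=9)            # very flexible model

x_in, x_out = 0.5, 2.0                                   # inside vs. outside the training range
print("inside training range :", np.polyval(coeffs, x_in))   # close to sin(pi) = 0
print("outside training range:", np.polyval(coeffs, x_out))  # usually wildly off
```

Inside [0, 1] the fit looks impressive; at x = 2 it's typically nonsense, which is roughly what "brittle outside the training distribution" means.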
You do have unsupervised learning although it’s still not great for the kind of tasks we care about.
Cognition is not just pattern recognition. That's the crux. Pattern recognition is necessary, not sufficient. It's tough. You'd need intentionality, understanding, metacognition, emotion and contextual relevance, and embodied learning. Transformers are totally disconnected from all of these. Even the best unsupervised learning models can't form grounded understanding without real-world feedback loops.

I don't think unsupervised learning will ever truly get there. You are right that it is improving, but it lacks structure. It's like trying to learn a language by reading 10,000 books without ever speaking or listening. Until models act in an environment, learn from consequences, and form beliefs and revise them, they'll never replicate cognition. You can simulate it, but only within a narrow statistical corridor.

It's fundamentally screwed. That's the issue.
 
  • +1
Reactions: Snicket
True. But transformers are still fundamentally pattern-matching machines. They don't understand meaning, intent, or causality. They predict the next token in a sequence based on massive exposure to prior patterns, as you said earlier. So, I do agree, transformers can produce impressive outputs that appear coherent. But coherence is not the same as understanding, insight, or reasoning. Think of it like this: a parrot trained on Shakespeare can recite beautiful lines. That doesn't mean it understands tragedy, irony, or even human suffering.
I agree with all this.
You get my point. I do understand what you're saying, and you seem to agree partially.
Yes!
This conflates task performance with cognitive equivalence. Sure, a calculator outperforms 100% of people in arithmetic, but we can't claim it's intelligent at all. We agree there. What seems like intelligence is often just statistical overfitting to a task distribution.

Highly trained, domain-specific GPTs are fundamentally brittle. They excel in narrow contexts but generalise poorly. When prompted outside that scope, they'll just hallucinate or fail. Humans, by contrast, are flexible, adaptive, and robust to ambiguity. A human trained as a mathematician might still write poetry, understand irony, and navigate social cues. But GPT can't do that without siloed, separate training pipelines. It's just compartmentalised mimicry.
No, I appreciate it’s not cognition, but we are still very outcome-oriented, and the simple fact is that GPT solutions are superior problem solvers to most people with full-fledged cognition.
Cognition is not just pattern recognition. That's the crux. Pattern recognition is necessary, not sufficient. It's tough. You'd need intentionality, understanding, metacognition, emotion and contextual relevance, and embodied learning. Transformers are totally disconnected from all of these. Even the best unsupervised learning models can't form grounded understanding without real-world feedback loops.
I think GPT solutions are sufficient in many cases already. At least for many jobs that require a great deal of procedural tasks with little to no critical thinking - like complex logistical operations.
I don't think unsupervised learning will ever truly get there. You are right that it is improving, but it lacks structure. It's like trying to learn a language by reading 10,000 books without ever speaking or listening. Until models act in an environment, learn from consequences, and form beliefs and revise them, they'll never replicate cognition. You can simulate it, but only within a narrow statistical corridor.
Not well informed about this. I think without further breakthroughs in neuroscience, unsupervised learning is hard capped.
It's fundamentally screwed. That's the issue.
We’ll have to wait and see.
 
  • +1
Reactions: imontheloose
