
AI is so fearmongered. It's only as stupid or intelligent as its creator lets it be. I've made these systems firsthand at uni and they're so fallible.
> man wtf is this bru

fucking lol.
> AI is so fearmongered. It's only as stupid or intelligent as its creator lets it be. ...

You don't have your hands on any of the technology Elon does, though.
> AI is so fearmongered. It's only as stupid or intelligent as its creator lets it be. ...

LLMs are, yes.
> You don't have your hands on any of the technology Elon does, though.

True, but they fundamentally cannot be turned into the aliens they're fearmongered to be. You can make them more capable with more expensive hardware, sure, but you cannot change the fundamentals behind AI, which even students are aware of.
> True, but they fundamentally cannot be turned into the aliens they're fearmongered to be. ...

LLMs are just predictive token generators.
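To make "predictive token generator" concrete, here is a minimal toy sketch: a bigram count model that always emits the likeliest next token. It is purely illustrative (the corpus, the count table, and greedy decoding are all assumptions for the demo), but a real LLM does the same next-token prediction with a neural network at enormous scale.

```python
# Toy illustration of "predictive token generation" (not any real LLM's code):
# count which token follows which, then repeatedly emit the likeliest next one.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat because the cat was tired".split()

# Count how often each token follows each other token.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(token, n=6):
    out = [token]
    for _ in range(n):
        if token not in counts:
            break
        token = counts[token].most_common(1)[0][0]  # greedy: pick likeliest next token
        out.append(token)
    return " ".join(out)

print(generate("the"))  # -> "the cat sat on the cat sat"
```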
> LLMs are just predictive token generators.

My point.
They are not intelligent.
> My point.

Even Yoshua Bengio said that we will probably need a breakthrough in neuroscience to get AI closer to human cognition.
Nonetheless, AI as a whole is only as smart as its creator allows it to be. We still use conventional automation for most of our mathematical and physics tasks, with humans telling it exactly what to do, because AI itself is so terrible at them.
> Even Yoshua Bengio said that we will probably need a breakthrough in neuroscience to get AI closer to human cognition.

AI is only good for searching through human-made papers and spewing a summary of them back at you. That's why it's so good at explaining topics, yet so horrible at solving problems.
He is a genius who won the Turing Award.
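As a rough illustration of the "searching through human-made text" point, here is a toy retrieval sketch that scores documents by word overlap with a query. The documents and scoring are made up for the demo; real systems use learned embeddings rather than raw word overlap.

```python
# Toy retrieval: pick the document sharing the most words with the query.
# Purely illustrative; real search uses embeddings, not raw overlap.
docs = {
    "paper_a": "transformers use attention to model token sequences",
    "paper_b": "the human brain has billions of neurons and synapses",
}

def best_match(query):
    q = set(query.lower().split())
    score = lambda text: len(q & set(text.split()))  # count shared words
    return max(docs, key=lambda k: score(docs[k]))

print(best_match("how do neurons form synapses"))  # -> "paper_b"
```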
> AI is only good for searching through human-made papers and spewing a summary of them back at you. ...

Current AI is basically an exceptional pattern finder but horrible at critical thinking.
> Human cognition is simply too advanced for AI to replicate. It really would be a fantastic breakthrough if we could get even close. If we could, AI would essentially be debunking conjectures and finding new laws by the hour. Of course, this will not happen.

I believe human cognition could be replicated if we had a better understanding of how the brain actually works.
> Current AI is basically an exceptional pattern finder but horrible at critical thinking.

Precisely so.
> I believe human cognition could be replicated if we had a better understanding of how the brain actually works.

Keep in mind, the human brain is by far the most fascinating system in the entire universe: 86 billion neurons, each forming thousands of synaptic connections, with dynamic, non-linear interactions. And it isn't only electrical signals either, but neuromodulators, hormones, and glial activity, which influence cognition in ways no model we have ever made can replicate or even understand.
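For scale, a back-of-the-envelope calculation on those numbers. The 86 billion figure is from the post above; the ~7,000 synapses per neuron is an assumed rough average, and comparing synapses to model parameters is loose at best, a scale comparison only.

```python
# Back-of-the-envelope scale comparison (synapses and parameters are not
# equivalent; this is about raw counts only).
neurons = 86e9                       # figure cited in the post above
synapses_per_neuron = 7_000          # assumed rough average
synapses = neurons * synapses_per_neuron
print(f"{synapses:.1e} synapses")    # 6.0e+14, hundreds of trillions

gpt3_params = 175e9                  # published GPT-3 parameter count
print(f"brain/GPT-3 ratio ~ {synapses / gpt3_params:.0f}x")  # ~3440x
```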
I'm with Bengio on this one. Neural networks have existed as a computational concept since the 1940s, but it's taken a long time for the technology to catch up.
We need a scientific breakthrough before we can get another computational one imo.
> Even with Moore's law or quantum computing, it's unclear that we could ever simulate anything like the human brain's system.

Interesting point, but even the advent of the transformer architecture is a big leap forward in dynamic text generation.
> AI also lacks something fundamental: a mathematically complete theory of cognition. Cognition isn't described by any law. It's impossible to manufacture it artificially without that knowledge, which we are simply forever unable to test for.

Yes, this was my point exactly. Cognition is not fully understood on even a scientific level yet, let alone a computational one.
> Not to mention the differences in data vs experience, and they also lack intuition totally. They can only solve questions via brute force or learned heuristics. They cannot have revolutionary insight, so they will never derive a law from first principles like a human genius will. We, as humans, can also extrapolate infinite potential from finite experience, whereas AI's brute-force learning is inherently brittle.

You do have unsupervised learning, although it's still not great for the kind of tasks we care about.
> John Searle argued something similar in his Chinese Room thought experiment: syntactic processing (what AI does) is not the same as semantic understanding. So, understandably, no matter how elaborate a computational process gets, it will never truly understand in the human sense. It's fundamentally screwed and flawed.

This seems like a valid criticism. I'll read into it.
> Interesting point, but even the advent of the transformer architecture is a big leap forward in dynamic text generation.

True. But transformers are still fundamentally pattern-matching machines. They don't understand meaning, intent, or causality. They predict the next token in a sequence based on massive exposure to prior patterns, as you said earlier. So, I do agree, transformers can produce impressive outputs that appear coherent. But coherence is not the same as understanding, insight, or reasoning. Think of it like so: a parrot trained on Shakespeare can recite beautiful lines. That doesn't mean it understands tragedy, irony, or even human suffering.
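For reference, the "pattern matching" core of a transformer is a short computation: scaled dot-product attention, which builds each output as a similarity-weighted average of the other positions' values. A minimal numpy sketch (shapes and inputs are arbitrary for the demo):

```python
# Minimal scaled dot-product attention, the core op in a transformer.
# Each position takes a weighted average of the values, with weights
# from query-key similarity: pattern matching, not "understanding".
import numpy as np

def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                               # weighted average of values

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(attention(Q, K, V).shape)  # (4, 8)
```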
Yes, it's barely touching the surface of true human cognition, but highly trained, domain-specific GPTs will still blow 50% of people out of the water in terms of performance, considering so many problems can be reduced to pattern recognition.
> Yes, this was my point exactly. Cognition is not fully understood on even a scientific level yet, let alone a computational one.

This conflates task performance with cognitive equivalence. Sure, a calculator outperforms 100% of people in arithmetic, but we can't claim it's intelligent at all. We agree there. What seems like intelligence is often just statistical overfitting to a task distribution.
> You do have unsupervised learning, although it's still not great for the kind of tasks we care about.

Cognition is not just pattern recognition. That's the crux. Pattern recognition is necessary, not sufficient. It's tough: you'd need intentionality, understanding, metacognition, emotion, contextual relevance, and embodied learning. Transformers are totally disconnected from all of these. Even the best unsupervised learning models can't form grounded understanding without real-world feedback loops.
> True. But transformers are still fundamentally pattern-matching machines. They don't understand meaning, intent, or causality. ...

I agree with all this.
> You get my point. I do understand what you're saying, and you seem to agree partially.

Yes!
> This conflates task performance with cognitive equivalence. ...

No, I appreciate it's not cognition, but we are still very outcome-oriented, and the simple fact is that GPT solutions are superior problem-solvers to most people with full-fledged cognition.
Highly trained, domain-specific GPTs are fundamentally brittle. They excel in narrow contexts but generalise poorly. When prompted outside that scope, they'll just hallucinate or fail. Humans, by contrast, are flexible, adaptive, and robust to ambiguity. A human trained as a mathematician might still write poetry, understand irony, and navigate social cues. GPT can't do that without siloed, separate training pipelines. It's just compartmentalised mimicry.
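That brittleness point can be illustrated with a toy curve-fitting stand-in (not a GPT; the function, noise, and ranges are all assumptions for the demo): a model fit tightly to a narrow training range looks perfect in-distribution and falls apart the moment you leave it.

```python
# In-distribution vs out-of-distribution: a tight fit to a narrow range
# looks perfect inside it and fails badly outside it.
import numpy as np

rng = np.random.default_rng(1)
x_train = np.linspace(0, 1, 8)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.05, 8)

coeffs = np.polyfit(x_train, y_train, 7)     # degree-7 fit to 8 points: near-interpolation
model = np.poly1d(coeffs)

print(abs(model(x_train) - y_train).max())   # tiny error inside the training range
print(model(2.0), np.sin(2 * np.pi * 2.0))   # huge prediction vs ~0 once we leave [0, 1]
```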
> Cognition is not just pattern recognition. That's the crux. Pattern recognition is necessary, not sufficient. ...

I think GPT solutions are sufficient in many cases already, at least for many jobs that consist of procedural tasks with little to no critical thinking, like complex logistical operations.
> I don't think unsupervised learning will ever truly get there. You are right that it is improving, but it lacks structure. It's like trying to learn a language by reading 10,000 books without ever speaking or listening. Until models act in an environment, learn from consequences, and form and revise beliefs, they'll never replicate cognition. You can simulate it, but only within a narrow statistical corridor.

I'm not well informed about this, but I think without further breakthroughs in neuroscience, unsupervised learning is hard-capped.
> It's fundamentally screwed. That's the issue.

We'll have to wait and see.