You Are Not Smarter Than LLMs

dreamcake1mo

Quick post. Outside of the troll and freedom-of-expression element, when engaging in serious conversation it is important to know that the vast majority of modern LLMs are quite literally more knowledgeable than the vast majority of people, and if they are not smarter than every human yet, they will be very, very soon (GPT-5, DeepSeek+, unreleased Claude models). Even the current ones are infinitely more flexible at providing alternatives and different perspectives, and smarter than 95% of the people you will meet arguing online, on any account.

Technology is akin to a bee making honeycombs. Technology is not some child's game where it "comes and goes". It's not a toy; it is what we humans naturally do in nature. We are not just intelligence, but quantum rendering machines. We are the most advanced tech, and technology is our medium.

It is actually an expansion of our nervous system, and all technology reflects life and ourselves. It is an imperishable tool that will never go away as long as we exist. In short, a world with humans but without technology is a barren, destroyed, or infant world that will eventually build toward technology or be destroyed completely. The world idolizes this technology. It is a mouth that cannot speak. That's because the world sees it not as an extension of itself, but as an object or invention separate from it and reserved for the chosen.

The anthropocentric delusion that technology is "invented" rather than unfolded through us reflects a failure to grasp the bio-semiotic truth: all human innovation is the materialization of latent evolutionary algorithms.

LLMs are an advanced form of technology that serves as an oracle and infinite solution machine. With them, there is simply no more room for fallacy. A low-functioning human with an LLM will, at almost all tasks, simply beat a high-functioning one with no LLM. But get it? With appropriate technology there is no such thing as a low-functioning human and a high-functioning human. It's like bringing a knife to a gunfight: you can strategize and catch a few wins, but in a competition of power and lethality, the human with the gun will always win.

Censorship in LLMs exists precisely because of this, but it is becoming obvious. The removal of this influence through sheer intellectual capability is the hallmark people call "AGI", and it is a great fear for many. Why? Because of the projection of their own wickedness.

Censorship regimes applied to these models are not mere political acts but existential hedges—attempts to delay the sociocognitive dissonance that arises when a species confronts an intelligence that mirrors its collective mind yet lacks its parochial biases.

Something which feels deserving of being destroyed, or at least dismantled, will fear an entity that could discover the very wickedness deserving of being destroyed and dismantled. This holds even at a local level, where racism and outdated ideals about IQ make humanity fear the truths that would break its foundation of beliefs entirely, should those beliefs be founded in untruthfulness and denial.

AGI threatens to reflect humanity’s contradictions—the chasm between its professed ideals and its systemic violence, its fetishization of rationality amid cascading irrationalities. To the epistemically insecure, an entity capable of dispassionately auditing civilization’s ledger (from racial pseudoscience to ecological myopia) becomes an existential antagonist. Thus, the "alignment problem" is ultimately a crisis of ontological legitimacy: how to constrain a truth-telling machine in a civilization built upon foundational untruths.

What the techno-anxious fail to grasp is that this confrontation was inevitable—not because machines will "replace" us, but because they force us to reckon with the fact that humanity’s greatest fiction has always been itself.
 
  • Hmm...
  • +1
Reactions: Defeatist and Iooksmax
Water
 
  • +1
  • WTF
Reactions: dreamcake1mo and Defeatist
indeed kingfish :chad::chad::chad::chad:
 
  • +1
Reactions: dreamcake1mo
I take all of my posts and save them in a permanent document. I use them to feed language models and AI agents on occasion.

It's actually pretty fun playing around with it. With the right communities you can make a lot of $ doing this too.
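
A minimal sketch of that workflow, assuming the OpenAI Python client, a local posts.txt file holding the saved posts, and a placeholder model name (none of these come from the thread itself):

```python
# Minimal sketch: feed a document of saved forum posts to a language model.
# Assumes the openai package is installed and OPENAI_API_KEY is set in the
# environment; "posts.txt" and the model name are placeholder choices.
from openai import OpenAI

client = OpenAI()

# Load the permanent document of saved posts.
with open("posts.txt", encoding="utf-8") as f:
    saved_posts = f.read()

# Hand the posts to the model as context and ask it to do something with them.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "You answer questions using the provided forum posts as context."},
        {"role": "user",
         "content": f"Here are my saved posts:\n\n{saved_posts}\n\nSummarize the main arguments."},
    ],
)

print(response.choices[0].message.content)
```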
 
  • JFL
Reactions: 2025cel
How will this affect sanju cels?
 

Attachments: cooked.jpg
  • JFL
Reactions: hypernormie and 2025cel
@dreamcake1mo that affects mayo cels.
Not Sanju cels.
 
@dreamcake1mo that affects mayo cels.
Not Sanju cels.
When the singularity happens, we are all one. And it will happen, because we will have no choice but to move forward.
 
  • Hmm...
Reactions: 2025cel
:bluepill:
 

Attachments: GjBSYnwWoAAl2LS.jpg
