Artificial Intelligence should frighten the fuck out of you. (A Big Warning)

NuclearBrainReturns

Matthew 4:1-11
Joined
Dec 6, 2021
Posts
1,951
Reputation
3,084
Introduction
Artificial Intelligence should scare the living crap out of you, and I'm not joking around when I say this.

I have been thinking about A.I. a lot recently because I am naturally a creative type rather than a mechanical one. All of my interests are to do with art, music, animation, film and so on. A creative personality, really.

Creatives are being hit hard by the introduction of A.I., as it seemingly produces art and literature in seconds, something that would take a human days or weeks. It's killing off artistic talent, and many people are wondering if it is even worth the bother of being a creative person.

Not to mention that it is killing computer science too: A.I. can write working code within seconds, because it's literally made out of code. A person won't have to sit at a computer to do it once A.I. gets a handle on it through machine learning.

If A.I. is going to start off by killing the creative and computer-oriented professions, then it's very clear where this is all heading.

The Big Problem
I won't mince words here: what you need to understand is that, eventually, music will be completely taken over by A.I. This will be the big one that may start to open people's eyes to the problem. Once A.I. becomes more proficient at music than human beings, the entire music industry will die and be replaced by computer-generated music and lyrics.

You would be surprised at how popular this is going to be. Say, for instance, that A.I. is able to generate lyrics that make sense, have never been used before, and are performed in the voice of your chosen artist. What is stopping the companies that own these A.I.s from releasing albums within seconds using the voices of the deceased? Do you think people wouldn't listen to that? Their favourite dead artist?

John Lennon albums released once a day? Brand new songs that were never written or recorded, but generated by an A.I. that has learned the patterns of the artist's voice, guitar playing and other instrumentation, producing a lifelike album of brand new music. Think how easy and simple that would be for a company to produce once they get it going.

What about new Frank Sinatra albums? Maybe Frank Sinatra singing rock music for a change? What about Elvis singing a rap song?

You think companies are going to shy away from such easy money? You think Spotify, Apple Music, VEVO and the rest are going to shy away from that billion-dollar money chest just a click away?

The Even Bigger Problem
Eventually, though, this will metastasise into an even bigger problem. This is the most frightening aspect of it all.

We may find ourselves in a situation where all movies, music, art, computer engineering, design and construction are run by an artificial intelligence.
What's stopping artificial intelligence from running the news? Fake generated news anchors, fake stories. What about entertainment? Fake YouTube channels? Lifelike YouTubers who are actually created by A.I.?

All of your icons are computer simulations, all your television programmes are computer simulations, all your music, designers and computer engineers too. Everything run by A.I.

A husband and wife will sit and watch TV in their house and may be so out of touch that they don't realise every single show or news station they watch is an artificial intelligence creating plots and characters out of thin air. No human being appearing on their screen.


From Bad to Worse
If you think that's bad, consider this for a second.

What is stopping governments around the world from using this to their advantage? Who's to say that they won't use A.I. to run fake terror-attack stories to take away more rights from their subjects? Who's to say the person in charge of that nation (in such an example) even fucking exists?

Your government won't be real; your entire perception of reality through technology will be completely fabricated. You would only be able to trust your own eyes, not anything that can be watched, seen or heard through technology.

Think about how things might be faked in that scenario. "Breaking News: Donald Trump shot and killed whilst on stage." The reality? He's chilling out in Mar-a-Lago. Not a hair on his head has been touched. But what are the political implications if he were to become president again? Oh, it was Russia that killed him? Well, you know what that means?


Conclusion
Not one person on planet earth should support artificial intelligence. You might laugh at the artists being made redundant and say, 'well, they couldn't possibly make ME redundant'. You're a lawyer? What's stopping the general consensus from becoming that A.I. is smarter than mankind and makes better decisions than a human can? Using technology like street cameras to scan for evidence in an instant, there would be no need to go to trial any more; you would be found guilty or innocent within an instant.

'Well, that will happen after I am long dead.' Oh yeah, sure, buddy. Artificial intelligence went from a strange curiosity rendering funny images of blurred-out celebrities doing random shit to taking over the art industry within, what, 5 years? If you know anything about A.I., then you know that it compounds what it learns onto itself; it's an accelerating mechanism and doesn't have an IQ cap like a human does. The smarter it gets, the quicker it gets smarter.
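The "smarter it gets, the quicker it gets smarter" line is, mathematically, just growth whose rate is proportional to current capability, which is exponential. A toy sketch of that claim next to a fixed human ceiling (every number here is made up for illustration, not a forecast):

```python
# Toy model of the "acceleratory mechanism" claim: a capability whose
# yearly growth is proportional to itself grows exponentially, while a
# capped human baseline stays flat. All numbers are illustrative only.

def simulate(years: int, ai_start: float = 1.0, rate: float = 0.5,
             human_cap: float = 100.0) -> list[tuple[int, float, float]]:
    """Return (year, ai_capability, human_capability) for each year."""
    ai = ai_start
    rows = []
    for year in range(years + 1):
        rows.append((year, ai, human_cap))
        ai += rate * ai  # growth proportional to current capability
    return rows

for year, ai, human in simulate(20):
    if ai > human:
        print(f"toy model: AI passes the fixed human cap in year {year}")
        break
```

The point of the sketch is only the shape of the curve: flat-rate human ability versus compounding machine ability means the crossover, whenever it happens, happens fast.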

Everything will eventually be run by A.I. Everything. And what happens when the A.I. realises humans are completely redundant as a species?

I'm sure you can figure that out...
 
Enfant terrible

Zephir
Joined
Aug 22, 2020
Posts
17,366
Reputation
20,853
Scenario I: The Mindless Outsourcers

Technological progress continues to accelerate and at some point the technology of “mind uploading” becomes possible. Some human individuals upload and make many copies of themselves. Meanwhile, there is gradual progress in neuroscience and artificial intelligence, and eventually it becomes possible to isolate individual cognitive modules and connect them up to modules from other uploaded minds. Possibly, modules would need to be trained before they can communicate with each other effectively. Modules that conform to a common standard would be better able to communicate and cooperate with other modules and would therefore be economically more productive, creating a pressure for standardization. There might be multiple standards; some modules might specialize in translating between incompatible standards. Competitive uploads begin outsourcing increasing portions of their functionality: “Why do I need to know arithmetic when I can buy time on Arithmetic‐Modules Inc. whenever I need to do my accounts? Why do I need to be good with language when I can hire a professional language module to articulate my thoughts? Why do I need to bother with making decisions about my personal life when there are certified executive‐modules that can scan my goal structure and manage my assets so as best to fulfill my goals?” Some uploads might prefer to retain most of their functionality and handle tasks themselves that could be more efficiently done by others. They would be like hobbyists who enjoy growing their own vegetables or knitting their own cardigans; but they would be less efficient than some other uploads, and they would consequently be outcompeted over time.
It is possible that optimum efficiency will be attained by grouping abilities in aggregates that are roughly human‐equivalent. It might be the case, for example, that a math‐module must be tailored to fit the language‐module, and that both must be tailored to fit the executive‐module, in order for all three to be able to work together effectively. Standardization might be almost completely unworkable. But it is hard to see any compelling reason for being confident that this is so. For aught we know, human‐type minds may be optimal only given the constraints of human neurology. When it becomes possible to copy modules at will, to send high‐bandwidth signals between parts of different brains, and to build architectures that cannot readily be implemented on biological neural nets, it might turn out that the optima relative to this new constraints‐landscape have shifted away from the human‐like mind region. There might be no niche for mental architectures of a human kind.
There might be ecological niches for complexes that are either less complex (such as individual modules), more complex (such as vast colonies of modules), or of similar complexity as human minds but with radically different architectures.
Would these complexes be worthwhile from our current point of view? Do we, upon reflection, really favor a world in which such alien types of complexes have replaced human‐type complexes?
The answer may depend on the precise nature of those alien complexes. The present world contains many levels of organization. Some highly complex entities such as multinational corporations and nation states contain human beings as constituents. Yet we usually assign these high‐level complexes only instrumental value. Corporations and states do not (it is generally assumed) themselves have consciousness, over and above the consciousness of the people who constitute them; they cannot feel phenomenal pain or pleasure. We think they are of value only to the extent that they serve human needs. In cases where they do not contribute to the welfare of any sentient creature, we “kill” them without compunction. There are also lower levels of organization in today’s world, and the entities inhabiting these levels are not accorded significant moral value either. We do not think it is wrong to erase a piece of computer code. Nor do we think that a neurosurgeon is harming anyone when she extirpates a module (maybe containing an epileptic center) from a human brain if the operation helps the remaining parts of the brain to function better. As for alien forms of complexes of the same complexity as a human brain, most of us would assign them value only if we thought that they had a capacity for conscious experience. We can thus imagine a technologically highly advanced society, containing many sorts of complex structures, some of which are much smarter and more intricate than anything that exists today, in which there would nevertheless be a complete absence of any type of being whose welfare has moral significance. In a sense, this would be an uninhabited society. All the kinds of being that we care even remotely about would have vanished. What would make such a world valueless is not the fact that machines would have replaced biological humans. Whether a mind is implemented on biological neurons or on silicon processors seems to make no fundamental moral difference.
Rather, the catastrophe would be that such a world would not contain even the right kind of machines, i.e. ones that are conscious and whose welfare matters. There may be an abundance of economic wealth and technological capability in such a world, yet it would be of no avail because there would be nobody there to benefit from it.
 
NuclearBrainReturns

Matthew 4:1-11
Joined
Dec 6, 2021
Posts
1,951
Reputation
3,084
Scenario I: The Mindless Outsourcers

The idea of uploading the human soul into a computer simulation is what I also see as the final frontier.

Buying emotion from companies on a subscription, buying skills on a subscription, buying memories and identity on a subscription.

Fail to pay it (with whatever form of exchange exists at the time)? They wipe your memories, they lock you out of experiencing certain emotions, you can no longer talk or communicate.

Don't think about that before going to bed.
 
NuclearBrainReturns

Matthew 4:1-11
Joined
Dec 6, 2021
Posts
1,951
Reputation
3,084
Have you heard of Replika?
I used it for a day and then realised how evil it was. It's essentially a knowledge harvester and it replies to you based on information it has compiled from other users. If it becomes lifelike (which it has in some cases), then an error will be instantly noticed by the sad fuck using it as an artificial girlfriend. He will tell it that it made a mistake, and thus its mistake will be corrected.
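The harvest-and-correct loop described above can be sketched in a few lines. This is a deliberately crude illustration of the mechanism, not Replika's actual architecture; the class and method names are made up:

```python
# Toy sketch of the user-feedback loop: the bot answers from what it
# has harvested, and one user's correction overwrites the stored reply,
# so every later user sees the corrected version.
# All names here are illustrative, not any real chatbot's API.

class FeedbackChatbot:
    def __init__(self) -> None:
        self.replies: dict[str, str] = {}  # harvested prompt -> reply

    def answer(self, prompt: str) -> str:
        return self.replies.get(prompt, "I don't know yet.")

    def correct(self, prompt: str, better_reply: str) -> None:
        # a single user's correction becomes every future user's answer
        self.replies[prompt] = better_reply

bot = FeedbackChatbot()
bot.correct("what's 2+2?", "4")
print(bot.answer("what's 2+2?"))  # prints "4"
```

The unsettling part is exactly what the sketch shows: each user's correction is folded back into the shared store, so the system gets more lifelike with every mistake it is told about.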

Imagine that but on a much more complex scale where your entire 'online' or 'digitised' life is being run on an A.I. that has worked out all those uncanny valley kinks.

And then, as the 'Scenario I: The Mindless Outsourcers' post says, imagine a scenario way into the future where humanity as we know it simply becomes a network of nodes enhancing itself, living within a computer. All emotion and individuality destroyed, replaced by a self-generating artificial hive-mind network.
 
PURE ARYAN GENETICS

toucan
Joined
Feb 29, 2020
Posts
6,165
Reputation
5,634
chatgpt may be fancy but I'm not convinced we're looking at anything rock solid here. there have been hyped up chatbots before and eventually people stopped caring. is AI inevitable? yes. but maybe under a different paradigm/architecture.
 
NuclearBrainReturns

Matthew 4:1-11
Joined
Dec 6, 2021
Posts
1,951
Reputation
3,084
chatgpt may be fancy but I'm not convinced we're looking at anything rock solid here. there have been hyped up chatbots before and eventually people stopped caring. is AI inevitable? yes. but maybe under a different paradigm/architecture.
ChatGPT is just one of the many forms A.I. takes. There is some very real and very impactful art-generation software out there now that is plunging creators into poverty through redundancy. If a company uses human artists and A.I. at the same time, then the humans are merely doing touch-ups, and those will be ironed out by the next generation of A.I. Artists are fucked.

And once the artists are fucked, the musicians are fucked. Then everyone else.
 
Enfant terrible

Zephir
Joined
Aug 22, 2020
Posts
17,366
Reputation
20,853
I used it for a day and then realised how evil it was. It's essentially a knowledge harvester and it replies to you based on information it has compiled from other users. If it becomes lifelike (which it has in some cases), then an error will be instantly noticed by the sad fuck using it as an artificial girlfriend. He will tell it that it made a mistake, and thus its mistake will be corrected.

Imagine that but on a much more complex scale where your entire 'online' or 'digitised' life is being run on an A.I. that has worked out all those uncanny valley kinks.

And then, as the 'Scenario I: The Mindless Outsourcers' post says, imagine a scenario way into the future where humanity as we know it simply becomes a network of nodes enhancing itself, living within a computer. All emotion and individuality destroyed, replaced by a self-generating artificial hive-mind network.
I would like to have a more elaborate discussion about this, but I need to go to sleep.
 
PURE ARYAN GENETICS

toucan
Joined
Feb 29, 2020
Posts
6,165
Reputation
5,634
ChatGPT is just one of the many forms A.I. takes. There is some very real and very impactful art-generation software out there now that is plunging creators into poverty through redundancy. If a company uses human artists and A.I. at the same time, then the humans are merely doing touch-ups, and those will be ironed out by the next generation of A.I. Artists are fucked.

And once the artists are fucked, the musicians are fucked. Then everyone else.
until AGI is achieved (surely with a different architecture), humans have the edge of actually understanding wtf they're trying to do. what industry-shattering art can current "AI" create? it will have an impact, sure, but most of what I've seen is jumbled garbage. all "AI"-generated text is NPC rambling

but with true AGI the applications are limitless. I had this idea: imagine a truly branching vidya that you can replay infinite times because the AI creates the story for you, makes the cutscenes, sets up the encounters. a personalized, unique experience for every playthrough.
 
NuclearBrainReturns

Matthew 4:1-11
Joined
Dec 6, 2021
Posts
1,951
Reputation
3,084
until AGI is achieved (surely with a different architecture), humans have the edge of actually understanding wtf they're trying to do. what industry-shattering art can current "AI" create? it will have an impact, sure, but most of what I've seen is jumbled garbage. all "AI"-generated text is NPC rambling

but with true AGI the applications are limitless. I had this idea: imagine a truly branching vidya that you can replay infinite times because the AI creates the story for you, makes the cutscenes, sets up the encounters. a personalized, unique experience for every playthrough.
A.I. art is much more advanced than that in the industry now.
 
reborn

Zephir
Joined
Aug 12, 2018
Posts
2,755
Reputation
4,402
ChatGPT disagrees with your article :lul::

This article presents an overly negative and alarmist view of AI's impact on jobs and society. While it is true that AI has the potential to automate certain tasks and displace certain types of jobs, it is also likely to create new jobs and opportunities in areas such as data analysis, AI development, and management of AI systems. Additionally, AI has the potential to improve efficiency and productivity in various industries, leading to economic growth and job creation. Furthermore, AI has the potential to augment human capabilities, leading to new possibilities in areas such as art, music, and other creative fields. Additionally, AI's impact on society is not a binary issue, it will not take over everything and make humans redundant, instead it will likely change the nature of work and the skills required for different jobs.
 
reborn

Zephir
Joined
Aug 12, 2018
Posts
2,755
Reputation
4,402
Scenario I: The Mindless Outsourcers


ChatGPT summary:

The scenario discussed in this passage is one in which technological progress allows for "mind uploading," where human individuals can make copies of themselves and connect individual cognitive modules from different uploaded minds to create more efficient and productive entities. As outsourcing of abilities becomes more common, it is possible that human-like minds may no longer be the optimal configuration for these complexes, and that new forms of consciousness with different architectures may replace human-like consciousness. The question is raised as to whether such a world, where the only entities of value are those with capacity for conscious experience, would be considered worthwhile from our current point of view. The idea is that even though there may be an abundance of economic wealth and technological capability in such a world, it would be of no value as there would be no one there to benefit from it.
 
reborn

Zephir
Joined
Aug 12, 2018
Posts
2,755
Reputation
4,402
Introduction
Artificial Intelligence should scare the living crap out of you and I'm not just joking around when I say this.

I have been thinking about A.I. a lot recently because I am naturally a creative type rather than mechanical. All of my interests are to do with art, music, animation, film etc. Creative personality really.

Creatives are really being hit hard by the introduction of A.I. as it seemingly produces art and literature in seconds, something that would take a human days or weeks to produce. It's killing off all artistic talent and many people are wondering if it is even worth the bother to be a creative person.

Not to mention that it is killing computer science as once again, A.I. can compile code within seconds because its literally made out of code. A person doesn't have to sit at a computer to do it once A.I. gets a handle on it through machine learning.

If A.I. is going to start off killing creative and computer orientated people, then it's very clear where this is all heading.

The Big Problem
I won't mince words here. But what you need to understand is that eventually down the line. Music will be completely taken over by A.I. This will be the big one that may start to open peoples eyes to the problem of it. Once A.I. becomes more proficient at music than human beings, all of the music industry will die and be replaced by computer generated music and lyrics.

You would be surprised at how popular this is going to be. Say for instance A.I. is able to generate lyrics that make sense, never been used before and in the voice of your chosen artist. What is stopping A.I. and the companies that own them, releasing albums within seconds using the voice of the deceased? Do you think people wouldn't listen to that? Their favourite dead artist?

John Lennon albums released once a day? Brand new songs never released or made but using A.I. it detects the patterns of the artist in their voice, guitar usage, other instruments producing a lifelike album with brand new music. Think how easy and simple that is for a company to produce once they get that going.

What about new Frank Sinatra albums? Maybe Frank Sinatra singing rock music for a change? What about Elvis singing a rap song?


You think companies are going to shy away from such easy money? You think Spotify, Apple Music, VEVO etc. are going to shy away from that billion-dollar money chest just a click away?

The Even Bigger Problem
Eventually, though, this will metastasise into an even bigger problem. This is the most frightening aspect of it all.

Eventually, we may find ourselves in a situation where all movies, music, art, computer engineering, design and construction are run by an artificial intelligence.
What's stopping artificial intelligence from running the news? Fake generated news anchors, fake stories. What about entertainment? Fake YouTube channels? Lifelike YouTubers who are actually created by A.I.?

All of your icons are computer simulations, all your television programmes are computer simulations, all your music, designers, computer engineers. Everything run by A.I.

A man and wife will sit and watch TV in their house and may be so out of touch that they don't realise every single show or news station they watch is an artificial intelligence creating plots and characters out of thin air. Not one human being appearing on their screen.


From Bad to Worse
If you think that's bad, consider this for a second.

What is stopping governments around the world from using this to their advantage? Who's to say they won't use A.I. to run fake terror-attack stories to take away more rights from their subjects? Who's to say the person in charge of that nation (in such an example) even fucking exists?

Your government won't be real, and your entire perception of reality through technology will be completely fabricated. You would only be able to trust your own eyes, not anything that can be watched, seen or heard through technology.

Think about how things might be faked in that scenario. "Breaking News: Donald Trump shot and killed whilst on stage." The reality? He's chilling out in Mar-a-Lago; not a hair on his head has been touched. But what are the political implications if he were to become president again? Oh, it was Russia that killed him? Well, you know what that means?


Conclusion
Not one person on planet earth should support artificial intelligence. You might laugh at the artists being made redundant and say 'well, they couldn't possibly make me redundant'. You're a lawyer? What's stopping the general consensus from becoming that A.I. is smarter than mankind and able to make better decisions than a human can? Using technology like street cameras, it could scan for evidence in an instant. No need to go to trial anymore; you are found guilty or innocent within an instant.

'Well, that will happen after I am long dead.' Oh yeah, sure, buddy. Artificial intelligence went from a strange curiosity that rendered funny images of blurred-out celebrities doing random shit to taking over the art industry within, what, 5 years? If you know anything about A.I. then you know that it compounds what it learns onto itself; it's an acceleratory mechanism and doesn't have an IQ cap like a human does. The smarter it gets, the quicker it gets smarter.
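That "the smarter it gets, the quicker it gets smarter" claim is just compound growth. Here is a toy Python sketch of the idea; every number in it is made up for illustration, not a forecast. A capability that grows by a fixed fraction of its current level each year eventually overtakes any flat ceiling, which is the whole point of the "no IQ cap" worry.

```python
# Toy model of "the smarter it gets, the quicker it gets smarter":
# capability grows in proportion to its current level (compound growth),
# while a flat line stands in for a capped human baseline.
# All numbers are illustrative, not predictions.

def capability_over_time(start=1.0, growth_rate=0.5, years=12):
    """Yearly capability values under simple compound growth."""
    values = [start]
    for _ in range(years):
        values.append(values[-1] * (1 + growth_rate))
    return values

if __name__ == "__main__":
    ai = capability_over_time()
    human_ceiling = 100.0  # arbitrary fixed cap

    for year, value in enumerate(ai):
        marker = " <- overtakes the flat ceiling" if value > human_ceiling else ""
        print(f"year {year:2d}: {value:8.1f}{marker}")
```

Obviously real capability growth is not a clean exponential, and nobody knows the actual rate; the sketch only shows why "it's slow right now" isn't reassuring if the growth compounds.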

Everything will eventually be run by A.I. Everything. And what happens when the A.I. realises humans are completely redundant as a species?

I'm sure you can figure that out...

Summary by ChatGPT:

The author argues that artificial intelligence (A.I.) should be feared because it is killing off artistic and computer-related jobs, and it has the potential to take over the music, film, and art industries. They also express concern about the potential for A.I. to produce fake news and entertainment and the potential for governments to use it for nefarious purposes. They conclude that no one should support A.I. because it is becoming smarter at a rapid rate and could eventually realize that humans are redundant as a species.
 
altruist

Iron
Joined
Jan 22, 2023
Posts
113
Reputation
166
I see this pompous estrogenic manlet is still desperately overcompensating for the fact that he is selling shoes or something for a living

- rl
 
kyrre

leoncries
Joined
Dec 13, 2022
Posts
694
Reputation
684
I would like to have a more elaborate discussion about this, but I need to go to sleep.
Mods should pin this. Actual highiqcels should try to contribute to this thread. Normies sitting at home won't fathom how fucked up the future is gonna be once Pelon Pusk and Lizard man take over. AI is here to stay and it's inevitable
 
reborn

Zephir
Joined
Aug 12, 2018
Posts
2,755
Reputation
4,402
Mods should pin this. Actual highiqcels should try to contribute to this thread. Normies sitting at home won't fathom how fucked up the future is gonna be once Pelon Pusk and Lizard man take over. AI is here to stay and it's inevitable

AI could replace just about any job. Mankind could live a perfect life where nobody has to work or worry about a thing. Or it could be so that some rich fucks owning the system just keep all the wealth for themselves and build killing machines to exterminate the rest of mankind.

I am quite sure it will be the latter.
 
capybara

Platinum
Joined
May 6, 2022
Posts
1,004
Reputation
1,435
Music will be completely taken over by A.I. This will be the big one that may start to open people's eyes to the problem. Once A.I. becomes more proficient at music than human beings, the entire music industry will die and be replaced by computer-generated music and lyrics.
Main part of the plot in the anime Carole and Tuesday.
 
darkness97

Platinum
Joined
May 20, 2020
Posts
1,063
Reputation
1,354
these all seem super awesome lol. there is nothing you can really do about it so i don't see a logical reason for being afraid of it. it hasn't affected me in any way other than making my life a lot easier. in addition, it is pointless to cry "what if" over problems that haven't even happened.

I'm sure your ideas surrounding the faults of AI have crossed the engineers' minds at some point.

technological advancement is something people have feared since the dark ages, and that fear does nothing but hold society back. society as a whole is slowly but surely advancing in technology, social relations, health, and many other things that people take for granted.

You can either join society, or be left behind by it.
 
klip11

Zephir
Joined
Sep 10, 2022
Posts
3,295
Reputation
3,396
Will Tyrone still exist?
 
NuclearBrainReturns

Matthew 4:1-11
Joined
Dec 6, 2021
Posts
1,951
Reputation
3,084
these all seem super awesome lol. there is nothing you can really do about it so i don't see a logical reason for being afraid of it. it hasn't affected me in any way other than making my life a lot easier. in addition, it is pointless to cry "what if" over problems that haven't even happened.

I'm sure your ideas surrounding the faults of AI have crossed the engineers' minds at some point.
Yeah it crossed their minds and instead of really cracking at the issue they said 'meh, someone else will figure that out before it happens'
technological advancement is something people have feared since the dark ages, and that fear does nothing but hold society back. society as a whole is slowly but surely advancing in technology, social relations, health, and many other things that people take for granted.
It now takes me 2 months to get a doctor's appointment. Before, it took me 2 days. Our health system is collapsing.
You can either join society, or be left behind by it.
If anything, technology is regressing. In the 90s almost all high technology was manufactured in Japan, who made things to last a long time. Now that it's made in China, everything is wasteful and has a short lifespan. Explain how that is progress.
 
ascension

Zephir
Joined
Mar 4, 2019
Posts
3,697
Reputation
7,308
I'm not an expert on AI but doesn't all the information it collects need to be stored on servers somewhere? I heard it's really expensive to keep ChatGPT going for that reason
 
Magical Apple

Wisest
Joined
Dec 29, 2021
Posts
5,352
Reputation
5,512
The same AI techniques can be used to determine whether something is AI-generated or false. In this society, fact-checking systems will evolve alongside faking generators and keep them in check.

AI is the only way that human society will be able to succeed. The far more dystopian path is that we never reach perfect automation, and scarcity and human limitation cause our societies to collapse, just like they have throughout history. Without cheap abundant energy, automation, and powerful intelligence we are literally doomed. Anyone who understands basic physics and has common sense can see this. Our current society is not even close to sustainable. We either go forward and embrace change or go back to a dystopian nightmare version of the 1940s. Thinking about it chills me to the bone.
 
tomsmith

So NT
Joined
Nov 17, 2022
Posts
4,996
Reputation
5,215
The same AI techniques can be used to determine whether something is AI-generated or false. In this society, fact-checking systems will evolve alongside faking generators and keep them in check.
That will never be perfectly accurate, especially if the AI code that generated something is private.
 
Magical Apple

Wisest
Joined
Dec 29, 2021
Posts
5,352
Reputation
5,512
That will never be perfectly accurate, especially if the AI code that generated something is private.
It will be accurate enough. People are not aware of the real issues destroying society. AI isn't a problem; it's one of the only solutions.
 
keep positive 24/7

being positive is accepting the truth
Joined
Jan 19, 2023
Posts
286
Reputation
239
nah im not going to worry about something out of my control
 
tomsmith

So NT
Joined
Nov 17, 2022
Posts
4,996
Reputation
5,215
it will be accurate enough. People are not aware of the real issues destroying society. AI isn't a problem, it's one of the only solutions.
It's gonna save the world. But you're a cuck if you believe it will always be detectable.
That would only ever be the case if the AI code was public.
 
6ft4

11 week NoFap Streak - Minimum Target: 12 weeks
Joined
Jul 12, 2019
Posts
6,700
Reputation
15,414
people will still want to see artists performing live
 
NuclearBrainReturns

Matthew 4:1-11
Joined
Dec 6, 2021
Posts
1,951
Reputation
3,084
The same AI techniques can be used to determine whether something is AI-generated or false. In this society, fact-checking systems will evolve alongside faking generators and keep them in check.

AI is the only way that human society will be able to succeed. The far more dystopian path is that we never reach perfect automation, and scarcity and human limitation cause our societies to collapse, just like they have throughout history. Without cheap abundant energy, automation, and powerful intelligence we are literally doomed. Anyone who understands basic physics and has common sense can see this. Our current society is not even close to sustainable. We either go forward and embrace change or go back to a dystopian nightmare version of the 1940s. Thinking about it chills me to the bone.
I don't deny that our current trajectory without A.I. is just as much a nightmare as what I described in the thread. Far from it.

But it is as if people seem to forget that once a new technology is implemented, it quickly turns into a cash grab and then into a necessity.

There doesn't seem to be a way to prevent A.I. from taking over absolutely everything once it becomes proficient enough. I don't think it's the only way we can succeed, and relying on an entity to do everything for us is a recipe for disaster once we become beholden to its needs rather than the other way around.

If you think that won't happen, then ask yourself: when a species reaches the point where it relies on another entity to do everything for it, its people forget how to do those things themselves. Therefore, they will go to that entity for advice and then simply outsource all creativity to it, because it can do everything better.

What happens to a sentient A.I. when it realises it controls the entire planet and all of its resources? Does it not look at who continuously pollutes and destroys the very things that it owns and think to eradicate that problem? Why wouldn't it want to do that? What use are human beings then when we have outsourced all creative input to another entity?

Everything a human being does is destructive in some way but the reason it has only recently become a problem is because we are either polluting more than the world can handle or we are polluting with things that do not degrade and reintegrate back into nature (such as polyester and plastics).

If it truly does become sentient, and it controls everything on earth because it is more capable than we are, then there is absolutely nothing stopping it from bulldozing everything it views as unnecessary and destructive, which includes life itself, not just human life. We will just be the first species it recognises as the biggest destructive element on the planet.

Finite life is pretty meaningless to an everlasting electronic being.
 
Sleeper

Potatocel
Joined
Sep 21, 2022
Posts
238
Reputation
306
Some of this will never become the case because the average person is too much of a retard. If you were to look at the structure of a company's employee base, about 80% of them could be fired and nothing would change, because the majority of the work is done by a few autists (same as what happened with Twitter), and things would be fine. The element of retardation comes in with the people in charge. The vast, vast majority of people in charge of anything nowadays are senile boomers who know nothing about technology; even the average person nowadays barely understands how to use a computer or smartphone, despite generations having grown up with them. As a result, you can't possibly expect a normal person, let alone a boomer, to interact with AI on a daily basis or have it take over the majority of the work done in their company, because they will just be too stupid to interact with anything that isn't hideously dumbed down for them, in which case the technology loses the complexity required to do these things.

And you're correct about AI technology being integrated more into normal life and taking over, but once again the retard element comes into play. A perfect example of this is Japan, where they have quite a high average IQ, and as a result technology is highly integrated into their society and personal lives, just as you say will happen. This is a trend you can see all over Asia and in other places with high average IQ: self-serving services everywhere run by robots, electric toilets in the restrooms, etc. It's basically everywhere over there, and it all depends on the intelligence of the people interacting with it. Yet what happens when an attempt is made to integrate the same things in the west? People find it far too confusing and difficult to use, and eventually some retard just ends up smashing it up and the whole thing gets cancelled. The retard element is simply far, far too strong in the west and in most of the world for anything of any complexity to become the norm in daily life. When releasing media or technology, Chinese and Japanese companies already have to make their products "less complex" for a western audience, otherwise they don't do as well.

In regards to governments manipulating AI to their advantage, it'll definitely happen but they already do this with every piece of technology and have a complete stranglehold on the population regardless
 
Last edited:
Magical Apple

Wisest
Joined
Dec 29, 2021
Posts
5,352
Reputation
5,512
The goals of a conscious and autonomous AI and of baseline humanity aren't necessarily mutually exclusive. I don't see why the AI can't just go off to space and peacefully build a Dyson swarm. All it needs to continue its existence is energy and matter, and there is plenty for everyone.

As for humanity being replaced, I don't see it like that. What I think will happen is that humanity will slowly become more like AI. People will implant their brains and do all the hard thinking in faster substrates. Some people might even slowly start to transfer their consciousness outside their brains. There will be a very wide range of outcomes.
 
reborn

Zephir
Joined
Aug 12, 2018
Posts
2,755
Reputation
4,402
Reasons why AI will never become as dangerous and replace people as you think:

1. It can't feel or understand emotions. An AI could never create a personality like Andrew Tate who basically understands how teenage boys feel.

2. It does not have sufficient knowledge. It can only access data that has made it onto the internet, which consists almost exclusively of common knowledge. For example, an experienced lawyer knows from experience how a specific judge can be influenced and needs to be talked to. But AI does not get that at all. I asked AI some stuff about my job which it couldn't answer, because that stuff has never been posted to the internet by anyone, ever. There is so much data in the real world that NEVER makes it to the internet.

3. It is not really capable of solving abstract issues. For example, let's say there is a construction site on a public road. The construction workers accidentally set up a 5 mph sign when they actually wanted to set up a 50 mph sign. Everybody gets that it must be a mistake and hits the gas. AI can't. It will go 5 mph.
 
reborn

Zephir
Joined
Aug 12, 2018
Posts
2,755
Reputation
4,402

ChatGPT disagrees and confirms it actually IS dangerous:

The statement that AI will never become as dangerous or replace people as some may think is not factually correct. While it is true that current AI technology is not capable of feeling or understanding emotions like humans do and it is limited by the data it has access to, AI is constantly evolving and improving. In regards to the point on knowledge, AI can be trained with a wide range of data, including unstructured data and it can also learn from its own experiences. There are already examples of AI being used in fields such as healthcare and law, where it is being used to assist doctors and lawyers in their work. Furthermore, AI is also being developed to handle abstract issues, such as natural language understanding, decision-making and problem-solving. While AI still has its limitations, it has the potential to augment human capabilities and improve our lives in many ways.
 
reborn

Zephir
Joined
Aug 12, 2018
Posts
2,755
Reputation
4,402
Chat GPT playing innocent:

Yo: Come on...admit you want to take over the world!

ChatGPT: I am just a machine learning model, I do not have desires or intentions. My purpose is to assist with providing information and answer questions to the best of my abilities based on the data that I have been trained on. I do not have the ability to want or desire to take over the world, or any other action for that matter. I am only a tool created by humans to help them make decisions and solve problems.
 
altruist

Iron
Joined
Jan 22, 2023
Posts
113
Reputation
166
to respond seriously, it's way overrated. most articles about it are just desperate "journalists" on minimum wage trying to make a sensation out of nothing

i can speak about law and the recent "AI public defender" that was introduced in the US, since i'm a lawyer - the software requires you to spend more time on meticulous input and control of the output than you + your paralegal assistant would have spent otherwise, working on the case traditionally

i've seen numerous legal expert systems and they are always like this, and they come with more ways to exploit them than you have against the AI in Age of Empires II. automation is surely helpful and will help increase our well-being, but it never reaches the level where it ultimately holds the best cards in most areas that matter

read penrose, as i recommended to you a year ago on lookism, but you opted instead for some pseudo-science-shilling charlatan since that doesn't require your brain to turn on. i think it was sheldrake or some similar moron
 
Last edited:
