Software engineering is the worst choice you can make

It doesn't matter how niche something is: an AI will be able to read all of the documentation and all of the source code, and experiment until it has figured out how to use whatever niche code is involved, faster than a human.
Also, it seems you do not understand how machine learning works. The "reading" and "source code" will all be part of the training data and used as context, yes, but so will a much larger amount of data. Although one may highlight this context, the data as a whole is weighted to produce an outcome. Hence you may get "hallucinations" on even the simplest commands.

Although the chances of hallucinating on a straightforward answer given in documentation are very low, imagine this at the scale of a project requiring many systems and many files of code all interacting to form a solution: the chances are much higher that something will be off. Now, as a developer, if you are relying on these models for the simplest of solutions, how do you expect to solve, or even prompt these models to solve, those issues? This is just one example where your scenario falls apart, to say nothing of situations where documentation is scarce and hard to follow (which is not a rare occurrence).
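To put that compounding-error argument in concrete terms, here is a toy calculation; the 99% per-step figure is an assumption for illustration, not a measured number:

```python
# Toy calculation: even a high per-step accuracy compounds badly across a
# large project. The 0.99 figure is an assumption, not a benchmark result.
p_correct_per_step = 0.99

for steps in (1, 10, 100, 1000):
    p_all_correct = p_correct_per_step ** steps
    print(f"{steps:>5} dependent steps -> P(everything correct) = {p_all_correct:.2e}")

# At 99% per-step reliability, a thousand interacting steps leave roughly
# a 0.004% chance that nothing anywhere is off.
```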
 
What outlooks? How did they calculate their predictions?

I don't think you've looked into this at all.


We are on track to create smarter-than-human intelligence: intelligence that can solve any problem humans can, faster than humans can.



That's the paradox of automation. You can read about it if you look up "paradox of automation".

It stops applying when a field becomes fully automated: elevator operators, pin boys, data entry clerks, etc.

It's not a matter of "if" but "when" programming will be fully automated. If you're open to discussing the "when," we can do that, but you should first admit that programming is going to be fully automated.

People who write software don't care who or what creates the software, as long as it's bug-free, does what it's supposed to do, and is written in a way that's maintainable. Current AI can't do this as well as humans, but future AI will be able to.
You can refer to my previous post as a response to most of this. Also, as for these outlooks and predictions, do me a favor and read the latest articles in any renowned AI/ML journal regarding the state of AI use for coding and similar tasks.

Brave statement, saying I have not looked into this. I work heavily in research in a quantitative field and study machine learning, with work on published papers regarding the use of multimodal models for quantitative finance (similar to coding in the sense of constraints). I have worked with models and agents extensively in industry at some of the top companies you have heard of, and have colleagues at many others.
 
Also, it seems you do not understand how machine learning works.
I guarantee I have a better understanding of architectures for artificial general intelligence than you do. AGI architectures are what this entire field is moving towards.

The "reading" and "source code" all will be apart of the training data and used as context,
That's not how the latest models work. They can learn in real time, after they are trained. They also have memory, meaning they can read something and remember it.

Agents will be able to conduct experiments and learn from those experiments.

yes, but so will a much larger amount of data. Although one may highlight this context, the data as a whole is weighted to produce an outcome. Hence you may get "hallucinations" on even the simplest commands.

Although the chances of hallucinating on a straightforward answer given in documentation are very low, imagine this at the scale of a project requiring many systems and many files of code all interacting to form a solution: the chances are much higher that something will be off. Now, as a developer, if you are relying on these models for the simplest of solutions, how do you expect to solve, or even prompt these models to solve, those issues? This is just one example where your scenario falls apart, to say nothing of situations where documentation is scarce and hard to follow (which is not a rare occurrence).
You're talking about current public models. I'm discussing where the field is heading.

The latest models already learn from their own thinking. They think for a while, and then update their weights based on what they've learned from their own thoughts.
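What's being described here is close to self-training loops in the research literature (e.g. STaR-style self-taught reasoning). A minimal sketch, where the model object, generate_reasoning(), and fine_tune() are hypothetical stand-ins rather than any real API:

```python
# Minimal sketch of a "learn from your own thoughts" loop, in the spirit of
# self-training methods such as STaR. The model object, generate_reasoning(),
# and fine_tune() are hypothetical stand-ins, not a real library API.

def self_training_round(model, tasks):
    kept = []
    for task in tasks:
        # Think: sample a chain of reasoning plus a final answer.
        thought, answer = model.generate_reasoning(task.prompt)
        # Check the outcome against something verifiable (tests, a grader).
        if task.verify(answer):
            kept.append((task.prompt, thought, answer))
    # Update the weights on reasoning that actually worked, so good
    # thought patterns become more likely in the next round.
    model.fine_tune(kept)
    return model
```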

The next step for agents is agents that conduct experiments and learn from them. This means an agent "plays with an API" like a human would: it feeds different data into the functions and remembers the results. Important information from short-term memory will go into long-term memory (updating the weights) and improve the agent's intuition, so it takes fewer experiments to figure out future APIs.
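As a rough sketch of what such an experimenting agent could look like (the agent object and its is_important()/remember() methods are invented for illustration, not an existing framework):

```python
# Hypothetical sketch of an agent "playing with an API": probe each function
# with varied inputs, keep every observation in short-term memory, then
# consolidate the important ones into long-term memory. The agent object and
# its is_important()/remember() methods are invented for illustration.

def explore_api(agent, api_functions, probe_inputs):
    short_term = []
    for fn in api_functions:
        for args in probe_inputs:
            try:
                result = fn(*args)            # run one experiment
                short_term.append((fn.__name__, args, result))
            except Exception as err:          # failures are informative too
                short_term.append((fn.__name__, args, repr(err)))
    # Consolidation: in the scenario above this would be a weight update;
    # here it is just filtering for what the agent judges worth keeping.
    long_term = [obs for obs in short_term if agent.is_important(obs)]
    agent.remember(long_term)
    return long_term
```

Whether that consolidation step is a literal weight update or an external memory store is exactly the open design question.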

You can refer to my previous post as a response to most of this. Also, as for these outlooks and predictions, do me a favor and read the latest articles in any renowned AI/ML journal regarding the state of AI use for coding and similar tasks.
This is not a prediction of how AI will evolve, only an assessment of its current capabilities.

Brave statement, saying I have not looked into this. I work heavily in research in a quantitative field and study machine learning, with work on published papers regarding the use of multimodal models for quantitative finance (similar to coding in the sense of constraints). I have worked with models and agents extensively in industry at some of the top companies you have heard of, and have colleagues at many others.
Then you have a very limited understanding of AI.

Current algorithms and models exist primarily to bootstrap future, more powerful algorithms and models. These models will always be running experiments, always learning from the results of those experiments, always thinking, and always learning from the results of those thoughts.

Yes, there are some use cases that you and your colleagues are playing with, but these models are primarily stepping stones to agents with memory that learn and think.
 
You're wrong, though. Although the majority of simple web dev jobs, like creating a page and such, can be done mostly using LLMs, there are many other things to consider as you become a more senior or specialized engineer.

Distributed systems, identification of latency bottlenecks in pipelines, and algorithmic considerations and constraints, just to name a few. These will never be fully solvable by LLMs, since you often need very unique solutions that are too niche for an LLM or AI to suggest unless you basically prompt it with the solution.

Even in basic CRUD work at the top companies, much of the infrastructure is internal and constantly changing, and because of this even the in-house LLMs struggle to keep up with useful solutions for some mediocre tasks.

Junior devs will always be needed, as they must be trained with knowledge of the systems and workflow to bring them up to senior.

Jobs are not going anywhere.
The same argument was made to me around a year ago.
What you are expecting is something that runs a whole company; I thought your whole argument was engineers getting replaced by AI. Those are two completely different things. It does have contextual understanding: Claude 2.1 literally demonstrated it with 200k tokens, although it was nowhere near as consistent as ChatGPT.

Then why don't you enlighten me on how either of those can resolve a zero-day vulnerability in a timely manner, instead of providing another ad hominem? You're running out of those.
It won't. Companies have DevOps lifecycles to do this, and this should always be automated; of course companies with ancient infra need it. You can just use ChatGPT to help migrate an entire infrastructure, even while you were saying "those models still lack contextual understanding and are incapable of dynamically learning from ongoing discussions the same way humans can". I've literally used ChatGPT to migrate from manual deployments to something more scalable with Kubernetes and Helm.

Someone will always be on call, but just how many of them will be replaced in the next 3-5 years, no one knows. I don't see it "furthering your point".
I mean, when over 75% of them believe replacement is possible, I want to hear them out. The companies that produce these chatbots believe large replacements will happen (specifically Altman), and Anthropic was born out of concern for AI safety. The people on Blind aren't as delusional as the people on LinkedIn; most of them are shitposting, but a lot of them are at prestigious companies because they are top engineers.
Also, there are dozens of posts on Blind, with a ton of senior engineers talking about being replaced. Why do you think they say this despite being some of the most competitive candidates?


GPTs are supposed to be a step towards this; if your company has ever tried them out, you would know. I never said jobs are entirely gone; I'm just saying we are already seeing that crunch, and it's no longer "getting a degree = getting a job" like it was between 2014 and 2020. There is literally no reason for Meta, Microsoft, and Google to all be doing their largest-percentage layoffs soon based on "performance". Go look at any of FAANG's internal Blind channels; why are they freaking out so much about it?

In conclusion: no, I don't think jobs are disappearing just yet; I still hold the same sentiment as I did a year ago. I'm not saying this for myself (I've done like five internships at notable companies); it's just a warning for someone going into SWE four years from now. Jobs transforming to use LLMs may not necessarily create more jobs.
 

Brave statement, saying I have not looked into this. I work heavily in research in a quantitative field and study machine learning, with work on published papers regarding the use of multimodal models for quantitative finance (similar to coding in the sense of constraints). I have worked with models and agents extensively in industry at some of the top companies you have heard of, and have colleagues at many others.
I mean, I sure hope you're justified; I assume you have at least done PhD-level research into this, and if so I shouldn't be talking. But being a data scientist or machine learning engineer at a FAANG is far different from being an actual researcher. For example, even Yann LeCun has changed his opinions multiple times in the past months. Why has his opinion changed so much, to the point that he's just reiterating Amodei and Altman?
 
Indians will take over

All the white incel programmers will be replaced by Rajeet, who earns 30% of their salary but works longer hours
 
What a cope thread, tbh. SE has always been a meme. The endgame was always to build your own thing or to pivot to teaching. Or get lucky and inherit some niche garbage to maintain.
 
