lmaxx
Also, it seems you do not understand how machine learning works. The "reading" and the "source code" will all be part of the training data and used as context, yes, but so will a much larger amount of other data. Although one may highlight this context, the data as a whole is weighted to produce an outcome. That is why you can get "hallucinations" even on the simplest commands.
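Here is a toy sketch of that weighting (purely illustrative: this is not how any particular model works, and every number below is invented). Even when the documented answer dominates the learned distribution, the rest of the training data still holds probability mass, so sampling will occasionally produce a wrong answer to a simple question:

```python
import random

# Toy illustration (NOT a real model): the output distribution is shaped by
# ALL of the training data, not just the documentation you care about.
# All probabilities here are made up for demonstration.

# Hypothetical mass the model assigns to completions of a simple question
# like "what does the --force flag do?" after training. The correct answer
# from the docs dominates, but patterns learned from other, unrelated tools
# still hold some probability mass.
next_token_probs = {
    "overwrite": 0.90,   # what the niche library's docs actually say
    "delete":    0.07,   # association learned from other tools
    "recurse":   0.03,   # another plausible-but-wrong association
}

def sample(probs):
    """Sample one completion according to the weighted distribution."""
    r = random.random()
    cumulative = 0.0
    for token, p in probs.items():
        cumulative += p
        if r < cumulative:
            return token
    return token  # fallback for floating-point rounding

# Even at 90% accuracy on a single simple fact, wrong answers still appear.
draws = [sample(next_token_probs) for _ in range(1000)]
print("hallucination rate:", 1 - draws.count("overwrite") / len(draws))
```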
You claimed: "It doesn't matter how niche something is, an AI will be able to read all of the documentation, and all of the source code, and experiment until it's figured out how to use whatever niche code is being used, faster than a human."

Although the chance of hallucinating on a straightforward answer given directly by the documentation is very low, imagine this at the scale of a project requiring many systems and many files of code all interacting to form a solution - the odds that something will be off are much higher. Now, as a developer, if you rely on these models for even the simplest solutions, how do you expect to solve these issues yourself, or even prompt the models to solve them? This is just one example where your scenario falls apart, to say nothing of situations where documentation is scarce and hard to follow (which is not a rare occurrence).
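To put rough numbers on that compounding effect (the 98% figure is invented for illustration, and it assumes errors are independent): a per-answer accuracy that looks excellent in isolation still makes a large multi-file project unlikely to come out clean.

```python
# Hypothetical numbers for illustration only: if each generated component is
# correct with probability 0.98, and a working solution needs every one of
# them to be correct, the odds of a clean result drop fast as the project
# grows.
p_single_correct = 0.98

for n_components in (1, 10, 50, 100):
    p_all_correct = p_single_correct ** n_components
    print(f"{n_components:>3} components: "
          f"P(everything is right) = {p_all_correct:.2f}")

# Output:
#   1 components: P(everything is right) = 0.98
#  10 components: P(everything is right) = 0.82
#  50 components: P(everything is right) = 0.36
# 100 components: P(everything is right) = 0.13
```

The exact figures do not matter; the point is that per-answer reliability multiplies away as the number of interacting pieces grows.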