wunderkind
wonder child
- Joined
- Mar 28, 2026
- Posts
- 85
- Reputation
- 78
Downloading the model as we speak — the benchmarks seem interesting, but I wanted to test it out myself. I only recently got into this stuff after all the Claude Code source code leak drama. Since I have a modified RTX 2080 Ti with 22GB of VRAM, I've been running opencode locally with Qwen 3.5. Do any of you guys use local LLMs, and what are your use cases?