Buying a prebuilt desktop, 8GB VRAM, ~$500 budget?
Which DeepSeek model for 3090 + 64 GB of RAM?
3000 series owners what's your plan?
[KCD2] The complaints about the saving system are silly
How much are you burning every week?
Hot take: Vibe Coding is NOT the future
Getting the 5090 was genuinely impossible so I used some of the money to upgrade everything else and keep my 4090
Domain for $0.18
The normies have failed us
What to expect in 2025 for running big LLMs
New Junior Developers Can’t Actually Code
Not sure if people realised neovim was the most admired 'IDE' in the Stack Overflow survey 2024.
🌿 Namu.nvim - A Different Take on Symbol Navigation - Like Zed
Am I the only one who thinks 1300€ for a 5070 Ti is insane?
How come people who drag elbows aren't "faster"?
What's the best LLM I can run with at least 10 t/s on 24 cores, 215GB ram & 8GB vram?
Increasing model context length will not get AI to "understand the whole code base"
RAM Upgrade Dilemma: 64GB Dual-Channel vs. 32GB Quad-Sticks? Help Me Decide!
Why did you start using Arch Linux?
DeepSeek drops recommended R1 deployment settings
Did ollama update and get faster?
NoLiMa: Long-Context Evaluation Beyond Literal Matching - Finally a good benchmark that shows just how bad LLM performance is at long context. Massive drop at just 32k context for all models.
Can liter-bikes carry as much corner speed as a Ninja 400?
Talk me out of buying this 512GB/s Gen 5 NVMe RAID card + 4 drives to try to run 1.58bit DeepSeek-R1:671b on (in place of more RAM)
Why is neovim especially bad for Java?