
| Topic | Author | Posted | Last reply by | Replies |
| --- | --- | --- | --- | --- |
| Hardware and local LLMs two years from now | workbest | Apr 20 | Very0ldMan | 27 |
| Has anyone tested gemma4 31b 16 on a Mac Studio? | wali77 | Apr 7 | nrtEBH | 4 |
| gemma4:e4b is surprisingly good — even a 1050 Ti generates articles well | andyskaura | Apr 7 | iango | 29 |
| Home PCs have too little memory bandwidth to run local LLMs well | Eleutherios | Apr 14 | Eleutherios | 17 |
| Why is Qwen so hyped yet so underwhelming in practice? What is its real capability? | unt | Apr 4 | cvbnt | 5 |
| Any recommendations for running LLMs locally on a MacBook with an M5 chip and 32 GB RAM? | Hermitist | Apr 10 | Liu6 | 25 |
| A problem with the local qwen model | workbest | Apr 2 | workbest | 3 |
| Curious: does anyone write code with local models? | turfbook | Apr 1 | turfbook | 3 |
| Locally deployed deepseek 70B outputs garbled answers | weishao666 | Mar 28 | gigishy | 10 |
| Can a 3090 run text embedding models? Is a 3090 overkill for that? | catyun88 | Mar 23 | coefu | 2 |
| Qwen3.5-35B-A3B | Livid PRO | | | 8 |
| I want to deploy a local LLM to analyze stock trends — is there a model specialized for stocks? | pangfahe | Mar 12 | beasnail | 5 |
| How to use a self-hosted ollama model in VS Code | davidyin | Mar 15 | oldlamp | 9 |
| The overthinking problem with qwen3.5 | cat9life | Mar 24 | cat9life | 12 |
| Is minimax down?? | 57show | Mar 8 | | |
| What is the best TTS for local deployment right now? There are too many to try them all | iorilu | Mar 2 | wannerkingof69 | 5 |
| How to use opencode on an internal network | caihp | Feb 26 | gorvey | 9 |
| microgpt.py | Livid PRO | | | |
| Which ~30B small model has the best coding ability? | summerLast | Feb 12 | summerLast | 11 |