| Topic | Author | Date | Last reply by | Replies |
|---|---|---|---|---|
| Recommend an MoE model that a 32G MacBook Air M5 can run | Hermitist | Apr 26 | Hermitist | 19 |
| A question about host configuration for model training | jamme | Apr 26 | zhoukevin233 | 12 |
| Has anyone thought about further fine-tuning models yourselves? | archxm | Apr 25 | mingtdlb | 9 |
| Want to deploy a local OCR service to parse Meituan food-delivery order screenshots; please recommend a good OCR model | EchoPrince | Apr 28 | PersueYan | 48 |
| How do you network multiple GPUs together? | mingtdlb | Apr 26 | makictos | 30 |
| Is a DGX Spark capable/sufficient for these tasks? Can anyone answer (estimates welcome)? | qazwsxkevin | Apr 25 | diudiuu | 6 |
| Is there a simpler version of the new-api project? | novaren | Apr 28 | wukaige | 7 |
| A universal formula for token output speed when deploying local models | diudiuu | Apr 21 | diudiuu | 3 |
| Is local deployment actually reliable? | jdjingdian | Apr 20 | diudiuu | 6 |
| Why you should stop using Ollama | catazshadow | Apr 22 | seakingii | 14 |
| How much VRAM is enough for local LLMs? | s2555 | Apr 23 | s2555 | 14 |
| Thinking of buying a Mac mini M4 Pro 64G to run gemma4 31b Q4 with openclaw for everyday tasks; has anyone tested the speed? | Ken1028 | Apr 18 | fansttty | 32 |
| Looking for reliable local vibe coding; I have an 8-GPU L20 server | sqshanyao | Apr 16 | coefu | 2 |
| [Help] Ollama inference is extremely slow on DGX Spark; is switching to llama.cpp a better choice? | diudiuu | Apr 22 | qazwsxkevin | 46 |
| Roughly how good is Gemma 4 31B? Has local deployment become practical again? | unt | Apr 17 | xue777hua | 47 |
| Are there any usable 32B models for local vibe coding? | RatioPattern | Apr 14 | coefu | 7 |
| Gemma4 + LiteRT-LM really has something: e2b uses only about 2 GB of memory and runs blazingly fast on a MediaTek Dimensity Android phone | dacapoday | Apr 14 | diudiuu | 10 |
| How is Google's Gemma 4? Is it worth setting up locally? | wszzh | Apr 27 | liangyuan1985 | 19 |
| Running LLMs on an idle 16GB M1 Pro MBP | ahdw | Apr 14 | ahdw | 19 |