| Topic | Author | Date | Last reply by | Replies |
|---|---|---|---|---|
| ollama local model isn't saturating the GPU | CNYoki | Jul 24, 2024 | clemente | 12 |
| Can a Tesla V100-DGXS-32GB run ollama or vllm? | idblife | Sep 25, 2024 | woscaizi | 2 |
| Which GPU offers the best value for running SD and local LLM inference? | cinlen | Jul 1, 2024 | crackidz | 38 |
| Sharing a large model I trained myself | Azure99 | Jan 9, 2025 | ibox163 | 54 |
| Looking for a WebUI that can connect to both ChatGPT and ollama | qweruiop | Jul 18, 2024 | kangfenmao | 3 |
| Any cost-effective services for trying out open-source LLMs and deploying them in production? | wencan | Jun 11, 2024 | bkdlee | 7 |
| Inference acceleration frameworks that support GPUs with different VRAM sizes | whyorwhynot | May 30, 2024 | whyorwhynot | 4 |
| Deploying llama3:70b locally for ~100 concurrent users: roughly how many 4090s would it take? | leeum | Jul 9, 2024 | keakon | 52 |
| Is buying a 3090 Ti for AI a sensible choice right now? | Tuatara | May 24, 2024 | glouhao | 91 |
| What machine specs do open-source AI models need? I'd like to try image, language, and speech models | fushall | May 16, 2024 | AlexHsu | 14 |
| Help needed with deploying a large model locally | jjyyryxdxhpyy | Dec 12, 2024 | skykk1op | 39 |
| Tinkered with running Llama3 on a NAS... the results were pretty rough | CoffeeLeak | May 4, 2024 | lchynn | 6 |
| llama 3 is out, and it looks impressive... | t41372 | Jul 24, 2024 | kangfenmao | 12 |
| ollama run throws an error, asking for help | xmtpw | Aug 21, 2024 | finalsatan | 4 |
| Testing the Llama-3-8B model's instruction-following ability | smalltong02 | Apr 30, 2024 | qinfengge | 16 |
| Is there a way to integrate open-source AI models into Electron? | superliwei | Apr 29, 2024 | jones2000 | 19 |
| Currently the fastest model: Microsoft Phi-3 | zsxzy | Apr 26, 2024 | xudong | 2 |
| Thanks, Microsoft: Phi-3 Mini lets even my eight-year-old laptop do local LLM translation | iX8NEGGn | Apr 26, 2024 | SuperMaxine | 13 |
| Any good open-source classification tools or models to learn from? | kite16x | Apr 26, 2024 | kite16x | 4 |