fix: switch model loading to FP16 + CPU offload

RTX 3050 8GB cannot fully load Qwen3.5-9B, even with quantization:
- bitsandbytes 4-bit does not support CPU offload
- bitsandbytes 8-bit has version-compatibility issues with accelerate
- FP16 + CPU offload loads, but inference quality is very poor (garbled output)
- inference speed is only 0.4 tokens/s

Conclusion: an RTX 3050 8GB is not suitable for running Qwen3.5-9B.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
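The FP16 + CPU offload configuration the commit describes is typically expressed through `accelerate`'s `device_map` machinery in `transformers`. The sketch below is illustrative, not the commit's actual code: the helper name `build_offload_kwargs`, the 6 GiB GPU budget, and the 16 GiB CPU cap are assumptions; only the `vsp/qwen3.5-9b/offload/` directory comes from the diff.

```python
def build_offload_kwargs(gpu_budget_gib: int = 6,
                         offload_dir: str = "vsp/qwen3.5-9b/offload") -> dict:
    """Build kwargs for AutoModelForCausalLM.from_pretrained that cap GPU
    memory and spill the remaining FP16 weights to CPU RAM / disk.

    gpu_budget_gib is kept below the card's 8 GiB to leave headroom for
    activations and the CUDA context. All values here are example numbers.
    """
    return {
        "torch_dtype": "float16",       # FP16 weights (torch.float16 also accepted)
        "device_map": "auto",           # let accelerate place layers across devices
        "max_memory": {0: f"{gpu_budget_gib}GiB", "cpu": "16GiB"},
        "offload_folder": offload_dir,  # spill dir; matches the path added to .gitignore
        "low_cpu_mem_usage": True,      # stream weights instead of materializing twice
    }

# Intended usage (requires transformers + accelerate installed):
# model = AutoModelForCausalLM.from_pretrained("vsp/qwen3.5-9b/model", **build_offload_kwargs())
```

Layers that exceed the GPU budget run on the CPU, which is consistent with the ~0.4 tokens/s the commit reports: every forward pass pays for weight transfers and CPU matmuls.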
.gitignore
@@ -14,6 +14,7 @@ build/
 *.pth
 *.onnx
 vsp/qwen3.5-9b/model/
+vsp/qwen3.5-9b/offload/
 
 # Env
 .env