GGUF builds of a PowerBASIC-focused Qwen 9B DoRA model for local AI runtimes such as LM Studio, Ollama, llama.cpp, KoboldCpp, text-generation-webui, and other GGUF-capable local inference environments.
These downloads are optional companions for users who want local model behavior tuned toward PowerBASIC-oriented code assistance. Choose the quantization level that matches your available RAM/VRAM.

Start with Q4_K_M unless you know your machine has enough memory for larger variants.
Use Q5_K_M or Q6_K when you want stronger output quality and can spend more memory.
Use Q8_0 only on capable hardware, for archival, conversion, or reference workflows.
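The guidance above can be sketched as a simple memory-based picker. The gigabyte figures below are rough estimates for a ~9B-parameter model at each quantization level, not published file sizes; check the repository listing for exact numbers.

```python
# Rough memory-based picker for the quantization variants described above.
# The GB figures are approximate estimates for a ~9B model, NOT published sizes.
APPROX_FILE_GB = {
    "Q4_K_M": 5.5,   # compact first choice
    "Q5_K_M": 6.5,   # stronger quality for more memory
    "Q6_K":   7.5,
    "Q8_0":   9.5,   # capable hardware only
}

def pick_quant(free_mem_gb: float, headroom_gb: float = 2.0) -> str:
    """Return the largest variant whose file plus runtime headroom fits."""
    best = "Q4_K_M"  # default recommendation from the notes above
    for name, size in APPROX_FILE_GB.items():
        if size + headroom_gb <= free_mem_gb:
            best = name
    return best

print(pick_quant(8.0))   # modest machine
print(pick_quant(16.0))  # roomy machine
```

The 2 GB headroom is an illustrative allowance for context buffers and runtime overhead; actual requirements vary with context length and runtime.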
All current model files and release metadata are published in one repository, including sizes, checksums, and direct download links.
Hosted on Hugging Face: https://huggingface.co/Theogott/pb-qwen3_5-9b-powerbasic-ggufs
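Since the repository publishes checksums alongside the files, a downloaded GGUF can be verified locally. A minimal sketch; the filename and expected digest are placeholders to be taken from the release metadata:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large GGUF files never load fully into RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholders: substitute the real filename and the digest from the repo metadata.
# expected = "<digest from the release metadata>"
# assert sha256_of("PB_qwen3_5_9b_dora-Q4_K_M.gguf") == expected
```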
Compact and practical first choice. Start here unless you already know your machine has enough RAM or VRAM for larger variants.
PB_qwen3_5_9b_dora-Q4_K_M.gguf
A strong middle ground with better output quality while staying manageable on many local AI setups.
PB_qwen3_5_9b_dora-Q5_K_M.gguf
Use this when you can spend more memory for stronger answers and more stable code-oriented behavior.
PB_qwen3_5_9b_dora-Q6_K.gguf
For capable hardware where quality matters more than download size and runtime memory footprint.
PB_qwen3_5_9b_dora-Q8_0.gguf
Use this when you want higher arithmetic precision for advanced coding workflows, heavy instruction editing, or conversion and benchmark experiments.
PB_qwen3_5_9b_dora-F16.gguf
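For Ollama (one of the runtimes named above), a downloaded GGUF can be imported through a Modelfile. A minimal sketch, assuming the Q4_K_M file sits in the current directory; the parameter value is an illustrative default, not a tuned setting:

```
# Modelfile: import the local GGUF into Ollama
FROM ./PB_qwen3_5_9b_dora-Q4_K_M.gguf
PARAMETER temperature 0.7
```

Then `ollama create pb-qwen -f Modelfile` registers the model and `ollama run pb-qwen` starts it; the name `pb-qwen` is arbitrary.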
SPR variant tuned for PowerBASIC-oriented instruction following and local automation use cases.
spr_qwen3_5_9b_dora_q4_k_m.gguf
VRAM-safe variant with the same token-space structure and a more conservative memory footprint.
spr_qwen3_5_9b_dora_vramsafe_q4_k_m.gguf
Higher precision while keeping memory pressure manageable.
spr_qwen3_5_9b_dora_vramsafe_q6_k.gguf
For stronger output quality in local runtimes with enough memory available.
spr_qwen3_5_9b_dora_vramsafe_q8_0.gguf
Higher precision variant for advanced inference and conversion workflows.
spr_qwen3_5_9b_dora_vramsafe_bf16.gguf