PowerBASIC model library

Additional AI Model Downloads

PowerBASIC-focused Qwen 9B DoRA GGUF model variants for local AI runtimes such as LM Studio, Ollama, llama.cpp-compatible tools, KoboldCpp, text-generation-webui, and other GGUF-capable local inference environments.

These downloads are optional companions for users who want local model behavior tuned toward PowerBASIC-oriented code assistance. Choose the quantization level that matches your available RAM/VRAM.

The GGUF files are hosted in the shared download area outside AISPR and work in any of the GGUF-capable runtimes listed above.

Start simple

Start with Q4_K_M unless you know your machine has enough memory for larger variants.

Raise quality

Use Q5_K_M or Q6_K when you want stronger output quality and can spend more memory.

Reference use

Use Q8_0 only for capable hardware, archival, conversion, or reference workflows.
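The guidance above boils down to matching quantization level to available memory. As a rough sketch (the thresholds below are assumptions based on the file sizes listed on this page, not official requirements — leave a few GB of headroom for context and runtime overhead):

```shell
#!/bin/sh
# Sketch: map available memory (GB) to a suggested quantization level.
# Thresholds are rough assumptions derived from the listed file sizes.
suggest_quant() {
  mem_gb="$1"
  if [ "$mem_gb" -ge 16 ]; then
    echo "Q8_0"      # ~9.53 GB file; comfortable headroom
  elif [ "$mem_gb" -ge 10 ]; then
    echo "Q6_K"      # ~7.36 GB file
  elif [ "$mem_gb" -ge 8 ]; then
    echo "Q5_K_M"    # ~6.47 GB file
  else
    echo "Q4_K_M"    # smallest, safest default
  fi
}

suggest_quant 8    # prints Q5_K_M
```

If in doubt, start at Q4_K_M and move up one level at a time while watching memory usage in your runtime.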

Hugging Face

Repository and model cards

All current model files and release metadata are published in one repository, including sizes, checksums, and direct download links.

Available files

Qwen 9B PowerBASIC GGUF variants

Hosted on Hugging Face: https://huggingface.co/Theogott/pb-qwen3_5-9b-powerbasic-ggufs

Balanced · ~6.47 GB

Q5_K_M

A strong middle ground with better output quality while staying manageable on many local AI setups.

PB_qwen3_5_9b_dora-Q5_K_M.gguf

Download model
Higher quality · ~7.36 GB

Q6_K

Use this when you can spend more memory for stronger answers and more stable code-oriented behavior.

PB_qwen3_5_9b_dora-Q6_K.gguf

Download model
High precision · ~9.53 GB

Q8_0

For capable hardware where quality matters more than download size and runtime memory footprint.

PB_qwen3_5_9b_dora-Q8_0.gguf

Download model
Highest precision · ~15.34 GB

BF16

Use this when you want higher arithmetic precision for advanced coding workflows, heavy instruction editing, or conversion and benchmark experiments.

PB_qwen3_5_9b_dora-F16.gguf

Download model
Now available

SPR Models

Hosted on Hugging Face: https://huggingface.co/Theogott/pb-qwen3_5-9b-powerbasic-ggufs

SPR GGUF · ~4+ GB

Q4_K_M

SPR variant with PowerBASIC-oriented instruction tuning, aimed at local automation use cases.

spr_qwen3_5_9b_dora_q4_k_m.gguf

Download model
SPR GGUF · ~4+ GB

Q4_K_M VRAM-safe

VRAM-safe variant with the same token-space structure and a smaller memory footprint.

spr_qwen3_5_9b_dora_vramsafe_q4_k_m.gguf

Download model
SPR GGUF · ~5+ GB

Q6_K VRAM-safe

Higher precision while keeping memory pressure manageable.

spr_qwen3_5_9b_dora_vramsafe_q6_k.gguf

Download model
SPR GGUF · ~5+ GB

Q8_0 VRAM-safe

For stronger output quality in local runtimes with enough memory available.

spr_qwen3_5_9b_dora_vramsafe_q8_0.gguf

Download model
SPR GGUF · ~6+ GB

BF16 VRAM-safe

Higher precision variant for advanced inference and conversion workflows.

spr_qwen3_5_9b_dora_vramsafe_bf16.gguf

Download model
Where to use them: These are standard GGUF model files for local inference. Use them in LM Studio, Ollama (after creating/importing a Modelfile), llama.cpp-compatible tools, KoboldCpp, text-generation-webui, Jan, GPT4All-style GGUF loaders, and other runtimes that accept GGUF models. If a runtime asks for a model folder, place the downloaded file in your local model library and select it from there.
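For the Ollama path, the Modelfile mentioned above can be as small as a single FROM line pointing at the downloaded GGUF. A minimal sketch (the relative path and model name here are illustrative choices, not part of the release):

```
# Modelfile — import a downloaded GGUF into Ollama
FROM ./PB_qwen3_5_9b_dora-Q5_K_M.gguf
```

Register it with `ollama create pb-qwen -f Modelfile`, then run it with `ollama run pb-qwen`.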