
Big GPT OSS i1 GGUF

Hugging Face · January 11, 2026 · mradermacher/big-gpt-oss-i1-GGUF

This repo is a large GGUF “quant pack” for suayptalha/big-gpt-oss, published by mradermacher for use with llama.cpp-style runtimes (LM Studio, llama.cpp CLI/server, OpenWebUI, etc.). The distinctive thing here is that it includes imatrix-weighted quants (and the .imatrix.gguf file itself), which is useful if you’re chasing better quality at a given size or want to generate your own quant variants.
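Since the pack ships many large files (including the `.imatrix.gguf` itself), it is worth verifying that a download is an intact GGUF before pointing a runtime at it. The GGUF container opens with a fixed little-endian header: a 4-byte `GGUF` magic, a uint32 version, then uint64 tensor and metadata-KV counts. A minimal sketch of that check (the file path is a placeholder):

```python
import struct

def read_gguf_header(path: str) -> dict:
    """Read the fixed 24-byte GGUF header: magic, version, counts (little-endian)."""
    with open(path, "rb") as f:
        header = f.read(24)
    if len(header) < 24 or header[:4] != b"GGUF":
        raise ValueError(f"{path}: not a GGUF file (bad magic)")
    version, tensor_count, kv_count = struct.unpack("<IQQ", header[4:24])
    return {
        "version": version,
        "tensor_count": tensor_count,
        "metadata_kv_count": kv_count,
    }

# Usage (hypothetical local filename):
# read_gguf_header("big-gpt-oss.i1-Q4_K_M.gguf")
```

A truncated or corrupted download typically fails the magic check immediately, which is much cheaper than discovering the problem at load time.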

It’s also a reminder that “GGUF” doesn’t automatically mean “small”: many of the published quant files are huge (100 GB+), and some are split into multiple parts (e.g. Q6_K is shipped as 5 parts). The README calls out recommended options like Q4_K_M and Q4_K_S and links to background reading on quant tradeoffs. If you’re experimenting, the simplest path is to pick one quant that matches your RAM/VRAM budget, load it in your preferred runtime, and sanity-check perplexity/answer quality on a small prompt set before investing time in larger downloads.
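For matching a quant to a memory budget, a back-of-the-envelope estimate from parameter count times bits-per-weight is usually enough. A minimal sketch, where the bits-per-weight figures are rough approximations for llama.cpp K-quants (real files also carry higher-precision tensors and metadata, and the parameter count below is purely illustrative):

```python
# Approximate bits-per-weight for common llama.cpp quant types.
# These are rough community figures, not exact file-size predictors.
APPROX_BPW = {
    "Q4_K_S": 4.6,
    "Q4_K_M": 4.9,
    "Q6_K": 6.6,
    "Q8_0": 8.5,
}

def estimate_gguf_gb(n_params: float, quant: str) -> float:
    """Estimate model size in GB from parameter count and quant type."""
    bpw = APPROX_BPW[quant]
    return n_params * bpw / 8 / 1e9  # bits -> bytes -> GB

# Example: a hypothetical 120B-parameter model at Q4_K_M:
# 120e9 * 4.9 / 8 / 1e9 = 73.5 GB -- before KV cache and runtime overhead.
```

The estimate deliberately ignores KV cache and context-length overhead, so leave headroom beyond the raw file size when choosing a quant for a given machine.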

Quick stats from the listing feed: pipeline: text-generation · 256 downloads.

View on Hugging Face

Source listing: https://huggingface.co/models?sort=modified