huggingface-local-models
by huggingface
huggingface-local-models helps you find Hugging Face models that run locally with llama.cpp and GGUF, choose a practical quantization, and launch them on CPU, Apple Metal, CUDA, or ROCm. It covers model discovery, exact GGUF file lookup, server-vs-CLI setup, and a fast path to backend development and private local inference.
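The server-vs-CLI launch described above can be sketched with llama.cpp's own tools, which accept a Hugging Face repo reference via the `-hf` flag; the repo name and quant tag below are illustrative placeholders, not recommendations from this listing:

```shell
# Serve a GGUF model fetched directly from Hugging Face (assumes a recent
# llama.cpp build with the -hf flag). Repo and quant tag are placeholders --
# substitute the model and quantization you chose during discovery.
llama-server -hf bartowski/Meta-Llama-3.1-8B-Instruct-GGUF:Q4_K_M \
  --port 8080 --ctx-size 4096

# Or run a one-off prompt from the CLI instead of keeping a server up:
llama-cli -hf bartowski/Meta-Llama-3.1-8B-Instruct-GGUF:Q4_K_M \
  -p "Hello" -n 32
```

The server path exposes an OpenAI-compatible HTTP endpoint for backend development, while the CLI path suits quick local checks of a model and quant before wiring it into an application.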
Backend Development
