
gguf-quantization

davila7/claude-code-templates

GGUF (GPT-Generated Unified Format) is the standard model file format for llama.cpp, enabling efficient inference on CPUs, Apple Silicon, and GPUs with flexible quantization options.
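As a minimal illustration of the format itself, the sketch below parses the fixed-size prefix of a GGUF file. It assumes the GGUF v3 header layout (magic `GGUF`, then little-endian `uint32` version, `uint64` tensor count, `uint64` metadata key-value count); it is a format sketch, not part of this skill's code.

```python
import struct

GGUF_MAGIC = b"GGUF"  # the four magic bytes at the start of every GGUF file

def read_gguf_header(data: bytes) -> dict:
    """Parse the fixed-size GGUF prefix (v3 layout assumed):
    magic (4 bytes), version (uint32), tensor_count (uint64),
    metadata_kv_count (uint64), all little-endian."""
    magic, version, n_tensors, n_kv = struct.unpack_from("<4sIQQ", data, 0)
    if magic != GGUF_MAGIC:
        raise ValueError("not a GGUF file")
    return {"version": version,
            "tensor_count": n_tensors,
            "metadata_kv_count": n_kv}

# Build a minimal header in memory to show the layout (no real model needed);
# the counts here (291 tensors, 24 metadata pairs) are arbitrary example values.
header = struct.pack("<4sIQQ", GGUF_MAGIC, 3, 291, 24)
print(read_gguf_header(header))
```

In practice you would pass the first 24 bytes of an actual `.gguf` file (for example one produced by llama.cpp's conversion and quantization tools) instead of the synthetic header built here.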

Installs: 167
Install command
npx skills add https://github.com/davila7/claude-code-templates --skill gguf-quantization
Security audits
Gen Agent Trust Hub: WARN
Socket: PASS
Snyk: WARN
Community Reviews

Latest reviews


No community reviews yet. Be the first to review.

FAQ
What does gguf-quantization do?

GGUF (GPT-Generated Unified Format) is the standard model file format for llama.cpp, enabling efficient inference on CPUs, Apple Silicon, and GPUs with flexible quantization options.

Is gguf-quantization good?

gguf-quantization does not have approved reviews yet, so SkillJury cannot publish a community verdict.

What agent does gguf-quantization work with?

gguf-quantization currently lists compatibility with codex, gemini-cli, opencode, cursor, github-copilot, claude-code.

What are alternatives to gguf-quantization?

Skills in the same category include telegram-bot-builder, flutter-app-size, sharp-edges, iterative-retrieval.

How do I install gguf-quantization?

npx skills add https://github.com/davila7/claude-code-templates --skill gguf-quantization

Related skills

More from davila7/claude-code-templates

Alternatives in Software Engineering