
quantizing-models-bitsandbytes

davila7/claude-code-templates

bitsandbytes reduces LLM memory by roughly 50% (8-bit) or 75% (4-bit) relative to fp16 weights, typically with under 1% accuracy loss.
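The memory savings come from storing weights in fewer bits and keeping a per-group scale factor for dequantization. Below is a simplified, pure-Python sketch of the absmax int8 round-trip idea; it is illustrative only — the actual bitsandbytes kernels quantize vector-wise / block-wise and handle outliers separately.

```python
# Simplified absmax int8 quantization round-trip (illustrative, not the
# bitsandbytes implementation): floats are mapped to int8 in [-127, 127]
# using the absolute maximum as the scale.

def quantize_absmax_int8(weights):
    """Quantize a list of floats to int8 values plus one float scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from int8 values and the scale."""
    return [v * scale for v in q]

weights = [0.5, -1.2, 0.03, 0.9]
q, scale = quantize_absmax_int8(weights)
restored = dequantize(q, scale)
# Each quantized value needs 1 byte instead of 2 (fp16) or 4 (fp32),
# which is where the ~50% memory reduction for 8-bit comes from; the
# small round-trip error is the source of the accuracy cost.
```

The worst-case round-trip error per weight is half the scale, which is why groups with one large outlier lose precision — and why bitsandbytes quantizes per block rather than per tensor.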

Installs: 148
Install command
npx skills add https://github.com/davila7/claude-code-templates --skill quantizing-models-bitsandbytes
Community Reviews


No community reviews yet.

FAQ
What does quantizing-models-bitsandbytes do?

bitsandbytes reduces LLM memory by roughly 50% (8-bit) or 75% (4-bit) relative to fp16 weights, typically with under 1% accuracy loss.
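In practice, the usual entry point is Hugging Face transformers' `BitsAndBytesConfig`. A minimal configuration sketch for 4-bit NF4 loading — this assumes `transformers`, `torch`, and `bitsandbytes` are installed and a CUDA GPU is available:

```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization config; pass it to a model loader as
# AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                       # store weights in 4 bits
    bnb_4bit_quant_type="nf4",               # NormalFloat4 data type
    bnb_4bit_compute_dtype=torch.bfloat16,   # dtype used for matmuls
    bnb_4bit_use_double_quant=True,          # also quantize the scales
)
```

For 8-bit instead, use `BitsAndBytesConfig(load_in_8bit=True)`; the two modes are mutually exclusive.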

Is quantizing-models-bitsandbytes good?

quantizing-models-bitsandbytes does not have approved reviews yet, so SkillJury cannot publish a community verdict.

What agent does quantizing-models-bitsandbytes work with?

quantizing-models-bitsandbytes currently lists compatibility with codex, gemini-cli, opencode, cursor, github-copilot, claude-code.

What are alternatives to quantizing-models-bitsandbytes?

Skills in the same category include telegram-bot-builder, flutter-app-size, sharp-edges, iterative-retrieval.

How do I install quantizing-models-bitsandbytes?

npx skills add https://github.com/davila7/claude-code-templates --skill quantizing-models-bitsandbytes
