
prompt-repetition

supercent-io/skills-template


Installs: 10
Install command
npx skills add https://github.com/supercent-io/skills-template --skill prompt-repetition
Security audits
Gen Agent Trust Hub: PASS
Socket: FAIL
Snyk: PASS
About this skill
LLMs are trained as Causal Language Models, where each token attends only to previous tokens. Prompt repetition supplies the prompt a second time, so the second pass can reference the entire first pass, effectively mimicking some benefits of bidirectional attention. During the second repetition, the model reprocesses information from across the entire first prompt and strengthens attention weights on key concepts, which improves performance. Note: this does not change the model architecture to bidirectional; it is a prompt engineering technique that mitigates the limitations of causal models.

Reported results (accuracy before → after repetition):

Most dramatic improvement: Gemini 2.0 Flash-Lite on NameIndex: 21.33% → 97.33% (+76%p)
Repetition ×2: 78% → 93% (+15%p)
Repetition ×3 (prompt repeated 3 times): 21% → 97% (+76%p)

Note: prompts containing tool call instructions are also repeated in their entirety. The full-repetition approach was adopted for implementation simplicity and consistency, and the research results show that full repetition including tool call sections is also effective.

Key insight: the prefill phase is highly parallelized on GPU, so doubling the input tokens has minimal impact on latency.
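The transformation itself is simple to reproduce outside the skill. A minimal Python sketch of full-prompt repetition, assuming a plain blank-line separator between copies (the function name and separator are illustrative assumptions, not the skill's actual implementation):

```python
def repeat_prompt(prompt: str, times: int = 2) -> str:
    """Repeat the entire prompt `times` times, including any
    tool call instructions, so the model's second pass can
    attend over the full first pass."""
    # Separator choice is an assumption; any delimiter that
    # keeps the copies clearly distinct should work.
    return "\n\n".join([prompt] * times)


# Usage: send the repeated prompt instead of the original.
doubled = repeat_prompt("Find the index of 'Kim' in: Lee, Kim, Park.")
```

Because the prefill phase is parallelized, sending the doubled prompt costs extra input tokens but adds little latency.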

Source description provided by the upstream skill listing. Community reviews and install context appear in the sections below.

Community Reviews

Latest reviews


No community reviews yet. Be the first to review.

FAQ
What does prompt-repetition do?

prompt-repetition repeats a prompt in its entirety before sending it to an LLM. Because LLMs are trained as Causal Language Models, where each token attends only to previous tokens, the second copy lets the model attend over the whole first copy, mimicking some benefits of bidirectional attention and improving accuracy on several benchmarks.

Is prompt-repetition good?

prompt-repetition does not have approved reviews yet, so SkillJury cannot publish a community verdict.

What agent does prompt-repetition work with?

Agent compatibility for prompt-repetition has not been published yet.

What are alternatives to prompt-repetition?

Skills in the same category include telegram-bot-builder, flutter-app-size, sharp-edges, iterative-retrieval.

How do I install prompt-repetition?

npx skills add https://github.com/supercent-io/skills-template --skill prompt-repetition

Related skills

More from supercent-io/skills-template


Alternatives in Software Engineering