Hermes · Optional · MLOps · v1.0.0

optimizing-attention-flash

Optimizes transformer attention with Flash Attention for a 2-4x speedup and a 10-20x reduction in attention memory. Use it when training or running transformers on long sequences (>512 tokens), when attention is causing GPU out-of-memory errors, or when you need faster inference. Supports PyTorch native SDPA, the flash-attn library, H100 FP8, and sliding window attention.
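
For context, a minimal sketch of the PyTorch-native SDPA route the description mentions: forcing the Flash Attention backend of scaled_dot_product_attention. The tensor shapes and the PyTorch version are illustrative assumptions, not part of this skill; install the skill for its actual workflow.

# Sketch only: PyTorch-native SDPA pinned to the Flash Attention backend.
# Assumes PyTorch >= 2.3 on a CUDA GPU; fp16/bf16 inputs are required
# for the flash kernel to be eligible.
import torch
import torch.nn.functional as F
from torch.nn.attention import SDPBackend, sdpa_kernel

batch, heads, seq_len, head_dim = 2, 8, 2048, 64  # illustrative shapes
q = torch.randn(batch, heads, seq_len, head_dim, device="cuda", dtype=torch.float16)
k = torch.randn_like(q)
v = torch.randn_like(q)

# Restrict SDPA to the flash kernel; errors out if it cannot be used.
with sdpa_kernel(SDPBackend.FLASH_ATTENTION):
    out = F.scaled_dot_product_attention(q, k, v, is_causal=True)

print(out.shape)  # torch.Size([2, 8, 2048, 64])

Without the sdpa_kernel context, PyTorch picks a backend automatically; pinning it is useful for verifying that the flash kernel is actually being dispatched on your hardware.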

Install command

hermes skills install optimizing-attention-flash

What this page covers

This index page keeps Hermes skills separate from the OpenClaw catalog. It gives you the install command, registry source, platform notes, and a route back to the original Hermes docs or registry listing when you want the full upstream reference.
