Hermes · Optional · MLOps · v1.0.0

distributed-llm-pretraining-torchtitan

Provides PyTorch-native distributed LLM pretraining using torchtitan with 4D parallelism: FSDP2 (fully sharded data parallel), tensor parallel (TP), pipeline parallel (PP), and context parallel (CP). Use it when pretraining Llama 3.1, DeepSeek V3, or custom models at scales from 8 to 512+ GPUs, with Float8 training, torch.compile, and distributed checkpointing.
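
The 4D composition is easiest to see as a device mesh. The sketch below is a minimal illustration in plain PyTorch (2.6+, where FSDP2's fully_shard is public), not torchtitan's actual code; the 2x4x2x4 layout, the toy two-layer model, and the tensor-parallel plan are assumptions chosen for illustration.

import torch.nn as nn
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.fsdp import fully_shard  # FSDP2 (PyTorch 2.6+)
from torch.distributed.tensor.parallel import (
    ColwiseParallel,
    RowwiseParallel,
    parallelize_module,
)

# Hypothetical 64-GPU layout: pp x dp x cp x tp = 2 x 4 x 2 x 4.
# Run under torchrun with 64 processes.
mesh = init_device_mesh(
    "cuda",
    (2, 4, 2, 4),
    mesh_dim_names=("pp", "dp", "cp", "tp"),
)

# Toy stand-in for a transformer block's linear layers.
model = nn.Sequential(nn.Linear(4096, 4096), nn.Linear(4096, 4096)).cuda()

# Tensor parallelism first: split the first linear column-wise and the
# second row-wise across the "tp" mesh dimension.
parallelize_module(
    model,
    mesh["tp"],
    {"0": ColwiseParallel(), "1": RowwiseParallel()},
)

# Then FSDP2: shard the resulting parameters over the "dp" dimension.
fully_shard(model, mesh=mesh["dp"])

Roughly speaking, pipeline and context parallelism occupy the other two mesh dimensions; torchtitan wires those in through pipeline schedules and attention sharding rather than per-module wrappers.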

Install command

hermes skills install distributed-llm-pretraining-torchtitan

What this page covers

This index page keeps Hermes skills separate from the OpenClaw catalog. It gives you the install command, the registry source, and platform notes, plus a link back to the upstream Hermes docs or registry listing for the full reference.
