Hermes · Optional · MLOps · v1.0.0

tensorrt-llm

Optimizes LLM inference with NVIDIA TensorRT for maximum throughput and lowest latency. Use for production deployment on NVIDIA GPUs (A100/H100), when you need 10-100x faster inference than PyTorch, or for serving models with quantization (FP8/INT4), in-flight batching, and multi-GPU scaling.

Install command

hermes skills install tensorrt-llm
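Once the skill and its dependencies are in place, inference goes through TensorRT-LLM's high-level Python `LLM` API. A minimal sketch under stated assumptions: an NVIDIA GPU is available, `tensorrt_llm` is installed, and the model name and sampling settings below are illustrative, not prescribed by this skill.

```python
# Sketch only: assumes an NVIDIA GPU and an installed tensorrt_llm package.
from tensorrt_llm import LLM, SamplingParams


def main() -> None:
    # Loading a model builds/loads a TensorRT engine for it.
    # The model name here is illustrative.
    llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

    # Sampling settings are illustrative defaults.
    params = SamplingParams(temperature=0.8, max_tokens=64)

    # Passing several prompts lets the runtime batch them in flight.
    prompts = ["What is TensorRT-LLM?", "Name one benefit of FP8 quantization."]
    for output in llm.generate(prompts, params):
        print(output.outputs[0].text)


if __name__ == "__main__":
    main()
```

Quantization (FP8/INT4) and multi-GPU tensor parallelism are configured at engine-build time; see the upstream TensorRT-LLM documentation for the exact options, which this index page does not cover.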

What this page covers

This index page keeps Hermes skills separate from the OpenClaw catalog. It gives you the install command, registry source, platform notes, and a route back to the original Hermes docs or registry listing when you want the full upstream reference.

Related Hermes skills
