Claude Code · Community skill
Hub: Init — Create New Session
Initialize an AgentHub collaboration session. Creates the .agenthub/ directory structure, generates a session ID, and configures evaluation criteria.
What this skill covers
This page keeps a stable Remote OpenClaw URL for the upstream skill while preserving the original source content below. The shell stays consistent, and the body varies with the upstream SKILL.md or README.
Source files and registry paths
Source path
engineering/agenthub/skills/init
Entry file
engineering/agenthub/skills/init/SKILL.md
Repository
alirezarezvani/claude-skills
Format
markdown-skill
Original source content
# /hub:init — Create New Session
Initialize an AgentHub collaboration session. Creates the `.agenthub/` directory structure, generates a session ID, and configures evaluation criteria.
## Usage
```
/hub:init # Interactive mode
/hub:init --task "Optimize API" --agents 3 --eval "pytest bench.py" --metric p50_ms --direction lower
/hub:init --task "Refactor auth" --agents 2 # No eval (LLM judge mode)
```
## What It Does
### If arguments provided
Pass them to the init script:
```bash
python {skill_path}/scripts/hub_init.py \
--task "{task}" --agents {N} \
[--eval "{eval_cmd}"] [--metric {metric}] [--direction {direction}] \
[--base-branch {branch}]
```
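For reference, the flag surface implied by the usage examples above can be sketched with `argparse`. This is a hypothetical reconstruction of `hub_init.py`'s CLI, inferred from this page only; the real script's internals may differ:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Hypothetical sketch of hub_init.py's CLI surface, inferred from
    # the usage examples above; the actual script may differ.
    p = argparse.ArgumentParser(prog="hub_init.py")
    p.add_argument("--task", required=True, help="What the agents should do")
    p.add_argument("--agents", type=int, default=3, help="Parallel agent count")
    p.add_argument("--eval", dest="eval_cmd", default=None,
                   help="Command to measure results (omit for LLM judge mode)")
    p.add_argument("--metric", default=None, help="Metric key in eval output")
    p.add_argument("--direction", choices=["lower", "higher"], default=None,
                   help="Whether lower or higher metric values are better")
    p.add_argument("--base-branch", default=None,
                   help="Branch to fork from (default: current branch)")
    return p

# Example: the "no eval" invocation from the usage section.
args = build_parser().parse_args(["--task", "Refactor auth", "--agents", "2"])
print(args.task, args.agents, args.eval_cmd)
```

Note that `--eval` is optional by design: leaving it unset is what puts the session into LLM judge mode.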
### If no arguments (interactive mode)
Collect each parameter:
1. **Task** — What should the agents do? (required)
2. **Agent count** — How many parallel agents? (default: 3)
3. **Eval command** — Command to measure results (optional — skip for LLM judge mode)
4. **Metric name** — What metric to extract from eval output (required if eval command given)
5. **Direction** — Is lower or higher better? (required if metric given)
6. **Base branch** — Branch to fork from (default: current branch)
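The collected parameters are persisted in the session config. A hypothetical `.agenthub/sessions/{session-id}/config.yaml` (field names assumed from this page, not verified against the upstream script) might look like:

```yaml
# Hypothetical layout; actual keys may differ upstream.
task: "Optimize API response time below 100ms"
agents: 3
eval: "pytest bench.py --json"
metric: p50_ms
direction: lower
base_branch: dev
state: init
```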
### Output
```
AgentHub session initialized
Session ID: 20260317-143022
Task: Optimize API response time below 100ms
Agents: 3
Eval: pytest bench.py --json
Metric: p50_ms (lower is better)
Base branch: dev
State: init
Next step: Run /hub:spawn to launch 3 agents
```
For content or research tasks (no eval command → LLM judge mode):
```
AgentHub session initialized
Session ID: 20260317-151200
Task: Draft 3 competing taglines for product launch
Agents: 3
Eval: LLM judge (no eval command)
Base branch: dev
State: init
Next step: Run /hub:spawn to launch 3 agents
```
## Baseline Capture
If `--eval` was provided, capture a baseline measurement after session creation:
1. Run the eval command in the current working directory
2. Extract the metric value from stdout
3. Append `baseline: {value}` to `.agenthub/sessions/{session-id}/config.yaml`
4. Display: `Baseline captured: {metric} = {value}`
This baseline is used by `result_ranker.py --baseline` during evaluation to show deltas. If the eval command fails at this stage, warn the user but continue — baseline is optional.
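The baseline steps above can be sketched as follows, assuming the eval command prints JSON containing the metric. The JSON output format, the function name, and the plain-text append are all assumptions for illustration, not the documented behavior of the skill's scripts:

```python
import json
import shlex
import subprocess

def capture_baseline(eval_cmd: str, metric: str, config_path: str):
    """Run the eval command, pull `metric` from its JSON stdout, and
    append `baseline: {value}` to the session config. Returns None on
    failure, since baseline capture is optional: warn and continue."""
    try:
        out = subprocess.run(shlex.split(eval_cmd), capture_output=True,
                             text=True, check=True).stdout
        value = json.loads(out)[metric]
    except (subprocess.CalledProcessError, json.JSONDecodeError, KeyError) as exc:
        print(f"Warning: baseline capture failed ({exc}); continuing without one")
        return None
    with open(config_path, "a") as f:
        f.write(f"baseline: {value}\n")
    print(f"Baseline captured: {metric} = {value}")
    return value
```

In practice the coordinator would call this once right after session creation, with the same eval command and metric name the user supplied to `/hub:init`.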
## After Init
Tell the user:
- Session created with ID `{session-id}`
- Baseline metric (if captured)
- Next step: `/hub:spawn` to launch agents
- Or `/hub:spawn {session-id}` if multiple sessions exist
Related Claude Code skills
claude-skills
AgentHub — Multi Agent Collaboration
Spawn N parallel AI agents that compete on the same task. Each agent works in an isolated git worktree. The coordinator evaluates results and merges the winner.
claude-skills
Hub: Board — Message Board
Interface for the AgentHub message board. Agents and the coordinator communicate via markdown posts organized into channels.
claude-skills
Hub: Eval — Evaluate Agent Results
Rank all agent results for a session. Supports metric-based evaluation (run a command), LLM judge (compare diffs), or hybrid.
claude-skills
Hub: Merge — Merge Winner
Merge the best agent's branch into the base branch, archive losing branches via git tags, and clean up worktrees.
claude-skills
Hub: Run — One Shot Lifecycle
Run the full AgentHub lifecycle in one command: initialize, capture baseline, spawn agents, evaluate results, and merge the winner.
claude-skills
Hub: Spawn — Launch Parallel Agents
Spawn N subagents that work on the same task in parallel, each in an isolated git worktree.