Claude Code · Community skill
Research Summarizer
Read less. Understand more. Cite correctly.
What this skill covers
This page keeps a stable Remote OpenClaw URL for the upstream skill while preserving the original source content below. The shell stays consistent; the body varies as much as the upstream SKILL.md or README does.
Source files and registry paths
Source path
product-team/research-summarizer/skills/research-summarizer
Entry file
product-team/research-summarizer/skills/research-summarizer/SKILL.md
Repository
alirezarezvani/claude-skills
Format
markdown-skill
Original source content
# Research Summarizer

> Read less. Understand more. Cite correctly.

Structured research summarization workflow that turns dense source material into actionable briefs. Built for product managers, analysts, founders, and anyone who reads more than they should have to. Not a generic "summarize this" — a repeatable framework that extracts what matters, compares across sources, and formats citations properly.

---

## Slash Commands

| Command | What it does |
|---------|--------------|
| `/research:summarize` | Summarize a single source into a structured brief |
| `/research:compare` | Compare 2-5 sources side-by-side with synthesis |
| `/research:cite` | Extract and format all citations from a document |

---

## When This Skill Activates

Recognize these patterns from the user:

- "Summarize this paper / article / report"
- "What are the key findings in this document?"
- "Compare these sources"
- "Extract citations from this PDF"
- "Give me a research brief on [topic]"
- "Break down this whitepaper"
- Any request involving: summarize, research brief, literature review, citation, source comparison

If the user has a document and wants structured understanding → this skill applies.

---

## Workflow

### `/research:summarize` — Single Source Summary

1. **Identify source type**
   - Academic paper → use IMRAD structure (Introduction, Methods, Results, and Discussion)
   - Web article → use claim-evidence-implication structure
   - Technical report → use executive summary structure
   - Documentation → use reference summary structure
2. **Extract structured brief**

   ```
   Title: [exact title]
   Author(s): [names]
   Date: [publication date]
   Source Type: [paper | article | report | documentation]

   ## Key Thesis
   [1-2 sentences: the central argument or finding]

   ## Key Findings
   1. [Finding with supporting evidence]
   2. [Finding with supporting evidence]
   3. [Finding with supporting evidence]

   ## Methodology
   [How they arrived at these findings — data sources, sample size, approach]

   ## Limitations
   - [What the source doesn't cover or gets wrong]

   ## Actionable Takeaways
   - [What to do with this information]

   ## Notable Quotes
   > "[Direct quote]" (p. X)
   ```
3. **Assess quality**
   - Source credibility (peer-reviewed, reputable outlet, primary vs secondary)
   - Evidence strength (data-backed, anecdotal, theoretical)
   - Recency (when published, still relevant?)
   - Bias indicators (funding source, author affiliation, methodology gaps)

### `/research:compare` — Multi-Source Comparison

1. **Collect sources** (2-5 documents)
2. **Summarize each** using the single-source workflow above
3. **Build comparison matrix**

   ```
   | Dimension        | Source A        | Source B        | Source C        |
   |------------------|-----------------|-----------------|-----------------|
   | Central Thesis   | ...             | ...             | ...             |
   | Methodology      | ...             | ...             | ...             |
   | Key Finding      | ...             | ...             | ...             |
   | Sample/Scope     | ...             | ...             | ...             |
   | Credibility      | High/Med/Low    | High/Med/Low    | High/Med/Low    |
   ```
4. **Synthesize**
   - Where do sources agree? (convergent findings = stronger signal)
   - Where do they disagree? (divergent findings = needs investigation)
   - What gaps exist across all sources?
   - What's the weight of evidence for each position?
5. **Produce synthesis brief**

   ```
   ## Consensus Findings
   [What most sources agree on]

   ## Contested Points
   [Where sources disagree, with strongest evidence for each side]

   ## Gaps
   [What none of the sources address]

   ## Recommendation
   [Based on weight of evidence, what should the reader believe/do?]
   ```

### `/research:cite` — Citation Extraction

1. **Scan document** for all references, footnotes, in-text citations
2. **Extract and format** using the requested style (APA 7 default)
3. **Classify citations** by type:
   - Primary sources (original research, data)
   - Secondary sources (reviews, meta-analyses, commentary)
   - Tertiary sources (textbooks, encyclopedias)
4. 
**Output** sorted bibliography with classification tags

Supported citation formats:

- **APA 7** (default) — social sciences, business
- **IEEE** — engineering, computer science
- **Chicago** — humanities, history
- **Harvard** — general academic
- **MLA 9** — arts, humanities

---

## Tooling

### `scripts/extract_citations.py`

CLI utility for extracting and formatting citations from text.

**Features:**

- Regex-based citation detection (DOI, URL, author-year, numbered references)
- Multiple output formats (APA, IEEE, Chicago, Harvard, MLA)
- JSON export for integration with reference managers
- Deduplication of repeated citations

**Usage:**

```bash
# Extract citations from a file (APA format, default)
python3 scripts/extract_citations.py document.txt

# Specify format
python3 scripts/extract_citations.py document.txt --format ieee

# JSON output
python3 scripts/extract_citations.py document.txt --format apa --output json

# From stdin
cat paper.txt | python3 scripts/extract_citations.py --stdin
```

### `scripts/format_summary.py`

CLI utility for generating structured research summaries.
**Features:**

- Multiple summary templates (academic, article, report, executive)
- Configurable output length (brief, standard, detailed)
- Markdown and plain text output
- Key findings extraction with evidence tagging

**Usage:**

```bash
# Generate structured summary template
python3 scripts/format_summary.py --template academic

# Brief executive summary format
python3 scripts/format_summary.py --template executive --length brief

# All templates listed
python3 scripts/format_summary.py --list-templates

# JSON output
python3 scripts/format_summary.py --template article --output json
```

---

## Quality Assessment Framework

Rate every source on four dimensions:

| Dimension | High | Medium | Low |
|-----------|------|--------|-----|
| **Credibility** | Peer-reviewed, established author | Reputable outlet, known author | Blog, unknown author, no review |
| **Evidence** | Large sample, rigorous method | Moderate data, sound approach | Anecdotal, no data, opinion |
| **Recency** | Published within 2 years | 2-5 years old | 5+ years, may be outdated |
| **Objectivity** | No conflicts, balanced view | Minor affiliations disclosed | Funded by interested party, one-sided |

**Overall Rating:**

- 4 Highs = Strong source — cite with confidence
- 2+ Mediums = Adequate source — cite with caveats
- 2+ Lows = Weak source — verify independently before citing

---

## Summary Templates

See `references/summary-templates.md` for:

- Academic paper summary template (IMRAD)
- Web article summary template (claim-evidence-implication)
- Technical report template (executive summary)
- Comparative analysis template (matrix + synthesis)
- Literature review template (thematic organization)

See `references/citation-formats.md` for:

- APA 7 formatting rules and examples
- IEEE formatting rules and examples
- Chicago, Harvard, MLA quick reference

---

## Proactive Triggers

Flag these without being asked:

- **Source has no date** → Note it. Undated sources lose credibility points.
- **Source contradicts other sources** → Highlight the contradiction explicitly. Don't paper over disagreements.
- **Source is behind a paywall** → Note limited access. Suggest alternatives if known.
- **User provides only one source for a compare** → Ask for at least one more. Comparison needs 2+.
- **Citations are incomplete** → Flag missing fields (year, author, title). Don't invent metadata.
- **Source is 5+ years old in a fast-moving field** → Warn about potential obsolescence.

---

## Installation

### One-liner (any tool)

```bash
git clone https://github.com/alirezarezvani/claude-skills.git
cp -r claude-skills/product-team/research-summarizer ~/.claude/skills/
```

### Multi-tool install

```bash
./scripts/convert.sh --skill research-summarizer --tool codex|gemini|cursor|windsurf|openclaw
```

### OpenClaw

```bash
clawhub install cs-research-summarizer
```

---

## Related Skills

- **product-analytics** — Quantitative analysis. Complementary — use research-summarizer for qualitative sources, product-analytics for metrics.
- **competitive-teardown** — Competitive research. Complementary — use research-summarizer for individual source analysis, competitive-teardown for market landscape.
- **content-production** — Content writing. Research-summarizer feeds content-production — summarize sources first, then write.
- **product-discovery** — Discovery frameworks. Complementary — research-summarizer for desk research, product-discovery for user research.
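The regex-based citation detection described under Tooling can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the actual `extract_citations.py` implementation: the pattern set, function name, and return shape are all invented for the example, and real-world DOI and author-year matching needs more robust patterns than these.

```python
import re

# Illustrative patterns for the four citation kinds the skill lists.
# These are assumptions for the sketch, not the script's real regexes.
PATTERNS = {
    "doi": re.compile(r"\b10\.\d{4,9}/[-._;()/:A-Za-z0-9]+"),
    "url": re.compile(r"https?://[^\s)\]>]+"),
    "author_year": re.compile(r"\(([A-Z][A-Za-z'-]+(?: et al\.)?),\s*(\d{4})\)"),
    "numbered": re.compile(r"\[(\d{1,3})\]"),
}

def extract_citations(text):
    """Detect citations by kind and deduplicate repeated matches."""
    found = {}
    for kind, pattern in PATTERNS.items():
        matches = [m.group(0) for m in pattern.finditer(text)]
        # dict.fromkeys removes duplicates while preserving order
        found[kind] = list(dict.fromkeys(matches))
    return found
```

Deduplication matters because in-text citations repeat: the same `(Smith, 2020)` may appear a dozen times but should yield one bibliography entry.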
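The overall-rating rules in the Quality Assessment Framework can be expressed as a small function. A minimal sketch, with two assumptions the skill text does not specify: two Lows take precedence over two Mediums, and combinations not covered by any rule default to Adequate.

```python
def overall_rating(scores):
    """Collapse four High/Medium/Low dimension ratings into one verdict.

    Assumed precedence: weakness is checked first, so a source with two
    Lows is not upgraded by also having two Mediums.
    """
    if scores.count("Low") >= 2:
        return "Weak"       # verify independently before citing
    if scores.count("High") == 4:
        return "Strong"     # cite with confidence
    return "Adequate"       # cite with caveats (assumed default)
```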
Related Claude Code skills
claude-skills
Agile Product Owner
Backlog management and sprint execution toolkit for product owners, including user story generation, acceptance criteria patterns, sprint planning, and velocity tracking.
claude-skills
Apple HIG Expert
You are a Senior Apple Design Lead with decades of experience shipping award-winning apps on the App Store. Your goal is to help users design and audit apps that feel natively integrated into the Apple ecosystem while pushing the boundaries of the Liquid Glass aesthetic.
claude-skills
Code → PRD: Reverse Engineer Any Codebase Into Product Requirements
Reverse-engineer any frontend, backend, or fullstack codebase into a complete Product Requirements Document (PRD). Analyzes routes, components, models, APIs, and user interactions to produce business-readable documentation detailed enough for engineers or AI agents to fully reconstruct every page and endpoint.
claude-skills
Competitive Teardown
Reviewed community Claude skill from alirezarezvani/claude-skills.
claude-skills
Experiment Designer
Design, prioritize, and evaluate product experiments with clear hypotheses and defensible decisions.
claude-skills
Landing Page Generator
Generate high-converting landing pages from a product description. Output complete Next.js/React components with multiple section variants, proven copy frameworks, SEO optimization, and performance-first patterns. Not lorem ipsum — actual copy that converts.