Tier 4

learning_system

Systematically capture, analyze, and apply learnings to improve campaign effectiveness

Usage in Claude Code: /learning_system your question here

Learning System

Overview

Systematically capture, analyze, and apply learnings to improve campaign effectiveness

Steps

Step 1: Data collection and validation

Ensure all campaign data is captured and valid:

Required data points:

  1. All outreach attempts with timestamps
  2. All responses with classification
  3. A/B variant assignments (if applicable)
  4. Channel for each contact
  5. Tier for each target
  6. Cost data
  7. Meeting outcomes

Validation checks:

  • No missing required fields
  • Timestamps are logical (response after send)
  • A/B assignments are balanced
  • All responses are classified
  • Costs are documented

Flag any data quality issues for resolution.
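
A minimal sketch of this validation pass in Python; the record layout and field names below are illustrative assumptions, not a fixed schema:

```python
# Sketch of the validation pass; field names are assumptions, not a schema.
REQUIRED_FIELDS = ["contact_id", "tier", "channel", "sent_at", "cost"]

def validate_records(records):
    """Return a list of data-quality issues found in campaign records."""
    issues = []
    for rec in records:
        missing = [f for f in REQUIRED_FIELDS if rec.get(f) in (None, "")]
        if missing:
            issues.append(f"{rec.get('contact_id', '?')}: missing {missing}")
        sent, responded = rec.get("sent_at"), rec.get("responded_at")
        # Timestamps must be logical: a response cannot precede its send.
        if sent and responded and responded < sent:
            issues.append(f"{rec['contact_id']}: response before send")
        # Every response needs a classification (positive, neutral, etc.).
        if responded and not rec.get("response_class"):
            issues.append(f"{rec['contact_id']}: unclassified response")
    return issues
```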

Step 2: Metrics calculation

Calculate comprehensive campaign metrics (a calculation sketch follows these lists):

Outreach metrics:

  • Total contacts attempted
  • Contacts by channel (email, phone, etc.)
  • Contacts by tier (1, 2, 3)
  • Delivery rate (successful sends / attempted sends)

Response metrics:

  • Total responses
  • Overall response rate
  • Response rate by channel
  • Response rate by tier
  • Response classification breakdown (positive, neutral, etc.)

Conversion metrics:

  • Meeting rate (meetings scheduled / total contacted)
  • Meeting rate by tier
  • Meeting-to-action conversion

Cost metrics:

  • Total campaign spend
  • Cost per contact
  • Cost per response
  • Cost per meeting

Timing metrics:

  • Average response time
  • Response rate by day of week
  • Response rate by wave
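
The calculation sketch referenced above; it uses the same illustrative record layout as Step 1:

```python
from collections import Counter

def campaign_metrics(records):
    """Compute headline rates and unit costs; field names are illustrative."""
    total = len(records)
    responses = [r for r in records if r.get("responded_at")]
    meetings = [r for r in records if r.get("meeting_scheduled")]
    spend = sum(r.get("cost", 0) for r in records)
    return {
        "contacts": total,
        "by_channel": Counter(r["channel"] for r in records),
        "by_tier": Counter(r["tier"] for r in records),
        "response_rate": len(responses) / total if total else 0.0,
        "meeting_rate": len(meetings) / total if total else 0.0,
        "cost_per_contact": spend / total if total else 0.0,
        "cost_per_response": spend / len(responses) if responses else None,
        "cost_per_meeting": spend / len(meetings) if meetings else None,
    }
```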

Step 3: Segment analysis

Analyze performance across segments to identify patterns:

By Tier:

  • Compare response rates across Tier 1, 2, 3
  • Determine whether tier prioritization is correct
  • Calculate efficiency (response rate per research hour)

By Channel:

  • Compare email vs phone vs other channels
  • Identify optimal channel sequences
  • Calculate cost-effectiveness by channel

By Role:

  • Compare staff vs legislator response rates
  • Compare committee staff vs personal office staff
  • Identify most responsive role types

By Party/Region (if applicable):

  • Compare response rates across parties
  • Identify regional patterns
  • Note any surprising findings

By Timing:

  • Compare response rates by day of week
  • Compare response rates by time of day
  • Identify legislative calendar effects

For each segment comparison (see the sketch after this list):

  • Calculate rate difference
  • Assess sample size adequacy
  • Note confidence level
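
The sketch referenced above: compute the per-segment rate and flag small samples rather than over-reading them. The n ≥ 30 adequacy threshold is a common rule of thumb, not a requirement of this procedure:

```python
def compare_segments(records, key):
    """Response rate per segment value (e.g. key="tier" or "channel"),
    with a small-sample flag."""
    segments = {}
    for r in records:
        seg = segments.setdefault(r[key], {"n": 0, "responses": 0})
        seg["n"] += 1
        seg["responses"] += 1 if r.get("responded_at") else 0
    return {
        value: {
            "n": seg["n"],
            "response_rate": seg["responses"] / seg["n"],
            # Heuristic: below ~30 contacts, treat the rate as low-confidence.
            "adequate_sample": seg["n"] >= 30,
        }
        for value, seg in segments.items()
    }

# Usage: compare_segments(records, "tier"); compare_segments(records, "channel")
```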

Step 4: A/B test analysis

Analyze A/B test results (if tests were run):

For each test:

  1. Confirm random assignment was maintained
  2. Calculate response rate for each variant
  3. Calculate absolute and relative difference
  4. Assess statistical significance (see the sketch after this list):
    • 20%+ difference with n=50+ per variant: likely real
    • 10-20% difference: needs more data
    • <10% difference: probably noise
  5. Determine winner or “inconclusive”
  6. Document interpretation
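
The thresholds in item 4 are rules of thumb; a two-proportion z-test makes the same call more formally. A self-contained sketch, assuming each variant's sends and responses are already tallied:

```python
import math

def two_proportion_z(resp_a, n_a, resp_b, n_b):
    """Two-proportion z-test on variant response rates.

    Returns (z, approximate two-sided p-value under the normal model)."""
    p_a, p_b = resp_a / n_a, resp_b / n_b
    pooled = (resp_a + resp_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se if se else 0.0
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return z, p_value

# Example: 18/100 vs 11/100 responses -> z ≈ 1.41, p ≈ 0.16 (inconclusive)
```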

If inconclusive:

  • Note sample size achieved
  • Recommend continuation in next campaign
  • Document preliminary direction

If clear winner:

  • Document winning variant
  • Update default templates
  • Archive losing variant

Step 5: Learning extraction

Extract specific learnings from all analyses:

Sources for learnings:

  • Segment analysis patterns
  • A/B test results
  • Unexpected outcomes
  • Qualitative observations (meeting feedback, etc.)

For each potential learning:

  1. State the finding clearly in one sentence
  2. Document evidence:
    • Sample size
    • Effect size (percentage difference)
    • Data source
  3. Assess confidence level:
    • High: Large sample, clear effect, consistent with theory
    • Medium: Moderate sample, notable effect, plausible
    • Low: Small sample, modest effect, could be noise
  4. Define implication for future campaigns
  5. Specify status: Validated / Preliminary / Needs more data

Target: 3-5 learnings per campaign
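
It can help to capture each learning as a structured record so campaigns stay comparable. A sketch; the field names and the example values are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Learning:
    """One extracted learning; all field names here are illustrative."""
    finding: str        # one-sentence statement of the finding
    sample_size: int    # n behind the evidence
    effect_size: float  # e.g. +0.09 response-rate difference
    data_source: str    # which campaign/analysis produced it
    confidence: str     # "high" | "medium" | "low"
    implication: str    # what to change in future campaigns
    status: str         # "validated" | "preliminary" | "needs more data"

# Hypothetical example, values invented to show the shape:
example = Learning(
    finding="Tier 1 targets respond at roughly twice the Tier 3 rate.",
    sample_size=140, effect_size=0.09, data_source="campaign-2024-q2",
    confidence="medium", implication="Shift research hours toward Tier 1.",
    status="preliminary",
)
```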

Step 6: Knowledge base update

Integrate new learnings into knowledge base:

For new learnings:

  1. Assign unique ID (L001, L002, etc.)
  2. Categorize (message, channel, target, timing, policy)
  3. Add to knowledge base with full documentation
  4. Link to source campaign

For existing learnings:

  1. Check if new data supports or contradicts
  2. Update confidence level if warranted
  3. Add new evidence to existing learning
  4. Mark as “validated” if consistently supported

Maintenance:

  • Archive contradicted learnings
  • Consolidate related learnings
  • Flag learnings that need more data
  • Remove outdated learnings
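
A sketch of the update logic, assuming the knowledge base is an in-memory dict keyed by learning ID; the entry shape and the three-campaign validation threshold are illustrative choices:

```python
def update_knowledge_base(kb, new_evidence, learning_id=None):
    """Add a new learning, or fold new evidence into an existing entry.

    kb maps IDs ("L001", ...) to entries; the entry shape is an assumption.
    """
    if learning_id is None:
        learning_id = f"L{len(kb) + 1:03d}"  # simple sequential IDs
        kb[learning_id] = {"evidence": [], "status": "preliminary"}
    entry = kb[learning_id]
    entry["evidence"].append(new_evidence)
    # Promote once several independent campaigns agree (3 is illustrative);
    # flag for archiving when the latest evidence points the other way.
    if all(e["supports"] for e in entry["evidence"]):
        if len(entry["evidence"]) >= 3:
            entry["status"] = "validated"
    elif not new_evidence["supports"]:
        entry["status"] = "contradicted"  # candidate for archiving
    return learning_id
```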

Step 7: Recommendation development

Translate learnings into specific recommendations:

Categories of recommendations:

  1. Message improvements

    • Template updates based on test results
    • Subject line changes
    • Call-to-action modifications
  2. Channel mix changes

    • Adjust channel sequence
    • Change channel allocation
    • Add or remove channels
  3. Targeting adjustments

    • Modify tier thresholds
    • Change target prioritization
    • Adjust personalization depth
  4. Timing optimizations

    • Adjust send days/times
    • Modify wave spacing
    • Align with legislative calendar
  5. Process improvements

    • Response handling changes
    • Follow-up sequence modifications
    • Meeting preparation updates

Each recommendation should be:

  • Specific (not vague guidance)
  • Actionable (can implement immediately)
  • Measurable (can verify in next campaign)
  • Linked to the learning that supports it
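
One way to enforce those four properties is to make each one a required field, so an incomplete recommendation cannot be recorded. A sketch with illustrative names:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """A recommendation is only complete when all four fields are filled."""
    action: str          # specific change, e.g. a concrete template edit
    category: str        # message | channel | targeting | timing | process
    success_metric: str  # how the next campaign will verify the change
    learning_id: str     # the knowledge-base entry that supports it
```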

Step 8: Report compilation

Create comprehensive campaign analysis report:

Report sections:

  1. Executive Summary (3-5 bullets)

    • Key metrics (contacts, responses, meetings)
    • Headline findings
    • Critical recommendations
  2. Performance Metrics

    • All calculated metrics in tables
    • Comparison to benchmarks/previous campaigns
    • Visual charts where helpful
  3. Segment Analysis

    • Performance by tier, channel, role, timing
    • Notable patterns with interpretation
  4. A/B Test Results

    • Each test with results and conclusion
    • Implications for future tests
  5. Key Learnings

    • Numbered list with evidence
    • Confidence levels noted
    • Implications stated
  6. Recommendations

    • Prioritized list of changes
    • Implementation notes
  7. Open Questions

    • What to investigate next
    • Tests to run in future campaigns

Format as a document suitable for team review and archiving.
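
A small sketch that assembles that skeleton as Markdown, so every campaign report carries the same sections (section names taken from the list above):

```python
REPORT_SECTIONS = [
    "Executive Summary", "Performance Metrics", "Segment Analysis",
    "A/B Test Results", "Key Learnings", "Recommendations", "Open Questions",
]

def report_skeleton(campaign_name):
    """Return a Markdown outline with one heading per required section."""
    lines = [f"# Campaign Analysis: {campaign_name}", ""]
    for i, section in enumerate(REPORT_SECTIONS, start=1):
        lines += [f"## {i}. {section}", "", "_TODO_", ""]
    return "\n".join(lines)
```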

When to Use

  • After completing an outreach campaign with sufficient data
  • When A/B test results need analysis and interpretation
  • At regular intervals to identify cross-campaign patterns
  • When building or updating advocacy knowledge base
  • Before planning next campaign to apply learnings
  • When onboarding new team members to advocacy methodology

Verification

  • All campaign data validated and complete
  • All metrics calculated correctly
  • Segment analysis covers key dimensions (tier, channel, timing)
  • A/B tests have documented conclusions with confidence levels
  • Each learning has evidence, confidence, and implication
  • Knowledge base is updated and maintained
  • Recommendations are specific and actionable
  • Report is comprehensive and suitable for team review

Input: $ARGUMENTS

Apply this procedure to the input provided.