Learning System
Overview
Systematically capture, analyze, and apply learnings to improve campaign effectiveness
Steps
Step 1: Data collection and validation
Ensure all campaign data is captured and valid:
Required data points:
- All outreach attempts with timestamps
- All responses with classification
- A/B variant assignments (if applicable)
- Channel for each contact
- Tier for each target
- Cost data
- Meeting outcomes
Validation checks:
- No missing required fields
- Timestamps are logical (response after send)
- A/B assignments are balanced
- All responses are classified
- Costs are documented
Flag any data quality issues for resolution.
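To make the checks concrete, here is a minimal validation sketch in Python. The record fields (`contact_id`, `sent_at`, `responded_at`, `variant`, `classification`) are an illustrative schema, not a required one:

```python
from collections import Counter

REQUIRED_FIELDS = {"contact_id", "channel", "tier", "sent_at", "cost"}

def validate_records(records):
    """Return a list of human-readable data-quality issues."""
    issues = []
    for rec in records:
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            issues.append(f"{rec.get('contact_id', '?')}: missing {sorted(missing)}")
        sent, responded = rec.get("sent_at"), rec.get("responded_at")
        # Timestamps must be logical: a response cannot precede its send.
        if sent is not None and responded is not None and responded < sent:
            issues.append(f"{rec.get('contact_id', '?')}: response precedes send")
        # Every response needs a classification.
        if responded is not None and not rec.get("classification"):
            issues.append(f"{rec.get('contact_id', '?')}: unclassified response")
    # A/B assignments should be roughly balanced across variants.
    counts = Counter(r["variant"] for r in records if r.get("variant"))
    if counts and max(counts.values()) > 1.5 * min(counts.values()):
        issues.append(f"A/B assignment imbalance: {dict(counts)}")
    return issues
```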
Step 2: Metrics calculation
Calculate comprehensive campaign metrics:
Outreach metrics:
- Total contacts attempted
- Contacts by channel (email, phone, etc.)
- Contacts by tier (1, 2, 3)
- Delivery rate (successful sends / attempted sends)
Response metrics:
- Total responses
- Response rate overall
- Response rate by channel
- Response rate by tier
- Response classification breakdown (positive, neutral, etc.)
Conversion metrics:
- Meetings scheduled / total contacted
- Meeting rate by tier
- Meeting-to-action conversion
Cost metrics:
- Total campaign spend
- Cost per contact
- Cost per response
- Cost per meeting
Timing metrics:
- Average response time
- Response rate by day of week
- Response rate by wave
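A sketch of the headline calculations, reusing the illustrative record shape from Step 1; the boolean `meeting` flag marking contacts that led to a scheduled meeting is an assumed field:

```python
def campaign_metrics(records):
    """Compute headline outreach, response, conversion, and cost metrics."""
    total = len(records)
    responses = [r for r in records if r.get("responded_at")]
    meetings = [r for r in records if r.get("meeting")]
    spend = sum(r.get("cost", 0.0) for r in records)

    def rate(part, whole):
        # Avoid division by zero on empty campaigns.
        return part / whole if whole else 0.0

    return {
        "contacts": total,
        "response_rate": rate(len(responses), total),
        "meeting_rate": rate(len(meetings), total),
        "cost_per_contact": rate(spend, total),
        "cost_per_response": rate(spend, len(responses)),
        "cost_per_meeting": rate(spend, len(meetings)),
    }
```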
Step 3: Segment analysis
Analyze performance across segments to identify patterns:
By Tier:
- Compare response rates across Tier 1, 2, 3
- Assess whether tier prioritization is correct
- Calculate efficiency (response rate per research hour)
By Channel:
- Compare email vs phone vs other channels
- Identify optimal channel sequences
- Calculate cost-effectiveness by channel
By Role:
- Compare staff vs legislator response rates
- Compare committee staff vs personal office
- Identify most responsive role types
By Party/Region (if applicable):
- Compare response rates across parties
- Identify regional patterns
- Note any surprising findings
By Timing:
- Compare response rates by day of week
- Compare response rates by time of day
- Identify legislative calendar effects
For each segment comparison:
- Calculate rate difference
- Assess sample size adequacy
- Note confidence level
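One way to run these comparisons is a generic group-by helper; the 50-contact threshold used below to flag inadequate samples is a rule of thumb, not a statistical test:

```python
from collections import defaultdict

def rates_by_segment(records, key):
    """Response rate and sample size per value of a segment key
    (e.g. "tier", "channel", "role")."""
    groups = defaultdict(lambda: [0, 0])  # value -> [responses, total]
    for rec in records:
        bucket = groups[rec.get(key, "unknown")]
        bucket[1] += 1
        if rec.get("responded_at"):
            bucket[0] += 1
    return {value: {"rate": resp / total, "n": total}
            for value, (resp, total) in groups.items()}

# Example: flag small samples before trusting a tier gap.
# for tier, s in sorted(rates_by_segment(records, "tier").items()):
#     caveat = "" if s["n"] >= 50 else " (small sample, low confidence)"
#     print(f"Tier {tier}: {s['rate']:.1%} of {s['n']}{caveat}")
```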
Step 4: A/B test analysis
Analyze A/B test results (if tests were run):
For each test:
- Confirm random assignment was maintained
- Calculate response rate for each variant
- Calculate absolute and relative difference
- Assess statistical significance:
  - >20% difference with n=50+ each: likely real
  - 10-20% difference: needs more data
  - <10% difference: probably noise
- Determine winner or “inconclusive”
- Document interpretation
If inconclusive:
- Note sample size achieved
- Recommend continuation in next campaign
- Document preliminary direction
If clear winner:
- Document winning variant
- Update default templates
- Archive losing variant
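A sketch encoding the heuristic above; it reads the thresholds as relative differences, which is an interpretation rather than something the procedure pins down, and it is no substitute for a proper two-proportion test on large campaigns:

```python
def judge_ab_test(rate_a, n_a, rate_b, n_b):
    """Apply the rough significance heuristic from Step 4."""
    abs_diff = abs(rate_a - rate_b)
    base = min(rate_a, rate_b) or 1e-9  # guard against a zero baseline
    rel_diff = abs_diff / base
    if min(n_a, n_b) >= 50 and rel_diff > 0.20:
        verdict = "likely real"
    elif rel_diff >= 0.10:
        verdict = "needs more data"
    else:
        verdict = "probably noise"
    winner = "A" if rate_a > rate_b else "B"
    return {
        "abs_diff": abs_diff,
        "rel_diff": rel_diff,
        "verdict": verdict,
        "winner": winner if verdict == "likely real" else "inconclusive",
    }

# judge_ab_test(0.12, 80, 0.09, 78) -> ~33% relative lift, "likely real"
```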
Step 5: Learning extraction
Extract specific learnings from all analyses:
Sources for learnings:
- Segment analysis patterns
- A/B test results
- Unexpected outcomes
- Qualitative observations (meeting feedback, etc.)
For each potential learning:
- State finding clearly in one sentence
- Document evidence:
  - Sample size
  - Effect size (percentage difference)
  - Data source
- Assess confidence level:
  - High: Large sample, clear effect, consistent with theory
  - Medium: Moderate sample, notable effect, plausible
  - Low: Small sample, modest effect, could be noise
- Define implication for future campaigns
- Specify status: Validated / Preliminary / Needs more data
Target: 3-5 learnings per campaign
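A learning can be captured as a small record; this dataclass mirrors the fields above and is illustrative only:

```python
from dataclasses import dataclass

@dataclass
class Learning:
    """One campaign learning, mirroring the fields listed above."""
    learning_id: str    # assigned in Step 6 (e.g. "L001")
    finding: str        # the one-sentence statement
    sample_size: int
    effect_size: float  # e.g. percentage-point difference
    source: str         # segment analysis, A/B test, observation, ...
    confidence: str     # "high" | "medium" | "low"
    implication: str    # what to change in future campaigns
    status: str = "preliminary"  # "validated" | "preliminary" | "needs more data"
```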
Step 6: Knowledge base update
Integrate new learnings into knowledge base:
For new learnings:
- Assign unique ID (L001, L002, etc.)
- Categorize (message, channel, target, timing, policy)
- Add to knowledge base with full documentation
- Link to source campaign
For existing learnings:
- Check if new data supports or contradicts
- Update confidence level if warranted
- Add new evidence to existing learning
- Mark as “validated” if consistently supported
Maintenance:
- Archive contradicted learnings
- Consolidate related learnings
- Flag learnings that need more data
- Remove outdated learnings
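A sketch of the update logic, reusing the illustrative `Learning` record from Step 5; matching entries by exact finding text and upgrading status after a single corroboration are deliberate simplifications:

```python
def upsert_learning(knowledge_base, new):
    """Fold a new learning into the knowledge base.

    `knowledge_base` is assumed to be a dict keyed by learning ID.
    """
    for entry in knowledge_base.values():
        if entry.finding == new.finding:
            # Corroborating evidence: pool the sample and, if the entry
            # was preliminary, mark it validated. Real logic would also
            # check that the effect runs in the same direction.
            entry.sample_size += new.sample_size
            if entry.status == "preliminary":
                entry.status = "validated"
            return entry
    new.learning_id = f"L{len(knowledge_base) + 1:03d}"  # L001, L002, ...
    knowledge_base[new.learning_id] = new
    return new
```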
Step 7: Recommendation development
Translate learnings into specific recommendations:
Categories of recommendations:
- Message improvements
  - Template updates based on test results
  - Subject line changes
  - Call-to-action modifications
- Channel mix changes
  - Adjust channel sequence
  - Change channel allocation
  - Add or remove channels
- Targeting adjustments
  - Modify tier thresholds
  - Change target prioritization
  - Adjust personalization depth
- Timing optimizations
  - Adjust send days/times
  - Modify wave spacing
  - Align with legislative calendar
- Process improvements
  - Response handling changes
  - Follow-up sequence modifications
  - Meeting preparation updates
Each recommendation should be:
- Specific (not vague guidance)
- Actionable (can implement immediately)
- Measurable (can verify in next campaign)
- Linked to learning that supports it
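Three of these four criteria can be checked mechanically; specificity still needs human review. A hypothetical vetting helper, with illustrative field names:

```python
def vet_recommendation(rec, knowledge_base):
    """Return problems that disqualify a recommendation.

    `rec` is an illustrative dict with 'action', 'metric', and
    'learning_id' fields.
    """
    problems = []
    if not rec.get("action"):
        problems.append("not actionable: no concrete action stated")
    if not rec.get("metric"):
        problems.append("not measurable: no metric to verify next campaign")
    if rec.get("learning_id") not in knowledge_base:
        problems.append("not linked to a supporting learning")
    return problems
```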
Step 8: Report compilation
Create comprehensive campaign analysis report:
Report sections:
- Executive Summary (3-5 bullets)
  - Key metrics (contacts, responses, meetings)
  - Headline findings
  - Critical recommendations
- Performance Metrics
  - All calculated metrics in tables
  - Comparison to benchmarks/previous campaigns
  - Visual charts where helpful
- Segment Analysis
  - Performance by tier, channel, role, timing
  - Notable patterns with interpretation
- A/B Test Results
  - Each test with results and conclusion
  - Implications for future tests
- Key Learnings
  - Numbered list with evidence
  - Confidence levels noted
  - Implications stated
- Recommendations
  - Prioritized list of changes
  - Implementation notes
- Open Questions
  - What to investigate next
  - Tests to run in future campaigns
Format as a document suitable for team review and archiving.
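A minimal assembly sketch that enforces the section order and flags anything missing; emitting Markdown is an assumption about the archive format:

```python
REPORT_SECTIONS = [
    "Executive Summary", "Performance Metrics", "Segment Analysis",
    "A/B Test Results", "Key Learnings", "Recommendations",
    "Open Questions",
]

def compile_report(campaign_name, sections):
    """Assemble report sections in the order above.

    `sections` maps section title -> drafted text; missing sections
    are flagged in the output rather than silently dropped.
    """
    lines = [f"# Campaign Analysis: {campaign_name}", ""]
    for title in REPORT_SECTIONS:
        lines.append(f"## {title}")
        lines.append(sections.get(title, "(section missing: to be drafted)"))
        lines.append("")
    return "\n".join(lines)
```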
When to Use
- After completing an outreach campaign with sufficient data
- When A/B test results need analysis and interpretation
- At regular intervals to identify cross-campaign patterns
- When building or updating advocacy knowledge base
- Before planning next campaign to apply learnings
- When onboarding new team members to advocacy methodology
Verification
- All campaign data validated and complete
- All metrics calculated correctly
- Segment analysis covers key dimensions (tier, channel, timing)
- A/B tests have documented conclusions with confidence levels
- Each learning has evidence, confidence, and implication
- Knowledge base is updated and maintained
- Recommendations are specific and actionable
- Report is comprehensive and suitable for team review
Input: $ARGUMENTS
Apply this procedure to the input provided.