Universal Goal Analysis Framework
Input: $ARGUMENTS
Overview
A comprehensive question framework that applies to EVERY goal, regardless of domain. This consolidated version incorporates all improvements from v2 through v5: actionable fact-yielding questions (v2), fully decomposed sub-procedures (v3), leverage analysis and communication categories (v4), and operational, meta-cognitive, and quality categories (v5).
Steps
Step 1: GOAL UNDERSTANDING
Before analyzing, understand what’s actually wanted:
- Is there a specific, measurable outcome desired? → What is it?
- Is there a deadline or timeline? → What is it?
- Is there a reason this goal matters now? → What changed?
- Is this the REAL goal, or is it instrumental to something deeper? → Trace to root desire
- Has this goal been attempted before? → What happened?
- → INVOKE: /gu (goal understanding) for deep goal clarification
Step 2: SITUATION ANALYSIS
Understand the current state:
- Is there a gap between current state and goal? → How large?
- Are there resources currently available? → What are they?
- Are there constraints? → What can’t change?
- Are there stakeholders affected? → Who, and what do they want?
- Is there existing momentum? → In which direction?
- Are there deadlines or windows of opportunity? → When do they close?
Step 3: LEVERAGE ANALYSIS
Find where small effort produces big results:
Recursive requirement tracing:
- What does the goal require? (List requirements R1, R2, R3…)
- For each requirement: what does THAT require? (Recurse)
- Continue until you reach things you already have or can easily get
- At each level, estimate: cost to acquire, time to acquire, alternatives
```
Goal
├── R1: [requirement]
│   ├── R1.1: [sub-requirement] — have: Y/N — cost: [X]
│   └── R1.2: [sub-requirement] — have: Y/N — cost: [X]
├── R2: [requirement]
│   └── R2.1: [sub-requirement] — have: Y/N — cost: [X]
└── R3: [requirement]
    ├── R3.1: [sub-requirement] — have: Y/N — cost: [X]
    └── R3.2: [sub-requirement] — have: Y/N — cost: [X]
```
Leverage identification:
- Which requirement, if met, would make others easier?
- Which requirement has the best cost-to-impact ratio?
- Which requirement is the bottleneck?
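The recursive tracing above can be sketched as a small tree walk: recurse until you hit things you already have, then rank the unmet leaves by acquisition cost. The requirement names, costs, and the "highest cost = bottleneck" heuristic here are illustrative assumptions, not a prescribed implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    name: str
    have: bool = False          # do you already have it (or can you easily get it)?
    cost: float = 0.0           # estimated cost to acquire
    children: list["Requirement"] = field(default_factory=list)

def unmet_leaves(req: Requirement) -> list[Requirement]:
    """Recurse until reaching things you already have; collect the unmet leaves."""
    if req.have:
        return []
    if not req.children:
        return [req]
    leaves: list[Requirement] = []
    for child in req.children:
        leaves.extend(unmet_leaves(child))
    return leaves

def bottleneck(req: Requirement) -> Requirement:
    """A simple heuristic: the unmet leaf with the highest acquisition cost."""
    return max(unmet_leaves(req), key=lambda r: r.cost)

# Hypothetical tree mirroring the diagram above
goal = Requirement("Goal", children=[
    Requirement("R1", children=[
        Requirement("R1.1", have=True),
        Requirement("R1.2", cost=5),
    ]),
    Requirement("R2", children=[Requirement("R2.1", cost=2)]),
])
print(bottleneck(goal).name)  # prints "R1.2"
```

In practice the "which requirement makes others easier" question is a judgment call the tree only supports, not answers; the walk just makes the unmet leaves and their costs explicit.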
Step 4: STRATEGY ANALYSIS
How to get from current state to goal:
- Is there a known, proven path? → What is it? Why not just follow it?
- Are there multiple approaches? → What are they?
- For each approach:
- What is the mechanism? (HOW does it work, causally?)
- What assumptions does it rest on?
- What has to go right?
- What could go wrong?
- What’s the cost?
- Is there an unconventional approach worth considering?
- → INVOKE: /ie (innovation engine) for non-obvious strategies
Step 5: ACTION ANALYSIS
Multi-dimensional comparison of possible actions:
| Action | Impact | Effort | Risk | Reversibility | Learning Value | Speed |
|---|---|---|---|---|---|---|
| [action] | H/M/L | H/M/L | H/M/L | H/M/L | H/M/L | H/M/L |
Action prioritization:
- Is there a single action that would create the most progress? → Do it first
- Are there prerequisite actions? → Do those first
- Are there quick wins that build momentum? → Consider starting there
- Are there irreversible actions? → Delay until confident
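One way to operationalize the comparison table above is a crude weighted score: map H/M/L to numbers and count effort and risk against an action. The mapping and the equal weighting are assumptions for illustration; real prioritization should still apply the prerequisite and irreversibility checks above.

```python
# Hypothetical scoring: H/M/L mapped to 3/2/1; effort and risk subtract.
LEVELS = {"H": 3, "M": 2, "L": 1}

def action_score(impact, effort, risk, reversibility, learning, speed):
    """Higher is better; effort and risk count against the action."""
    pos = LEVELS[impact] + LEVELS[reversibility] + LEVELS[learning] + LEVELS[speed]
    neg = LEVELS[effort] + LEVELS[risk]
    return pos - neg

actions = {
    "quick win": action_score("M", "L", "L", "H", "M", "H"),
    "big bet":   action_score("H", "H", "H", "L", "H", "L"),
}
for name, score in sorted(actions.items(), key=lambda kv: -kv[1]):
    print(name, score)  # quick win scores 8, big bet scores 2
```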
Step 6: RISK ANALYSIS
What could prevent success:
- Is there a risk of failure? → What are the failure modes?
- For each failure mode:
- How likely? (1-5)
- How severe? (1-5)
- How detectable? (Can you see it coming?)
- What’s the mitigation?
- Is there a risk of success? → What changes if you achieve the goal?
- Is there a risk of inaction? → What happens if you do nothing?
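The failure-mode questions above reduce to a simple ranking: score each mode as likelihood × severity (both 1–5) and flag low-detectability risks for extra monitoring. The risk names and numbers below are made up for illustration.

```python
# Sketch: rank failure modes by likelihood x severity; among ties,
# surface the hard-to-detect ones first since they need active monitoring.
risks = [
    {"mode": "key dependency slips", "likelihood": 4, "severity": 3, "detectable": True},
    {"mode": "silent scope creep",   "likelihood": 3, "severity": 4, "detectable": False},
]
for r in risks:
    r["score"] = r["likelihood"] * r["severity"]

for r in sorted(risks, key=lambda r: (-r["score"], r["detectable"])):
    flag = "" if r["detectable"] else "  <- hard to see coming"
    print(f'{r["mode"]}: {r["score"]}{flag}')
```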
Step 7: COMMUNICATION
Who needs to know what:
- Is there anyone who needs to be informed? → Who and what?
- Is there anyone whose buy-in is needed? → How to get it?
- Is there anyone who might oppose this? → What are their concerns?
- Is there a way to make progress visible? → What to track and share?
Step 8: META ANALYSIS
Check the analysis itself:
- Are there questions I should be asking that I’m not?
- Am I assuming something I should be testing?
- Is my confidence calibrated? (Am I too sure or not sure enough?)
- What would change my mind about this goal?
- → INVOKE: /cpra (comprehensive aspects) to check for blind spots
Step 9: CAUSATION AND PREDICTION
- Is there a known causal model for this domain? (What causes what?)
- Which causal links are proven vs assumed?
- What predictions does your strategy make? (If I do X, Y should happen by Z date)
- How will you test those predictions?
- What would DISPROVE your causal model? (If this doesn’t happen, the model is wrong)
Step 10: ASSUMPTIONS AND BELIEFS
- List every assumption your strategy depends on
- For each: is it tested or untested?
- For each untested assumption: what’s the cost of testing it now vs learning it’s wrong later?
- What beliefs do you hold about this domain that you’ve never questioned?
- What would someone who DISAGREES with your approach say? What’s the strongest version of their argument?
Step 11: BIASES AND MENTAL MODELS
- Is there a sunk cost influencing continued pursuit? (Would you start this today given current state?)
- Is there anchoring on an initial estimate/plan that may be wrong?
- Is there confirmation bias in your evidence gathering?
- What mental model are you using? (E.g., “this is like a race” or “this is like gardening”)
- What does your mental model HIDE? (What aspects of reality doesn’t it capture?)
Step 12: SYSTEM DYNAMICS
- Are there feedback loops? (Actions that amplify or dampen themselves)
- Are there delays between action and effect? How long?
- Are there thresholds? (Points where behavior changes dramatically)
- Is the system stable or chaotic? (Small changes → small effects, or small changes → big effects?)
- Are there emergent behaviors? (System-level outcomes not predictable from parts)
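A toy simulation can make the loop-plus-delay questions concrete: a reinforcing loop whose effect only feeds back after a lag still compounds, just later than you expect. The growth rate and delay below are arbitrary illustration values, not a model of any real system.

```python
# Toy reinforcing feedback loop with a delay: each step's growth is driven
# by the state `delay` steps ago, so the compounding arrives late.
def simulate(initial, growth=0.1, delay=3, steps=12):
    history = [initial]
    for t in range(1, steps + 1):
        # the effect of an action only feeds back after `delay` steps
        feedback = history[t - delay] if t >= delay else history[0]
        history.append(history[-1] + growth * feedback)
    return history

trace = simulate(1.0)
print(f"after {len(trace) - 1} steps: {trace[-1]:.2f}")
```

The practical point: with a 3-step delay, the first few observations understate the loop's strength, which is exactly when anchoring on an initial estimate (Step 11) is most tempting.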
Step 13: STRATEGIC INTERACTION
- Is there a competitor/adversary? What are they optimizing for?
- Will they react to your actions? How?
- Is this zero-sum (your gain = their loss) or positive-sum (both can win)?
- Is this a repeated game (you’ll interact again) or one-shot?
- What information do they have about your strategy?
Step 14: OPERATIONAL CATEGORIES
14a. Delegation:
- What tasks can ONLY you do? What can others do?
- For delegated tasks: is the output quality acceptable?
- What’s the cost of quality loss vs the cost of doing it yourself?
14b. Capacity and Load:
- What’s your current capacity utilization? (% of maximum sustainable output)
- Is there slack? (If not, any disruption will cause cascade failure)
- What would you cut to create 20% slack?
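The slack question is simple arithmetic worth writing down: compare total commitments to a sustainable maximum and compute the hours to cut to reach 20% slack. The commitment categories and the 40-hour maximum below are made-up numbers for illustration.

```python
# Sketch: capacity utilization and the cut needed for 20% slack.
SUSTAINABLE_HOURS = 40
commitments = {"goal work": 22, "meetings": 10, "support": 8, "admin": 4}

total = sum(commitments.values())                 # 44 hours committed
utilization = total / SUSTAINABLE_HOURS           # 1.10 -> over capacity
target = 0.8 * SUSTAINABLE_HOURS                  # 32 hours leaves 20% slack
to_cut = max(0.0, total - target)                 # 12 hours to free up

print(f"utilization: {utilization:.0%}, cut {to_cut:.0f}h to reach 20% slack")
```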
14c. Queues and Handoffs:
- Where does work wait for someone? How long?
- Where are the handoff points? (Work passes from one person/system to another)
- What information is lost at handoffs?
14d. Interruptions:
- What interrupts the most important work? How often?
- What’s the cost of each interruption? (Switching cost + recovery time)
- How to protect the most important work from interruption?
Step 15: META-COGNITIVE CATEGORIES
- Cognitive load: Is this goal consuming mental bandwidth beyond work time? Is that sustainable?
- Attention allocation: What % of your attention goes to this vs other goals? Is that the right allocation?
- Sustained effort: Can you maintain the required effort for the required duration? What’s the burnout risk?
- Decision fatigue: How many decisions does this goal require per day/week? Can any be automated or pre-decided?
Step 16: QUALITY CATEGORIES
- Validation: How do you know your output is correct? (Not “it feels right”)
- Consistency: Are you applying the same standards across similar decisions?
- Simplicity: Is there a simpler approach that achieves 80% of the result?
- Metric gaming: Are you optimizing the metric instead of the actual outcome?
Step 17: REPORT
UNIVERSAL GOAL ANALYSIS:
Goal: [stated goal]
Real goal: [if different from stated]
Situation: [current state summary]
Gap: [what needs to change]
Leverage points:
1. [highest-leverage requirement]
2. [second-highest]
Recommended strategy: [approach and mechanism]
Key assumption: [what must be true]
Priority actions:
1. [first action] — impact: [H/M/L]
2. [second action]
3. [third action]
Top risks:
1. [risk] — likelihood: [1-5] — severity: [1-5] — mitigation: [action]
Communication: [who needs to know what]
Confidence: [overall confidence in this analysis]
Biggest unknown: [what would most change this analysis if known]
Causal model: [known/assumed/unknown]
Key predictions: [if X then Y by Z]
Untested assumptions: [N — top 3 listed]
Active biases: [which ones detected]
System dynamics: [feedback loops, delays, thresholds]
Strategic interaction: [competitive dynamics]
Operational:
- Capacity utilization: [%]
- Main queue/bottleneck: [where]
- Interruption cost: [hours/week]
Meta-cognitive:
- Cognitive load: [sustainable/unsustainable]
- Attention allocation: [% to this goal]
- Burnout risk: [H/M/L]
Quality:
- Validation method: [how you know it's working]
- Simplification opportunity: [if any]
- Metric gaming risk: [what you might be optimizing wrong]
When to Use
- Any new goal (most comprehensive version)
- Any goal that feels stuck
- Any goal where approach is uncertain
- Goals in complex systems with many interacting parts
- Goals requiring long sustained effort
- Goals in competitive/adversarial contexts
- Periodic review of ongoing goals
- When a domain-specific procedure doesn’t exist
- → INVOKE: /gu (goal understanding) for goal clarification
- → INVOKE: /gd (goal decomposition) for breaking down complex goals
Verification
- Goal is specific and measurable
- “Is there” asked before “what is” (no assumptions about problems existing)
- Leverage analysis performed with recursive requirement tracing
- Multiple strategies considered (not just the obvious one)
- Actions compared on multiple dimensions
- Risks include risk of success and risk of inaction
- Communication needs identified
- Meta-analysis performed (checking the analysis itself)
- Causal model assessed and predictions stated
- Assumptions listed and tested/untested status noted
- Biases actively checked
- System dynamics mapped
- Operational load assessed
- Meta-cognitive sustainability checked
- Quality validation method defined
Integration Points
- Often invoked from: /pce (new goal input), /gu (after parsing), manual invocation
- Routes to: /grfr, /crw, /dcm, /assumption_verification, /constraint_workarounds, /ria, /spd, /ie, /cpra
- Related: /gu, /gjs, /ve, /gd