Tier 4

uga - Universal Goal Analysis (Consolidated)

Universal Goal Analysis Framework

Input: $ARGUMENTS


Overview

A comprehensive question framework that applies to EVERY goal, regardless of domain. This is the consolidated version incorporating all improvements from v2 through v5: actionable fact-yielding questions (v2), fully decomposed sub-procedures (v3), leverage analysis and communication categories (v4), and operational, meta-cognitive, and quality categories (v5).

Steps

Step 1: GOAL UNDERSTANDING

Before analyzing, understand what’s actually wanted:

  1. Is there a specific, measurable outcome desired? → What is it?
  2. Is there a deadline or timeline? → What is it?
  3. Is there a reason this goal matters now? → What changed?
  4. Is this the REAL goal, or is it instrumental to something deeper? → Trace to root desire
  5. Has this goal been attempted before? → What happened?
  6. → INVOKE: /gu (goal understanding) for deep goal clarification

Step 2: SITUATION ANALYSIS

Understand the current state:

  1. Is there a gap between current state and goal? → How large?
  2. Are there resources currently available? → What are they?
  3. Are there constraints? → What can’t change?
  4. Are there stakeholders affected? → Who, and what do they want?
  5. Is there existing momentum? → In which direction?
  6. Are there deadlines or windows of opportunity? → When do they close?

Step 3: LEVERAGE ANALYSIS

Find where small effort produces big results:

Recursive requirement tracing:

  1. What does the goal require? (List requirements R1, R2, R3…)
  2. For each requirement: what does THAT require? (Recurse)
  3. Continue until you reach things you already have or can easily get
  4. At each level, estimate: cost to acquire, time to acquire, alternatives

Goal
├── R1: [requirement]
│   ├── R1.1: [sub-requirement] — have: Y/N — cost: [X]
│   └── R1.2: [sub-requirement] — have: Y/N — cost: [X]
├── R2: [requirement]
│   └── R2.1: [sub-requirement] — have: Y/N — cost: [X]
└── R3: [requirement]
    ├── R3.1: [sub-requirement] — have: Y/N — cost: [X]
    └── R3.2: [sub-requirement] — have: Y/N — cost: [X]

Leverage identification:

  • Which requirement, if met, would make others easier?
  • Which requirement has the best cost-to-impact ratio?
  • Which requirement is the bottleneck?
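The recursive tracing and bottleneck questions above can be sketched as a small tree walk. This is an illustrative sketch only; the `Requirement` class, the example requirement names, and the cost figures are assumptions, not part of the framework:

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    name: str
    have: bool = False    # do we already have this?
    cost: int = 0         # estimated cost to acquire if we don't
    children: list["Requirement"] = field(default_factory=list)

def unmet_leaves(req: Requirement) -> list[Requirement]:
    """Recurse until reaching things already held or leaf requirements."""
    if req.have:
        return []  # satisfied; nothing below it matters
    if not req.children:
        return [req]
    return [leaf for child in req.children for leaf in unmet_leaves(child)]

def bottleneck(goal: Requirement) -> Requirement:
    """The costliest unmet leaf is the candidate bottleneck."""
    return max(unmet_leaves(goal), key=lambda r: r.cost)

# Hypothetical tree mirroring the diagram above.
goal = Requirement("Goal", children=[
    Requirement("R1", children=[
        Requirement("R1.1", have=True),
        Requirement("R1.2", cost=5),
    ]),
    Requirement("R2", children=[Requirement("R2.1", cost=2)]),
])
print(bottleneck(goal).name)  # the highest-cost unmet sub-requirement
```

In this toy tree, R1.1 is already held, so the walk surfaces R1.2 and R2.1, and R1.2 (cost 5) is flagged as the bottleneck.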

Step 4: STRATEGY ANALYSIS

How to get from current state to goal:

  1. Is there a known, proven path? → What is it? Why not just follow it?
  2. Are there multiple approaches? → What are they?
  3. For each approach:
    • What is the mechanism? (HOW does it work, causally?)
    • What assumptions does it rest on?
    • What has to go right?
    • What could go wrong?
    • What’s the cost?
  4. Is there an unconventional approach worth considering?
  5. → INVOKE: /ie (innovation engine) for non-obvious strategies

Step 5: ACTION ANALYSIS

Multi-dimensional comparison of possible actions:

| Action   | Impact | Effort | Risk  | Reversibility | Learning Value | Speed |
|----------|--------|--------|-------|---------------|----------------|-------|
| [action] | H/M/L  | H/M/L  | H/M/L | H/M/L         | H/M/L          | H/M/L |

Action prioritization:

  1. Is there a single action that would create the most progress? → Do it first
  2. Are there prerequisite actions? → Do those first
  3. Are there quick wins that build momentum? → Consider starting there
  4. Are there irreversible actions? → Delay until confident
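The prioritization rules above can be turned into a rough score. The weights and example actions here are invented for illustration; the one real rule carried over is that irreversible (low-reversibility) actions are held back:

```python
# Map the H/M/L matrix to numbers. The scale and weights are assumptions.
SCALE = {"H": 3, "M": 2, "L": 1}

def priority(impact: str, effort: str, risk: str, reversibility: str) -> int:
    """Higher is better: reward impact, penalize effort and risk."""
    score = 2 * SCALE[impact] - SCALE[effort] - SCALE[risk]
    if reversibility == "L":
        score -= 2  # delay irreversible actions until confident
    return score

# Hypothetical actions: (impact, effort, risk, reversibility)
actions = {
    "quick win": ("M", "L", "L", "H"),
    "big bet":   ("H", "H", "H", "L"),
}
ranked = sorted(actions, key=lambda a: priority(*actions[a]), reverse=True)
```

Here the reversible quick win outranks the irreversible big bet, matching rules 3 and 4: build momentum first, delay irreversible moves.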

Step 6: RISK ANALYSIS

What could prevent success:

  1. Is there a risk of failure? → What are the failure modes?
  2. For each failure mode:
    • How likely? (1-5)
    • How severe? (1-5)
    • How detectable? (Can you see it coming?)
    • What’s the mitigation?
  3. Is there a risk of success? → What changes if you achieve the goal?
  4. Is there a risk of inaction? → What happens if you do nothing?
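The 1-5 scoring above is commonly combined into an exposure value (likelihood times severity) to decide which failure modes to mitigate first. The failure modes and scores below are hypothetical examples:

```python
# Exposure = likelihood * severity, both on the 1-5 scale from Step 6.
failure_modes = [
    {"mode": "key assumption wrong", "likelihood": 4, "severity": 5},
    {"mode": "deadline slips",       "likelihood": 3, "severity": 2},
    {"mode": "stakeholder opposes",  "likelihood": 2, "severity": 4},
]
for fm in failure_modes:
    fm["exposure"] = fm["likelihood"] * fm["severity"]

# Mitigate the highest-exposure failure modes first.
ranked = sorted(failure_modes, key=lambda fm: fm["exposure"], reverse=True)
```

Detectability (question 2c) is deliberately left out of the score: a hard-to-detect risk of equal exposure usually deserves earlier attention, so treat it as a tiebreaker rather than a multiplier.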

Step 7: COMMUNICATION

Who needs to know what:

  1. Is there anyone who needs to be informed? → Who and what?
  2. Is there anyone whose buy-in is needed? → How to get it?
  3. Is there anyone who might oppose this? → What are their concerns?
  4. Is there a way to make progress visible? → What to track and share?

Step 8: META ANALYSIS

Check the analysis itself:

  1. Are there questions I should be asking that I’m not?
  2. Am I assuming something I should be testing?
  3. Is my confidence calibrated? (Am I too sure or not sure enough?)
  4. What would change my mind about this goal?
  5. → INVOKE: /cpra (comprehensive aspects) to check for blind spots

Step 9: CAUSATION AND PREDICTION

  1. Is there a known causal model for this domain? (What causes what?)
  2. Which causal links are proven vs assumed?
  3. What predictions does your strategy make? (If I do X, Y should happen by Z date)
  4. How will you test those predictions?
  5. What would DISPROVE your causal model? (If this doesn’t happen, the model is wrong)

Step 10: ASSUMPTIONS AND BELIEFS

  1. List every assumption your strategy depends on
  2. For each: is it tested or untested?
  3. For each untested assumption: what’s the cost of testing it now vs learning it’s wrong later?
  4. What beliefs do you hold about this domain that you’ve never questioned?
  5. What would someone who DISAGREES with your approach say? What’s the strongest version of their argument?

Step 11: BIASES AND MENTAL MODELS

  1. Is there a sunk cost influencing continued pursuit? (Would you start this today given current state?)
  2. Is there anchoring on an initial estimate/plan that may be wrong?
  3. Is there confirmation bias in your evidence gathering?
  4. What mental model are you using? (E.g., “this is like a race” or “this is like gardening”)
  5. What does your mental model HIDE? (What aspects of reality doesn’t it capture?)

Step 12: SYSTEM DYNAMICS

  1. Are there feedback loops? (Actions that amplify or dampen themselves)
  2. Are there delays between action and effect? How long?
  3. Are there thresholds? (Points where behavior changes dramatically)
  4. Is the system stable or chaotic? (Small changes → small effects, or small changes → big effects?)
  5. Are there emergent behaviors? (System-level outcomes not predictable from parts)
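Three of these dynamics (an amplifying feedback loop, a delay, and a threshold) can be seen in a toy simulation. All parameters here are invented purely to illustrate the questions, not to model any real system:

```python
# Toy system: growth feeds on output from DELAY steps ago (amplifying
# feedback with a lag), until a THRESHOLD sharply dampens it.
DELAY = 3         # steps between action and effect
THRESHOLD = 10.0  # point where behavior changes dramatically

state, history = 1.0, []
for step in range(12):
    history.append(state)
    # Delayed feedback: today's growth depends on the state DELAY-1 steps back.
    delayed = history[-DELAY] if len(history) >= DELAY else 0.0
    growth = 0.5 * delayed
    if state > THRESHOLD:
        growth *= 0.1  # threshold crossed: the loop flips from amplifying to damped
    state += growth
```

Note how nothing happens for the first few steps (the delay), then growth compounds (the amplifying loop), then flattens past the threshold: exactly the pattern that makes delayed-feedback systems easy to under- or over-steer.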

Step 13: STRATEGIC INTERACTION

  1. Is there a competitor/adversary? What are they optimizing for?
  2. Will they react to your actions? How?
  3. Is this zero-sum (your gain = their loss) or positive-sum (both can win)?
  4. Is this a repeated game (you’ll interact again) or one-shot?
  5. What information do they have about your strategy?

Step 14: OPERATIONAL CATEGORIES

14a. Delegation:

  • What tasks can ONLY you do? What can others do?
  • For delegated tasks: is the output quality acceptable?
  • What’s the cost of quality loss vs the cost of doing it yourself?

14b. Capacity and Load:

  • What’s your current capacity utilization? (% of maximum sustainable output)
  • Is there slack? (If not, any disruption will cause cascade failure)
  • What would you cut to create 20% slack?

14c. Queues and Handoffs:

  • Where does work wait for someone? How long?
  • Where are the handoff points? (Work passes from one person/system to another)
  • What information is lost at handoffs?

14d. Interruptions:

  • What interrupts the most important work? How often?
  • What’s the cost of each interruption? (Switching cost + recovery time)
  • How to protect the most important work from interruption?

Step 15: META-COGNITIVE CATEGORIES

  1. Cognitive load: Is this goal consuming mental bandwidth beyond work time? Is that sustainable?
  2. Attention allocation: What % of your attention goes to this vs other goals? Is that the right allocation?
  3. Sustained effort: Can you maintain the required effort for the required duration? What’s the burnout risk?
  4. Decision fatigue: How many decisions does this goal require per day/week? Can any be automated or pre-decided?

Step 16: QUALITY CATEGORIES

  1. Validation: How do you know your output is correct? (Not “it feels right”)
  2. Consistency: Are you applying the same standards across similar decisions?
  3. Simplicity: Is there a simpler approach that achieves 80% of the result?
  4. Metric gaming: Are you optimizing the metric instead of the actual outcome?

Step 17: Report

UNIVERSAL GOAL ANALYSIS:
Goal: [stated goal]
Real goal: [if different from stated]

Situation: [current state summary]
Gap: [what needs to change]

Leverage points:
1. [highest-leverage requirement]
2. [second-highest]

Recommended strategy: [approach and mechanism]
Key assumption: [what must be true]

Priority actions:
1. [first action] — impact: [H/M/L]
2. [second action]
3. [third action]

Top risks:
1. [risk] — likelihood: [1-5] — severity: [1-5] — mitigation: [action]

Communication: [who needs to know what]

Confidence: [overall confidence in this analysis]
Biggest unknown: [what would most change this analysis if known]

Causal model: [known/assumed/unknown]
Key predictions: [if X then Y by Z]
Untested assumptions: [N — top 3 listed]
Active biases: [which ones detected]
System dynamics: [feedback loops, delays, thresholds]
Strategic interaction: [competitive dynamics]

Operational:
- Capacity utilization: [%]
- Main queue/bottleneck: [where]
- Interruption cost: [hours/week]

Meta-cognitive:
- Cognitive load: [sustainable/unsustainable]
- Attention allocation: [% to this goal]
- Burnout risk: [H/M/L]

Quality:
- Validation method: [how you know it's working]
- Simplification opportunity: [if any]
- Metric gaming risk: [what you might be optimizing wrong]

When to Use

  • Any new goal (most comprehensive version)
  • Any goal that feels stuck
  • Any goal where approach is uncertain
  • Goals in complex systems with many interacting parts
  • Goals requiring long sustained effort
  • Goals in competitive/adversarial contexts
  • Periodic review of ongoing goals
  • When a domain-specific procedure doesn’t exist
  • → INVOKE: /gu (goal understanding) for goal clarification
  • → INVOKE: /gd (goal decomposition) for breaking down complex goals

Verification

  • Goal is specific and measurable
  • “Is there” asked before “what is” (no assumptions about problems existing)
  • Leverage analysis performed with recursive requirement tracing
  • Multiple strategies considered (not just the obvious one)
  • Actions compared on multiple dimensions
  • Risks include risk of success and risk of inaction
  • Communication needs identified
  • Meta-analysis performed (checking the analysis itself)
  • Causal model assessed and predictions stated
  • Assumptions listed and tested/untested status noted
  • Biases actively checked
  • System dynamics mapped
  • Operational load assessed
  • Meta-cognitive sustainability checked
  • Quality validation method defined

Integration Points

  • Often invoked from: /pce (new goal input), /gu (after parsing), manual invocation
  • Routes to: /grfr, /crw, /dcm, /assumption_verification, /constraint_workarounds, /ria, /spd, /ie, /cpra
  • Related: /gu, /gjs, /ve, /gd