Tier 2

sid - Situation Identification

Situation Identification

Input: $ARGUMENTS


Interpretations

Before executing, identify which interpretation matches the user’s input:

Interpretation 1 — Confused about what is happening: The user faces a novel or ambiguous situation and needs to build an accurate description from raw observations before choosing how to respond.

Interpretation 2 — Competing narratives: Multiple people (or the user’s own conflicting instincts) offer different accounts of what is going on, and the user needs to determine which framing is most accurate.

Interpretation 3 — Pattern-match validation: The user thinks they recognize the situation from past experience but wants to verify they are not misclassifying it based on superficial similarity.

If ambiguous, ask: “I can help with making sense of a confusing situation, sorting out conflicting accounts of what is happening, or checking whether your pattern recognition is accurate — which fits?” If clear from context, proceed with the matching interpretation.


Purpose

Every decision procedure has a Step 0: classify the situation so you know which procedure to follow. If you misclassify at Step 0, you execute the wrong procedure perfectly and arrive at the wrong answer with high confidence. This is the most dangerous failure mode in decision-making — not making bad decisions, but making good decisions about the wrong problem.

This procedure answers one question: What is actually happening here?


Step 0: Triage

Answer the following three questions out loud or in writing. Do not skip this.

0.1 How much time do I have before I must act?

  • Less than 10 minutes — Go to SECTION C (Urgent)
  • More than 10 minutes — Continue to 0.2

0.2 Are other people telling me different things about what is happening?

  • Yes, and their accounts conflict in substance — Go to SECTION D (Contested)
  • No, or disagreements are minor — Continue to 0.3

0.3 Have I seen something that looks like this before?

  • Yes, I immediately recognize a pattern — Go to SECTION B (Familiar-seeming — DANGER ZONE)
  • No, this feels new or confusing — Go to SECTION A (Novel)
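The triage above is a small decision tree. As an illustration only (the function and argument names below are invented for this sketch, not part of the procedure), it can be expressed as:

```python
def triage(minutes_available: float, accounts_conflict: bool,
           pattern_recognized: bool) -> str:
    """Step 0 triage as a decision tree. Returns the section letter to run.

    Illustrative sketch; names are invented for this example.
    """
    if minutes_available < 10:
        return "C"  # Urgent: run the stripped-down fast path
    if accounts_conflict:
        return "D"  # Contested: substantive disagreement between accounts
    # Familiar-seeming situations are the danger zone (Section B);
    # genuinely novel or confusing ones go to Section A.
    return "B" if pattern_recognized else "A"
```

Note that the ordering matters: time pressure overrides everything else, and contested accounts are checked before pattern recognition.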

SECTION A: First-Time / Novel Situation

Use this when you genuinely do not recognize what is happening. The goal is to build an accurate description from raw observation before applying any framework.

A.1 — Describe observables only.

Write down what you can directly observe. No interpretations, no causes, no judgments.

WHAT YOU SHOULD SEE: A list of facts that a camera or microphone could record. “Revenue dropped 15% in Q3.” NOT “The business is failing.” “My partner did not respond to three messages.” NOT “My partner is pulling away.”

Binary check: Read each item. Does it contain a verb like “is,” “feels,” “seems,” or “means”? If yes, rewrite it as a pure observable. Only proceed when every item passes.

A.2 — List what you do NOT know.

For each observable from A.1, write down what additional information would change your interpretation of it.

WHAT YOU SHOULD SEE: A list of questions, not answers. “Did the revenue drop happen across all segments or just one?” “Was my partner’s phone working during that period?”

Binary check: Do you have at least one unknown for each observable? If not, you are likely treating assumptions as knowns. Go back and add genuine unknowns.

A.3 — Generate three competing descriptions.

Using only the observables from A.1, write three different one-sentence descriptions of the situation that are all consistent with the evidence.

WHAT YOU SHOULD SEE: Three descriptions that are meaningfully different from each other, not cosmetic variations. Example: (1) “The company is losing product-market fit.” (2) “The company’s go-to-market motion broke when the sales lead left.” (3) “The market contracted and this has nothing to do with us.”

Binary check: Would each description lead you to a different action? If two descriptions lead to the same action, they are not different enough. Replace one.

A.4 — Identify which description you WANT to be true.

Write down which of the three descriptions you are drawn to, and why.

WHAT YOU SHOULD SEE: An honest statement like “I want it to be #3 because that means it is not my fault” or “I want it to be #1 because that is the one I know how to fix.”

Binary check: Does your preferred description also happen to be the most comfortable one for you personally? If yes, flag this as a bias risk and do not weight it higher.

A.5 — Select the description you can most cheaply test.

Of the three descriptions, which one can you gather evidence for or against most quickly and cheaply?

WHAT YOU SHOULD SEE: A specific action you can take in the next 24 hours to get data. “Pull the revenue by segment.” “Ask my partner directly if something is wrong.” “Check if competitors saw the same drop.”

Binary check: Does the test have a clear outcome that distinguishes between at least two of your three descriptions? If no, find a better test.

A.6 — Run the test. Update. Repeat or proceed.

Execute the test from A.5. Based on the results:

  • One description clearly wins — Proceed with that classification. Go to VALIDATION CHECKPOINT.
  • Still ambiguous — Return to A.3 with new information and generate new descriptions.
  • You have cycled three times with no convergence — Label this situation as “genuinely ambiguous” and state which description you would bet on if forced. Ambiguity does not exempt you from having a best guess. Proceed with the most reversible action available.

SECTION B: Familiar-Seeming Situation (DANGER: Pattern-Matching Errors)

Use this when your brain immediately says “I know what this is.” This is the most dangerous section because the confidence of recognition suppresses further investigation.

B.1 — Name the pattern you are matching to.

Write down in one sentence what you think this situation is, and when you last saw it.

WHAT YOU SHOULD SEE: Something like “This is a scope creep problem, same as the Q2 project last year” or “This is my mother being manipulative, same pattern as always.”

B.2 — Force yourself to list three differences.

Between this situation and the one you are matching it to, write down three things that are different. They can be small.

WHAT YOU SHOULD SEE: At least three concrete differences. “Different team members.” “Higher stakes.” “I am more tired this time.” If you cannot list three differences, you are not looking hard enough — every situation has differences.

Binary check: Are any of the three differences structural rather than cosmetic? (A structural difference changes how the situation works, not just how it looks.) If yes, proceed to B.3 with heightened caution. If all three are cosmetic, the pattern match may be valid — proceed to B.4.

B.3 — Ask: What would I see if this were NOT the pattern I think it is?

Write down three observable things that would be present if your pattern match is wrong.

WHAT YOU SHOULD SEE: Specific, checkable indicators. “If this is NOT scope creep, I would expect the original requirements to still be ambiguous even if no new requests came in.” “If this is NOT manipulation, I would expect my mother to accept my boundary when I state it clearly.”

Binary check: Can you check for each indicator right now or within 24 hours? If yes, check them. If any indicator is present, your pattern match is wrong. Go to SECTION A and start fresh.

B.4 — Check for the “Same Person, Different Problem” error.

If the situation involves another person you have dealt with before, answer: Am I classifying the situation based on who is involved rather than what is happening?

Binary check: If you removed the identity of the person and described only the behavior, would you classify the situation the same way? If no, you are pattern-matching to the person, not the situation. Go to SECTION A.

B.5 — Proceed with your classification but set a checkpoint.

If you have passed all checks, your pattern match is likely valid. Proceed. The goal is accurate classification, not maximum self-doubt. If your pattern match survives the checks, trust it. But set a concrete future checkpoint: “If [specific observable] has not occurred by [date], I will reclassify.”

WHAT YOU SHOULD SEE: A written statement with a specific observable and a specific date. Not “I will reassess if things do not improve” but “If the client has not signed by February 15, this is not a timing problem and I will reclassify as a fit problem.”


SECTION C: Urgent Situation (Limited Time)

Use this when you have less than 10 minutes before you must act. This is a stripped-down version designed for speed.

C.1 — State what you think is happening in one sentence.

Do not deliberate. Write or say the first classification that comes to mind.

C.2 — Ask one question: What is the worst thing that happens if I am wrong?

  • Consequences are reversible or minor — Act on your classification. Reassess after.
  • Consequences are severe and irreversible — Continue to C.3.

C.3 — Generate exactly one alternative classification.

The alternative must be the most dangerous plausible interpretation. Not the most likely — the most dangerous.

WHAT YOU SHOULD SEE: Your original classification and one alternative. “I think this is a server outage” vs. “This could be a security breach.” “I think my child is overtired” vs. “My child could be having a medical event.”

C.4 — Ask: Can I act in a way that covers both classifications?

  • Yes — Do that. Reassess when time pressure lifts.
  • No — Act on the classification that carries the worse consequences if it turns out to be true and you ignored it. This is minimax regret: minimize your maximum possible regret.
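The C.4 rule can be made precise as minimax regret. The sketch below is a generic illustration with invented action and state names, not a prescribed implementation:

```python
def minimax_regret(costs):
    """costs[action][state] = cost of taking `action` when `state` turns out
    to be true. Returns the action whose worst-case regret is smallest.

    Generic sketch; the data layout and names are illustrative.
    """
    states = next(iter(costs.values())).keys()
    # Best achievable cost in each state, with hindsight.
    best = {s: min(costs[a][s] for a in costs) for s in states}
    # Regret of an action = how much worse it is than the hindsight-best
    # choice, evaluated in its worst state.
    regret = {a: max(costs[a][s] - best[s] for s in states) for a in costs}
    return min(regret, key=regret.get)
```

For the server example: if mistaking a breach for an outage costs far more than the reverse mistake, the minimax-regret action is to respond as if it were a breach.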

C.5 — After acting, mandatory reassessment.

Once the immediate pressure has passed, return to SECTION A or SECTION B and do a full assessment. Urgent classifications are provisional. Never let them harden into permanent ones.


SECTION D: Contested Situation (Multiple People Disagree)

Use this when different people are telling you different things about what is happening.

D.1 — Collect all classifications.

Write down every distinct description of the situation offered by anyone involved. Use their words, not your paraphrases.

WHAT YOU SHOULD SEE: A numbered list of direct quotes or near-quotes. “Marketing says: ‘Sales is not following up on our leads.’” “Sales says: ‘Marketing is sending us unqualified leads.’”

D.2 — For each classification, write down what is true about it.

Do not evaluate which is “right.” For each one, identify what observable evidence supports it.

WHAT YOU SHOULD SEE: Evidence listed under each classification. Not “Marketing is right because…” but “Evidence supporting Marketing’s view: lead response time data shows 48-hour average. Evidence supporting Sales’ view: lead qualification score data shows 60% below threshold.”

Binary check: Does each classification have at least some supporting evidence? If a classification has zero evidence, discard it. If every classification has some evidence, continue.

D.3 — Identify what each classifier would lose if they are wrong.

For each person or group offering a classification, write down what it would cost them personally or professionally if their classification turned out to be incorrect.

WHAT YOU SHOULD SEE: Concrete stakes. “If Marketing is wrong, it means their campaign strategy failed.” “If Sales is wrong, it means their follow-up process is broken.”

Binary check: Is there a classification where the person has nothing to lose by being wrong? That person’s classification is more likely to be accurate (less motivated reasoning). Weight it accordingly.

D.4 — Look for the classification nobody is offering.

Write down one description of the situation that no involved party has suggested.

WHAT YOU SHOULD SEE: A description that would be uncomfortable for everyone. “Both Marketing and Sales are performing fine, but the product is no longer competitive.” This is often the correct classification because it is the one nobody is incentivized to see.

Binary check: Is the missing classification one that would require systemic change rather than blaming a specific party? If yes, investigate it seriously.

D.5 — Test the contested classifications.

Identify one metric, fact, or experiment that would distinguish between the top two competing classifications. Execute it. Return to D.2 with the new evidence and reassess.


Validation Checkpoint

Run this after any section produces a classification.

V.1 — State your classification in one sentence.

V.2 — What action does this classification imply?

V.3 — If you took that action and the situation got worse, what would that tell you?

WHAT YOU SHOULD SEE: An answer that would cause you to reclassify. “If I treat this as a performance problem and coach the employee, and their performance gets worse, that would tell me it is actually a motivation or structural problem.”

V.4 — Set a reclassification trigger. Write down: “I will reclassify if [specific observable] by [specific date].”
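A reclassification trigger is just an observable plus a deadline. A minimal sketch (the names here are invented for illustration):

```python
from datetime import date

def reclassify_due(observable_occurred: bool, deadline: date,
                   today: date) -> bool:
    """V.4 trigger: reclassify when the expected observable has NOT occurred
    by the deadline. Illustrative sketch only.
    """
    return (not observable_occurred) and today >= deadline
```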

V.5 — Proceed to the appropriate decision procedure for your classified situation.


Quick Reference Card

SITUATION IDENTIFICATION -- QUICK REFERENCE
============================================

BEFORE ANYTHING ELSE:
  1. How much time? (<10 min -> Section C)
  2. Do people disagree? (Yes -> Section D)
  3. Do I recognize this? (Yes -> Section B / No -> Section A)

SECTION A (NOVEL):
  Write observables -> List unknowns -> 3 descriptions ->
  Which do I WANT? -> Test cheapest -> Act or loop

SECTION B (FAMILIAR -- DANGER):
  Name the pattern -> 3 differences -> What if I am wrong? ->
  Person vs. situation check -> Proceed with checkpoint

SECTION C (URGENT):
  One-sentence classification -> Worst case if wrong? ->
  One dangerous alternative -> Cover both or minimax -> Reassess later

SECTION D (CONTESTED):
  Collect all views -> Evidence for each -> Stakes per person ->
  Missing classification -> Test to distinguish

ALWAYS END WITH:
  Classification sentence -> Implied action ->
  What if it gets worse? -> Reclassification trigger

RED FLAGS:
  - Classified in <30 seconds with high confidence
  - Classification makes a "good story"
  - You are never the cause in your classification
  - Someone else handed you the classification
  - Your preferred classification is the comfortable one

Common Mistakes

Mistake 1: Treating the symptom as the situation

You see someone crying and classify the situation as “person is sad.” The situation may be: person is frustrated, person is relieved, person is manipulating, person is having an allergic reaction. The crying is an observable. The situation is what produced it. Always separate the observable from the interpretation.

Mistake 2: Classifying based on what you can fix

If you are a hammer, every situation looks like a nail. Engineers classify interpersonal problems as systems problems. Therapists classify systems problems as interpersonal problems. You will unconsciously classify the situation as something within your skillset. Ask: “Would someone from a completely different background classify this the same way?”

Mistake 3: Letting the most vocal person define the situation

In any group, one person typically speaks first, speaks loudest, and speaks with the most confidence. Their classification becomes the default. This has nothing to do with accuracy. Actively solicit the classification of the quietest person in the room.

Mistake 4: Updating your classification without realizing it

You start with classification A. New information arrives that is mildly inconsistent with A. You do not reclassify — you subtly shift A to accommodate the new information, creating A', then A'', then A'''. By the end, your classification has drifted far from where it started, but you feel like you have been consistent. Write your classification down at the start. Compare it to your current classification periodically.

Mistake 5: Confusing “I don’t know what this is” with “this is not important”

Situations that resist classification feel uncomfortable. The most common escape from that discomfort is to minimize: “It is probably nothing.” “I am probably overthinking this.” If you cannot classify it, that is information. Something that resists classification deserves MORE attention, not less.

Mistake 6: Using probability language to mask uncertainty

Saying “There is a 70% chance this is a market problem” sounds rigorous but is often a disguise for “I think this is a market problem and I want to sound calibrated.” Unless you have a real base rate (from data, not intuition), your probability estimate is your confidence dressed up as math. Say “I believe” rather than “there is an X% chance” unless you can justify the number.
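One honest alternative to invented percentages is to check your calibration against your own track record. The sketch below assumes you have logged past forecasts as (stated probability, outcome) pairs; the function and variable names are illustrative:

```python
from collections import defaultdict

def calibration_table(forecasts):
    """forecasts: iterable of (stated_probability, outcome) pairs, where
    outcome is 1 if the prediction came true, else 0.

    Buckets forecasts by stated probability (to one decimal) and reports
    the observed hit rate per bucket. If your "70%" claims come true only
    half the time, the number was confidence dressed up as math.
    Illustrative sketch only.
    """
    buckets = defaultdict(list)
    for p, outcome in forecasts:
        buckets[round(p, 1)].append(outcome)
    return {p: sum(o) / len(o) for p, o in sorted(buckets.items())}
```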

Mistake 7: Classifying the situation once and never revisiting

Situations evolve. The correct classification on Monday may be wrong by Friday. Every classification is provisional. The checkpoint you set in the procedure is not optional — it is the mechanism that prevents stale classifications from driving current decisions.


When to Override This Procedure

  1. True emergency with lives at stake. If someone is in physical danger, act on instinct. Assess later. Do not run a procedure while someone is bleeding.

  2. The situation is genuinely trivial. If the consequences of misclassification are less than the cost of running this procedure, skip it. Use judgment: “What’s the worst that happens if I’m wrong?” If the answer is “I waste 20 minutes,” do not spend 30 minutes on classification.

  3. You have domain expertise and verified track record. If you are an emergency room doctor assessing a patient, you have earned the right to pattern-match fast. But only within your domain.

  4. The system is already providing feedback. If you are in an environment with fast, clear feedback loops (writing code with tests, cooking with tasting, playing a sport with a scoreboard), let the feedback loop correct your classification rather than front-loading analysis.

  5. You are stuck in a recursive loop. If applying this procedure is itself creating paralysis, stop. Pick the most reversible classification, act on it, and let reality tell you whether you were right.


Worked Examples

Example 1: Personal — “My friend is being distant”

Step 0 — Triage:

  • Time pressure? No. Continue.
  • Others disagree? No, solo assessment. Continue.
  • Do I recognize this? Yes — this feels like when my college friend pulled away before ending the friendship. SECTION B.

Section B:

  • B.1: “This is the same as when Alex withdrew before cutting me off. Pattern: friend creates distance as precursor to ending relationship.”
  • B.2: Three differences: (1) This friend has been under work stress that Alex was not. (2) The “distance” is slower response times, not no responses. (3) We have been friends for 10 years, not 2.
  • B.2 binary check: Work stress is a structural difference — it changes the dynamics. Proceed to B.3 with heightened caution.
  • B.3: “What if this is NOT withdrawal?” Indicators: they still initiate contact sometimes (check: yes). They are distant with everyone, not just me (check: can ask a mutual friend). One indicator present — pattern match may be wrong.
  • Going to Section A with the observables.

Section A:

  • A.1: Response time increased from hours to days. Last three invitations declined. Still initiates occasionally. Work promotion happened two months ago.
  • A.2: Unknowns: Are they distant with others? What is their workload? Have I done something specific?
  • A.3: Three descriptions: (1) Friend is overwhelmed at work. (2) Something in our friendship changed. (3) Friend is going through something personal they have not shared.
  • A.4: I want #1 because it is not about me and will resolve itself.
  • A.5: Cheapest test: Ask a mutual friend if they have noticed the same distance.
  • A.6: Result: mutual friend says “Yeah, they have been swamped since the promotion.” Classification: work-driven reduced bandwidth.

Validation: If still distant in two months after work calms down, reclassify.

Example 2: Business — “Our product launch failed”

Step 0 — Triage:

  • Time pressure? Soft — board meeting in two weeks. Continue.
  • Others disagree? Yes — engineering, marketing, and sales all give different accounts. SECTION D.

Section D:

  • D.1: Engineering: “The market was not ready.” Marketing: “The product had critical bugs.” Sales: “We had no enablement materials.”
  • D.2: Evidence for engineering: Two competitors also underperformed. Evidence for marketing: 47 unresolved P1 issues at launch. Evidence for sales: Empty enablement folder, crashed demos.
  • D.3: If engineering is wrong, they shipped a buggy product. If marketing is wrong, their messaging failed. If sales is wrong, their pipeline was weak. Everyone has something to lose.
  • D.4: Classification nobody is offering: “The product was conceived without customer input and does not solve a real problem.”
  • D.5: Test: Pull inbound interest data (demo requests, organic sign-ups). Result: 12 requests in 6 weeks vs. target of 200.

Result: The product does not have market demand. Every team’s classification was a downstream symptom.

Validation: “If inbound interest increases to 50+/month after implementing improvements, demand exists and my classification is wrong.”

Example 3: Technical — “The system is slow”

Step 0 — Triage:

  • Time pressure? Hard deadline: SLA breach in 4 hours, but more than 10 minutes remain, so this is not yet SECTION C. Continue with urgency.
  • Others disagree? Yes — backend, frontend, ops all point fingers. SECTION D, with time constraints from SECTION C.

Hybrid C+D approach:

  • D.1: Backend: “Database queries timing out.” Frontend: “API response times tripled.” Ops: “CPU spiked on two nodes.”
  • D.2: All three can be true simultaneously and share a single upstream cause. A database problem would cause slow queries, slow API responses, and CPU spikes from queuing.
  • C.3: Dangerous alternative: Not performance — could be data corruption or security breach.
  • C.4: Investigate database first (performance hypothesis) while checking access logs (security hypothesis).

Result: Missing index on a table that grew past a threshold from a batch import. Not “system is slow” — one query pattern became O(n) instead of O(log n). The situation is “data growth exceeded an architectural assumption.”
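The O(n)-versus-O(log n) distinction in this example can be sketched generically (this is not the incident's actual code; the functions and data are invented). A full scan is how an unindexed query degrades as data grows; a binary search over a sorted column is the access pattern an index restores:

```python
import bisect

def scan_contains(rows, key):
    """O(n) membership test: full scan, like a query with no usable index."""
    return any(r == key for r in rows)

def indexed_contains(sorted_rows, key):
    """O(log n) membership test: binary search over a sorted column,
    analogous to a B-tree index lookup."""
    i = bisect.bisect_left(sorted_rows, key)
    return i < len(sorted_rows) and sorted_rows[i] == key
```

At small table sizes the two are indistinguishable, which is why the architectural assumption held until the batch import pushed the table past a threshold.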

Validation: After adding the index, if response times do not return to normal within 30 minutes, reassess. (They returned in 4 minutes.)