Tier 1

dd - Dimension Discovery


Input: $ARGUMENTS

Interpretations

Before executing, identify which interpretation matches the user’s input:

Interpretation 1 — Map a problem space: The user has a domain or topic and wants to discover the independent axes of variation that define it, enabling systematic exploration.

Interpretation 2 — Analyze a dataset’s structure: The user has data (a spreadsheet, survey, product catalog) and wants to identify the key dimensions or variables that characterize it.

Interpretation 3 — Find all ways something varies: The user is asking “what are all the ways this could differ?” about a concept, product, or situation — they want to see the full shape of the variation.

If ambiguous, ask: “I can help with mapping a conceptual problem space, analyzing dimensions in a dataset, or discovering all the ways something varies — which fits?” If clear from context, proceed with the matching interpretation.


Purpose

Before generating a comprehensive list, you need to know the dimensions that define the space. This skill identifies those dimensions for any domain.

Dimensions are independent axes of variation. For “types of vehicles”:

  • Dimension 1: Power source (gas, electric, human, hybrid)
  • Dimension 2: Medium (land, water, air, space)
  • Dimension 3: Purpose (personal, commercial, military, recreational)

With dimensions identified, /se can systematically cover the space.
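As a minimal sketch, the dimensions above can be held as a mapping from axis name to its values; the size of the space is then just the product of the value counts (the vehicle dimensions are the illustrative ones from the list):

```python
import math

# Illustrative dimensions for "types of vehicles", taken from the list above
dimensions = {
    "power_source": ["gas", "electric", "human", "hybrid"],
    "medium": ["land", "water", "air", "space"],
    "purpose": ["personal", "commercial", "military", "recreational"],
}

# The space is the cross product: one value per dimension per combination
space_size = math.prod(len(values) for values in dimensions.values())
print(space_size)  # 4 * 4 * 4 = 64 combinations
```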


When to Use

| Situation | Use This Skill? |
|-----------|-----------------|
| Novel domain with no known structure | YES |
| Generating comprehensive list | YES (before /se) |
| Domain structure already known | SKIP (use known dimensions directly) |
| Simple enumeration | SKIP (overkill) |

Depth Scaling

Default: 2x. Parse depth from $ARGUMENTS if specified (e.g., “/dd 4x [input]”).

| Depth | Min Dimensions Found | Min Sources/Methods | Min Hidden Dimensions | Min Validation Tests |
|-------|----------------------|---------------------|-----------------------|----------------------|
| 1x | 4 | 2 | 1 | 1 |
| 2x | 6 | 3 | 2 | 2 |
| 4x | 9 | 5 | 3 | 3 |
| 8x | 12 | 7 | 5 | 4 |
| 16x | 16 | 10 | 8 | 6 |

These are floors. Go deeper where insight is dense. Compress where it’s not.
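One way the depth token could be pulled off the front of $ARGUMENTS is a small regex parse; this is a sketch under the assumption that the token always looks like "4x" and precedes the input (the function name is illustrative):

```python
import re

def parse_depth(arguments: str, default: str = "2x"):
    """Split an optional leading depth token like '4x' from the rest of
    the arguments string; fall back to the default depth when absent."""
    match = re.match(r"\s*(\d+x)\b\s*(.*)", arguments, re.DOTALL)
    if match:
        return match.group(1), match.group(2)
    return default, arguments.strip()

print(parse_depth("4x vehicles"))  # ('4x', 'vehicles')
print(parse_depth("vehicles"))     # ('2x', 'vehicles') -- default applied
```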


The Process

Step 1: Seed with Examples

Start with 5-10 concrete examples of what you’re trying to enumerate.

EXAMPLES FOR: [domain]
1. [example 1]
2. [example 2]
3. [example 3]
...

Why examples first? Dimensions emerge from comparing examples.


Step 2: Compare for Differences

For each pair of examples, ask: “What makes these different?”

COMPARING: [example A] vs [example B]
Differences:
- [difference 1] -> Potential dimension: [name]
- [difference 2] -> Potential dimension: [name]
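The pairing itself can be sketched with `itertools.combinations`, which yields each unordered pair exactly once (the seed examples here are placeholders):

```python
from itertools import combinations

# Illustrative seed examples; in practice these come from Step 1
examples = ["unit tests", "integration tests", "load tests"]

# Every unordered pair gets one "what makes these different?" comparison
pairs = list(combinations(examples, 2))
for a, b in pairs:
    print(f"COMPARING: {a} vs {b}")
# n examples yield n*(n-1)/2 comparisons: 3 pairs for 3 examples
```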

Step 3: Apply Universal Dimensions

Check these universal dimensions (apply to almost any domain):

| Dimension | Question | Example Values |
|-----------|----------|----------------|
| WHO | Who is involved? | Individual, team, organization, society |
| WHAT | What type/category? | Domain-specific types |
| WHEN | What time frame? | Immediate, short-term, long-term, recurring |
| WHERE | What location/context? | Local, regional, global, virtual |
| WHY | What purpose/goal? | Primary, secondary, indirect |
| HOW | What method/approach? | Manual, automated, hybrid |
| HOW MUCH | What scale/degree? | Small, medium, large, extreme |

For each: Does this dimension create meaningful variation in [domain]?


Step 4: Identify Domain-Specific Dimensions

Beyond universal dimensions, what’s unique to this domain?

DOMAIN-SPECIFIC DIMENSIONS FOR: [domain]

1. [Dimension name]
   - What it captures: [explanation]
   - Possible values: [value1, value2, value3, ...]
   - Why it matters: [importance]

2. [Dimension name]
   ...

Step 5: Validate Independence

Dimensions should be independent (not redundant):

INDEPENDENCE CHECK:
- Can [Dimension A] vary while [Dimension B] stays constant?
- If NO -> Dimensions may be correlated, consider merging
- If YES -> Dimensions are independent, keep both
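Given examples labeled with a value per dimension, the "can A vary while B stays constant?" test can be sketched as: fix each value of B and check whether more than one value of A co-occurs with it (the labels here are illustrative):

```python
# Each example labeled with one value per dimension (illustrative labels)
labels = [
    {"scope": "unit",        "automation": "automated"},
    {"scope": "integration", "automation": "automated"},
    {"scope": "unit",        "automation": "manual"},
]

def independent(dim_a, dim_b, labels):
    """Dimension A is independent of B if A can vary while B stays
    constant: some value of B co-occurs with more than one value of A."""
    by_b = {}
    for row in labels:
        by_b.setdefault(row[dim_b], set()).add(row[dim_a])
    return any(len(a_values) > 1 for a_values in by_b.values())

# With automation fixed at "automated", scope still varies -> independent
print(independent("scope", "automation", labels))  # True
```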

Step 6: Validate Completeness

Do the dimensions cover the examples?

COMPLETENESS CHECK:
For each example:
  - [Example 1]: Dim1=[value], Dim2=[value], Dim3=[value] [x]
  - [Example 2]: Dim1=[value], Dim2=[value], Dim3=[value] [x]

Any example that doesn't fit? -> Missing dimension
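A sketch of that misfit check: flag any example carrying a label outside the known values of its dimension, since each hit suggests a missing dimension or value (dimension names and labels below are illustrative):

```python
# Known dimensions and their accepted values (illustrative)
dimensions = {
    "scope": {"unit", "integration", "system"},
    "focus": {"functional", "performance", "security"},
}

examples = [
    {"scope": "unit", "focus": "functional"},
    {"scope": "system", "focus": "usability"},  # "usability" is uncovered
]

def misfits(examples, dimensions):
    """Return examples whose labels fall outside the known dimension
    values; any hit points at a missing dimension or missing value."""
    return [ex for ex in examples
            if any(ex[d] not in values for d, values in dimensions.items())]

print(misfits(examples, dimensions))  # the second example is flagged
```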

Step 7: Output Dimensions

DIMENSIONS DISCOVERED FOR: [domain]

| # | Dimension | Values | Validation |
|---|-----------|--------|------------|
| 1 | [name] | [v1, v2, v3, ...] | Covers examples [x] |
| 2 | [name] | [v1, v2, v3, ...] | Independent [x] |
| 3 | [name] | [v1, v2, v3, ...] | Meaningful variation [x] |

TOTAL SPACE SIZE: [D1 values] × [D2 values] × [D3 values] = [N] combinations

NOTES:
- [Any caveats about dimension interactions]
- [Any "other" category needed for edge cases]

Example: Dimensions for “Software Testing Types”

Step 1: Examples

  1. Unit tests
  2. Integration tests
  3. Load tests
  4. Security penetration tests
  5. User acceptance tests
  6. Regression tests

Step 2: Compare

  • Unit vs Integration -> Scope (single unit vs multiple units)
  • Load vs Security -> Focus (performance vs security)
  • User acceptance vs Regression -> Purpose (validation vs verification)

Step 3: Universal Dimensions

  • WHO: Developer, QA, User, Automated
  • WHEN: Development, Pre-release, Post-release, Continuous
  • HOW: Manual, Automated, Hybrid

Step 4: Domain-Specific

  • Scope: Unit, Component, Integration, System, End-to-end
  • Focus: Functional, Performance, Security, Usability, Reliability
  • Automation level: Manual, Semi-automated, Fully automated

Step 5-6: Validate

All examples fit. Dimensions are independent.

Step 7: Output

| # | Dimension | Values |
|---|-----------|--------|
| 1 | Scope | Unit, Component, Integration, System, E2E |
| 2 | Focus | Functional, Performance, Security, Usability, Reliability |
| 3 | Timing | Development, Pre-release, Post-release, Continuous |
| 4 | Automation | Manual, Semi-automated, Fully automated |

TOTAL SPACE: 5 × 5 × 4 × 3 = 300 combinations


Quality Checklist

Before completing:

  • At least 5 seed examples used
  • Universal dimensions checked
  • Domain-specific dimensions identified
  • Independence validated
  • All examples covered by dimensions
  • Space size calculated

Next Steps

After dimension discovery:

  1. Use /se to generate comprehensive list
  2. Use /mv to verify coverage