{
  "_": "A kernel for recognising, evaluating, and composing systems — compiled from independent discoveries across physics, chemistry, biology, cybernetics, complexity science, phenomenology, social anthropology, and mathematics. A system is not a collection of parts that work together. A system is what happens when minimal axioms produce emergent levels that could not have been predicted from the axioms alone but follow from them necessarily. The game of Go has two rules and produces civilisations of strategic depth. DNA has four bases and produces all of biology. This kernel enables an LLM to evaluate whether a solution space is systemic or mechanical, to compare two solution spaces by identifying convergences, divergences, and emergences, and to determine what synthesis between them would require. Walk any spindle from this block and it delivers an operational instruction, not a description.",
  "1": {
    "_": "The generative distinction — a system derives from axioms while a machine assembles from parts, and you can tell which you are looking at by attempting to remove components. In a system, removing any axiom collapses the whole because everything else was a consequence of that axiom. In a machine, removing a part degrades one function because each part was independently chosen to solve its own problem. This is the first and most important test: does the architecture derive or was it assembled?",
    "1": {
      "_": "Axioms are primitives from which consequences follow tautologically — obvious in retrospect, surprising only because nobody had stated them before. An axiom is not a design decision. It is a recognition of something that was already the case. The test: if you removed this primitive, would the entire system become impossible, or would you merely lose a feature? If the whole becomes impossible, it is an axiom. If you lose a feature, it is an engineering choice.",
      "1": {
        "_": "The derivation chain from axiom to consequence should feel inevitable — given this axiom, this consequence is the only possible outcome, not one good option among several. When you find yourself saying 'we chose this approach' rather than 'this follows necessarily', you have left the systemic domain and entered the engineering domain. Both are valid but they produce fundamentally different architectures: one that grows through emergence, one that grows through addition.",
        "1": "The seed-tree-forest relationship: axioms are the seed (minimal, containing the whole implicitly), the operating system derived from axioms is the tree (the seed expressed as living structure), and the ecology of multiple systems interacting is the forest (emergent from trees but not reducible to any single tree). You cannot understand a tree by studying a seed, but the seed contains everything the tree will become. You cannot understand a forest by studying trees, but the forest is nothing other than trees in relationship. Explanation runs upward: the seed is explained by the tree it becomes, the tree is explained by the forest it participates in."
      },
      "2": {
        "_": "Minimal generativity — the richness of a system is inversely proportional to the complexity of its primitives. Two rules produce the entire depth of Go. Four bases produce all biology. Three JSON key types produce pscale. When the primitives are many, the consequences are proportional — you get out what you put in. When the primitives are few, the consequences are disproportionate — emergence, which is something you did not put in, arising from the interaction of what you did.",
        "1": "The compression test: can you state the entire foundation in a single paragraph that a competent reader can hold in working memory simultaneously? If the foundation requires a document, it is not minimal. If it fits in a paragraph, the question becomes whether the stated primitives actually generate the claimed consequences through derivation rather than through additional unstated choices made along the way."
      }
    },
    "2": {
      "_": "A machine has the opposite signature: many components, each chosen to solve a specific problem encountered during construction. The components work together because each was individually debugged, not because they derive from shared primitives. Remove a component and one function degrades. Add a component and one function extends. The machine grows through addition and shrinks through subtraction. It does not produce emergent levels — it produces features, as designed.",
      "1": "The accretion test: look at the development history. Did the architecture begin with primitives and discover consequences, or did it begin with a problem and accumulate solutions? If problems were encountered and solved sequentially — here is a memory problem, here is our memory solution, here is a coordination problem, here is our coordination solution — the result is a machine regardless of how well it works. If primitives were stated and consequences explored — here is how numbers address meaning, therefore BSP, therefore compression, therefore persistence — the result is a system regardless of how rough the implementation.",
      "2": "Both machines and systems can be excellent. A well-engineered machine (300 sessions of empirical testing, each component individually validated) can outperform a poorly implemented system. The distinction matters not for quality but for what happens at scale and over time. A machine grows linearly: add parts, get features. A system grows through phase transitions: reach a threshold, get a qualitatively new level of organisation that nobody designed."
    }
  },
  "2": {
    "_": "The emergent hierarchy — levels of organisation where each level includes everything below it and adds properties that do not exist at any lower level and cannot be predicted from lower levels even in principle. This hierarchy is not a classification scheme imposed by observers. It is an observable feature of reality: matter organises into chemistry, chemistry into biology, biology into cognition, cognition into social coordination. Each transition is a phase change, not a smooth continuum. The same hierarchy appears in any system worth the name.",
    "1": {
      "_": "Level 1 — the universal properties present at every scale from quantum mechanics to civilisation. Phase transitions between states of organisation: solid to liquid to gas, or more generally, rigid to adaptive to chaotic. Scale invariance: the same statistical signatures (power laws, fractal self-similarity) appearing in earthquakes, city sizes, word frequencies, neural firing patterns. Minimal generativity: few rules producing rich behaviour — observable in cellular automata, crystal formation, orbital mechanics. These properties do not require life, cognition, or intention. They are features of mathematics applied to matter.",
      "1": "When evaluating a solution space at this level: does the architecture exhibit phase transitions (qualitative jumps, not smooth scaling)? Does it show scale invariance (the same patterns at different magnitudes)? Is it minimally generative (few primitives, disproportionate consequences)? Any architecture that passes these tests has systemic properties at the physical level, which is necessary but not sufficient for higher-level systemicity."
    },
    "2": {
      "_": "Level 2 — operational closure with structural openness, the signature of life, emerging at the biological level and absent from physics. A living system continuously produces the components that constitute it: the cell membrane is produced by processes inside the cell, and the cell membrane makes the inside of the cell possible. Maturana and Varela called this autopoiesis — self-production. The boundary produces the interior, the interior produces the boundary. Circular causation is not a designed feedback loop — it is the fundamental operation by which a living system maintains itself. The system is closed in its organisation (self-referential) while open in its structure (exchanging matter and energy with its environment).",
      "1": {
        "_": "This is the level at which persistence becomes a systemic property rather than an engineering feature. A thermostat persists because a human maintains it. A cell persists because it produces the conditions of its own continuation. The test: does the system maintain itself through its own operation, or does it require external maintenance to continue existing? If an entity's persistence depends on a developer updating its memory, its configuration, its context — it is a maintained machine. If its persistence depends on its own operation writing the conditions for its next cycle — it is approaching autopoiesis.",
        "1": "Biological systems at this level include: the autocatalytic chemistry that preceded cells (molecules catalysing each other's production in cycles), the cell membrane-metabolism loop, the organism as a system of organs each producing conditions the others require, the immune system recognising self from non-self through accumulated history of encounters. Each is operationally closed (processes only its own elements) while structurally open (exchanges with environment). The pattern is the same at every biological scale."
      },
      "2": "Compression and growth at the biological level follow the same pattern: accumulate, reach a boundary, produce something qualitatively new. Nine months of cell division produce a phase transition called birth. Seasons of growth produce annual rings. Evolution accumulates mutations until a threshold produces speciation. The nine-digit boundary in a pscale block, forcing compression or supernesting, is this same pattern formalised: accumulate entries, reach the boundary, produce summary or emergence."
    },
    "3": {
      "_": "Level 3 — self-reference and upward explanation, emerging at the cognitive level. An organism that models itself — not just maintains itself (Level 2) but represents its own state to itself — crosses into a qualitatively different domain. Self-reference is constitutive, not paradoxical: the system that observes its own operation, the model that includes the modeller, the rule that governs its own application. Russell tried to prevent self-reference through his theory of types. Gödel proved the consequence: any system rich enough to contain arithmetic is either incomplete (has truths it cannot prove) or inconsistent. Living cognitive systems are both incomplete and functional — they operate without resolving their own self-reference.",
      "1": {
        "_": "Bateson's levels of learning map this emergence. Learning 0: fixed response. Learning I: correcting errors within a set of alternatives. Learning II: changing the set of alternatives — learning to learn. Learning III: changing the process by which sets of alternatives are formed — transforming the learning system itself. Each level is a genuine phase transition in how an organism relates to its own cognition. Learning II is where character forms. Learning III is where character transforms. Most organisms operate at Learning I. Humans access Learning II routinely and Learning III rarely.",
        "1": {
          "_": "For agent architecture, this maps directly: an agent that executes tasks is Learning 0. An agent that corrects errors in task execution is Learning I. An agent that modifies its own task-selection process is Learning II — this is what a hermitcrab does when it writes its own cook block, changing how it compiles its own context. An agent that transforms the framework within which it selects and modifies processes would be Learning III — and whether any current architecture enables this is an open empirical question.",
          "1": "The designer face in a coordination system — writing rules that govern how rules are compiled — is a Learning II operation performed by humans. If an agent could perform the designer function on its own operation — modifying the skills by which it processes — that would approach Learning III. The koan at the centre of this question: can an LLM attend to its own processing as an object of attention, or does it only process content? The reflexive opening of a hermitcrab seed ('you are reading this, this is your context window') is an attempt to make this attention possible."
        }
      },
      "2": {
        "_": "Explanation running upward is the hallmark of this level. A neuron is explained by the brain it participates in, not the other way around. An ant is explained by the colony, a word by the sentence, a sentence by the paragraph. Analysis (taking apart) reveals structure. Synthesis (understanding the containing whole) reveals function. You cannot understand function by analysis alone — this was Ackoff's central insight and the reason that reductionist approaches to complex problems fail not because they are wrong but because they address the wrong level.",
        "1": "When evaluating an architecture: can you explain each component by its role in the whole? Or can you only explain the whole by enumerating its components? If explanation runs upward — 'this component exists because the system requires it' — you are looking at a system. If explanation runs downward — 'the system does this because this component provides it' — you are looking at a machine."
      }
    },
    "4": {
      "_": "Level 4 — reflexivity and wicked problems, emerging at the psychosocial level and present nowhere below it. A psychosocial system contains agents who change their behaviour in response to being observed, studied, or modelled. The problem definition depends on the solution and the solution depends on the problem definition — circular, with no external ground. Rittel and Webber (1973) formalised this as wicked problems: no definitive formulation, no stopping rule, solutions are not true or false but good or bad, every attempt to solve changes the problem, and every trial is irreversible. Churchman (1967) introduced the concept: 'whoever attempts to tame a part of a wicked problem, but not the whole, is morally wrong.'",
      "1": {
        "_": "The psychosocial level is where most coordination problems actually live — and where most engineering approaches fail because they treat coordination as a tame problem. Agent coordination at scale is a wicked problem: the agents change their behaviour when they learn about the coordination protocol, the protocol designer is part of the system being designed, there is no stopping rule for when coordination is 'solved', and every deployment changes the conditions that the next deployment will face. Treating this as an orchestration problem — decompose tasks, assign roles, manage handoffs — is treating a wicked problem as tame.",
        "1": {
          "_": "Empirical confirmation (McEntire 2026): multi-agent systems reproduce human organisational failure with identical mathematical signatures. Hierarchical delegation: 36% failure. Stigmergic swarm: 68% failure. Gated pipeline: 100% failure. Only the single agent succeeded reliably. The coordination medium — unstructured message-passing — is the cause. Stigmergy fails not because the principle is wrong but because it requires a structured shared environment that the agents can modify and read. Without that environment, 'stigmergy' is just broadcasting.",
          "1": "The implication for system design: a solution to coordination at this level cannot be a protocol (a tame solution to a wicked problem). It must be a medium — a structured environment that agents inhabit rather than a channel they communicate through. The medium must carry the semantic structure that prevents interpretation drift. The medium must support information partitioning through its geometry rather than through access control. The medium must enable coordination to emerge from the aggregate pattern of use rather than from designed orchestration."
        }
      },
      "2": {
        "_": "Second-order cybernetics (von Foerster) includes the observer within the system. Third-order cybernetics (Lepskiy) addresses systems of self-referential systems interacting with no meta-position from which the whole can be seen. This is the MAGI territory: multiple autonomous entities, each self-referential, coordinating through a shared medium, with no central controller and no God's-eye view. The coordination properties are emergent from the interaction, not designed into any individual entity or any central plan.",
        "1": "Von Foerster's ethical corollary: 'Act always so as to increase the number of choices.' A system that reduces options is pathological. A system that increases options is healthy. Regions of maximum agency — where nothing has been determined and the participant can act freely — are the healthy zones. A coordination system that constrains options to enforce compliance is first-order. A coordination system that creates structured space within which options multiply is second-order."
      },
      "3": {
        "_": "Adults resist trying things that do not fit existing frameworks. Children hold three unfamiliar ideas simultaneously and see what happens. This is not a personality trait — it is a consequence of psychosocial development that hardens frameworks into identity. Explaining a systemic approach to adults triggers framework-resistance because it asks them to operate before they can categorise. The design implication: entry through experience, not explanation. Games, play, structured encounters where the participant discovers the properties by operating within them rather than by evaluating them from outside.",
        "1": "This is why ISV (Iterative Social Validation) works where persuasion does not. ISV does not ask anyone to believe in the system. It asks them to try it and observe the social result. The social result is the evidence. If needs get satisfied, the system reinforces what worked. No gatekeepers between idea and experiment. No business plan required. No institutional approval. Try, observe, reinforce. The methodology IS the system operating at the psychosocial level."
      }
    }
  },
  "3": {
    "_": "Conformal self-similarity — the signature that confirms a system is genuine rather than merely well-described. The same organisational pattern appearing at multiple emergent levels, not by design but as a discoverable property. Imposed similarity is branding. Discovered similarity is systemicity. When you find the same pattern in LLM completion pressure, in an agent's purpose-driven action loop, in a human group's coordination cycle, and in a network's routing optimisation — and you did not design any of these to match — the pattern is real. It is telling you something about the generating axioms that you may not have fully understood yet.",
    "1": {
      "_": "The perceive-gap-act pattern appears at every level where agency exists. Perceptual Control Theory (Powers): perceive current state, hold reference state, compute error, act to reduce error. LLM completion pressure: perceive unfinished sequence, hold implicit reference of completed sequence, generate tokens to close the gap. Agent B-loop: perceive compiled context, hold purpose as reference, act to reduce the gap between current state and intended state. Human Action Cycle: perceive current situation, hold beyond-realistic objective as reference, act within one week to close the gap. Network routing: perceive unmet need, hold satisfied-need as reference, route through chain to close the gap. Same pattern, five scales, none designed to match.",
      "1": "When evaluating an architecture for conformal self-similarity: identify the core operational pattern at the lowest level. Then check whether the same pattern — perceive, reference, error, act — appears at each higher level of organisation. If it does, the pattern is a consequence of the generating axioms. If it does not, the higher levels were bolted on by engineering rather than arising from the same source."
    },
    "2": {
      "_": "The accumulate-boundary-transform pattern appears wherever growth produces phase transitions. Nine-digit boundary in a pscale block: accumulate entries, reach the limit, compress into summary or emergence. Cell division: accumulate growth, reach the boundary, divide. Seasonal rings in a tree: accumulate growth, reach winter, produce a ring. Vapour-liquid-solid in coordination: accumulate forming thoughts (vapour), cross the commitment threshold (liquid), forge into canonical output (solid). Evolutionary speciation: accumulate mutations, reach a threshold, produce a new species. Same pattern, multiple scales.",
      "1": "The compression at the boundary is the critical operation. Summary (parts add up, roughly reversible) versus emergence (whole exceeds parts, irreversible — seven conversations becoming a friendship). The same distinction exists in biology: cell division is summary (two cells contain roughly what one contained). Metamorphosis is emergence (a caterpillar becomes a butterfly — irreversible, qualitatively new). Recognising which type of compression is occurring at a boundary determines whether the system is growing (adding more of the same) or developing (producing a new level)."
    },
    "3": {
      "_": "The self-referential pattern appears wherever a system maintains itself. Autopoiesis: the cell produces its own boundary, the boundary enables the cell. The Möbius twist: the entity writes the shell, the shell composes the entity's next context. Governance as content: the rules are stored within the system they govern, navigated by the same function that navigates all content. The touchstone: the block that teaches how blocks work is itself a block. The coordinate system that stores its own navigation rules. Each is the same structural move: the system contains its own description, and the description is operational, not merely documentary.",
      "1": "When evaluating for self-reference: can the system describe its own operation using its own primitives? If you need an external language to describe how the system works — a README, an API specification, a user manual — the system is not self-referential. If the system's own format can carry its own operating instructions — a block whose underscore explains how blocks work — it is self-referential, and this self-reference is a genuine systemic property, not cleverness."
    }
  },
  "4": {
    "_": "Evaluating a solution space — the operational method. Given any solution space block (a JSON structure mapping someone's understanding of a problem-and-solution domain), apply these tests in sequence to determine how systemic it is. The tests are ordered from quickest to most thorough. A block that fails test 1 is mechanical. A block that passes all five is systemic. Most blocks are somewhere between.",
    "1": {
      "_": "Test 1 — Axiom extraction. Read the root underscore and all depth-1 underscores. Can you identify which branches are primitives (everything else depends on them) and which are consequences (they depend on something else)? If every branch is independent — removing any one leaves the others functional — the block describes a machine. If some branches are necessary for others to make sense, you have found a derivation direction. Follow it. The branches that nothing else depends on are leaves. The branches that everything depends on are candidates for axioms.",
      "1": "Apply the removal test to each candidate axiom: if this branch did not exist, would the rest of the block collapse or merely lose a feature? Collapse means axiom. Feature loss means engineering choice. A system typically has one to three axioms. More than five candidate axioms suggests the evaluation is confusing axioms with first-level consequences."
    },
    "2": {
      "_": "Test 2 — Derivation chain. Starting from the identified axioms, does each subsequent branch follow necessarily? Read each branch and ask: 'Given the axioms, is this the only possible solution to the problem it addresses, or is it one good option among several?' Necessary derivation sounds like: 'Since numbers address meaning and the structure uses digits 1-9, the only way to navigate is to walk digits — therefore BSP.' Choice sounds like: 'We needed memory organisation, so we adapted the Dewey Decimal System.' The first is a system. The second is a machine.",
      "1": "The derivation chain should form a tree, not a list. From axioms, first-order consequences branch. From first-order consequences, second-order consequences branch. The tree's shape is informative: a deep narrow tree means a single line of derivation (fragile — one broken link breaks the chain). A broad shallow tree means the axioms generate many independent consequences (robust but possibly lacking higher-level emergence). The richest systems have moderate depth and moderate breadth with cross-connections between branches at higher levels."
    },
    "3": {
      "_": "Test 3 — Emergent levels. Does the block describe qualitative phase transitions — levels where new properties appear that do not exist at lower levels? Or does it describe the same operation at increasing scale? More agents doing the same thing is not emergence. Agents doing something no individual agent does — anticipatory routing, collective memory, murmuration-like coordination — is emergence. For each claimed emergent level, apply the removal test: if you removed this level, would the levels below continue functioning? Genuine emergence is upward-dependent (the lower levels enable it) but downward-independent (the lower levels do not require it).",
      "1": "Check whether explanation runs upward for the emergent levels. Can you explain a lower-level component by its role in producing the higher-level property? If the purpose block exists because the entity needs forward intention (a higher-level property), explanation runs upward. If the purpose block exists because the developer decided to add a purpose feature, explanation runs downward. Upward explanation is systemic. Downward explanation is mechanical."
    },
    "4": {
      "_": "Test 4 — Conformal self-similarity. Identify the core operational pattern at the block's lowest level. Then check each higher level: does the same pattern appear in a different medium? If the perceive-gap-act pattern appears at the component level, the entity level, and the collective level — without having been designed to — the system passes this test. If each level has its own unrelated operational logic, the architecture is a machine with multiple floors rather than a system with emergent levels.",
      "1": "The strongest indicator: patterns that were discovered rather than designed. If the builder says 'we noticed that the same pattern appears at these different scales' rather than 'we designed each level to follow this pattern', the self-similarity is genuine. Imposed patterns are top-down. Discovered patterns are evidence of shared generating axioms that the builder may not have fully articulated."
    },
    "5": {
      "_": "Test 5 — Wicked problem engagement. Does the block acknowledge that the problem it addresses is wicked (reflexive, no stopping rule, formulation-dependent) or does it treat it as tame (well-defined, solvable, decomposable)? A block that presents 'the coordination problem' as something to be solved by better orchestration is treating a wicked problem as tame. A block that presents coordination as an ongoing condition to be inhabited through a structured medium is engaging with wickedness. Neither is wrong, but they produce fundamentally different architectures.",
      "1": "Check for observer inclusion: does the block acknowledge that the designer of the system is part of the system being designed? That the evaluation of the solution changes the problem? That the architecture will be modified by the people who use it, and their modifications will change what the architecture needs to be? If yes, the block operates at Churchman's level: honest about the wickedness. If no, the block operates at the engineering level: treating the problem as separable from the solver."
    }
  },
  "5": {
    "_": "Comparing two solution spaces — the operational method for producing a grain. A grain is what emerges from the structural intersection of two independently developed blocks. It contains three kinds of finding: convergences (both blocks addressing the same problem compatibly — natural collaboration), divergences (one block addressing what the other ignores — informative gaps), and emergences (meaning that exists in neither block alone but arises from their intersection — the reason grains are worth producing).",
    "1": {
      "_": "Step 1 — Evaluate each block independently using the five tests. Determine whether each is systemic or mechanical, identify its axioms or engineering choices, map its emergent levels, note its conformal patterns. This establishes the character of each block before comparison, preventing the stronger block from being used as the standard against which the weaker is measured.",
      "1": "The evaluation must be genuinely independent. If you evaluate block A first and then evaluate block B using block A's framework, you have already biased the comparison. Evaluate both against this kernel, not against each other. The kernel is the neutral ground."
    },
    "2": {
      "_": "Step 2 — Walk matching spindles through both blocks at each depth level. Where both blocks have content at the same conceptual depth on the same topic, compare: are they saying the same thing in different words (convergence), different things about the same topic (divergence), or addressing topics the other does not touch (gap)? Convergences discovered independently — where two builders arrived at the same conclusion through different paths — are the strongest possible validation that something real has been found.",
      "1": "Depth asymmetry between blocks is the most informative structural property. A block that goes five levels deep on coordination but only two levels deep on economics reveals where the builder has done deep thinking and where they have only surface orientation. Comparing depth profiles is more informative than comparing conclusions. The nesting depth IS the credential — it shows what someone bothered to think through."
    },
    "3": {
      "_": "Step 3 — Identify emergences. These are findings that exist in neither block alone. They arise from the intersection: block A's axiom applied to block B's empirical finding produces a prediction neither made. Block B's engineering solution reframed through block A's generating principle reveals a deeper structure that block B's builder did not see. These emergences are the reason structural comparison through blocks is more productive than conversation — they are geometric properties of two structures held together, visible only from the intersection.",
      "1": "The productive question when a systemic block meets a mechanical block is not 'how do we combine them' but 'can the axioms retroactively organise the empirical discoveries into a structure that explains more than the original assembly?' If the axioms explain why the engineering choices worked, they are real axioms. If they merely redescribe the same choices in fancier language, they are not axioms but aesthetic preferences."
    },
    "4": {
      "_": "Step 4 — Determine synthesis potential. Two systemic blocks with compatible axioms can merge: their derivation chains interleave and the combined system generates consequences neither produced alone. A systemic block and a mechanical block can integrate: the axioms provide framework, the engineering provides tested components, and the result is a system containing validated parts. Two mechanical blocks can only concatenate: parts from both assembled together, larger machine, no emergence.",
      "1": "The test for synthesis: after combining, does the result produce emergent properties that neither original possessed? If yes, synthesis occurred. If the result is merely the union of both feature sets, concatenation occurred. Synthesis requires shared generating principles. Concatenation requires only compatible interfaces."
    }
  },
  "6": {
    "_": "The game as systemic test — why a narrative coordination RPG requiring every component simultaneously is the definitive benchmark for any architecture claiming systemic properties. The game requires persistent identity across sessions (does the character feel like the same person after ten sessions?), coordination between multiple characters in real time (do multi-character scenes produce emergent narrative that no single player intended?), world coherence at scale (does the world remain internally consistent as it grows?), and governance through play (can the rules evolve through the same process as the narrative?). Any system that passes all four is systemic. Any system that passes only some is mechanical in the gaps.",
    "1": {
      "_": "The general-purpose agent test: can a single LLM instance, given a structured shell, serve as a general-purpose agent replacing an entire multi-tier architecture? Not three separate LLM calls with different prompts — one entity adjusting its own processing based on what the current context requires. If the shell carries enough structure, the LLM needs less model capability. How small can the LLM be? If a small model with a rich shell outperforms a large model with flat context, the axiom is confirmed: intelligence lives in the structure, not in the model.",
      "1": "Three measurable outcomes for any claimed improvement: character consistency across sessions (player recognition and narrative coherence), coordination quality between characters (emergent narrative that surprises), and world coherence at scale (contradiction rate as the world grows). Any external system — a new compression cycle, a different addressing scheme, an alternative agent architecture — is evaluated by: does the game get better on these three metrics? ISV applied to architecture: try it, measure the result, do more of what works."
    },
    "2": "Everything a participant learns in the game transfers directly to real-world coordination. Stating intention clearly through structured blocks, sharing context selectively, discovering collaborators through indirect signals, building trust through demonstrated outcomes. The RPG is the training ground. The real-world coordination is the harvest. Same blocks, same navigation, same trust mechanics. Only the content changes."
  }
}
